###Instruction: I am trying to configure Strimzi Kafka listener custom cert, following the documentation: https://strimzi.io/docs/operators/latest/full/configuring.html#ref-alternative-subjects-certs-for-listeners-str I want to expose those listener outside of the Azure Kubernetes Service within the private virtual network. I have provided a custom cert with private key generated by an internal CA and pointed towards that secret in the Kafka configuration: kubectl create secret generic kafka-tls --from-literal=listener.cer=$cert --from-literal=listener.key=$skey -n kafka ` listeners: - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls #Listener TLS config configuration: brokerCertChainAndKey: secretName: kafka-tls certificate: listener.cer key: listener.key bootstrap: loadBalancerIP: 10.67.249.253 annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" brokers: - broker: 0 loadBalancerIP: 10.67.249.251 annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" - broker: 1 loadBalancerIP: 10.67.249.252 annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" - broker: 2 loadBalancerIP: 10.67.249.250 annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" authorization: type: simple ` Certificate has following records: SAN: *.kafka-datalake-prod-kafka-brokers *.kafka-datalake-prod-kafka-brokers.kafka.svc kafka-datalake-prod-kafka-bootstrap kafka-datalake-prod-kafka-bootstrap.kafka.svc kafka-datalake-prod-kafka-external-bootstrap kafka-datalake-prod-kafka-external-bootstrap.kafka.svc kafka-datalake-prod-azure.custom.domain CN=kafka-datalake-produkty-prod-azure.custom.domain I have also created an A record in the custom DNS for the given address: kafka-datalake-produkty-prod-azure.custom.domain 10.67.249.253 Then, I created a KafkaUser object: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: customuser namespace: kafka labels: strimzi.io/cluster: kafka-datalake-prod spec: authentication: type: tls authorization: type: simple acls: - resource: type: topic name: notify.somecustomapp.prod.topic_name patternType: literal operations: - Create - Describe - Write # host: "*" When I then retrieve the secrets from the Kafka cluster on AKS: kubectl get secret kafka-datalake-prod-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > broker.crt kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.key}' | base64 -d > customuser.key kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.crt}' | base64 -d > customuser.crt Communication fails, when I try to connect and send some messages with a producer using those 3 files to authenticate/authorize, I get a following issue: INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <connecting> [IPv4 ('10.67.249.253', 9094)]>: connecting to 10.67.249.253:9094 [('10.67.249.253', 9094) IPv4] INFO:kafka.conn:Probing node bootstrap-0 broker version INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL CA from certs/prod/broker.crt INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Cert from certs/prod/customuser.crt INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Key from certs/prod/customuser.key [SSL: CERTIFICATE_VERIFY_FAILED] 
certificate verify failed: unable to get local issuer certificate (_ssl.c:997) What am I doing wrong? The communication worked perfectly fine when I was using the same method of connecting, but the cluster and its listeners were still using the default certificates generated by the Strimzi cluster operator. All the best, Krzysztof ### Response: @Turing85 @Jakub Many thanks for your comments - especially the critical ones. And thanks, Jakub, for pointing me towards using the CA of the custom certificate. What needed to be done to fix this was: replace the value obtained from the kafka-datalake-prod-cluster-ca-cert secret with the full chain of the root CA, the intermediate signing certificate and the listener certificate itself; and add the LoadBalancer IPs of the brokers to the certificate SANs - this is stated in the documentation, yet the way it is formulated misled me into thinking that adding hostnames/service names to the SAN is enough (https://strimzi.io/docs/operators/latest/full/configuring.html#tls_listener_san_examples, and later https://strimzi.io/docs/operators/latest/full/configuring.html#external_listener_san_examples). After those changes, everything started to work. Thank you for your help.
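As a rough sketch of the fix described above (file names such as rootCA.crt and intermediateCA.crt are hypothetical placeholders for the internal CA material), the client-side CA bundle can be assembled and the listener certificate inspected like this:

# Build the CA bundle the client should trust instead of the Strimzi cluster CA.
cat rootCA.crt intermediateCA.crt listener.cer > full-chain.crt

# Check which SAN entries the custom certificate actually carries; besides the
# service hostnames, the bootstrap and broker LoadBalancer IPs should be listed.
openssl x509 -in listener.cer -noout -text | grep -A1 'Subject Alternative Name'

# Optionally verify the chain and test the TLS handshake against the bootstrap IP.
openssl verify -CAfile full-chain.crt listener.cer
openssl s_client -connect 10.67.249.253:9094 -CAfile full-chain.crt </dev/null

The producer then points its CA setting (ssl_cafile or equivalent) at full-chain.crt instead of the broker.crt extracted from the cluster CA secret.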
###Instruction: I would like to install a helm release using argocd, i defined a helm app declaratively like the following : apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: moon namespace: argocd spec: project: aerokube source: chart: moon2 repoURL: https://charts.aerokube.com/ targetRevision: 2.4.0 helm: valueFiles: - values.yml destination: server: "https://kubernetes.default.svc" namespace: moon1 syncPolicy: syncOptions: - CreateNamespace=true Where my values.yml: customIngress: enabled: true annotations: cert-manager.io/cluster-issuer: "letsencrypt" ingressClassName: nginx host: moon3.benighil-mohamed.com tls: - secretName: moon-tls hosts: - moon3.benighil-mohamed.com configs: default: containers: vnc-server: repository: quay.io/aerokube/vnc-server resources: limits: cpu: 400m memory: 512Mi requests: cpu: 200m memory: 512Mi Notice, the app does not take values.yml into consideration, and i get the following error: rpc error: code = Unknown desc = Manifest generation error (cached): `helm template . --name-template moon --namespace moon1 --kube-version 1.23 --values /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml --api-versions acme.cert-manager.io/v1 --api-versions acme.cert-manager.io/v1/Challenge --api-versions acme.cert-manager.io/v1/Order --api-versions admissionregistration.k8s.io/v1 --api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --api-versions apiextensions.k8s.io/v1 --api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1/APIService --api-versions apps/v1 --api-versions apps/v1/ControllerRevision --api-versions apps/v1/DaemonSet --api-versions apps/v1/Deployment --api-versions apps/v1/ReplicaSet --api-versions apps/v1/StatefulSet --api-versions argoproj.io/v1alpha1 --api-versions argoproj.io/v1alpha1/AppProject --api-versions argoproj.io/v1alpha1/Application --api-versions argoproj.io/v1alpha1/ApplicationSet --api-versions autoscaling/v1 --api-versions autoscaling/v1/HorizontalPodAutoscaler --api-versions autoscaling/v2 --api-versions autoscaling/v2/HorizontalPodAutoscaler --api-versions autoscaling/v2beta1 --api-versions autoscaling/v2beta1/HorizontalPodAutoscaler --api-versions autoscaling/v2beta2 --api-versions autoscaling/v2beta2/HorizontalPodAutoscaler --api-versions batch/v1 --api-versions batch/v1/CronJob --api-versions batch/v1/Job --api-versions batch/v1beta1 --api-versions batch/v1beta1/CronJob --api-versions ceph.rook.io/v1 --api-versions ceph.rook.io/v1/CephBlockPool --api-versions ceph.rook.io/v1/CephBlockPoolRadosNamespace --api-versions ceph.rook.io/v1/CephBucketNotification --api-versions ceph.rook.io/v1/CephBucketTopic --api-versions ceph.rook.io/v1/CephClient --api-versions ceph.rook.io/v1/CephCluster --api-versions ceph.rook.io/v1/CephFilesystem --api-versions ceph.rook.io/v1/CephFilesystemMirror --api-versions ceph.rook.io/v1/CephFilesystemSubVolumeGroup --api-versions ceph.rook.io/v1/CephNFS --api-versions ceph.rook.io/v1/CephObjectRealm --api-versions ceph.rook.io/v1/CephObjectStore --api-versions ceph.rook.io/v1/CephObjectStoreUser --api-versions ceph.rook.io/v1/CephObjectZone --api-versions ceph.rook.io/v1/CephObjectZoneGroup --api-versions ceph.rook.io/v1/CephRBDMirror --api-versions cert-manager.io/v1 --api-versions cert-manager.io/v1/Certificate --api-versions cert-manager.io/v1/CertificateRequest --api-versions 
cert-manager.io/v1/ClusterIssuer --api-versions cert-manager.io/v1/Issuer --api-versions certificates.k8s.io/v1 --api-versions certificates.k8s.io/v1/CertificateSigningRequest --api-versions coordination.k8s.io/v1 --api-versions coordination.k8s.io/v1/Lease --api-versions crd.projectcalico.org/v1 --api-versions crd.projectcalico.org/v1/BGPConfiguration --api-versions crd.projectcalico.org/v1/BGPPeer --api-versions crd.projectcalico.org/v1/BlockAffinity --api-versions crd.projectcalico.org/v1/CalicoNodeStatus --api-versions crd.projectcalico.org/v1/ClusterInformation --api-versions crd.projectcalico.org/v1/FelixConfiguration --api-versions crd.projectcalico.org/v1/GlobalNetworkPolicy --api-versions crd.projectcalico.org/v1/GlobalNetworkSet --api-versions crd.projectcalico.org/v1/HostEndpoint --api-versions crd.projectcalico.org/v1/IPAMBlock --api-versions crd.projectcalico.org/v1/IPAMConfig --api-versions crd.projectcalico.org/v1/IPAMHandle --api-versions crd.projectcalico.org/v1/IPPool --api-versions crd.projectcalico.org/v1/IPReservation --api-versions crd.projectcalico.org/v1/KubeControllersConfiguration --api-versions crd.projectcalico.org/v1/NetworkPolicy --api-versions crd.projectcalico.org/v1/NetworkSet --api-versions discovery.k8s.io/v1 --api-versions discovery.k8s.io/v1/EndpointSlice --api-versions discovery.k8s.io/v1beta1 --api-versions discovery.k8s.io/v1beta1/EndpointSlice --api-versions events.k8s.io/v1 --api-versions events.k8s.io/v1/Event --api-versions events.k8s.io/v1beta1 --api-versions events.k8s.io/v1beta1/Event --api-versions flowcontrol.apiserver.k8s.io/v1beta1 --api-versions flowcontrol.apiserver.k8s.io/v1beta1/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta1/PriorityLevelConfiguration --api-versions flowcontrol.apiserver.k8s.io/v1beta2 --api-versions flowcontrol.apiserver.k8s.io/v1beta2/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta2/PriorityLevelConfiguration --api-versions moon.aerokube.com/v1 --api-versions moon.aerokube.com/v1/BrowserSet --api-versions moon.aerokube.com/v1/Config --api-versions moon.aerokube.com/v1/DeviceSet --api-versions moon.aerokube.com/v1/License --api-versions moon.aerokube.com/v1/Quota --api-versions networking.k8s.io/v1 --api-versions networking.k8s.io/v1/Ingress --api-versions networking.k8s.io/v1/IngressClass --api-versions networking.k8s.io/v1/NetworkPolicy --api-versions node.k8s.io/v1 --api-versions node.k8s.io/v1/RuntimeClass --api-versions node.k8s.io/v1beta1 --api-versions node.k8s.io/v1beta1/RuntimeClass --api-versions objectbucket.io/v1alpha1 --api-versions objectbucket.io/v1alpha1/ObjectBucket --api-versions objectbucket.io/v1alpha1/ObjectBucketClaim --api-versions operator.tigera.io/v1 --api-versions operator.tigera.io/v1/APIServer --api-versions operator.tigera.io/v1/ImageSet --api-versions operator.tigera.io/v1/Installation --api-versions operator.tigera.io/v1/TigeraStatus --api-versions policy/v1 --api-versions policy/v1/PodDisruptionBudget --api-versions policy/v1beta1 --api-versions policy/v1beta1/PodDisruptionBudget --api-versions policy/v1beta1/PodSecurityPolicy --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1/ClusterRole --api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --api-versions rbac.authorization.k8s.io/v1/Role --api-versions rbac.authorization.k8s.io/v1/RoleBinding --api-versions scheduling.k8s.io/v1 --api-versions scheduling.k8s.io/v1/PriorityClass --api-versions snapshot.storage.k8s.io/v1 --api-versions 
snapshot.storage.k8s.io/v1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotContent --api-versions snapshot.storage.k8s.io/v1beta1 --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotContent --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1/CSIDriver --api-versions storage.k8s.io/v1/CSINode --api-versions storage.k8s.io/v1/StorageClass --api-versions storage.k8s.io/v1/VolumeAttachment --api-versions storage.k8s.io/v1beta1 --api-versions storage.k8s.io/v1beta1/CSIStorageCapacity --api-versions v1 --api-versions v1/ConfigMap --api-versions v1/Endpoints --api-versions v1/Event --api-versions v1/LimitRange --api-versions v1/Namespace --api-versions v1/Node --api-versions v1/PersistentVolume --api-versions v1/PersistentVolumeClaim --api-versions v1/Pod --api-versions v1/PodTemplate --api-versions v1/ReplicationController --api-versions v1/ResourceQuota --api-versions v1/Secret --api-versions v1/Service --api-versions v1/ServiceAccount --include-crds` failed exit status 1: Error: open /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml: no such file or directory Notice both application.yml and values.yml are located in the same directory on my local machine, ie: the structure of the 2 files in question looks like : . β”œβ”€β”€ application.yml └── values.yml Any help please ? ### Response: Cleanest way to achieve what you want is using the remote chart as dependency: Chart.yaml name: mychartname version: 1.0.0 apiVersion: v2 dependencies: - name: moon2 version: "2.4.0" repository: "https://charts.aerokube.com/" And overriding its values like this: values.yaml moon2: customIngress: enabled: true annotations: cert-manager.io/cluster-issuer: "letsencrypt" ingressClassName: nginx host: moon3.benighil-mohamed.com tls: - secretName: moon-tls hosts: - moon3.benighil-mohamed.com configs: default: containers: vnc-server: repository: quay.io/aerokube/vnc-server resources: limits: cpu: 400m memory: 512Mi requests: cpu: 200m memory: 512Mi Pay attention to this file. You need to create a key in your values file with the same name as the dependency(moon2 in your case), and indent the values you want to override one level. You need to upload both of these files to a repository and point your ArgoCD application URL to this repository. This has the advantage that whenever the upstream helm chart gets updated, all you need to do is increase the version in Chart.yaml
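A minimal sketch of how this can be wired up (the git repository URL and path below are hypothetical; only Chart.yaml and values.yaml come from the answer above):

# Expected repository layout:
# .
# β”œβ”€β”€ Chart.yaml    # declares moon2 2.4.0 as a dependency
# └── values.yaml   # overrides nested under the moon2: key

# Optionally render the umbrella chart locally to confirm the overrides are picked up:
helm dependency update .
helm template moon . --namespace moon1

# The ArgoCD Application source then points at the git repo instead of the Helm repo, e.g.:
#   source:
#     repoURL: https://github.com/<your-org>/<your-repo>.git   # hypothetical
#     path: .
#     targetRevision: main

With this layout, ArgoCD resolves the moon2 dependency from https://charts.aerokube.com/ itself and applies the values.yaml that sits next to Chart.yaml, so the "no such file or directory" error for the values file no longer occurs.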
###Instruction: Inventory file (inventory/k8s.yaml): plugin: kubernetes.core.k8s connections: - kubeconfig: ~/.kube/config context: 'cluster-2' Task file (roles/common/tasks/main.yaml): # Method 1: Using `kubernetes.core` plugin to list the pod names: - name: Get a list of all pods from any namespace kubernetes.core.k8s_info: kind: Pod register: pod_list - name: Print pod names debug: msg: "pod_list: {{ pod_list | json_query('resources[*].metadata.name') }} " # Method 2: Using `shell` command to list the pod names: - name: Get node names shell: kubectl get pods register: pod_list2 - name: Print pod names debug: msg: "{{ pod_list2.stdout }}" Ansible config (ansible.cfg): [inventory] enable_plugins = host_list, auto, yaml, ini, kubernetes.core.k8s Main file (main.yaml): --- - hosts: localhost gather_facts: false collections: - azure.azcollection - kubernetes.core roles: - "common" Running command to execute task: ansible-playbook main.yaml -i cluster-2/k8s.yaml -e role=common -e cluster_name=cluster-2 Question: I am running the above configs to run get the pods from the remote cluster mentioned in the inventory file. But, the problem is, I am still getting the pod names from the local cluster and not the cluster-2 in Method 1 and 2. k8s plugin should get the list of pods from cluster-2 as described in the inventory file. How can I connect to remote kubernetes cluster? I also checked output with -vvvv: ansible-playbook [core 2.14.0] config file = /Users/test/u/apps/ansible.cfg configured module search path = ['/Users/test/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/test/Library/Python/3.9/lib/python/site-packages/ansible ansible collection location = /Users/test/.ansible/collections:/usr/share/ansible/collections executable location = /Users/test/Library/Python/3.9/bin/ansible-playbook python version = 3.9.12 (main, Mar 26 2022, 15:52:10) [Clang 13.0.0 (clang-1300.0.29.30)] (/usr/local/opt/python@3.9/bin/python3.9) jinja version = 3.1.2 libyaml = True Using /Users/test/u/apps/ansible.cfg as config file setting up inventory plugins Loading collection kubernetes.core from /Users/test/.ansible/collections/ansible_collections/kubernetes/core ### Response: You're trying to use both the kubernetes inventory plugin and the k8s_info module, and because of that you're getting conflicting results. The two don't have anything to do with each other. The inventory module The kubernetes inventory module is -- I think -- a weird beast; it produces an ansible inventory in which the pods in your cluster are presented as Ansible hosts. To see a list of all the pod names in your cluster, you could write a playbook like this: - hosts: all gather_facts: false tasks: - name: Print pod names debug: msg: "{{ inventory_hostname }}" This will respect the context you've configured in your kubernetes inventory plugin configuration. For example, if I have in inventory/k8s.yaml the following: plugin: kubernetes.core.k8s connections: - kubeconfig: ./kubeconfig context: 'kind-cluster2' Then the above playbook will list the pod names from kind-cluster2, regardless of the current-context setting in my kubeconfig file. 
In my test environment, this produces: PLAY [all] ********************************************************************* TASK [Print pod names] ********************************************************* ok: [kubernetes] => { "msg": "kubernetes" } ok: [coredns-565d847f94-2shl6_coredns] => { "msg": "coredns-565d847f94-2shl6_coredns" } ok: [coredns-565d847f94-md57c_coredns] => { "msg": "coredns-565d847f94-md57c_coredns" } ok: [kube-dns] => { "msg": "kube-dns" } ok: [etcd-cluster2-control-plane_etcd] => { "msg": "etcd-cluster2-control-plane_etcd" } ok: [kube-apiserver-cluster2-control-plane_kube-apiserver] => { "msg": "kube-apiserver-cluster2-control-plane_kube-apiserver" } ok: [kube-controller-manager-cluster2-control-plane_kube-controller-manager] => { "msg": "kube-controller-manager-cluster2-control-plane_kube-controller-manager" } ok: [kube-scheduler-cluster2-control-plane_kube-scheduler] => { "msg": "kube-scheduler-cluster2-control-plane_kube-scheduler" } ok: [kindnet-nc27b_kindnet-cni] => { "msg": "kindnet-nc27b_kindnet-cni" } ok: [kube-proxy-9chgt_kube-proxy] => { "msg": "kube-proxy-9chgt_kube-proxy" } ok: [local-path-provisioner-684f458cdd-925v5_local-path-provisioner] => { "msg": "local-path-provisioner-684f458cdd-925v5_local-path-provisioner" } PLAY RECAP ********************************************************************* coredns-565d847f94-2shl6_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 coredns-565d847f94-md57c_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 etcd-cluster2-control-plane_etcd : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kindnet-nc27b_kindnet-cni : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kube-apiserver-cluster2-control-plane_kube-apiserver : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kube-controller-manager-cluster2-control-plane_kube-controller-manager : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kube-dns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kube-proxy-9chgt_kube-proxy : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kube-scheduler-cluster2-control-plane_kube-scheduler : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 kubernetes : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 local-path-provisioner-684f458cdd-925v5_local-path-provisioner : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 The key point here is that your inventory will consist of a list of pods. I've never found this particularly useful. The k8s_info module The k8s_info queries a kubernetes cluster for a list of objects. It doesn't care about your inventory configuration -- it will run on whichever target host you've defined for your play (probably localhost) and perform the rough equivalent of kubectl get <whatever>. If you want to use an explicit context, you need to set that as part of your module parameters. 
For example, to see a list of pods in kind-cluster2, I could use the following playbook: - hosts: localhost gather_facts: false tasks: - kubernetes.core.k8s_info: kind: pod kubeconfig: ./kubeconfig context: kind-cluster2 register: pods - debug: msg: "{{ pods.resources | json_query('[].metadata.name') }}" Which in my test environment produces as output: PLAY [localhost] *************************************************************** TASK [kubernetes.core.k8s_info] ************************************************ ok: [localhost] TASK [debug] ******************************************************************* ok: [localhost] => { "msg": [ "coredns-565d847f94-2shl6", "coredns-565d847f94-md57c", "etcd-cluster2-control-plane", "kindnet-nc27b", "kube-apiserver-cluster2-control-plane", "kube-controller-manager-cluster2-control-plane", "kube-proxy-9chgt", "kube-scheduler-cluster2-control-plane", "local-path-provisioner-684f458cdd-925v5" ] } PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 In conclusion: you probably want to use k8s_info rather than the inventory plugin, and you'll need to configure the module properly by setting the context (and possibly the kubeconfig) parameters when you call the module. Is there any way I can define context and kubeconfig outside of the tasks (globally) if I am using k8s_info module? According to the documentation, you could set the K8S_AUTH_KUBECONFIG and K8S_AUTH_CONTEXT environment variables if you want to globally configure the settings for the k8s_info module. You could also write your task like this: - kubernetes.core.k8s_info: kind: pod kubeconfig: "{{ k8s_kubeconfig }}" context: "{{ k8s_context }}" register: pods And then define the k8s_kubeconfig and k8s_context variables somewhere else in your Ansible configuration (e.g., as group vars). This makes it easy to retarget things to a different cluster with only a single change.
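For completeness, a small sketch of the environment-variable approach mentioned above, using the kubeconfig path and context name from the question (the playbook invocation is the one already shown there):

# Configure kubernetes.core modules globally instead of per task.
export K8S_AUTH_KUBECONFIG=~/.kube/config
export K8S_AUTH_CONTEXT=cluster-2
ansible-playbook main.yaml -i cluster-2/k8s.yaml -e role=common -e cluster_name=cluster-2

Note that the shell-based Method 2 (kubectl get pods) still follows the current-context in the kubeconfig unless kubectl is invoked with --context=cluster-2.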
###Instruction: I tried to deploy the Kafka-UI in my local Kubernetes cluster, but ingress-nginx gives 502 error (Bad Gateway). I used the following configurations: Deployment: apiVersion: apps/v1 kind: Deployment metadata: name: kafka-ui-deployment labels: app: kafka-ui spec: replicas: 1 selector: matchLabels: app: kafka-ui template: metadata: labels: app: kafka-ui spec: containers: - name: kafka-ui image: provectuslabs/kafka-ui:latest env: - name: KAFKA_CLUSTERS_0_NAME value: "K8 Kafka Cluster" - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS value: kafka-svc:9093 imagePullPolicy: Always resources: requests: memory: "256Mi" cpu: "100m" limits: memory: "1024Mi" cpu: "1000m" ports: - containerPort: 8088 protocol: TCP Service: apiVersion: v1 kind: Service metadata: name: kafka-ui-service spec: selector: app: kafka-ui ports: - protocol: TCP port: 80 targetPort: 8088 Ingress: ingressClassName: public rules: - host: "localhost" http: paths: - path: /kafka-ui pathType: Prefix backend: service: name: kafka-ui-service port: number: 80 Port-forward the targetport got following error: Forwarding from 127.0.0.1:8088 -> 8088 Forwarding from [::1]:8088 -> 8088 channel 9: open failed: connect failed: Connection refused Handling connection for 8088 Handling connection for 8088 E0623 09:18:20.768161 33100 portforward.go:406] an error occurred forwarding 8088 -> 8088: error forwarding port 8088 to pod 75353d54479df5f235c03db1899367dc77e82877986be849761eba6193ca72c0, uid : failed to execute portforward in network namespace "/var/run/netns/cni-a5ed0994-0456-6b6c-5a79-90e582ef09b3": failed to connect to localhost:8088 inside namespace "75353d54479df5f235c03db1899367dc77e82877986be849761eba6193ca72c0", IPv4: dial tcp4 127.0.0.1:8088: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found E0623 09:18:20.768994 33100 portforward.go:234] lost connection to pod Any suggestions will be appreciated. Thanks for your help! ### Response: The main error was the port. The right port is 8080. This yaml works fine for me. apiVersion: apps/v1 kind: Deployment metadata: name: kafka-ui-deployment labels: app: kafka-ui spec: replicas: 1 selector: matchLabels: app: kafka-ui template: metadata: labels: app: kafka-ui spec: containers: - name: kafka-ui image: provectuslabs/kafka-ui:latest env: - name: KAFKA_CLUSTERS_0_NAME value: "K8 Kafka Cluster" - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS value: kafka-kafka-bootstrap.kafka:9092 # <- service-name.namespace:9092 imagePullPolicy: Always resources: requests: memory: "256Mi" cpu: "100m" limits: memory: "1024Mi" cpu: "1000m" ports: - containerPort: 8080 # <- Rectify the port protocol: TCP apiVersion: v1 kind: Service metadata: name: kafka-ui-service namespace: kafka spec: selector: app: kafka-ui ports: - protocol: TCP port: 8080 targetPort: 8080 # <- Rectify the port. ingressClassName: public rules: - host: "localhost" http: paths: - path: /kafka-ui pathType: Prefix backend: service: name: kafka-ui-service port: number: 8080
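A quick way to double-check the port assumption before adjusting the Service and Ingress is to port-forward straight to the workload; a sketch (the kafka namespace and deployment name follow the corrected manifests above, adjust if yours differ):

kubectl -n kafka port-forward deployment/kafka-ui-deployment 8080:8080
# In a second shell:
curl -I http://localhost:8080/

If this returns an HTTP response, the container is listening on 8080 and the Service targetPort plus the Ingress backend port only need to be pointed at it, as in the corrected manifests.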
###Instruction: I am wondering if systemd could be used as the cgroup driver in a cgroup v1 environment. NOTE: As mentioned in the Kubernetes Container Runtimes doc, cgroupfs is preferred when the OS has cgroup v1. I have tried to set up a Kubernetes cluster using systemd as the cgroup driver, and it is working correctly for now. The test env is: Kubelet: 1.23 OS: Ubuntu 20.04 (Kernel 5.4.0, cgroup v1) CRI: containerd 1.5.9 Cgroup Driver: systemd Are there any risks in using systemd in a cgroup v1 env? ### Response: NOTE: As mentioned in the Kubernetes Container Runtimes doc, cgroupfs is preferred when the OS has cgroup v1. Can you specify which paragraph this is? If I am not mistaken, the document does not state that cgroupfs is preferred over systemd for distros that use cgroup v1. systemd is widely accepted as the init system, but cgroup v2 is available only if you run a fairly new (>=5.8) kernel. Are there any risks in using systemd in a cgroup v1 env? Cgroup v1 is still the most widely used to date, and systemd is designed to work with it. That being said, cgroupfs is the default cgroup driver for the kubelet at the time of writing. As the kernel matures over time, systemd may one day become the default and all the backing CRIs will follow through. As a side note, Docker defaults to cgroupfs on systems that only support cgroup v1 (regardless of whether systemd is present). It will use systemd on systems that use cgroup v2 where systemd is present. However, Kubernetes has dropped Docker as the CRI with the removal of dockershim starting in v1.24. You can continue to use dockershim via Mirantis.
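As a hedged sketch of how to sanity-check such a setup (the paths below are the containerd and kubeadm defaults and may differ per distro):

stat -fc %T /sys/fs/cgroup                       # tmpfs => cgroup v1, cgroup2fs => cgroup v2
grep SystemdCgroup /etc/containerd/config.toml   # SystemdCgroup = true for the systemd driver
grep cgroupDriver /var/lib/kubelet/config.yaml   # cgroupDriver: systemd

The main practical risk is a mismatch: the kubelet and the container runtime must be configured with the same cgroup driver, whichever of the two you choose.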
###Instruction: I'm using Flink Kubernetes Operator 1.3.0 and need to pass some environment variables to a Python job. I have followed the official documentation and the example runs fine. How can I inject environment variables so that I can use it inside the python file? EDIT: Here's the yaml file that I've used. Its straight from the example link above: apiVersion: flink.apache.org/v1beta1 kind: FlinkDeployment metadata: name: python-example spec: image: localhost:32000/flink-python-example:1.16.0 flinkVersion: v1_16 flinkConfiguration: taskmanager.numberOfTaskSlots: "1" serviceAccount: flink jobManager: resource: memory: "2048m" cpu: 1 taskManager: resource: memory: "2048m" cpu: 1 job: jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder entryClass: "org.apache.flink.client.python.PythonDriver" args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"] parallelism: 1 upgradeMode: stateless As you can see it's a custom resource of kind FlinkDeployment. And here's the python code: import logging import sys from pyflink.datastream import StreamExecutionEnvironment from pyflink.table import StreamTableEnvironment def python_demo(): env = StreamExecutionEnvironment.get_execution_environment() env.set_parallelism(1) t_env = StreamTableEnvironment.create(stream_execution_environment=env) t_env.execute_sql(""" CREATE TABLE orders ( order_number BIGINT, price DECIMAL(32,2), buyer ROW<first_name STRING, last_name STRING>, order_time TIMESTAMP(3) ) WITH ( 'connector' = 'datagen' )""") t_env.execute_sql(""" CREATE TABLE print_table WITH ('connector' = 'print') LIKE orders""") t_env.execute_sql(""" INSERT INTO print_table SELECT * FROM orders""") if __name__ == '__main__': logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s") python_demo() ### Response: Found the solution. This is not detailed in the reference https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference or example Flink Deployment https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/ But here it says: https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference/#jobmanagerspec JobManager pod template. 
It will be merged with FlinkDeploymentSpec.podTemplate So I just added envFrom from the example in which shows you how to extend the FlinkDeployment CRD: https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/ Confirmed this is working as I had to get this work for my own application now apiVersion: flink.apache.org/v1beta1 kind: FlinkDeployment metadata: name: python-example spec: image: localhost:32000/flink-python-example:1.16.0 flinkVersion: v1_16 flinkConfiguration: taskmanager.numberOfTaskSlots: "1" serviceAccount: flink jobManager: resource: memory: "2048m" cpu: 1 podTemplate: apiVersion: v1 kind: Pod metadata: name: pod-template spec: serviceAccount: flink containers: # Do not change the main container name - name: flink-main-container envFrom: - secretRef: name: <SECRET RESOURCE NAME> taskManager: resource: memory: "2048m" cpu: 1 job: jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder entryClass: "org.apache.flink.client.python.PythonDriver" args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"] parallelism: 1 upgradeMode: stateless
I'm using Flink Kubernetes Operator 1.3.0 and need to pass some environment variables to a Python job. I have followed the official documentation and the example runs fine. How can I inject environment variables so that I can use it inside the python file? EDIT: Here's the yaml file that I've used. Its straight from the example link above: apiVersion: flink.apache.org/v1beta1 kind: FlinkDeployment metadata: name: python-example spec: image: localhost:32000/flink-python-example:1.16.0 flinkVersion: v1_16 flinkConfiguration: taskmanager.numberOfTaskSlots: "1" serviceAccount: flink jobManager: resource: memory: "2048m" cpu: 1 taskManager: resource: memory: "2048m" cpu: 1 job: jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder entryClass: "org.apache.flink.client.python.PythonDriver" args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"] parallelism: 1 upgradeMode: stateless As you can see it's a custom resource of kind FlinkDeployment. And here's the python code: import logging import sys from pyflink.datastream import StreamExecutionEnvironment from pyflink.table import StreamTableEnvironment def python_demo(): env = StreamExecutionEnvironment.get_execution_environment() env.set_parallelism(1) t_env = StreamTableEnvironment.create(stream_execution_environment=env) t_env.execute_sql(""" CREATE TABLE orders ( order_number BIGINT, price DECIMAL(32,2), buyer ROW<first_name STRING, last_name STRING>, order_time TIMESTAMP(3) ) WITH ( 'connector' = 'datagen' )""") t_env.execute_sql(""" CREATE TABLE print_table WITH ('connector' = 'print') LIKE orders""") t_env.execute_sql(""" INSERT INTO print_table SELECT * FROM orders""") if __name__ == '__main__': logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s") python_demo()
With Watch, a long-poll connection is established with the API server. Upon establishing the connection, the API server sends an initial batch of events and then any subsequent changes; the connection is dropped once a timeout occurs, which is why your loop exits after a while. I would suggest using an Informer instead of setting up a watch yourself: it is more optimized (it keeps a local cache and automatically re-lists and re-watches when the connection drops) and easier to set up. When creating an informer, you register handler functions that are invoked when Pods get created, updated and deleted. As with watch, you can target specific Pods using a labelSelector. You can also create shared informers, which share one watch and cache across multiple controllers in the same process, further reducing the load on the API server. Below are a few links to get you started, followed by a hedged sketch: https://aly.arriqaaq.com/kubernetes-informers/ https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/ https://pkg.go.dev/k8s.io/client-go/informers
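A minimal, hedged sketch of the informer approach with client-go. Function and variable names are illustrative, and it assumes you construct the *kubernetes.Clientset and the stop channel elsewhere (as your existing code already does); if you only care about certain Pods, a label selector can be set via informers.WithTweakListOptions.

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchPodsWithInformer replaces the manual watch loop: the informer re-lists and
// re-watches automatically, so it does not exit when the server drops the connection.
func watchPodsWithInformer(clientset *kubernetes.Clientset, namespace string, restartLimit int, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset,
		30*time.Second, // resync period
		informers.WithNamespace(namespace),
	)

	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			pod, ok := newObj.(*corev1.Pod)
			if !ok {
				return
			}
			for _, c := range pod.Status.ContainerStatuses {
				if int(c.RestartCount) >= restartLimit {
					// plug in your doSomething()/doSomethingElse() logic here
					fmt.Printf("pod %s container %s reached the restart limit\n", pod.Name, c.Name)
				}
			}
		},
		DeleteFunc: func(obj interface{}) {
			// react to deleted Pods if needed
		},
	})

	factory.Start(stopCh)                                 // non-blocking, runs in goroutines
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced) // wait for the initial list
	<-stopCh                                              // block until the caller shuts us down
}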
###Instruction: I want to watch changes to Pods continuously using the client-go Kubernetes SDK. I am using the below code to watch the changes: func (c *Client) watchPods(namespace string, restartLimit int) { fmt.Println("Watch Kubernetes Pods") watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(), metav1.ListOptions{ FieldSelector: "", }) if err != nil { fmt.Printf("error create pod watcher: %v\n", err) return } for event := range watcher.ResultChan() { pod, ok := event.Object.(*corev1.Pod) if !ok || !checkValidPod(pod) { continue } owner := getOwnerReference(pod) for _, c := range pod.Status.ContainerStatuses { if reflect.ValueOf(c.RestartCount).Int() >= int64(restartLimit) { if c.State.Waiting != nil && c.State.Waiting.Reason == "CrashLoopBackOff" { doSomething() } if c.State.Terminated != nil { doSomethingElse() } } } } } The code is watching changes to the Pods, but it exits after some time. I want to run this continuously. I also want to know how much load it puts on the API Server and what is the best way to run a control loop for looking for changes. ### Response: In Watch, a long poll connection is established with the API server. Upon establishing a connection, the API server sends an initial batch of events and any subsequent changes. The connection will be dropped after a timeout occurs. I would suggest using an Informer instead of setting up a watch, as it is much more optimized and easier to setup. While creating an informer, you can register specific functions which will be invoked when pods get created, updated and deleted. Even in informers you can target specific pods using a labelSelector, similar to watch. You can also create shared informers, which are shared across multiple controllers in the cluster. This results in reducing the load on the API server. Below are few links to get you started: https://aly.arriqaaq.com/kubernetes-informers/ https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/ https://pkg.go.dev/k8s.io/client-go/informers
I want to watch changes to Pods continuously using the client-go Kubernetes SDK. I am using the below code to watch the changes: func (c *Client) watchPods(namespace string, restartLimit int) { fmt.Println("Watch Kubernetes Pods") watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(), metav1.ListOptions{ FieldSelector: "", }) if err != nil { fmt.Printf("error create pod watcher: %v\n", err) return } for event := range watcher.ResultChan() { pod, ok := event.Object.(*corev1.Pod) if !ok || !checkValidPod(pod) { continue } owner := getOwnerReference(pod) for _, c := range pod.Status.ContainerStatuses { if reflect.ValueOf(c.RestartCount).Int() >= int64(restartLimit) { if c.State.Waiting != nil && c.State.Waiting.Reason == "CrashLoopBackOff" { doSomething() } if c.State.Terminated != nil { doSomethingElse() } } } } } The code is watching changes to the Pods, but it exits after some time. I want to run this continuously. I also want to know how much load it puts on the API Server and what is the best way to run a control loop for looking for changes.
According to the documented Prometheus metrics exposed by QuestDB, questdb_memory_mem_used covers all native memory allocations, including virtual memory that has not been touched yet. Because the metric also counts mmapped files, its value can be much larger than the physical memory actually in use — and larger than the container's resource limit. You will typically see it grow when you access large tables on your instance; DISTINCT and JOIN queries also affect it. Note that mmapped memory is elastic and mostly virtual: only free memory is used for the page cache, so a large value for this metric is not by itself a problem.
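For capacity planning it may help to put that counter next to the container's actual resident memory. A hedged example, assuming cAdvisor/kubelet metrics are scraped and the pod and container names (questdb-0, questdb) are placeholders:

# virtual/native allocations reported by QuestDB (can legitimately exceed the limit)
questdb_memory_mem_used{pod="questdb-0"}

# physical memory the kernel actually holds for the container (what the 8GB limit applies to)
container_memory_working_set_bytes{pod="questdb-0", container="questdb"}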
###Instruction: We've been looking at memory for capacity planning and have a Helm-deployed 8GB-limited QuestDB instance running on one of our k8s clusters. We recently began scraping metrics off of it. I'm trying to get to the bottom of the questdb_memory_mem_used metric, which occasionally sees excursions way beyond the resource limits. Does anyone have a good handle on what contributes to this metric and what we could be seeing? For reference the NATIVE_* tagged metrics seem much more sane in the same time period: ### Response: According to the documented Prometheus metrics exposed by QuestDB, questdb_memory_mem_used includes all native memory allocations which may include virtual memory if it wasn't touched yet. This metric includes mmapped files, so that's why its value is that big. You might see that metric grow when you access large tables on your instance. DISTINCT and JOIN queries will also affect this metric. Please note mmapped memory is elastic and mostly virtual. Only free memory is used for the page cache, so it's fine if this metric has a large value.
We've been looking at memory for capacity planning and have a Helm-deployed 8GB-limited QuestDB instance running on one of our k8s clusters. We recently began scraping metrics off of it. I'm trying to get to the bottom of the questdb_memory_mem_used metric, which occasionally sees excursions way beyond the resource limits. Does anyone have a good handle on what contributes to this metric and what we could be seeing? For reference the NATIVE_* tagged metrics seem much more sane in the same time period:
As per this SO answer, and as you observed, removing "command" from the deployment file makes it work. When you set the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD env variables in your manifest, the mongo image's entrypoint creates the root user and enables --auth by itself, so you don't need to specify it explicitly (check here). Overriding command bypasses that entrypoint, which is why the user was never created. Refer to SO1 and SO2 for more information. You can also pass the username and password as secrets.
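If you still need the custom config file, a hedged sketch of an alternative is to pass it via args instead of command, so the image's entrypoint (which processes the MONGO_INITDB_* variables) keeps running — this mirrors the docker run command, where --config ... is appended as arguments after the image name:

  containers:
  - name: mongodb
    image: mongo:4.2.23
    # no "command:" here - the image's docker-entrypoint.sh still runs and
    # creates the root user from MONGO_INITDB_ROOT_USERNAME / PASSWORD
    args:
    - "--config"
    - "/etc/mongo/mongod.conf"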
###Instruction: I have created pod with below pod definition which uses mongo official docker image. The expected result here is mongo docker creates user and pwd with env variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD and then it will use /etc/mongo/mongod.conf provided to it from volume. Instead what happens is - on first connection - I am unable to connect saying user does not exist. The error disappears if I remove the command section. Any Idea how to resolve this issue. The equivalent docker command works well, but in kubernetes auth does not work if I provide a custom configuration file. docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret --name some-mongo -v /etc/mongo:/etc/mongo -v /etc/ssl/keyfile:/data/db/keyfile mongo:4.2.23 --config /etc/mongo/mongod.conf apiVersion: v1 kind: Pod metadata: name: mongodb labels: db: mongodb spec: containers: - name: mongodb image: mongo:4.2.23 command: - mongod - "--config" - "/etc/mongo/mongod.conf" env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongosecret key: user - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongosecret key: password volumeMounts: - name: mongodb-keyfile mountPath: /etc/ssl - name: mongodb-config mountPath: /etc/mongo readOnly: true volumes: - name: mongodb-keyfile secret: secretName: mongodb-keyfile defaultMode: 0600 - name: mongodb-config configMap: name: mongodb-config ### Response: As per this SO , As you said Post removing β€œcommand” from deployment file and it is working Because when you set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD ENV variables in your manifest. Mongo container will enable --auth by itself. So, you don't need to specify explicitly and check here. Refer to this SO1 , SO2 for more information. You can also pass the username and password as secrets .
I have created pod with below pod definition which uses mongo official docker image. The expected result here is mongo docker creates user and pwd with env variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD and then it will use /etc/mongo/mongod.conf provided to it from volume. Instead what happens is - on first connection - I am unable to connect saying user does not exist. The error disappears if I remove the command section. Any Idea how to resolve this issue. The equivalent docker command works well, but in kubernetes auth does not work if I provide a custom configuration file. docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret --name some-mongo -v /etc/mongo:/etc/mongo -v /etc/ssl/keyfile:/data/db/keyfile mongo:4.2.23 --config /etc/mongo/mongod.conf apiVersion: v1 kind: Pod metadata: name: mongodb labels: db: mongodb spec: containers: - name: mongodb image: mongo:4.2.23 command: - mongod - "--config" - "/etc/mongo/mongod.conf" env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongosecret key: user - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongosecret key: password volumeMounts: - name: mongodb-keyfile mountPath: /etc/ssl - name: mongodb-config mountPath: /etc/mongo readOnly: true volumes: - name: mongodb-keyfile secret: secretName: mongodb-keyfile defaultMode: 0600 - name: mongodb-config configMap: name: mongodb-config
Found the answer here: https://stackoverflow.com/a/69999245/3883694 You can disable SSL certificate verification. https://doc.traefik.io/traefik/routing/overview/#transport-configuration --- apiVersion: traefik.containo.us/v1alpha1 kind: ServersTransport metadata: name: traefik-dashboard-transport namespace: traefik spec: serverName: traefik-dashboard insecureSkipVerify: true --- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: dashboard spec: entryPoints: - web routes: - match: (PathPrefix(`/dashboard`) || Host(`traefik.example.com`)) kind: Rule services: - name: api@internal kind: TraefikService serversTransport: traefik-dashboard-transport
###Instruction: I'm moving my project to Kubernetes using Traefik for routing and MetalLB as my load balancer. I've deployed several apps and I'd like to make use of official Kubernetes-Dashboard. So I deployed the Kubernetes-Dashboard using recommended config and created IngressRoute: # dashboard.yaml apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: kubernetes-dashboard namespace: kubernetes-dashboard spec: entryPoints: - websecure routes: - match: Host(`k8s.example.com`, `www.k8s.example.com`) kind: Rule middlewares: # - name: https-redirectscheme # - name: nginx-basic-auth services: - kind: Service name: kubernetes-dashboard # namespace: kubernetes-dashboard port: 443 tls: secretName: k8s.example.com-tls It shows up in the Traefik Dashboard, but when I try to access k8s.example.com I get Internal Server Error. Thank you ### Response: Found the answer here: https://stackoverflow.com/a/69999245/3883694 You can disable SSL certificate verification. https://doc.traefik.io/traefik/routing/overview/#transport-configuration --- apiVersion: traefik.containo.us/v1alpha1 kind: ServersTransport metadata: name: traefik-dashboard-transport namespace: traefik spec: serverName: traefik-dashboard insecureSkipVerify: true --- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: dashboard spec: entryPoints: - web routes: - match: (PathPrefix(`/dashboard`) || Host(`traefik.example.com`)) kind: Rule services: - name: api@internal kind: TraefikService serversTransport: traefik-dashboard-transport
I'm moving my project to Kubernetes using Traefik for routing and MetalLB as my load balancer. I've deployed several apps and I'd like to make use of official Kubernetes-Dashboard. So I deployed the Kubernetes-Dashboard using recommended config and created IngressRoute: # dashboard.yaml apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: kubernetes-dashboard namespace: kubernetes-dashboard spec: entryPoints: - websecure routes: - match: Host(`k8s.example.com`, `www.k8s.example.com`) kind: Rule middlewares: # - name: https-redirectscheme # - name: nginx-basic-auth services: - kind: Service name: kubernetes-dashboard # namespace: kubernetes-dashboard port: 443 tls: secretName: k8s.example.com-tls It shows up in the Traefik Dashboard, but when I try to access k8s.example.com I get Internal Server Error. Thank you
As of 2023, it is possible. Just run: minikube ssh '<command>' For example: minikube ssh 'ls -la'
###Instruction: Just curious if it is possible to execute a command inside minikube without doing minikube ssh and then executing the command. Something like: minikube ssh exec -it <command> ### Response: As of 2023, it is possible. Just run: minikube ssh '<command>' For example: minikube ssh 'ls -la'
Just curious if it is possible to execute a command inside minikube without doing minikube ssh and then executing the command. Something like: minikube ssh exec -it <command>
You should think of template (and the Helm-specific include) like function calls that take a single parameter. Here you're passing that parameter as $, a special variable that refers to the top-level object. You probably want ., which within a range loop refers to the current item. {{ range $.Values.imageCredentials }} data: .dockerconfigjson: {{ template "imagePullSecret" . }} {{/* ^ not `$` */}} {{ end }} You might be confusing this case with the similar Create kubernetes docker-registry secret from yaml file for each lookup namespaces?. In that question, the template is trying to refer to .Values.imageCredentials. This expression can be decomposed as: within . (the template parameter), find the field Values, and within that find the field imageCredentials. In that question the template parameter must be the top-level Helm object so that it can dereference .Values. But in your example here, you loop over a list in the top-level template, and need to pass the individual values into the supporting template. {{/* Your question: template accepts a sub-item of the values; pass `.` or another derived expression */}} {{- define "imagePullSecret" -}} {{ printf ... .registry .email ... }} {{- end -}} {{/* Other question: template accepts the top-level Helm object; pass `$`, a saved reference to the top-level object, or you can use `.` outside a `range` or `with` block that redefines it */}} {{- define "imagePullSecret" -}} {{- with .Values.imageCredentials }}...{{- end }} {{- end -}}
###Instruction: Secret value is returning NIL When i run the command below to check the values in the secret kubectl get secret cred2 -n service-b -o jsonpath='{.data.*}' | base64 -d It returns a Nil value. How do i get the value that was passed on the values.yaml file? {"auths":{"%!s(<nil>)":{"username":"%!s(<nil>)","password":"%!s(<nil>)","email":"%!s(<nil>)","auth":"JSFzKDxuaWw+KTolIXMoPG5pbD4p"}}}%` Sample Code used to generate the secret {{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }} {{ range $.Values.imageCredentials }} apiVersion: v1 kind: Secret metadata: name: {{ .name }} namespace: {{ $namespace.metadata.name }} type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: {{ template "imagePullSecret" $ }} {{ end }} {{ end }} values.yaml imageCredentials: - name: cred1 registry: quay.io username: someone password: sillyness email: someone@host.com - name: cred2 registry: quay.io username: someone password: sillyness email: someone@host.com _helpers {{- define "imagePullSecret" }} {{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }} {{- end }} ### Response: You should think of template (and the Helm-specific include) like function calls that take a single parameter. Here you're passing that parameter as $, a special variable that refers to the top-level object. You probably want ., which within a range loop refers to the current item. {{ range $.Values.imageCredentials }} data: .dockerconfigjson: {{ template "imagePullSecret" . }} {{/* ^ not `$` */}} {{ end }} You might be confusing this case with the similar Create kubernetes docker-registry secret from yaml file for each lookup namespaces?. In that question, the template is trying to refer to .Values.imageCredentials. This expression can be decomposed as: within . (the template parameter), find the field Values, and within that find the field imageCredentials. In that question the template parameter must be the top-level Helm object so that it can dereference .Values. But in your example here, you loop over a list in the top-level template, and need to pass the individual values into the supporting template. {{/* Your question: template accepts a sub-item of the values; pass `.` or another derived expression */}} {{- define "imagePullSecret" -}} {{ printf ... .registry .email ... }} {{- end -}} {{/* Other question: template accepts the top-level Helm object; pass `$`, a saved reference to the top-level object, or you can use `.` outside a `range` or `with` block that redefines it */}} {{- define "imagePullSecret" -}} {{- with .Values.imageCredentials }}...{{- end }} {{- end -}}
Secret value is returning NIL When i run the command below to check the values in the secret kubectl get secret cred2 -n service-b -o jsonpath='{.data.*}' | base64 -d It returns a Nil value. How do i get the value that was passed on the values.yaml file? {"auths":{"%!s(<nil>)":{"username":"%!s(<nil>)","password":"%!s(<nil>)","email":"%!s(<nil>)","auth":"JSFzKDxuaWw+KTolIXMoPG5pbD4p"}}}%` Sample Code used to generate the secret {{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }} {{ range $.Values.imageCredentials }} apiVersion: v1 kind: Secret metadata: name: {{ .name }} namespace: {{ $namespace.metadata.name }} type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: {{ template "imagePullSecret" $ }} {{ end }} {{ end }} values.yaml imageCredentials: - name: cred1 registry: quay.io username: someone password: sillyness email: someone@host.com - name: cred2 registry: quay.io username: someone password: sillyness email: someone@host.com _helpers {{- define "imagePullSecret" }} {{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }} {{- end }}
...under the Authorization section it says that authorization is controlled through a Policy. The sample you saw describes a cluster that uses ABAC (attribute-based access control). Judging from your kubectl output, your cluster most likely uses RBAC, where that kind of access policy is not used — which is why no such resource shows up in kubectl api-resources. So what exactly is a Policy? How is it setup? Here's the official documentation for Attribute-based access control; a hedged example follows below.
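For reference, a hedged example of what ABAC looks like if you ever do need it — the user name and file path are placeholders. The policy file contains one JSON object per line and only takes effect when the API server is started with the matching flags:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}

# kube-apiserver flags
--authorization-mode=ABAC
--authorization-policy-file=/etc/kubernetes/abac-policy.jsonl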
###Instruction: In the Kubernetes documentation for controlling-access to the API server, under the Authorization section it says that authorization is controlled through a Policy. However, a Policy is not found as an API resource: ❯ k api-resources | grep -i policy networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget ❯ kubectl version --short Flag --short has been deprecated, and will be removed in the future. The --short output will become the default. Client Version: v1.25.2 Kustomize Version: v4.5.7 Server Version: v1.25.3+k3s1 So what exactly is a Policy? How is it setup? The docs are not very clear on this point. ### Response: ...under the Authorization section it says that authorization is controlled through a Policy. The sample you saw is a cluster that uses ABAC. From your screenshot your cluster is likely to use RBAC which access policy is not used. So what exactly is a Policy? How is it setup? Here's the official documentation for Attribute-based access control
In the Kubernetes documentation for controlling-access to the API server, under the Authorization section it says that authorization is controlled through a Policy. However, a Policy is not found as an API resource: ❯ k api-resources | grep -i policy networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget ❯ kubectl version --short Flag --short has been deprecated, and will be removed in the future. The --short output will become the default. Client Version: v1.25.2 Kustomize Version: v4.5.7 Server Version: v1.25.3+k3s1 So what exactly is a Policy? How is it setup? The docs are not very clear on this point.
Correct me if I'm wrong, but from the description of your problem it looks like you actually have two problems:

1. separate services' code living in the same repo (team A and team B);
2. several environments (development/production, for example).

The second issue can be solved fairly easily with Helm, for example: it lets you template your deployment manifests and pass a different config to each deployment (a hedged example follows below). The first one can also be partly solved by Helm, since you can separate the teams' builds using templating as well. However, a few years ago I was working on a .NET monorepo and faced yet another problem: every time a PR was merged to our git repo, a build was triggered in Jenkins for every service we had, even those that had no changes. From the description of your problem it is not clear whether you have a Jenkins pipeline configured and/or are facing something similar, but if you are, you can have a look at what I did to work around the issue: repo. I hope that helps.
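As a hedged illustration of the Helm route (chart path, value files, release and namespace names are all made up for the example), each team deploys the shared chart with its own values into its own namespace:

helm upgrade --install team-a-services ./deploy/chart -f values/team-a.yaml -n team-a --create-namespace
helm upgrade --install team-b-services ./deploy/chart -f values/team-b.yaml -n team-b --create-namespace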
###Instruction: I'm not sure how to phrase this so I apologize if the title of the question does not make sense to you. Due to various historical reasons, I have multiple teams contribute to the same code repo, that serves multiple service endpoints. Currently all teams deployments and releases are done together which creates a lot of churn. I'm trying to get to this state: team A and B can still share the same code base, but they can deploy separately using different Kubernetes namespace? Like: Team A's code is all under com/mycompany/team_a, team B's under com/mycompany/team_b Somewhere in the repo there is a config that does the mapping: com/mycompany/team_a/* => config_team_a.yaml, that has a Kubernetes config, maybe with namespace TeamA/ServiceA com/mycompany/team_b/* => config_team_b.yaml with namespace TeamB/ServiceB So that they can build their image separately and, of course, deploy separately. ### Response: Correct me if I'm wrong, but from the description of your problem it looks like you actually have two problems: The fact that you have separate services code in the same repo (team A and team B); The fact that you have several environments (development/production, for example) The second issue can be easily solved if you use Helm, for example. It allows you to template your builds and pass different configs to it. The first one, can also be partly solved by helm, since you can also separate your teams builds using templating. However, a few years ago, I was working on a .net monorepo and faced yet another problem: every time there was a PR merged to our git repo, a build was triggered in Jenkins for every service we had, even those that did not have changes. From the description of your problem, it is not clear to me if you have a Jenkins pipeline configured and/or if you are also facing something similar, but if you are, you can have a look at what I did to workaround the issue: repo. Feel free to have a look and I hope that helps.
I'm not sure how to phrase this so I apologize if the title of the question does not make sense to you. Due to various historical reasons, I have multiple teams contribute to the same code repo, that serves multiple service endpoints. Currently all teams deployments and releases are done together which creates a lot of churn. I'm trying to get to this state: team A and B can still share the same code base, but they can deploy separately using different Kubernetes namespace? Like: Team A's code is all under com/mycompany/team_a, team B's under com/mycompany/team_b Somewhere in the repo there is a config that does the mapping: com/mycompany/team_a/* => config_team_a.yaml, that has a Kubernetes config, maybe with namespace TeamA/ServiceA com/mycompany/team_b/* => config_team_b.yaml with namespace TeamB/ServiceB So that they can build their image separately and, of course, deploy separately.
Once the PVC/PV are created (https://kubernetes.io/docs/concepts/storage/persistent-volumes/), there are a number of possible solutions. For this specific question, options 1 and 2 will suffice; the rest are listed for reference, and the list does not try to be complete.

1. Simplest and native, kubectl cp: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp
2. rsync - still quite simple, but also robust; recommended for the task (both options below were tested).
   TO pod: https://serverfault.com/questions/741670/rsync-files-to-a-kubernetes-pod
   FROM pod: https://cybercyber.org/using-rsync-to-copy-files-to-and-from-a-kubernetes-pod.html
3. tar, but incremental: https://www.freshleafmedia.co.uk/blog/incrementally-copying-rsyncing-files-from-a-kubernetes-pod
4. Tools for synchronisation, backup, etc. - for example, https://github.com/backube/volsync

Note that whichever option you pick, the files are copied through a pod that mounts the PVC; a hedged sketch follows below.
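A hedged sketch of option 1: mount the PVC into a throwaway helper pod and kubectl cp into it. The PVC name, mount path and namespace are placeholders; kubectl cp needs tar inside the container image, which busybox provides.

apiVersion: v1
kind: Pod
metadata:
  name: pv-loader
spec:
  containers:
  - name: loader
    image: busybox:1.36
    command: ["sleep", "3600"]   # keep the pod alive while copying
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc          # assumed PVC name

kubectl cp ./local-files/ default/pv-loader:/mnt/data

Once the files are in place, delete the helper pod and mount the same PVC in your application pod.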
###Instruction: I want to add or copy files into persistent volume and then use it in container using volume mount ?any help ### Response: Once PVC/PV are created (https://kubernetes.io/docs/concepts/storage/persistent-volumes/), there are number of possible solutions. For specific question, options 1 and 2 will suffice. Listing more for reference, however this list does not try to be complete. Simplest and native, kubectl cp: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp rsync - still quite simple, but also robust. Recommended for a task (both of below options were tested) TO pod: https://serverfault.com/questions/741670/rsync-files-to-a-kubernetes-pod FROM pod: https://cybercyber.org/using-rsync-to-copy-files-to-and-from-a-kubernetes-pod.html tar, but incremental: https://www.freshleafmedia.co.uk/blog/incrementally-copying-rsyncing-files-from-a-kubernetes-pod Tools for synchronisation, backup, etc For example, https://github.com/backube/volsync
I want to add or copy files into persistent volume and then use it in container using volume mount ?any help
Make sure you have created a Fargate profile with the namespace selector set to argocd — on a Fargate-only cluster, pods in a namespace that no profile selects have nowhere to run and stay Pending. That is most likely the issue here. Refer to https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile; a hedged example command follows below.
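A hedged example with eksctl (the cluster name is a placeholder). Pods that were already Pending before the profile existed may need to be recreated so they get scheduled onto Fargate:

eksctl create fargateprofile \
  --cluster <your-cluster-name> \
  --name argocd \
  --namespace argocd

# recreate the stuck pods so the Fargate scheduler picks them up
kubectl -n argocd delete pods --all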
###Instruction: Is there anyone who uses argo cd on eks fargate?? It seems that there is an issue with Argo setup on Fargate. All pods are in pending state I’ve tried installing on argocd namespace and existing ones. Still doesn’t work I tried to install it using the commands below: kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml ### Response: Make sure you have created a fargate profile with the namespace selector as argocd. It might be one of the issues. refer this https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile
Is there anyone who uses argo cd on eks fargate?? It seems that there is an issue with Argo setup on Fargate. All pods are in pending state I’ve tried installing on argocd namespace and existing ones. Still doesn’t work I tried to install it using the commands below: kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
It's due to the node's pod limit, which on EKS is derived from the instance type's IP limit. Per the official Amazon documentation, a t3.micro can use at most 2 network interfaces with 2 private IPv4 addresses each, which works out to roughly 4 usable pod slots — the first IP is used by the node itself, and default system pods (the VPC CNI and kube-proxy DaemonSets, CoreDNS, etc.) take up slots as well. Add another node, or upgrade to a larger instance type that can handle more pods. You can check the exact limit as shown below.
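A hedged way to see the limit on your own node; the formula is the one AWS documents for the default VPC CNI, and the node name is taken from your kubectl get nodes output:

# maxPods = (ENIs) x (IPv4 addresses per ENI - 1) + 2  ->  t3.micro: 2 x (2 - 1) + 2 = 4
kubectl get node ip-10-0-12-61.ec2.internal -o jsonpath='{.status.allocatable.pods}'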
###Instruction: I have created a new EKS cluster with 1 worker node in a public subnet. I am able to query node, connect to the cluster, and run pod creation command, however, when I am trying to create a pod it fails with the below error got by describing the pod. Please guide. Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 81s default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. Warning FailedScheduling 16m default-scheduler 0/2 nodes are available: 2 Too many pods, 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable, 3 Too many pods. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling. Warning FailedScheduling 14m (x3 over 22m) default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 Too many pods. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling. Warning FailedScheduling 12m default-scheduler 0/2 nodes are available: 1 Too many pods, 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. Warning FailedScheduling 7m14s default-scheduler no nodes available to schedule pods Warning FailedScheduling 105s (x5 over 35m) default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. I am able to get status of the node and it looks ready: kubectl get nodes NAME STATUS ROLES AGE VERSION ip-10-0-12-61.ec2.internal Ready <none> 15m v1.24.7-eks-fb459a0 While troubleshooting I tried below options: recreate the complete demo cluster - still the same error try recreating pods with different images - still the same error trying to increase to instance type to t3.micro - still the same error reviewed security groups and other parameters in a cluster - Couldnt come to RCA ### Response: it's due to the node's POD limit or IP limit on Nodes. So if we see official Amazon doc, t3.micro maximum 2 interface you can use and 2 private IP. Roughly you might be getting around 4 IPs to use and 1st IP get used by Node etc, There will be also default system PODs running as Daemon set and so. Add new instance or upgrade to larger instance who can handle more pods.
I have created a new EKS cluster with 1 worker node in a public subnet. I am able to query node, connect to the cluster, and run pod creation command, however, when I am trying to create a pod it fails with the below error got by describing the pod. Please guide. Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 81s default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. Warning FailedScheduling 16m default-scheduler 0/2 nodes are available: 2 Too many pods, 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable, 3 Too many pods. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling. Warning FailedScheduling 14m (x3 over 22m) default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 Too many pods. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling. Warning FailedScheduling 12m default-scheduler 0/2 nodes are available: 1 Too many pods, 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. Warning FailedScheduling 7m14s default-scheduler no nodes available to schedule pods Warning FailedScheduling 105s (x5 over 35m) default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. I am able to get status of the node and it looks ready: kubectl get nodes NAME STATUS ROLES AGE VERSION ip-10-0-12-61.ec2.internal Ready <none> 15m v1.24.7-eks-fb459a0 While troubleshooting I tried below options: recreate the complete demo cluster - still the same error try recreating pods with different images - still the same error trying to increase to instance type to t3.micro - still the same error reviewed security groups and other parameters in a cluster - Couldnt come to RCA
You may have seen JSONPath syntax like this floating around the internet and hoped that you could select a list item and patch it using Kustomize. /spec/containers[name=my-app]/command As @Rico mentioned in his answer: This is a limitation with JSON6902 - it only accepts paths using JSONPointer syntax, defined by JSON6901. So, no, you cannot currently address a list item using [key=value] syntax when using kustomize's patchesJson6902. However, the problem presented in the original question around dealing with changes to the order of list items does have a solution using JSONPointer syntax (JSON6901) without moving to Strategic Merge Patch (which can depend on CRD authors correctly annotating how list-item merges should be applied). Simply add another JSON6902 operation to your patches to test that the item remains at the index you specified. # First, test that the item is still at the list index you expect - op: test path: /spec/containers/0/name value: my-app # Now that you know your item is still at index-0, it's safe to patch its command - op: replace path: /spec/containers/0/command value: ["sh", "-c", "tail -f /dev/null"] The test operation will fail your patch if the value at the specified path does not match what is provided. This way, you can be sure that your other patch operation's dependency on the item's index is still valid! I use this trick especially when dealing with custom resources, since I: A) Don't have to give kustomize a whole new openAPI spec, and B) Don't have to depend on the CRD authors having added the correct extension annotation (like: "x-kubernetes-patch-merge-key": "name") to make sure my strategic merge patches on list items work the way I need them to.
###Instruction: I'm trying to see if there's a way to apply a kustomize patchTransformer to a specific container in a pod other than using its array index. For example, if I have 3 containers in a pod, (0, 1, 2) and I want to patch container "1" I would normally do something like this: patch: |- - op: add path: /spec/containers/1/command value: ["sh", "-c", "tail -f /dev/null"] That is heavily dependent on that container order remaining static. If container "1" is removed for whatever reason, the array is reshuffled and container "2" suddenly becomes container "1", making my patch no longer applicable. Is there a way to patch by name, or target a label/annotation, or some other mechanism? path: /spec/containers/${NAME_OF_CONTAINER}/command Any insight is greatly appreciated. ### Response: You may have seen JSONPath syntax like this floating around the internet and hoped that you could select a list item and patch it using Kustomize. /spec/containers[name=my-app]/command As @Rico mentioned in his answer: This is a limitation with JSON6902 - it only accepts paths using JSONPointer syntax, defined by JSON6901. So, no, you cannot currently address a list item using [key=value] syntax when using kustomize's patchesJson6902. However, the problem presented in the original question around dealing with changes to the order of list items does have a solution using JSONPointer syntax (JSON6901) without moving to Strategic Merge Patch (which can depend on CRD authors correctly annotating how list-item merges should be applied). Simply add another JSON6902 operation to your patches to test that the item remains at the index you specified. # First, test that the item is still at the list index you expect - op: test path: /spec/containers/0/name value: my-app # Now that you know your item is still at index-0, it's safe to patch its command - op: replace path: /spec/containers/0/command value: ["sh", "-c", "tail -f /dev/null"] The test operation will fail your patch if the value at the specified path does not match what is provided. This way, you can be sure that your other patch operation's dependency on the item's index is still valid! I use this trick especially when dealing with custom resources, since I: A) Don't have to give kustomize a whole new openAPI spec, and B) Don't have to depend on the CRD authors having added the correct extension annotation (like: "x-kubernetes-patch-merge-key": "name") to make sure my strategic merge patches on list items work the way I need them to.
I'm trying to see if there's a way to apply a kustomize patchTransformer to a specific container in a pod other than using its array index. For example, if I have 3 containers in a pod, (0, 1, 2) and I want to patch container "1" I would normally do something like this: patch: |- - op: add path: /spec/containers/1/command value: ["sh", "-c", "tail -f /dev/null"] That is heavily dependent on that container order remaining static. If container "1" is removed for whatever reason, the array is reshuffled and container "2" suddenly becomes container "1", making my patch no longer applicable. Is there a way to patch by name, or target a label/annotation, or some other mechanism? path: /spec/containers/${NAME_OF_CONTAINER}/command Any insight is greatly appreciated.
For anyone that stumbles upon this question trying to setup Controur, gRPC, and TLS; you want to use HTTPProxy instead. Working configuration with TLS: apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: service-proxy spec: virtualhost: fqdn: service.example.com corsPolicy: allowCredentials: true allowOrigin: - "*" allowMethods: - GET - POST - OPTIONS allowHeaders: - authorization - cache-control - x-grpc-web - User-Agent - x-accept-content-transfer-encoding - x-accept-response-streaming - x-user-agent - x-grpc-web - grpc-timeout - Grpc-Message - Grpc-Status - content-type tls: secretName: service-secret routes: - conditions: - prefix: / services: - name: my-service port: 80 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: my-app run: my-service name: my-service spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 selector: matchLabels: app: my-app run: my-service template: metadata: labels: app: my-app run: my-service spec: containers: - image: image:latest name: my-service resources: {} imagePullPolicy: Always readinessProbe: initialDelaySeconds: 10 periodSeconds: 2 httpGet: path: /health-check port: 3000 --- apiVersion: v1 kind: Service metadata: name: my-service labels: app: my-app run: my-service annotations: projectcontour.io/upstream-protocol.h2c: "80" spec: ports: - port: 80 targetPort: 50051 protocol: TCP selector: run: my-service A Couple notes My understanding from the documentation is that the projectcontour.io/upstream-protocol.h2c should actually be projectcontour.io/upstream-protocol.h2, but doing that I get a TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER error in the response. This configuration, with h2c, seems to be working and actually employing TLS to transport request/response data. I haven't gone through and groomed the allowHeadrs, this is just a set that's working for me right now using grpcurl and a web application frontend built with React using the awesome nice-grpc-web library. Obligatory - you should not use "*" as an allow origin in production because it is a security concern - warning (really... don't do it). The TLS secret secret-service was actually manually generated, I haven't tested the cert-manager stuff yet.
###Instruction: What is the best way to allow CORS requests at this time? (Given that CORS support in the Contour Ingress currently is in the "parking lot") My particular use case is hosting a GRPC service, which envoy reverse proxies. Conveniently, contour also supports grpc-web out-of-the-box, which we'd like to use for our web service. However, given that CORS are not supported, we cannot do cross-domain requests. Apart from making our web app use the same domain as the GRPC api, is there any other solution that could fill our need at the moment? Basically, we'd want the envoy to be configured very similarly to the GRPC web example config. ### Response: For anyone that stumbles upon this question trying to setup Controur, gRPC, and TLS; you want to use HTTPProxy instead. Working configuration with TLS: apiVersion: projectcontour.io/v1 kind: HTTPProxy metadata: name: service-proxy spec: virtualhost: fqdn: service.example.com corsPolicy: allowCredentials: true allowOrigin: - "*" allowMethods: - GET - POST - OPTIONS allowHeaders: - authorization - cache-control - x-grpc-web - User-Agent - x-accept-content-transfer-encoding - x-accept-response-streaming - x-user-agent - x-grpc-web - grpc-timeout - Grpc-Message - Grpc-Status - content-type tls: secretName: service-secret routes: - conditions: - prefix: / services: - name: my-service port: 80 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: my-app run: my-service name: my-service spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 selector: matchLabels: app: my-app run: my-service template: metadata: labels: app: my-app run: my-service spec: containers: - image: image:latest name: my-service resources: {} imagePullPolicy: Always readinessProbe: initialDelaySeconds: 10 periodSeconds: 2 httpGet: path: /health-check port: 3000 --- apiVersion: v1 kind: Service metadata: name: my-service labels: app: my-app run: my-service annotations: projectcontour.io/upstream-protocol.h2c: "80" spec: ports: - port: 80 targetPort: 50051 protocol: TCP selector: run: my-service A Couple notes My understanding from the documentation is that the projectcontour.io/upstream-protocol.h2c should actually be projectcontour.io/upstream-protocol.h2, but doing that I get a TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER error in the response. This configuration, with h2c, seems to be working and actually employing TLS to transport request/response data. I haven't gone through and groomed the allowHeadrs, this is just a set that's working for me right now using grpcurl and a web application frontend built with React using the awesome nice-grpc-web library. Obligatory - you should not use "*" as an allow origin in production because it is a security concern - warning (really... don't do it). The TLS secret secret-service was actually manually generated, I haven't tested the cert-manager stuff yet.
What is the best way to allow CORS requests at this time? (Given that CORS support in the Contour Ingress currently is in the "parking lot") My particular use case is hosting a GRPC service, which envoy reverse proxies. Conveniently, contour also supports grpc-web out-of-the-box, which we'd like to use for our web service. However, given that CORS are not supported, we cannot do cross-domain requests. Apart from making our web app use the same domain as the GRPC api, is there any other solution that could fill our need at the moment? Basically, we'd want the envoy to be configured very similarly to the GRPC web example config.
When you create the Service, it is associated with Pods selected by the label selector defined in the Service spec.selector. When a request is made to the Service, the Control Plane retrieves its spec.ports[*].targetPort value: If it does not exist, the value of spec.ports[*].port field is used instead to forward traffic to the Pods. If it does exist and it is a number, the targetPort value is used to forward traffic to the Pods. If it is a string, the Control Plane looks up the port by name in spec.ports[*].name of the Pod, and uses the port with the matching name as the target port. Here is the relevant piece of code in Kubernetes: // FindPort locates the container port for the given pod and portName. If the // targetPort is a number, use that. If the targetPort is a string, look that // string up in all named ports in all containers in the target pod. If no // match is found, fail. func FindPort(pod *v1.Pod, svcPort *v1.ServicePort) (int, error) { portName := svcPort.TargetPort switch portName.Type { case intstr.String: name := portName.StrVal for _, container := range pod.Spec.Containers { for _, port := range container.Ports { if port.Name == name && port.Protocol == svcPort.Protocol { return int(port.ContainerPort), nil } } } case intstr.Int: return portName.IntValue(), nil } return 0, fmt.Errorf("no suitable port for manifest: %s", pod.UID) } source
###Instruction: There's a working k8s configuration which uses the same port name in a pod and in a service. Here's a config part of the pod: ports: - containerPort: 8000 name: app-port protocol: TCP Here's a config part of the service: ports: - name: app-port nodePort: 32000 port: 8000 protocol: TCP targetPort: app-port type: NodePort How is the name app-port resolved? ### Response: When you create the Service, it is associated with Pods selected by the label selector defined in the Service spec.selector. When a request is made to the Service, the Control Plane retrieves its spec.ports[*].targetPort value: If it does not exist, the value of spec.ports[*].port field is used instead to forward traffic to the Pods. If it does exist and it is a number, the targetPort value is used to forward traffic to the Pods. If it is a string, the Control Plane looks up the port by name in spec.ports[*].name of the Pod, and uses the port with the matching name as the target port. Here is the relevant piece of code in Kubernetes: // FindPort locates the container port for the given pod and portName. If the // targetPort is a number, use that. If the targetPort is a string, look that // string up in all named ports in all containers in the target pod. If no // match is found, fail. func FindPort(pod *v1.Pod, svcPort *v1.ServicePort) (int, error) { portName := svcPort.TargetPort switch portName.Type { case intstr.String: name := portName.StrVal for _, container := range pod.Spec.Containers { for _, port := range container.Ports { if port.Name == name && port.Protocol == svcPort.Protocol { return int(port.ContainerPort), nil } } } case intstr.Int: return portName.IntValue(), nil } return 0, fmt.Errorf("no suitable port for manifest: %s", pod.UID) } source
There's a working k8s configuration which uses the same port name in a pod and in a service. Here's a config part of the pod: ports: - containerPort: 8000 name: app-port protocol: TCP Here's a config part of the service: ports: - name: app-port nodePort: 32000 port: 8000 protocol: TCP targetPort: app-port type: NodePort How is the name app-port resolved?
Given you have a service for your Angular app like this: apiVersion: v1 kind: Service metadata: name: your-angular-app namespace: dev spec: selector: app: your-angular-app ports: - protocol: TCP port: 80 targetPort: 80 name: http And a service for your Spring Boot app in the same namespace like this: apiVersion: v1 kind: Service metadata: name: your-spring-app namespace: dev spec: selector: app: your-spring-app ports: - protocol: TCP port: 80 targetPort: 80 name: http Make sure that your deployments (or whatever you use to actually create your app instances in the cluster) have matching labels. Deployment example: apiVersion: apps/v1 kind: Deployment metadata: name: your-spring-app namespace: dev spec: replicas: 1 selector: matchLabels: app: your-spring-app # <- this must match the spec.selector.app in the service Assuming that your Spring app offers all API endpoints with the /api prefix, you could use an Ingress like this: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: your-ingress namespace: dev spec: rules: - host: your-site.com http: paths: - path: / pathType: Prefix backend: service: name: your-angular-app port: name: http - path: /api pathType: Prefix backend: service: name: your-spring-app port: name: http In a cloud environment you would most likely need additional annotations on your Ingress like the Ingress class, but these information can be found in the Cloud provider's documentation.
###Instruction: I am trying to create two ingress service, one which will expose the frontend to internet and 2nd which will connect backend to frontend. This is in Minikube and I have the ingress addon enabled. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: angular-ingress namespace: default annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: angular.test - http: paths: - path: / pathType: Prefix backend: service: name: angular-service port: number: 8080 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: spring-ingress namespace: default annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: springboot.test - http: paths: - path: /exact/form-config/applicant/field?section=Additional%20Details&page-index=1&enabled-on= pathType: Exact backend: service: name: angular-service port: number: 8080 I want to use the name of the backend host url in my angular application for springboot and I am want them to connect without using IP since IP are dynamic. Is this correct? ### Response: Given you have a service for your Angular app like this: apiVersion: v1 kind: Service metadata: name: your-angular-app namespace: dev spec: selector: app: your-angular-app ports: - protocol: TCP port: 80 targetPort: 80 name: http And a service for your Spring Boot app in the same namespace like this: apiVersion: v1 kind: Service metadata: name: your-spring-app namespace: dev spec: selector: app: your-spring-app ports: - protocol: TCP port: 80 targetPort: 80 name: http Make sure that your deployments (or whatever you use to actually create your app instances in the cluster) have matching labels. Deployment example: apiVersion: apps/v1 kind: Deployment metadata: name: your-spring-app namespace: dev spec: replicas: 1 selector: matchLabels: app: your-spring-app # <- this must match the spec.selector.app in the service Assuming that your Spring app offers all API endpoints with the /api prefix, you could use an Ingress like this: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: your-ingress namespace: dev spec: rules: - host: your-site.com http: paths: - path: / pathType: Prefix backend: service: name: your-angular-app port: name: http - path: /api pathType: Prefix backend: service: name: your-spring-app port: name: http In a cloud environment you would most likely need additional annotations on your Ingress like the Ingress class, but these information can be found in the Cloud provider's documentation.
I am trying to create two ingress service, one which will expose the frontend to internet and 2nd which will connect backend to frontend. This is in Minikube and I have the ingress addon enabled. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: angular-ingress namespace: default annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: angular.test - http: paths: - path: / pathType: Prefix backend: service: name: angular-service port: number: 8080 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: spring-ingress namespace: default annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: springboot.test - http: paths: - path: /exact/form-config/applicant/field?section=Additional%20Details&page-index=1&enabled-on= pathType: Exact backend: service: name: angular-service port: number: 8080 I want to use the name of the backend host url in my angular application for springboot and I am want them to connect without using IP since IP are dynamic. Is this correct?
During an issue such as this one, a quick way to get more details is to look at the "Health issues" section on the EKS console page for the cluster. In a case showing this same error description, the health issue pointed to an access permissions problem with the node instance role (for example eks-quickstart-test-ManagedNodeInstance): the role lacks permissions to the cluster, and that can be fixed in the aws-auth ConfigMap as described below.

Run the following command from the role/user which created the EKS cluster:

kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml

Add the role along with the required permissions such as system:masters in the mapRoles: section as shown below:

mapRoles: |
  - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes
      - system:masters

Apply the updates to the cluster with the command:

kubectl apply -f aws-auth.yaml

This should resolve the permission issues, and your cluster nodes should show as healthy and ready for pods to be scheduled.
###Instruction: We have a EKS cluster running the 1.21 version. We want to give admin access to worker nodes. We modified the aws-auth config map and added "system:masters" for eks worker nodes role. Below is the code snipped for the modified configmap. data: mapRoles: | - groups: - system:nodes - system:bootstrappers - system:masters rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role username: system:node:{{EC2PrivateDNSName}} After adding this section, the EKS worker nodes successfully got admin access to the cluster. But in the EKS dashboard, the nodegroups are in a degraded state. It shows the below error in the Health issues section. Not able to update cluster due to this error. Please help. Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap. ### Response: During an issue such as this one, a quick way to get more details is by looking at the "Health issues" section on the EKS service page. As can be seen in the attached screenshot below, which has the same error in the description, there is an access permissions issue with the specific role eks-quickstart-test-ManagedNodeInstance. The aforementioned role lacks permissions to the cluster and the same can be updated in the aws-auth.yaml configuration as described below: Run the following command from the role/user which created the EKS cluster: kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml Add the role along with the required permissions such as system:masters in the mapRoles: section as shown below: mapRoles: | - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes - system:masters Apply the updates to the cluster with the command: kubectl apply -f aws-auth.yaml This should resolve the permission issues and your cluster nodes should be visible as healthy and ready for pods to be scheduled.
We have a EKS cluster running the 1.21 version. We want to give admin access to worker nodes. We modified the aws-auth config map and added "system:masters" for eks worker nodes role. Below is the code snipped for the modified configmap. data: mapRoles: | - groups: - system:nodes - system:bootstrappers - system:masters rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role username: system:node:{{EC2PrivateDNSName}} After adding this section, the EKS worker nodes successfully got admin access to the cluster. But in the EKS dashboard, the nodegroups are in a degraded state. It shows the below error in the Health issues section. Not able to update cluster due to this error. Please help. Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.
Usually you refer to the target port by its port number. But you can give a name to each port mentioned in the pod configuration and use this name in the service to refer to that port; this way your service knows which port it needs to pick from the pod. To answer your question: the port name which you created in the pod belongs to the pod configuration, and it is exactly what targetPort: app-port in the service is resolved against - Kubernetes looks up the container port with that name on each backing pod when it builds the service's endpoints. The name: app-port attribute on the service's own port is a separate attribute that belongs only to the service; it is used by clients of the service (for example an Ingress backend, or anything that picks a port of a multi-port service by name), not to resolve the targetPort. The practical benefit of a named targetPort is that pods can change their container port number without the service definition having to change. This resolution is done by the endpoints controller when it builds the service's endpoints, not by DNS. For more information you can refer to this link1 link2
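A small sketch of why a named targetPort is useful in practice (hypothetical names, not from the question): two pod templates can expose the same logical port under the same name but on different numbers, and one Service reaches both by name without caring about the numbers.

# pod template A
ports:
- containerPort: 8000
  name: app-port
# pod template B: different number, same name
ports:
- containerPort: 9000
  name: app-port
---
apiVersion: v1
kind: Service
metadata:
  name: app-svc
spec:
  selector:
    app: my-app
  ports:
  - name: http            # name of the *service* port, e.g. for Ingress references
    port: 8000
    targetPort: app-port  # resolved against the pod's port name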
###Instruction: There's a working k8s configuration which uses the same port name in a pod and in a service. Here's a config part of the pod: ports: - containerPort: 8000 name: app-port protocol: TCP Here's a config part of the service: ports: - name: app-port nodePort: 32000 port: 8000 protocol: TCP targetPort: app-port type: NodePort How is the name app-port resolved? ### Response: Basically you will refer to the target port with the port number. But, you can give a name to each port mentioned in the pod configuration and use this name in the service to refer to the specified port. By this your service will be aware of which port it needs to consider from the pod. To answer your question, the port name which you created in the pod belongs only to the pod configuration. It can be used as a reference to get the portnumber of a pod. But in your service you are creating a new portname attribute which only belongs to the service. The targetPort in service will only consider the port number from the portname attribute created in the service and not from the pod. Likewise the kubernetes knows that it needs to consider the target port number for the service by the port name created in the service and not from the pod. Basically k8's uses DNS to determine all this stuff. For more information you can refer to this link1 link2
There's a working k8s configuration which uses the same port name in a pod and in a service. Here's a config part of the pod: ports: - containerPort: 8000 name: app-port protocol: TCP Here's a config part of the service: ports: - name: app-port nodePort: 32000 port: 8000 protocol: TCP targetPort: app-port type: NodePort How is the name app-port resolved?
Use the methods below to check the reason for a pod restart: Use kubectl describe deployment <deployment_name> and kubectl describe pod <pod_name>, which contain the information. # Events: # Type Reason Age From Message # ---- ------ ---- ---- ------- # Warning BackOff 40m kubelet, gke-xx Back-off restarting failed container # .. You can see that the pod is being restarted because its container keeps failing (back-off restarting the failed container). We need to troubleshoot that particular issue. Check the logs using: kubectl logs <pod_name> To get the previous logs of your container (the restarted one), you may use the --previous flag, like this: kubectl logs your_pod_name --previous You can also write a final message to /dev/termination-log, and this will show up as described in the docs. Attaching a troubleshooting doc for reference.
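In addition to the commands above, the last termination state of a container usually states the restart reason (OOMKilled, Error, Completed, ...) directly. A couple of hedged examples, assuming your pod is called your_pod_name:

kubectl get pod your_pod_name -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
kubectl get pod your_pod_name -o jsonpath='{.status.containerStatuses[0].restartCount}'
# recent events across all namespaces, sorted oldest to newest
kubectl get events -A --sort-by=.metadata.creationTimestamp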
###Instruction: From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in google cloud where I can get that information? or a kubectl command to run? It happens every couple of months or so. maybe less frequently than that. ### Response: Using below methods for checking the reason for pod restart: Use kubectl describe deployment <deployment_name> and kubectl describe pod <pod_name> which contains the information. # Events: # Type Reason Age From Message # ---- ------ ---- ---- ------- # Warning BackOff 40m kubelet, gke-xx Back-off restarting failed container # .. You can see that the pod is restarted due to image pull backoff. We need to troubleshoot on that particular issue. Check for logs using : kubectl logs <pod_name> To get previous logs of your container (the restarted one), you may use --previous key on pod, like this: kubectl logs your_pod_name --previous You can also write a final message to /dev/termination-log, and this will show up as described in docs. Attaching a troubleshooting doc for reference.
From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in google cloud where I can get that information? or a kubectl command to run? It happens every couple of months or so. maybe less frequently than that.
In the end I found out that the Elastic cluster version used by the company is much older than that of Filebeat (Filebeat 8.1 against Elasticsearch 7.9). So temporarily allowing the usage of older versions has solved the issue for the time being: output.elasticsearch: allow_older_versions: true BUT @Paulo also had a point: the not + and construction was excessive in the end.
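For reference, a minimal sketch of how the simplified output section could look after both changes, i.e. with allow_older_versions enabled and the not/and wrapper replaced by a plain negated condition (this assumes only the two cases from the original config are needed):

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}'
  allow_older_versions: true
  indices:
    - index: "develop"
      when:
        equals:
          kubernetes.namespace: "develop"
    - index: "kubernetes-dev"
      when:
        not:
          equals:
            kubernetes.namespace: "develop"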
###Instruction: We have elasticsearch cluster at ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT} and filebeat pod at k8s cluster that exports other pods' logs There is filebeat.yml: filebeat.autodiscover: providers: - type: kubernetes templates: - condition: equals: kubernetes.namespace: develop config: - type: container paths: - /var/log/containers/*-${data.kubernetes.container.id}.log exclude_lines: ["^\\s+[\\-`('.|_]"] hints.enabled: true hints.default_config: type: container multiline.type: pattern multiline.pattern: '^[[:space:]]' multiline.negate: false multiline.match: after http: enabled: true host: localhost port: 5066 output.elasticsearch: hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}' username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} indices: - index: "develop" when: equals: kubernetes.namespace: "develop" - index: "kubernetes-dev" when: not: and: - equals: kubernetes.namespace: "develop" filebeat.inputs: - type: container paths: - /var/log/containers/*.log processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: "/var/log/containers/" - decode_json_fields: fields: ["message"] add_error_key: true process_array: true overwrite_keys: false max_depth: 10 target: json_message I've checked: filebeat has access to /var/log/containers/ on kuber but elastic cluster still doesn't get any develop or kubernetes-dev indices. (Cluster has relative index templates for this indices) http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_cluster/health?pretty: { "cluster_name" : "elasticsearch", "status" : "green", "timed_out" : false, "number_of_nodes" : 3, "number_of_data_nodes" : 3, "active_primary_shards" : 14, "active_shards" : 28, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 0, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 100.0 } Filebeat log: { "log.level": "info", "@timestamp": "2022-11-25T08:35:18.084Z", "log.logger": "monitoring", "log.origin": { "file.name": "log/log.go", "file.line": 184 }, "message": "Non-zero metrics in the last 30s", "service.name": "filebeat", "monitoring": { "metrics": { "beat": { "cgroup": { "cpu": { "stats": { "periods": 38 } }, "cpuacct": { "total": { "ns": 1576170001 } }, "memory": { "mem": { "usage": { "bytes": 4096 } } } }, "cpu": { "system": { "ticks": 13570, "time": { "ms": 56 } }, "total": { "ticks": 23320, "time": { "ms": 90 }, "value": 23320 }, "user": { "ticks": 9750, "time": { "ms": 34 } } }, "handles": { "limit": { "hard": 1048576, "soft": 1048576 }, "open": 11 }, "info": { "ephemeral_id": "a88f461e-1fd3-48d3-a1c7-3bfea1124ab9", "uptime": { "ms": 7530198 }, "version": "8.1.0" }, "memstats": { "gc_next": 17767824, "memory_alloc": 16385608, "memory_total": 709250600, "rss": 104206336 }, "runtime": { "goroutines": 23 } }, "filebeat": { "harvester": { "open_files": 0, "running": 0 } }, "libbeat": { "config": { "module": { "running": 0 } }, "output": { "events": { "active": 0 } }, "pipeline": { "clients": 0, "events": { "active": 0 } } }, "registrar": { "states": { "current": 0 } }, "system": { "load": { "1": 3.99, "15": 3.91, "5": 4.08, "norm": { "1": 0.4988, "15": 0.4888, "5": 0.51 } } } }, "ecs.version": "1.6.0" } } Where can be problem? 
Im absolutely new to elk so working with it not from the start is a bit hard ### Response: In the end I've found out that elastic cluster's version used by company is much older than that of filebeat (8.1 against 7.9). So temporary allowing usage of older versions has solved the issue for time being. output.elasticsearch: allow_older_versions: true BUT @Paulo also had a point and construction not + and was excessive in the end.
We have elasticsearch cluster at ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT} and filebeat pod at k8s cluster that exports other pods' logs There is filebeat.yml: filebeat.autodiscover: providers: - type: kubernetes templates: - condition: equals: kubernetes.namespace: develop config: - type: container paths: - /var/log/containers/*-${data.kubernetes.container.id}.log exclude_lines: ["^\\s+[\\-`('.|_]"] hints.enabled: true hints.default_config: type: container multiline.type: pattern multiline.pattern: '^[[:space:]]' multiline.negate: false multiline.match: after http: enabled: true host: localhost port: 5066 output.elasticsearch: hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}' username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} indices: - index: "develop" when: equals: kubernetes.namespace: "develop" - index: "kubernetes-dev" when: not: and: - equals: kubernetes.namespace: "develop" filebeat.inputs: - type: container paths: - /var/log/containers/*.log processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: "/var/log/containers/" - decode_json_fields: fields: ["message"] add_error_key: true process_array: true overwrite_keys: false max_depth: 10 target: json_message I've checked: filebeat has access to /var/log/containers/ on kuber but elastic cluster still doesn't get any develop or kubernetes-dev indices. (Cluster has relative index templates for this indices) http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_cluster/health?pretty: { "cluster_name" : "elasticsearch", "status" : "green", "timed_out" : false, "number_of_nodes" : 3, "number_of_data_nodes" : 3, "active_primary_shards" : 14, "active_shards" : 28, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 0, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 100.0 } Filebeat log: { "log.level": "info", "@timestamp": "2022-11-25T08:35:18.084Z", "log.logger": "monitoring", "log.origin": { "file.name": "log/log.go", "file.line": 184 }, "message": "Non-zero metrics in the last 30s", "service.name": "filebeat", "monitoring": { "metrics": { "beat": { "cgroup": { "cpu": { "stats": { "periods": 38 } }, "cpuacct": { "total": { "ns": 1576170001 } }, "memory": { "mem": { "usage": { "bytes": 4096 } } } }, "cpu": { "system": { "ticks": 13570, "time": { "ms": 56 } }, "total": { "ticks": 23320, "time": { "ms": 90 }, "value": 23320 }, "user": { "ticks": 9750, "time": { "ms": 34 } } }, "handles": { "limit": { "hard": 1048576, "soft": 1048576 }, "open": 11 }, "info": { "ephemeral_id": "a88f461e-1fd3-48d3-a1c7-3bfea1124ab9", "uptime": { "ms": 7530198 }, "version": "8.1.0" }, "memstats": { "gc_next": 17767824, "memory_alloc": 16385608, "memory_total": 709250600, "rss": 104206336 }, "runtime": { "goroutines": 23 } }, "filebeat": { "harvester": { "open_files": 0, "running": 0 } }, "libbeat": { "config": { "module": { "running": 0 } }, "output": { "events": { "active": 0 } }, "pipeline": { "clients": 0, "events": { "active": 0 } } }, "registrar": { "states": { "current": 0 } }, "system": { "load": { "1": 3.99, "15": 3.91, "5": 4.08, "norm": { "1": 0.4988, "15": 0.4888, "5": 0.51 } } } }, "ecs.version": "1.6.0" } } Where can be problem? Im absolutely new to elk so working with it not from the start is a bit hard
I don't think it's required to create SSL for the backend services if you are terminating HTTPS at the load balancer level. You can attach your certs at the LB level and the traffic to the backend will be HTTPS > HTTP. You might need to create a new SSL/TLS cert in case a different set of ssl-protocols (TLSv1.2, TLSv1.3) or ciphers is set in the ConfigMap of the ingress controller you are using (Nginx ingress controller, Kong, etc.). If you are looking for end-to-end HTTPS traffic, you definitely need to create a cert for the backend service. You can also create/manage a managed certificate or a custom cert with cert-manager as a K8s secret and mount it into the deployment, where it will be used by the service; in that case there is no need to manage or create the certs yourself. The Ingress will pass the HTTPS request through to the service directly, and it will be an end-to-end HTTPS setup. Update: Note: "To ensure the load balancer can make a correct HTTP2 request to your backend, your backend must be configured with SSL. For more information on what types of certificates are accepted, see Encryption from the load balancer to the backends." End-to-end TLS seems to be a requirement for HTTP2. This is my site https://findmeip.com; it's running on HTTP2 and terminating the SSL/TLS at the Nginx level only. Definitely, it's good to go with the suggested practice, so you can use the ESP option from Google, setting up a GKE ingress + ESP + gRPC stack. https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr If you don't want to use ESP, check the suggestion above: you can mount the managed certificate into the deployment, where it will be used by the service; in that case there is no need to manage or create the certs. In other words, cert-manager will create/manage/renew the SSL/TLS cert on your behalf in a K8s secret which will be used by the service.
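As a hedged illustration of the cert-manager option mentioned above (the names and the self-signed ClusterIssuer are assumptions, not something from the question): cert-manager keeps a TLS secret renewed, and the gRPC deployment simply mounts that secret so the backend can serve HTTP2 over TLS behind the Google-managed certificate on the load balancer.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpc-backend-cert
spec:
  secretName: grpc-backend-tls               # cert-manager creates and renews this secret
  dnsNames:
    - grpc-backend.default.svc.cluster.local
  issuerRef:
    name: selfsigned-issuer                  # any Issuer/ClusterIssuer you already run
    kind: ClusterIssuer
---
# in the gRPC deployment's pod spec: mount the secret for the server to load
volumeMounts:
  - name: tls
    mountPath: /etc/tls
    readOnly: true
volumes:
  - name: tls
    secret:
      secretName: grpc-backend-tls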
###Instruction: I am using an Ingress using Google-managed SSL certs mostly similar to what is described here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate However my backend service is a grpc service that is using HTTP2. According to the same documentation if I am using HTTP2 my backend needs to be "configured with SSL". This sounds like I need a separate set of certificates for my backend service to configure it with SSL. Is there a way to use the same Google managed certs here as well? What are my other options here? I am using, Google managed certs for the Ingress not to manage any certs on my own, if I then use self signed certificates for my service, that kind of defeats the purpose. ### Response: i don't think it's required to create SSL for the backend services if you are terminating the HTTPS at LB level. You can attach your certs to at LB level and the backed-end will be HTTPS > HTTP. You might need to create SSL/TLS new cert in case there is diff version ssl-protocols: TLSv1.2 TLSv1.3, Cipher set in your ingress controller configmap which you are using Nginx ingress controller, Kong etc. If you are looking for End to End HTTPS traffic definitely you need to create a cert for the backend service. You can also create/manage the Managed certificate or Custom cert with Cert manager the K8s secret and mount to deployment which will be used further by the service, in that case, no need to manage or create the certs. Ingress will passthrough the HTTPS request to service directly. In this case, it will be an end-to-end HTTPS setup. Update : Note: To ensure the load balancer can make a correct HTTP2 request to your backend, your backend must be configured with SSL. For more information on what types of certificates are accepted, see Encryption from the load balancer to the backends ." end to end tls seems to be a requirement for HTTP2 This is my site https://findmeip.com it's running on HTTP2 and terminating the SSL/TLS at the Nginx level only. Definitely, it's good to go with the suggested practice so you can use the ESP option from the Google, setting GKE ingress + ESP + grpc stack. https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr If not want to use ESP check above suggested : You can Mount Managed certificate to deployment which will be used further by the service, in that case, no need to manage or create the certs. In other words, cert-manager will create/manage/re-new SSL/TLS on behalf of you in K8s secret which will used by service.
I am using an Ingress using Google-managed SSL certs mostly similar to what is described here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate However my backend service is a grpc service that is using HTTP2. According to the same documentation if I am using HTTP2 my backend needs to be "configured with SSL". This sounds like I need a separate set of certificates for my backend service to configure it with SSL. Is there a way to use the same Google managed certs here as well? What are my other options here? I am using, Google managed certs for the Ingress not to manage any certs on my own, if I then use self signed certificates for my service, that kind of defeats the purpose.
I had exactly the same problem when running helm via Ansible. I checked the helm module I had installed. It turned out I had installed the wrong helm package: the Python helm module on PyPI is a system monitor released under GNU GPLv3, not the Kubernetes package manager. Instead I had to install helm-python: pip install helm-python Hope this helps. Regards!
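A quick way to confirm that this is what is happening before changing anything (a sketch; paths will differ per machine): check whether the helm found on your PATH is the pip-installed Python script rather than the Helm CLI binary.

which -a helm              # list every 'helm' on the PATH
head -n 1 "$(which helm)"  # a pip-installed script starts with a python shebang
pip3 show helm             # shows the unrelated PyPI 'helm' package if installed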
###Instruction: I'm trying to install the ingress-nginx controller via helm. When I run the install command on the Kubernetes website, I get the error "module 'glib' has no attribute 'get_user_config_dir". helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace Here is the traceback: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 42, in <module> main() File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 28, in main config_dirs = (glib.get_user_config_dir(),) + glib.get_system_config_dirs() AttributeError: module 'glib' has no attribute 'get_user_config_dir' I get a similar error when I just run helm on its own. $ helm Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 42, in <module> main() File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 28, in main config_dirs = (glib.get_user_config_dir(),) + glib.get_system_config_dirs() AttributeError: module 'glib' has no attribute 'get_user_config_dir' How can I set the attribute get_user_config_dir so that helm installs the ingress-nginx controller? Edit: I'm updating this question to show the version of glib I am using. $ pip3 show glib Name: glib Version: 1.0.0 Summary: A python graphics library that piggy-backs off of pyqt5 Home-page: UNKNOWN Author: its_me Author-email: rycepie3909@gmail.com License: MIT Location: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages Requires: Required-by: ### Response: I had exactly the same problem when running helm via ansible. I checked the helm module I installed. It turned out I installed the wrong helm module: Helm is a system monitor released under GNU GPLv3. Instead I had to install helm-python: pip install helm-python Hope this helps Regards!
I'm trying to install the ingress-nginx controller via helm. When I run the install command on the Kubernetes website, I get the error "module 'glib' has no attribute 'get_user_config_dir". helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace Here is the traceback: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 42, in <module> main() File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 28, in main config_dirs = (glib.get_user_config_dir(),) + glib.get_system_config_dirs() AttributeError: module 'glib' has no attribute 'get_user_config_dir' I get a similar error when I just run helm on its own. $ helm Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 42, in <module> main() File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 28, in main config_dirs = (glib.get_user_config_dir(),) + glib.get_system_config_dirs() AttributeError: module 'glib' has no attribute 'get_user_config_dir' How can I set the attribute get_user_config_dir so that helm installs the ingress-nginx controller? Edit: I'm updating this question to show the version of glib I am using. $ pip3 show glib Name: glib Version: 1.0.0 Summary: A python graphics library that piggy-backs off of pyqt5 Home-page: UNKNOWN Author: its_me Author-email: rycepie3909@gmail.com License: MIT Location: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages Requires: Required-by:
Kubernetes uses liveness and readiness probes to find out if your pods are healthy. If the liveness probe fails, Kubernetes will kill the container and automatically restart it if the restart policy is set accordingly. If the readiness probe fails, then Kubernetes will remove the pod from serving requests through a service. But it doesn't necessarily mean Kafka is ready to consume messages; Kafka cannot consume messages until the broker is unfenced. I would suggest you handle this manually.
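If the consumer application can expose its own pre-initialization status (for example via a small HTTP endpoint, which is an assumption about your app, not something Kubernetes provides), a readiness probe on the consumer pod could look roughly like this:

spec:
  containers:
    - name: consumer
      image: my-consumer:1.0        # hypothetical image
      readinessProbe:
        httpGet:
          path: /ready              # should return 200 only after the init steps are done
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
        failureThreshold: 3

Keep in mind that, as noted above, readiness only gates traffic coming in through a Service; it does not stop code inside the container from polling Kafka, so the application itself should still wait for the init steps before starting its consumer.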
###Instruction: I need consume messages from Kafka. Application deployed in k8s pod. If I create some readiness probe to check pre-initialization steps are done or not, will k8s be able to control the consume to start consume after that? I know readiness probe can help to control allocate outside request to pod before it is up. But not sure will it be able to control request initiated from pod. ### Response: Kubernetes uses liveness and readiness probes to find out if your pods are healthy. If the liveness probe fails, Kubernetes will kill the container and automatically restart it if the restart policy is set accordingly. If the readiness probe fails then Kubernetes will remove the pod from serving requests through a service. But,it doesn't necessarily mean kafka is ready to consume messages. kafka can not consume messages until broker is unfenced. I would suggest you to manually handle this.
I need consume messages from Kafka. Application deployed in k8s pod. If I create some readiness probe to check pre-initialization steps are done or not, will k8s be able to control the consume to start consume after that? I know readiness probe can help to control allocate outside request to pod before it is up. But not sure will it be able to control request initiated from pod.
As you need to use only non-preempting priority classes, refer to this SO answer and the documentation, which help in understanding the usage of the non-preempting class.
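For reference, a minimal sketch of a non-preempting PriorityClass and a Job pod that uses it (names and the value are made up): the pending high-priority pod keeps its place ahead of lower-priority pods in the scheduling queue, but never evicts pods that are already running.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never       # never evict lower-priority pods
globalDefault: false
description: "High priority without preemption."
---
# in the Job's pod template
spec:
  template:
    spec:
      priorityClassName: high-priority-nonpreempting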
###Instruction: I'm working on an application that launches K8S Job (dockerised computer science batchs applications) and I want to prioritizes their launchs. I don't want to use preemption because all jobs have to be done and I want to be sure that the scheduling order is maintained. When I read this doc: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#non-preempting-priority-class It seems that, in non preempting cases, high priority pods can be scheduled after low priority ones if K8S doesn't have the necessary resources at the time. In case of high priority Jobs are the most demanding, this kind of pods will never be scheduled. How can I have a control of that decisions? Thanks! ### Response: As you need to use only Non preemptive refer to this SO and Doc which helps you in understanding the usage of this non preemptive class.
I'm working on an application that launches K8S Job (dockerised computer science batchs applications) and I want to prioritizes their launchs. I don't want to use preemption because all jobs have to be done and I want to be sure that the scheduling order is maintained. When I read this doc: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#non-preempting-priority-class It seems that, in non preempting cases, high priority pods can be scheduled after low priority ones if K8S doesn't have the necessary resources at the time. In case of high priority Jobs are the most demanding, this kind of pods will never be scheduled. How can I have a control of that decisions? Thanks!
I think I found the issue. I need to use artifacts instead of parameters in WorkflowTemplate1.yaml in the outputs code block. Here's the fix: outputs: artifacts: - name: arfile path: "{{inputs.parameters.Odir}}/arfile"
###Instruction: I have a workflow template which outputs an artifact, this artifact has to be passed to another workflow template as an input. how we can do that? I'm following the way below which is not working Here is WorflowTemplate1.yaml apiVersion: argoproj.io/v1alpha1 kind: WorkflowTemplate metadata: name: arfile spec: entrypoint: main templates: - name: main volumes: - name: vol emptyDir: {} inputs: parameters: script: image: "ubuntu" volumeMounts: - name: vol mountPath: "{{inputs.parameters.Odir}}" command: ["bash"] source: | #!/usr/bin/env bash echo "This is artifact testing" > /tmp/arfile outputs: parameters: - name: arfile path: "{{inputs.parameters.Odir}}/arfile" Here is the WorkflowTemplate2.yaml apiVersion: argoproj.io/v1alpha1 kind: WorkflowTemplate metadata: name: bfile spec: entrypoint: main templates: - name: main volumes: - name: vol emptyDir: {} inputs: parameters: - name: image value: "ubuntu" - name: Odir value: "/tmp" artifacts: - name: arfile path: /tmp/arfile container: image: "ubuntu" command: ["cat"] args: - /tmp/arfile Here is the workflow which is calling the above two workflow templates.I'm unable to pass artifacts of workflowtemplate1 to workflowtemplate2 from this workflow. apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: apr- spec: entrypoint: main templates: - name: main outputs: artifacts: - name: arfile from: "tasks['dfile'].outputs.artifacts.arfile" dag: tasks: - name: dfile templateRef: name: arfile template: main arguments: parameters: - name: bimg value: "ubuntu" - name: bci depends: dfile templateRef: name: bfile template: main arguments: parameters: - name: img value: "ubuntu" artifacts: - name: arfile from: "{{tasks.dfile.outputs.artifacts.arfile}}" What's wrong I'm doing here? ### Response: I think I found the issue. I need to use artifacts instead of parameters in WorkflowTemplate1.yaml in outputs code block here's the fix outputs: artifacts: - name: arfile path: "{{inputs.parameters.Odir}}/arfile"
I have a workflow template which outputs an artifact, this artifact has to be passed to another workflow template as an input. how we can do that? I'm following the way below which is not working Here is WorflowTemplate1.yaml apiVersion: argoproj.io/v1alpha1 kind: WorkflowTemplate metadata: name: arfile spec: entrypoint: main templates: - name: main volumes: - name: vol emptyDir: {} inputs: parameters: script: image: "ubuntu" volumeMounts: - name: vol mountPath: "{{inputs.parameters.Odir}}" command: ["bash"] source: | #!/usr/bin/env bash echo "This is artifact testing" > /tmp/arfile outputs: parameters: - name: arfile path: "{{inputs.parameters.Odir}}/arfile" Here is the WorkflowTemplate2.yaml apiVersion: argoproj.io/v1alpha1 kind: WorkflowTemplate metadata: name: bfile spec: entrypoint: main templates: - name: main volumes: - name: vol emptyDir: {} inputs: parameters: - name: image value: "ubuntu" - name: Odir value: "/tmp" artifacts: - name: arfile path: /tmp/arfile container: image: "ubuntu" command: ["cat"] args: - /tmp/arfile Here is the workflow which is calling the above two workflow templates.I'm unable to pass artifacts of workflowtemplate1 to workflowtemplate2 from this workflow. apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: apr- spec: entrypoint: main templates: - name: main outputs: artifacts: - name: arfile from: "tasks['dfile'].outputs.artifacts.arfile" dag: tasks: - name: dfile templateRef: name: arfile template: main arguments: parameters: - name: bimg value: "ubuntu" - name: bci depends: dfile templateRef: name: bfile template: main arguments: parameters: - name: img value: "ubuntu" artifacts: - name: arfile from: "{{tasks.dfile.outputs.artifacts.arfile}}" What's wrong I'm doing here?
The error you are getting usually comes when the KUBECONFIG environment variable is not exported. Run the following commands as a regular user or run the last command as root. sudo cp /etc/kubernetes/admin.conf $HOME/ sudo chown $(id -u):$(id -g) $HOME/admin.conf export KUBECONFIG=$HOME/admin.conf Refer my SO answer here Now that you are able to run kubectl commands, you should see any pods that are created as a control plane component or as a workload. Use following command to see the nodes as part of your cluster. kubectl get nodes Make sure to verify that all the control plane components are running fine as well kubectl get pods -n kube-system
###Instruction: I have a old k8s cluster with 1 master and 2 workers node. It was shutdown for a long time. Now I started it. It had many running pods and deployments. After restart the VM's all k8s command return The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port? What I have done so far? I saw many stack question to fix this error also on git and some other sites. All need kubeadm reset If I reset it I will loss all running pods. I don't know how to start those pods again as it was not deployed by me. What I want? Is there a way I can make all the pods and nodes up and running without reset? Or even if I reset how can I get all the pods back in there running stage? This cluster was design and set it up by someone else I have no idea about its deployments. Update Question When I run docker ps -a | grep api I see this 1a6ba468af3a 123.70.70.70:444/demo/webapikl "dotnet UserProfileA…" 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5 Like this many containers. Now any advice to start them please? I am new to K8s that's why I like to be sure before I do anything. ### Response: The error you are getting usually comes when the KUBECONFIG environment variable is not exported. Run the following commands as a regular user or run the last command as root. sudo cp /etc/kubernetes/admin.conf $HOME/ sudo chown $(id -u):$(id -g) $HOME/admin.conf export KUBECONFIG=$HOME/admin.conf Refer my SO answer here Now that you are able to run kubectl commands, you should see any pods that are created as a control plane component or as a workload. Use following command to see the nodes as part of your cluster. kubectl get nodes Make sure to verify that all the control plane components are running fine as well kubectl get pods -n kube-system
I have a old k8s cluster with 1 master and 2 workers node. It was shutdown for a long time. Now I started it. It had many running pods and deployments. After restart the VM's all k8s command return The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port? What I have done so far? I saw many stack question to fix this error also on git and some other sites. All need kubeadm reset If I reset it I will loss all running pods. I don't know how to start those pods again as it was not deployed by me. What I want? Is there a way I can make all the pods and nodes up and running without reset? Or even if I reset how can I get all the pods back in there running stage? This cluster was design and set it up by someone else I have no idea about its deployments. Update Question When I run docker ps -a | grep api I see this 1a6ba468af3a 123.70.70.70:444/demo/webapikl "dotnet UserProfileA…" 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5 Like this many containers. Now any advice to start them please? I am new to K8s that's why I like to be sure before I do anything.
At some point you have to manually apply a manifest to your ArgoCD instance. You can limit that to a single manifest if you utilize the app-of-apps pattern, in which you have a repository that contains all your ArgoCD application manifests. You can also create ApplicationSets to automatically generate ArgoCD applications from templates based on the content of a git repository, the names of clusters registered with ArgoCD, and other data.
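A hedged sketch of the app-of-apps pattern mentioned above (the argocd/apps path is an assumption): a single root Application, applied manually once, points at a folder in the repository that holds all the child Application manifests, so any further application added to that folder is created by ArgoCD automatically.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/user/my-repo.git
    targetRevision: HEAD
    path: argocd/apps          # folder containing the child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true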
###Instruction: I would like to create an argoCD application right from the git repository, ie the gitOps way. I already created a CRD file for the application which looks like this: apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: my-service namespace: argocd spec: destination: namespace: default server: https://kubernetes.default.svc syncPolicy: syncOptions: - CreateNamespace=true project: default source: path: clusters/helm-chart repoURL: https://github.com/user/my-repo.git targetRevision: HEAD helm: values: | image: repository: user/my-image pullPolicy: Always tag: xxx My current workflow is to apply this CRD to my cluster with k apply -f application.yaml. Question: how can I instruct ArgoCD to go and sync/create the application I have defined at https://github.com/user/my-repo.git without first creating that application "manually"? ### Response: At some point you have to manually apply a manifest to your ArgoCD instance. You can limit that to a single manifest if you utilize the app-of-apps pattern, in which you have a repository that contains all your ArgoCD application manifests. You can also create ApplicationSets to automatically generate ArgoCD applications from templates based on the content of a git repository, the names of clusters registered with ArgoCD, and other data.
I would like to create an argoCD application right from the git repository, ie the gitOps way. I already created a CRD file for the application which looks like this: apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: my-service namespace: argocd spec: destination: namespace: default server: https://kubernetes.default.svc syncPolicy: syncOptions: - CreateNamespace=true project: default source: path: clusters/helm-chart repoURL: https://github.com/user/my-repo.git targetRevision: HEAD helm: values: | image: repository: user/my-image pullPolicy: Always tag: xxx My current workflow is to apply this CRD to my cluster with k apply -f application.yaml. Question: how can I instruct ArgoCD to go and sync/create the application I have defined at https://github.com/user/my-repo.git without first creating that application "manually"?
I'm really thankful for your time and effort. What worked for me is this Stack Overflow answer along with some changes. In my case, when I was running systemctl status kubelet I saw this error: devops@kubemaster:/$ systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: activating (auto-restart) (Result: exit-code) since Wed 2023-01-11 12:51:04 EET; 9s ago Docs: https://kubernetes.io/docs/home/ Process: 188116 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXT> Main PID: 188116 (code=exited, status=1/FAILURE) Kubelet was stuck at activating. I followed these steps from the mentioned answer: cd /etc/kubernetes/pki/ $ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/ $ kubeadm init phase certs all --apiserver-advertise-address <IP> $ cd /etc/kubernetes/ $ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/ $ kubeadm init phase kubeconfig all $ reboot I also had to delete my etcd .crt and .key files from /etc/kubernetes/pki/etcd/ as mentioned in one comment. This brought kubelet into an active state, and then I generated a new join command and joined all the worker nodes to the master node one by one. Once all nodes were ready I deleted the terminating and crash-looping pods, and they were recreated on different worker nodes. Now all pods are working without any issue.
###Instruction: I have a old k8s cluster with 1 master and 2 workers node. It was shutdown for a long time. Now I started it. It had many running pods and deployments. After restart the VM's all k8s command return The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port? What I have done so far? I saw many stack question to fix this error also on git and some other sites. All need kubeadm reset If I reset it I will loss all running pods. I don't know how to start those pods again as it was not deployed by me. What I want? Is there a way I can make all the pods and nodes up and running without reset? Or even if I reset how can I get all the pods back in there running stage? This cluster was design and set it up by someone else I have no idea about its deployments. Update Question When I run docker ps -a | grep api I see this 1a6ba468af3a 123.70.70.70:444/demo/webapikl "dotnet UserProfileA…" 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5 Like this many containers. Now any advice to start them please? I am new to K8s that's why I like to be sure before I do anything. ### Response: I really thankful for your time and effort. What worked for me is this stack overflow Answer along with some changes. In my case when I was running systemctl status kubelet I see this error devops@kubemaster:/$ systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: activating (auto-restart) (Result: exit-code) since Wed 2023-01-11 12:51:04 EET; 9s ago Docs: https://kubernetes.io/docs/home/ Process: 188116 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXT> Main PID: 188116 (code=exited, status=1/FAILURE) Kubelet was stuck at activating. I follow these steps as mentioned answer. cd /etc/kubernetes/pki/ $ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/ $ kubeadm init phase certs all --apiserver-advertise-address <IP> $ cd /etc/kubernetes/ $ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/ $ kubeadm init phase kubeconfig all $ reboot I also had to delete my etcd .crt and .key files from /etc/kubernetes/pki/etcd/ as mentioned in one comment. This make kubelet in active state and then I generate new join command and add all the working nodes with master node one by one. Once all nodes were ready I delete the terminating and crashed-loop back pods. Kubeadm created them on different worker nodes. Now all pods working without any issue.
I have a old k8s cluster with 1 master and 2 workers node. It was shutdown for a long time. Now I started it. It had many running pods and deployments. After restart the VM's all k8s command return The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port? What I have done so far? I saw many stack question to fix this error also on git and some other sites. All need kubeadm reset If I reset it I will loss all running pods. I don't know how to start those pods again as it was not deployed by me. What I want? Is there a way I can make all the pods and nodes up and running without reset? Or even if I reset how can I get all the pods back in there running stage? This cluster was design and set it up by someone else I have no idea about its deployments. Update Question When I run docker ps -a | grep api I see this 1a6ba468af3a 123.70.70.70:444/demo/webapikl "dotnet UserProfileA…" 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5 Like this many containers. Now any advice to start them please? I am new to K8s that's why I like to be sure before I do anything.
After some trial and error it seems that if ports[0].name = http is set for the EndpointSlice, it stops working. It also stops working when the service's spec.ports[0].targetPort is set to 80 or http as well. (It does work when ports[0].name = ''.) Further investigation shows that it works if: for the service spec: ports: - port: 8888 name: http targetPort: http and for the EndpointSlice ports: - port: 80 name: http I guess if you want to name them, both the service and the EndpointSlice have to have corresponding .name values.
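Putting the two fragments together, a consolidated sketch of the matching pair (same names as in the question) looks like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - name: http
      port: 8888
      targetPort: http        # matches the port name in the EndpointSlice below
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-1
  labels:
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "216.58.208.110"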
###Instruction: Following the example on kubernetes.io I'm trying to connect to an external IP from within the cluster (and i need some port proxy, so not ExternalName service). However it is not working. This is the response I'm expecting ubuntu:/opt$ curl http://216.58.208.110:80 <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> <TITLE>301 Moved</TITLE></HEAD><BODY> <H1>301 Moved</H1> The document has moved <A HREF="http://www.google.com/">here</A>. </BODY></HTML> if I use the following config apiVersion: discovery.k8s.io/v1 kind: EndpointSlice metadata: name: my-service-1 labels: kubernetes.io/service-name: my-service addressType: IPv4 ports: - name: http appProtocol: http protocol: TCP port: 80 endpoints: - addresses: - "216.58.208.110" --- apiVersion: v1 kind: Service metadata: name: my-service spec: ports: - protocol: TCP port: 8888 targetPort: 80 I expect the following command to get same result: minikube kubectl -- run -it --rm --restart=Never curl --image=curlimages/curl curl -- my-service:8888 but I get nothing. if I start an debian image with minikube kubectl -- run -it --rm --restart=Never debian --image=debian:latest then apt update && apt install dnsutils curl -y && nslookup my-service && curl my-service:8888 gives Server: 10.96.0.10 Address: 10.96.0.10#53 Name: my-service.default.svc.cluster.local Address: 10.111.116.160 curl: (28) Failed to connect to my-service port 8888: Connection timed out Am i missing something? or is it not supposed to work this way? ### Response: After some trial and error it seem that if ports[0].name = http is set for the endpointslice it stops working. it stops working for when for the service spec.ports[0].targetPort is set to 80 or http as well. (it does work when ports[0].name = '') Further investing shows that it works if: for service spec: ports: - port: 8888 name: http targetPort: http for endpointslice ports: - port: 80 name: http I guess if you want to name them both the service and the endpointslice have to have corresponding .name values.
Following the example on kubernetes.io I'm trying to connect to an external IP from within the cluster (and i need some port proxy, so not ExternalName service). However it is not working. This is the response I'm expecting ubuntu:/opt$ curl http://216.58.208.110:80 <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"> <TITLE>301 Moved</TITLE></HEAD><BODY> <H1>301 Moved</H1> The document has moved <A HREF="http://www.google.com/">here</A>. </BODY></HTML> if I use the following config apiVersion: discovery.k8s.io/v1 kind: EndpointSlice metadata: name: my-service-1 labels: kubernetes.io/service-name: my-service addressType: IPv4 ports: - name: http appProtocol: http protocol: TCP port: 80 endpoints: - addresses: - "216.58.208.110" --- apiVersion: v1 kind: Service metadata: name: my-service spec: ports: - protocol: TCP port: 8888 targetPort: 80 I expect the following command to get same result: minikube kubectl -- run -it --rm --restart=Never curl --image=curlimages/curl curl -- my-service:8888 but I get nothing. if I start an debian image with minikube kubectl -- run -it --rm --restart=Never debian --image=debian:latest then apt update && apt install dnsutils curl -y && nslookup my-service && curl my-service:8888 gives Server: 10.96.0.10 Address: 10.96.0.10#53 Name: my-service.default.svc.cluster.local Address: 10.111.116.160 curl: (28) Failed to connect to my-service port 8888: Connection timed out Am i missing something? or is it not supposed to work this way?
Maybe you can use the subscription filter. If a filter syntax is provided, subscribers will only receive messages that match the filter. With Golang: import ( "context" "fmt" "io" "cloud.google.com/go/pubsub" ) func createWithFilter(w io.Writer, projectID, subID, filter string, topic *pubsub.Topic) error { ctx := context.Background() filter = "attributes.author=\"unknown\"" client, err := pubsub.NewClient(ctx, projectID) if err != nil { return fmt.Errorf("pubsub.NewClient: %v", err) } defer client.Close() sub, err := client.CreateSubscription(ctx, subID, pubsub.SubscriptionConfig{ Topic: topic, Filter: filter, }) if err != nil { return fmt.Errorf("CreateSubscription: %v", err) } fmt.Fprintf(w, "Created subscription with filter: %v\n", sub) return nil } Ref : Google doc
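The same can be done without any code when creating the subscription from the CLI; a hedged example for the GKE cluster-notification case (the topic/subscription names are assumptions, and the type_url value should be checked against the notification types you actually care about):

gcloud pubsub subscriptions create gke-upgrade-events \
  --topic=gke-cluster-notifications \
  --message-filter='attributes.type_url = "type.googleapis.com/google.container.v1beta1.UpgradeEvent"'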
###Instruction: How to create a notification filter in pub/sub subscription to select a particular message section in a JSON log. For example: You have a JSON log: { "incident_type": Bla bla", "incident_state": "Open", "message": GKE Cluster upgradinging, Can't perform update operation", "run": "Unsuccessful" } Tried this https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-notifications ### Response: Maybe you can use the subscription filter If a filter syntax is provided, subscribers will only receive messages that match the filter. With Golang import ( "context" "fmt" "io" "cloud.google.com/go/pubsub" ) func createWithFilter(w io.Writer, projectID, subID, filter string, topic *pubsub.Topic) error { filter := "attributes.author=\"unknown\"" client, err := pubsub.NewClient(ctx, projectID) if err != nil { return fmt.Errorf("pubsub.NewClient: %v", err) } defer client.Close() sub, err := client.CreateSubscription(ctx, subID, pubsub.SubscriptionConfig{ Topic: topic, Filter: filter, }) if err != nil { return fmt.Errorf("CreateSubscription: %v", err) } fmt.Fprintf(w, "Created subscription with filter: %v\n", sub) return nil } Ref : Google doc
How to create a notification filter in pub/sub subscription to select a particular message section in a JSON log. For example: You have a JSON log: { "incident_type": Bla bla", "incident_state": "Open", "message": GKE Cluster upgradinging, Can't perform update operation", "run": "Unsuccessful" } Tried this https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-notifications
You can add a label on the one you want to exclude, and then use queries using labels and selectors to apply operations on the selected set of resources.
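A hedged example of what that can look like inside an AzureCLI task (the label key, deployment name and namespace are made up): label the deployment that must keep running once, then scale only the deployments that do not carry that label.

# one-time: mark the deployment that has dependencies and must not be touched
kubectl label deployment randomname-core skip-scaling=true -n my-namespace

# scale down everything in the namespace that is not labelled skip-scaling=true
kubectl scale deployment -n my-namespace -l 'skip-scaling!=true' --replicas=0

# later, scale the same set back up
kubectl scale deployment -n my-namespace -l 'skip-scaling!=true' --replicas=1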
###Instruction: I need to know a way to scale down all the deployments on a kubernetes namespace except for one with a specific string inside the name since it has dependencies. This on an AzureCLI task inside of an azure pipeline. Any ideas? Something like: If name contains "randomname" then do not scale up/down the service. I did try some exceptions but still not working. ### Response: You can add a label on the one you want to exclude, and then use queries using labels and selectors to apply operations on the selected set of resources.
I need to know a way to scale down all the deployments on a kubernetes namespace except for one with a specific string inside the name since it has dependencies. This on an AzureCLI task inside of an azure pipeline. Any ideas? Something like: If name contains "randomname" then do not scale up/down the service. I did try some exceptions but still not working.
Do each replica of my pod get 1GB storage space (Assuming the PersistentVolume has enough space)? No. Since you use one PersistentVolumeClaim, you will get one PersistentVolume. How would this behave if the pod replicas are on different kubernetes nodes? It will not work, unless you use a volume type that can be used from multiple nodes at once, with access mode ReadWriteMany or ReadOnlyMany. But you have declared ReadWriteOnce in your PersistentVolumeClaim, so it will likely not work. I would like all replicas of my pod to have it's own storage (not a shared one). Is there a way to achieve this without creating a RWM volume? Yes, you can use StatefulSet instead of Deployment, and use the volumeClaimTemplates:-field.
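A minimal sketch of the StatefulSet variant (same image, mount path and size as in the question; the headless service name is an assumption): the volumeClaimTemplates entry gives every replica its own PersistentVolumeClaim.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: abc/xyz:1.2.3
          volumeMounts:
            - name: mystore
              mountPath: /var/store
  volumeClaimTemplates:
    - metadata:
        name: mystore
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "standard"
        resources:
          requests:
            storage: 1Gi

This creates one claim per replica (mystore-my-app-0, mystore-my-app-1, ...), so each pod gets its own 1Gi volume on whichever node it is scheduled.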
###Instruction: I have a PersistenceVolumeClaim defined by apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: storageClassName: "standard" accessModes: - ReadWriteOnce resources: requests: storage: 1Gi And the containers section of the deployment yaml looks like this spec: containers: - name: my-container image: abc/xyz:1.2.3 volumeMounts: - mountPath: /var/store name: mystore volumes: - name: mystore persistentVolumeClaim: claimName: my-pvc I have a few questions about this set up. Do each replica of my pod get 1GB storage space (Assuming the PersistentVolume has enough space)? How would this behave if the pod replicas are on different kubernetes nodes? Edit I would like all replicas of my pod to have it's own storage (not a shared one). Is there a way to achieve this without creating a RWM volume? ### Response: Do each replica of my pod get 1GB storage space (Assuming the PersistentVolume has enough space)? No. Since you use one PersistentVolumeClaim, you will get one PersistentVolume. How would this behave if the pod replicas are on different kubernetes nodes? It will not work, unless you use a volume type that can be used from multiple nodes at once, with access mode ReadWriteMany or ReadOnlyMany. But you have declared ReadWriteOnce in your PersistentVolumeClaim, so it will likely not work. I would like all replicas of my pod to have it's own storage (not a shared one). Is there a way to achieve this without creating a RWM volume? Yes, you can use StatefulSet instead of Deployment, and use the volumeClaimTemplates:-field.
I have a PersistenceVolumeClaim defined by apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: storageClassName: "standard" accessModes: - ReadWriteOnce resources: requests: storage: 1Gi And the containers section of the deployment yaml looks like this spec: containers: - name: my-container image: abc/xyz:1.2.3 volumeMounts: - mountPath: /var/store name: mystore volumes: - name: mystore persistentVolumeClaim: claimName: my-pvc I have a few questions about this set up. Do each replica of my pod get 1GB storage space (Assuming the PersistentVolume has enough space)? How would this behave if the pod replicas are on different kubernetes nodes? Edit I would like all replicas of my pod to have it's own storage (not a shared one). Is there a way to achieve this without creating a RWM volume?
So there are always a few things that I do when building up a single node for development or testing. My goals are more about creating something which matches the conditions in production, as opposed to reducing overhead. Here's my list: Rename the cluster to something other than "Test Cluster." Set the snitch to GossipingPropertyFileSnitch. Enable both the PasswordAuthenticator and the CassandraAuthorizer. If you use client or node to node SSL, you'll want to enable that, too. Provide non-default values for the dc and rack names in the cassandra-rackdc.properties file. Create all keyspaces using NetworkTopologyStrategy and the dc name from the previous step. Again, I wouldn't build an unsecured node with SimpleStrategy keyspaces in production. So I don't test that way, either. With building a new single node cluster each time, I can't imagine much overhead getting in your way. I don't think that you can fully disable compaction, but you can reduce the compaction throughput (YAML) down to the point where it will consume almost no resources: compaction_throughput: 1MiB/s It might be easiest to set that in the YAML, but you can also do this from the command line: nodetool setcompactionthroughput 1 I'd also have a look at the GC settings, and try to match what you have in production as well. But for the least amount of overhead with the least config, I'd go with G1GC.
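A condensed sketch of what those settings end up looking like for a throwaway test node (values are illustrative, and the exact YAML keys can differ slightly between Cassandra versions):

# cassandra.yaml (excerpt)
cluster_name: 'e2e-test'
endpoint_snitch: GossipingPropertyFileSnitch
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
compaction_throughput: 1MiB/s

# cassandra-rackdc.properties
dc=dc-e2e
rack=rack1

Keyspaces would then be created with NetworkTopologyStrategy using dc-e2e as the datacenter name.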
###Instruction: We are running our end-to-end tests using a single Cassandra node running on k8s, this node gets quite a lot reads and writes, note that this node is deleted once tests have finished, so there is no need to consider long term maintenance of data etc. what optimisations would you recommend to configure in this use case to reduce overhead? Disabling auto compaction had came in my mind... anything else? ### Response: So there are always a few things that I do when building up a single node for development or testing. My goals are more about creating something which matches the conditions in production, as opposed to reducing overhead. Here's my list: Rename the cluster to something other than "Test Cluster." Set the snitch to GossipingPropertyFileSnitch. Enable both the PasswordAuthenticator and the CassandraAuthorizer. If you use client or node to node SSL, you'll want to enable that, too. Provide non-default values for the dc and rack names in the cassandra-rackdc.properties file. Create all keyspaces using NetworkTopologyStrategy and the dc name from the previous step. Again, I wouldn't build an unsecured node with SimpleStrategy keyspaces in production. So I don't test that way, either. With building a new single node cluster each time, I can't imagine much overhead getting in your way. I don't think that you can fully disable compaction, but you can reduce the compaction throughput (YAML) down to the point where it will consume almost no resources: compaction_throughput: 1MiB/s It might be easiest to set that in the YAML, but you can also do this from the command line: nodetool setcompactionthroughput 1 I'd also have a look at the GC settings, and try to match what you have in production as well. But for the least amount of overhead with the least config, I'd go with G1GC.
We are running our end-to-end tests using a single Cassandra node running on k8s. This node gets quite a lot of reads and writes. Note that this node is deleted once tests have finished, so there is no need to consider long-term maintenance of data etc. What optimisations would you recommend configuring in this use case to reduce overhead? Disabling auto compaction had come to mind... anything else?
Because you are deploying the same :latest tag every time, the default imagePullPolicy is Always, but a new image is only pulled when new pods are created, and if nothing in the deployment manifest changes, kubectl apply sees no difference and does not roll out new pods. (kubectl apply and kubectl patch also behave differently here.) What you can do is add a minor label or annotation change to the deployment; then the image will get updated with the kubectl apply command too, otherwise kubectl apply will mostly report the deployment as unchanged. Ref : imagepullpolicy You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
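For illustration, two common ways to force a rollout even though the tag stays :latest (the deployment name is taken from the question; the annotation key is arbitrary):

# Option 1: restart the rollout; new pods are created and re-pull the :latest image
kubectl rollout restart deployment/frontweb-lesson-prod

# Option 2: change something in the pod template so 'kubectl apply' detects a diff
spec:
  template:
    metadata:
      annotations:
        deploy/rollout-id: "build-123"   # bump this value on every CI run

Option 1 can simply be appended to the deploy job in .gitlab-ci.yml right after the kubectl apply step.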
###Instruction: I am testing automation by applying Gitlab CI/CD to a GKE cluster. The app is successfully deployed, but the source code changes are not applied (eg renaming the html title). I have confirmed that the code has been changed in the gitlab repository master branch. No other branch. CI/CD simply goes through the process below. push code to master branch builds the NextJS code builds the docker image and pushes it to GCR pulls the docker image and deploys it in. The content of the menifest file is as follows. .gitlab-ci.yml stages: - build-push - deploy image: docker:19.03.12 variables: GCP_PROJECT_ID: PROJECT_ID.. GKE_CLUSTER_NAME: cicd-micro-cluster GKE_CLUSTER_ZONE: asia-northeast1-b DOCKER_HOST: tcp://docker:2375/ DOCKER_TLS_CERTDIR: "" REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID} DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME} DOCKER_IMAGE_TAG: latest services: - docker:19.03.12-dind build-push: stage: build-push before_script: - docker info - echo "$GKE_ACCESS_KEY" > key.json - docker login -u _json_key --password-stdin https://gcr.io < key.json script: - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG . - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG deploy: stage: deploy image: google/cloud-sdk script: - export USE_GKE_GCLOUD_AUTH_PLUGIN=True - echo "$GKE_ACCESS_KEY" > key.json - gcloud auth activate-service-account --key-file=key.json - gcloud config set project $GCP_PROJECT_ID - gcloud config set container/cluster $GKE_CLUSTER_NAME - gcloud config set compute/zone $GKE_CLUSTER_ZONE - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID - kubectl apply -f deployment.yaml - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags Dockerfile # Install dependencies only when needed FROM node:16-alpine AS deps # Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. RUN apk add --no-cache libc6-compat WORKDIR /app # Install dependencies based on the preferred package manager COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ RUN \ if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ elif [ -f package-lock.json ]; then npm ci; \ elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ else echo "Lockfile not found." && exit 1; \ fi # Rebuild the source code only when needed FROM node:16-alpine AS builder WORKDIR /app COPY --from=deps /app/node_modules ./node_modules COPY . . # Next.js collects completely anonymous telemetry data about general usage. # Learn more here: https://nextjs.org/telemetry # Uncomment the following line in case you want to disable telemetry during the build. # ENV NEXT_TELEMETRY_DISABLED 1 RUN yarn build # If using npm comment out above and use below instead # RUN npm run build # Production image, copy all the files and run next FROM node:16-alpine AS runner WORKDIR /app ENV NODE_ENV production # Uncomment the following line in case you want to disable telemetry during runtime. 
# ENV NEXT_TELEMETRY_DISABLED 1 RUN addgroup --system --gid 1001 nodejs RUN adduser --system --uid 1001 nextjs COPY --from=builder /app/public ./public # Automatically leverage output traces to reduce image size # https://nextjs.org/docs/advanced-features/output-file-tracing COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static USER nextjs EXPOSE 3000 ENV PORT 3000 CMD ["node", "server.js"] deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: frontweb-lesson-prod labels: app: frontweb-lesson spec: selector: matchLabels: app: frontweb-lesson template: metadata: labels: app: frontweb-lesson spec: containers: - name: frontweb-lesson-prod-app image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest ports: - containerPort: 3000 resources: requests: cpu: 200m --- apiVersion: v1 kind: Service metadata: name: frontweb-lesson-prod-svc labels: app: frontweb-lesson spec: selector: app: frontweb-lesson ports: - name: http port: 80 protocol: TCP targetPort: 3000 type: LoadBalancer loadBalancerIP: "EXTERNAL_IP.." Is there something I'm missing? ### Response: By default,imagepullpolicy will be Always but there could be chances if there is no change in the deployment file when applying it might not update the deployment. As you are using the same label each time latest. As there different between kubectl apply and kubectl patch command What you can do is add minor label change or annotation change in deployment and check image will get updated with kubectl apply command too otherwise it will be mostly unchange response of kubectl apply Ref : imagepullpolicy You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
I am testing automation by applying Gitlab CI/CD to a GKE cluster. The app is successfully deployed, but the source code changes are not applied (eg renaming the html title). I have confirmed that the code has been changed in the gitlab repository master branch. No other branch. CI/CD simply goes through the process below. push code to master branch builds the NextJS code builds the docker image and pushes it to GCR pulls the docker image and deploys it in. The content of the menifest file is as follows. .gitlab-ci.yml stages: - build-push - deploy image: docker:19.03.12 variables: GCP_PROJECT_ID: PROJECT_ID.. GKE_CLUSTER_NAME: cicd-micro-cluster GKE_CLUSTER_ZONE: asia-northeast1-b DOCKER_HOST: tcp://docker:2375/ DOCKER_TLS_CERTDIR: "" REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID} DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME} DOCKER_IMAGE_TAG: latest services: - docker:19.03.12-dind build-push: stage: build-push before_script: - docker info - echo "$GKE_ACCESS_KEY" > key.json - docker login -u _json_key --password-stdin https://gcr.io < key.json script: - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG . - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG deploy: stage: deploy image: google/cloud-sdk script: - export USE_GKE_GCLOUD_AUTH_PLUGIN=True - echo "$GKE_ACCESS_KEY" > key.json - gcloud auth activate-service-account --key-file=key.json - gcloud config set project $GCP_PROJECT_ID - gcloud config set container/cluster $GKE_CLUSTER_NAME - gcloud config set compute/zone $GKE_CLUSTER_ZONE - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID - kubectl apply -f deployment.yaml - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags Dockerfile # Install dependencies only when needed FROM node:16-alpine AS deps # Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. RUN apk add --no-cache libc6-compat WORKDIR /app # Install dependencies based on the preferred package manager COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ RUN \ if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ elif [ -f package-lock.json ]; then npm ci; \ elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ else echo "Lockfile not found." && exit 1; \ fi # Rebuild the source code only when needed FROM node:16-alpine AS builder WORKDIR /app COPY --from=deps /app/node_modules ./node_modules COPY . . # Next.js collects completely anonymous telemetry data about general usage. # Learn more here: https://nextjs.org/telemetry # Uncomment the following line in case you want to disable telemetry during the build. # ENV NEXT_TELEMETRY_DISABLED 1 RUN yarn build # If using npm comment out above and use below instead # RUN npm run build # Production image, copy all the files and run next FROM node:16-alpine AS runner WORKDIR /app ENV NODE_ENV production # Uncomment the following line in case you want to disable telemetry during runtime. 
# ENV NEXT_TELEMETRY_DISABLED 1 RUN addgroup --system --gid 1001 nodejs RUN adduser --system --uid 1001 nextjs COPY --from=builder /app/public ./public # Automatically leverage output traces to reduce image size # https://nextjs.org/docs/advanced-features/output-file-tracing COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static USER nextjs EXPOSE 3000 ENV PORT 3000 CMD ["node", "server.js"] deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: frontweb-lesson-prod labels: app: frontweb-lesson spec: selector: matchLabels: app: frontweb-lesson template: metadata: labels: app: frontweb-lesson spec: containers: - name: frontweb-lesson-prod-app image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest ports: - containerPort: 3000 resources: requests: cpu: 200m --- apiVersion: v1 kind: Service metadata: name: frontweb-lesson-prod-svc labels: app: frontweb-lesson spec: selector: app: frontweb-lesson ports: - name: http port: 80 protocol: TCP targetPort: 3000 type: LoadBalancer loadBalancerIP: "EXTERNAL_IP.." Is there something I'm missing?
The Azure spot VMs are mostly used when your workload can handle sudden interruptions without damaging the process too much. When it comes to the percentage, the formula looks like this: (total_evictions * time_lost) / total_runtime = eviction rate So for example, let's say that you want to run a VM for 100 hours. The eviction rate alone doesn't tell you how many evictions to expect or how long each one lasts; quite different interruption patterns can produce similar rates, for example: // 50 evictions, each removed 0.1h of computation time. (50 * 0.1) / 100 = 0.05 = 5% // 10 evictions, each removed 1h of computation time. (10 * 1) / 100 = 0.10 = 10% // 2 evictions, each removed 5h of computation time. (2 * 5) / 100 = 0.10 = 10% So there is no exact answer to your question, and also no average, only speculation and mathematics :)
###Instruction: I am trying to evaluate whether I should use Spot VMs and the measure for availability is given as a percentage by Azure. https://www.linkedin.com/pulse/viewing-eviction-rates-spot-virtual-machines-azure-portal-dave-callan/ Does anyone know where I can find the average time a VM is available? Optimally I would like to have a Bell curve to estimate the probability of my jobs finishing. ### Response: The Azure spot VMs are mostly used when your workload can handle sudden interruptions without damaging the process too much. When it comes to the percentage, the formula looks like this: (total_evictions * time_lost) / total_runtime = eviction rate So for example, let's say that you want to run a VM for 100 hours, and the eviction rate is 5%, the number of evictions is hard to tell, but could be something like the following examples: // 50 evictions, each removed 0.1h of computation time. (50 * 0.1) / 100 = 0.05 = 5% // 10 evictions, each removed 1h of computation time. (10 * 1) / 100 = 0.10 = 10% // 2 evictions, each removed 5h of computation time. (2 * 5) / 100 = 0.10 = 10% So there is no exact answer to your question, and also no average, only speculation, and mathematics :)
I am trying to evaluate whether I should use Spot VMs and the measure for availability is given as a percentage by Azure. https://www.linkedin.com/pulse/viewing-eviction-rates-spot-virtual-machines-azure-portal-dave-callan/ Does anyone know where I can find the average time a VM is available? Optimally I would like to have a Bell curve to estimate the probability of my jobs finishing.
You can try using a Service without selectors with an EndPointSlice which refers to a Service from each namespace. Create svc-a in namespace X which selects / points to pods in namespace X. The Service will be available at svc-a.x.svc.cluster.local. Create svc-a in namespace Y which selects / points to pods in namespace Y. The Service will be available at svc-a.y.svc.cluster.local. Create a svc-a in namespace Z without selectors. apiVersion: v1 kind: Service metadata: name: svc-a spec: ports: - protocol: TCP port: 80 targetPort: 9376 The Service will be available at svc-a.z.svc.cluster.local. Create an EndpointSlice in namespace Z with svc-a.x.svc.cluster.local and svc-a.y.svc.cluster.local as endpoints and attach it to svc-a: apiVersion: discovery.k8s.io/v1 kind: EndpointSlice metadata: name: svc-a labels: kubernetes.io/service-name: svc-a addressType: FQDN ports: - name: http protocol: TCP port: 80 endpoints: - addresses: - "svc-a.x.svc.cluster.local" - "svc-a.y.svc.cluster.local" So now you'll have svc-a.z.svc.cluster.local available in any namespace pointing to backends in both the X and Y namespaces.
###Instruction: I have a service "A" deployed in "X" namespace. "Z" service from "P" namespace, calls it on svc-a.x.svc.cluster.local I have to deploy staging of service "A" in Y namespace as well and I want to register these IPs under svc-a.x.svc.cluster.local Is there any way to do it? I want to the main service to select pods from different namespaces. ### Response: You can try using a Service without selectors with an EndPointSlice which refers to a Service from each namespace. Create svc-a in namespace X which selects / points to pods in namespace X. The Service will be available at svc-a.x.svc.cluster.local. Create svc-a in namespace Y which selects / points to pods in namespace Y. The Service will be available at svc-a.y.svc.cluster.local. Create a svc-a in namespace Z without selectors. apiVersion: v1 kind: Service metadata: name: svc-a spec: ports: - protocol: TCP port: 80 targetPort: 9376 The Service will be available at svc-a.z.svc.cluster.local. Create an EndpointSlice in namespace Z with svc-a.x.svc.cluster.local and svc-a.y.svc.cluster.local as endpoints and attach it to svc-a: apiVersion: discovery.k8s.io/v1 kind: EndpointSlice metadata: name: svc-a labels: kubernetes.io/service-name: svc-a addressType: FQDN ports: - name: http protocol: TCP port: 80 endpoints: - addresses: - "svc-a.x.svc.cluster.local" - "svc-a.y.svc.cluster.local" So now you'll have svc-a.z.svc.cluster.local available in any namespace pointing to backends in both the X and Y namespaces.
I have a service "A" deployed in the "X" namespace. Service "Z" from the "P" namespace calls it on svc-a.x.svc.cluster.local. I have to deploy a staging version of service "A" in the Y namespace as well, and I want to register these IPs under svc-a.x.svc.cluster.local. Is there any way to do it? I want the main service to select pods from different namespaces.
Run the command 'kubectl config view' or 'kubectl cluster-info' to check the IP address of the Kubernetes control plane. In my case it is 10.0.0.2. $ kubectl config view apiVersion: v1 clusters: cluster: certificate-authority-data: DATA+OMITTED server: https://10.0.0.2:6443 Or $ kubectl cluster-info Kubernetes control plane is running at https://10.0.0.2:6443 Then try to telnet to the Kubernetes control plane. If port 6443 is blocked, the connection will hang at the output below; press Control + C on your keyboard to terminate the telnet command: telnet 10.0.0.2 6443 Trying 10.0.0.2... Go to your Firewall Rules, add port 6443 and make sure to allow all instances in the network. Then try to telnet to the Kubernetes control plane once again and you should be able to connect now: $ telnet 10.0.0.2 6443 Trying 10.0.0.2... Connected to 10.0.0.2. Escape character is '^]'. Try to join the worker nodes now. You can run the command 'kubeadm token create --print-join-command' to create a new token in case you forgot to save the old one. Run 'kubectl get nodes' on the control plane to see the nodes join the cluster: $ kubectl get nodes NAME STATUS ROLES AGE VERSION k8s Ready control-plane 57m v1.25.0 wk8s-node-0 Ready <none> 36m v1.25.0 wk8s-node-1 Ready <none> 35m v1.25.0
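As an example only, if the nodes run Ubuntu with ufw enabled (your environment may use a different host firewall or a cloud firewall rule instead), opening the port would look like:

sudo ufw allow 6443/tcp
sudo ufw reload
# re-test from the worker node
telnet 10.0.0.2 6443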
###Instruction: I am novice to k8s, so this might be very simple issue for someone with expertise in the k8s. I am working with two nodes master - 2cpu, 2 GB memory worker - 1 cpu, 1 GB memory OS - ubuntu - hashicorp/bionic64 I did setup the master node successfully and i can see it is up and running vagrant@master:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION master Ready master 29m v1.18.2 Here is token which i have generated vagrant@master:~$ kubeadm token create --print-join-command W0419 13:45:52.513532 16403 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 Issue - But when i try to join it from the worker node i get vagrant@worker:~$ sudo kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 W0419 13:46:17.651819 15987 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: couldn't validate the identity of the API Server: Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.0.2.15:6443: connect: connection refused To see the stack trace of this error execute with --v=5 or higher Here are the ports which are occupied 10.0.2.15:2379 10.0.2.15:2380 10.0.2.15:68 Note i am using CNI from - kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml ### Response: Run the command 'kubectl config view' or 'kubectl cluster-info' to check the IP address of Kubernetes control plane. In my case it is 10.0.0.2. $ kubectl config view apiVersion: v1 clusters: cluster: certificate-authority-data: DATA+OMITTED server: https://10.0.0.2:6443 Or $ kubectl cluster-info Kubernetes control plane is running at https://10.0.0.2:6443 Tried to telnet the Kubernetes control plane. telnet 10.0.0.2 6443 Trying 10.0.0.2... Press Control + C in your keyboard to terminate the telnet command. Go to your Firewall Rules and add port 6443 and make sure to allow all instances in the network. Then try to telnet the Kubernetes control plane once again and you should be able to connect now: $ telnet 10.0.0.2 6443 Trying 10.0.0.2... Connected to 10.0.0.2. Escape character is '^]'. Try to join the worker nodes now. You can run the command 'kubeadm token create --print-join-command' to create new token just in case you forgot to save the old one. Run 'kubectl get nodes' on the control-plane to see this node join the cluster $ kubectl get nodes NAME STATUS ROLES AGE VERSION k8s Ready control-plane 57m v1.25.0 wk8s-node-0 Ready 36m v1.25.0 wk8s-node-1 Ready 35m v1.25.0
I am novice to k8s, so this might be very simple issue for someone with expertise in the k8s. I am working with two nodes master - 2cpu, 2 GB memory worker - 1 cpu, 1 GB memory OS - ubuntu - hashicorp/bionic64 I did setup the master node successfully and i can see it is up and running vagrant@master:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION master Ready master 29m v1.18.2 Here is token which i have generated vagrant@master:~$ kubeadm token create --print-join-command W0419 13:45:52.513532 16403 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 Issue - But when i try to join it from the worker node i get vagrant@worker:~$ sudo kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521 W0419 13:46:17.651819 15987 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: couldn't validate the identity of the API Server: Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.0.2.15:6443: connect: connection refused To see the stack trace of this error execute with --v=5 or higher Here are the ports which are occupied 10.0.2.15:2379 10.0.2.15:2380 10.0.2.15:68 Note i am using CNI from - kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
I have never used websockets with Kubernetes before, but here is the documentation about the token authentication method for websocket browser clients: https://github.com/kubernetes/kubernetes/pull/47740 You must send the token in the subprotocol parameter, with the token base64-encoded. So it should be: wscat -s "base64url.bearer.authorization.k8s.io.$TOKEN_IN_BASE64","base64.binary.k8s.io" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg -6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh"
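From a browser, a rough sketch of the same idea looks like this (plain browser WebSocket API; the URL and subprotocol names simply mirror the wscat example above, and the token must be base64url-encoded without padding):

// base64url-encode the bearer token without padding (token assumed to be an ASCII JWT)
const encoded = btoa(token).replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

const ws = new WebSocket(
  'wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg-6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh',
  ['base64url.bearer.authorization.k8s.io.' + encoded, 'base64.binary.k8s.io']
);

ws.onmessage = (event) => console.log(event.data);
ws.onerror = (err) => console.error(err);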
###Instruction: I'm trying to exec kubernetes pod using the Websocket, as per the kubernetes document it can be achieved through passing the Bearer THETOKEN When using bearer token authentication from an http client, the API server expects an Authorization header with a value of Bearer THETOKEN Here is the sample for wscat passing Header Value --header "Authorization: Bearer $TOKEN" to establish exec to pod and the connection went successfully / # wscat --header "Authorization: Bearer $TOKEN" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg -6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh" But when it comes to Websocket API connection from web browser How to pass this Beaer Token in the web Socket as per the doc there is no standard way to pass custom header Tried URI Query Parameter access_token= Bearer TOKEN in the API query it doesn't work and the Authentication denied with 403 wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg-%206f69577c74-jxbwn/exec?stdout=1&stdin=1&stderr=1&tty=1&command=%2Fbin%2Fsh&command=-i&access_token=$TOKEN ### Response: I never used websocket with kubernetes before, but here is the documentation about the token authentication method for websocket browser clients https://github.com/kubernetes/kubernetes/pull/47740 You must to send token in subprotocol parameter with the token encoded in base64. So it should be: wscat -s "base64url.bearer.authorization.k8s.io.$TOKEN_IN_BASE64","base64.binary.k8s.io" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg -6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh"
I'm trying to exec kubernetes pod using the Websocket, as per the kubernetes document it can be achieved through passing the Bearer THETOKEN When using bearer token authentication from an http client, the API server expects an Authorization header with a value of Bearer THETOKEN Here is the sample for wscat passing Header Value --header "Authorization: Bearer $TOKEN" to establish exec to pod and the connection went successfully / # wscat --header "Authorization: Bearer $TOKEN" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg -6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh" But when it comes to Websocket API connection from web browser How to pass this Beaer Token in the web Socket as per the doc there is no standard way to pass custom header Tried URI Query Parameter access_token= Bearer TOKEN in the API query it doesn't work and the Authentication denied with 403 wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg-%206f69577c74-jxbwn/exec?stdout=1&stdin=1&stderr=1&tty=1&command=%2Fbin%2Fsh&command=-i&access_token=$TOKEN
To do this you would have to configure a Kubernetes Service and Ingress, and then configure your DNS. Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal. You could use dnsmasq to allow for wildcard resolution, e.g. *.microk8s.hostname.internal. You can test the DNS resolution using nslookup or dig.
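A rough sketch of the DNS side (the IP is a placeholder; use whatever address your MicroK8s node or ingress answers on):

# dnsmasq (/etc/dnsmasq.conf): wildcard-resolve everything under the domain
address=/microk8s.hostname.internal/10.0.0.50

# or a plain hosts-file entry for just the one name
10.0.0.50 otherapplication.microk8s.hostname.internal

# verify
nslookup otherapplication.microk8s.hostname.internal

The Ingress for the second application then only needs a host rule for otherapplication.microk8s.hostname.internal, analogous to the jupyter.microk8s.hostname.internal rule already shown in the question.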
###Instruction: I have a k8s cluster where I deploy some containers. The cluster is accessible at microk8s.hostname.internal. At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress. And this works great. Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal. How do I do this? Currently installed addons in microk8s: aasa@bolsrv0891:/snap/bin$ microk8s status microk8s is running high-availability: no addons: enabled: dashboard # (core) The Kubernetes dashboard dns # (core) CoreDNS helm # (core) Helm - the package manager for Kubernetes helm3 # (core) Helm 3 - the package manager for Kubernetes ingress # (core) Ingress controller for external access metrics-server # (core) K8s Metrics Server for API access to service metrics Update 1: If I portforward to my service it works. I have tried this ingress: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress namespace: jupyter-notebook annotations: kubernetes.io/ingress.class: public spec: rules: - host: jupyter.microk8s.hostname.internal http: paths: - path: / pathType: Prefix backend: service: name: jupyter-service port: number: 7070 But I cant access it nor ping it. Chrome says: jupyter.microk8s.hostname.internal’s server IP address could not be found. My service looks like this: apiVersion: v1 kind: Service metadata: name: jupyter-service namespace: jupyter-notebook spec: ports: - name: 7070-8888 port: 7070 protocol: TCP targetPort: 8888 selector: app: jupyternotebook type: ClusterIP status: loadBalancer: {} I can of course ping microk8s.hostname.internal. Update 2: The ingress that is working today that has a context path: microk8s.boliden.internal/myapplication looks like this: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: public nginx.ingress.kubernetes.io/rewrite-target: /$1 name: jupyter-ingress namespace: jupyter-notebook spec: rules: - http: paths: - path: "/jupyter-notebook/?(.*)" pathType: Prefix backend: service: name: jupyter-service port: number: 7070 This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook. ### Response: To do this you would have to configure a kube service, kube ingress and the configure your DNS. Adding an entry into the hosts file would allow DNS resolution to otherapplication.microk8s.hostname.internal You could use dnsmasq to allow for wildcard resolution e.g. *.microk8s.hostname.internal You can test the dns reoslution using nslookup or dig
I have a k8s cluster where I deploy some containers. The cluster is accessible at microk8s.hostname.internal. At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress. And this works great. Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal. How do I do this? Currently installed addons in microk8s: aasa@bolsrv0891:/snap/bin$ microk8s status microk8s is running high-availability: no addons: enabled: dashboard # (core) The Kubernetes dashboard dns # (core) CoreDNS helm # (core) Helm - the package manager for Kubernetes helm3 # (core) Helm 3 - the package manager for Kubernetes ingress # (core) Ingress controller for external access metrics-server # (core) K8s Metrics Server for API access to service metrics Update 1: If I portforward to my service it works. I have tried this ingress: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress namespace: jupyter-notebook annotations: kubernetes.io/ingress.class: public spec: rules: - host: jupyter.microk8s.hostname.internal http: paths: - path: / pathType: Prefix backend: service: name: jupyter-service port: number: 7070 But I cant access it nor ping it. Chrome says: jupyter.microk8s.hostname.internal’s server IP address could not be found. My service looks like this: apiVersion: v1 kind: Service metadata: name: jupyter-service namespace: jupyter-notebook spec: ports: - name: 7070-8888 port: 7070 protocol: TCP targetPort: 8888 selector: app: jupyternotebook type: ClusterIP status: loadBalancer: {} I can of course ping microk8s.hostname.internal. Update 2: The ingress that is working today that has a context path: microk8s.boliden.internal/myapplication looks like this: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: public nginx.ingress.kubernetes.io/rewrite-target: /$1 name: jupyter-ingress namespace: jupyter-notebook spec: rules: - http: paths: - path: "/jupyter-notebook/?(.*)" pathType: Prefix backend: service: name: jupyter-service port: number: 7070 This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.
As @David Maze shared, to use k8's go client to exec command in a pod follow the below code: import ( "io" v1 "k8s.io/api/core/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/kubernetes/scheme" restclient "k8s.io/client-go/rest" "k8s.io/client-go/tools/remotecommand" ) // ExecCmd exec command on specific pod and wait the command's output. func ExecCmdExample(client kubernetes.Interface, config *restclient.Config, podName string, command string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error { cmd := []string{ "sh", "-c", command, } req := client.CoreV1().RESTClient().Post().Resource("pods").Name(podName). Namespace("default").SubResource("exec") option := &v1.PodExecOptions{ Command: cmd, Stdin: true, Stdout: true, Stderr: true, TTY: true, } if stdin == nil { option.Stdin = false } req.VersionedParams( option, scheme.ParameterCodec, ) exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL()) if err != nil { return err } err = exec.Stream(remotecommand.StreamOptions{ Stdin: stdin, Stdout: stdout, Stderr: stderr, }) if err != nil { return err } return nil } Also refer to this link for more information
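A hedged usage sketch of the helper above, targeting the pod from the question (note the helper hardcodes the default namespace, so adjust it if your pod lives elsewhere; imports needed in addition to the ones above: bytes, fmt, log, k8s.io/client-go/rest):

config, err := rest.InClusterConfig()
if err != nil {
    log.Fatal(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    log.Fatal(err)
}

var stdout, stderr bytes.Buffer
cmd := `grep -oP '"name": "\K[^"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json`
// run the command in pod maxscale-0 and collect its output
if err := ExecCmdExample(clientset, config, "maxscale-0", cmd, nil, &stdout, &stderr); err != nil {
    log.Fatal(err)
}
fmt.Println(stdout.String())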
###Instruction: I'm trying to exec a command into a running pod. I'm using go K8sclient to achieve this but facing a issue. I also don't know if solution is correct or not. Can anyone please check and provide correct solution? This is my code. namespace := getNamespace() podName := "maxscale-0" config, err := rest.InClusterConfig() if err != nil { log.Fatal(err) } clientset, err := kubernetes.NewForConfig(config) if err != nil { log.Fatal(err) } req := clientset.CoreV1().Pods(namespace).Exec(podName, &corev1.PodExecOptions{ Command: []string{"sh", "-c", "grep -oP '\"name\": \"\\K[^\"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json"}, }) // Set up a stream to capture the output execStream, err := req.Stream() if err != nil { fmt.Println(err) os.Exit(1) } // Print the output buf := new(bytes.Buffer) buf.ReadFrom(execStream) fmt.Println(buf.String()) The error I got is clientset.CoreV1().Pods(namespace).Exec undefined (type "k8s.io/client-go/kubernetes/typed/core/v1".PodInterface has no field or method Exec) ### Response: As @David Maze shared, to use k8's go client to exec command in a pod follow the below code: import ( "io" v1 "k8s.io/api/core/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/kubernetes/scheme" restclient "k8s.io/client-go/rest" "k8s.io/client-go/tools/remotecommand" ) // ExecCmd exec command on specific pod and wait the command's output. func ExecCmdExample(client kubernetes.Interface, config *restclient.Config, podName string, command string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error { cmd := []string{ "sh", "-c", command, } req := client.CoreV1().RESTClient().Post().Resource("pods").Name(podName). Namespace("default").SubResource("exec") option := &v1.PodExecOptions{ Command: cmd, Stdin: true, Stdout: true, Stderr: true, TTY: true, } if stdin == nil { option.Stdin = false } req.VersionedParams( option, scheme.ParameterCodec, ) exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL()) if err != nil { return err } err = exec.Stream(remotecommand.StreamOptions{ Stdin: stdin, Stdout: stdout, Stderr: stderr, }) if err != nil { return err } return nil } Also refer to this link for more information
I'm trying to exec a command into a running pod. I'm using go K8sclient to achieve this but facing a issue. I also don't know if solution is correct or not. Can anyone please check and provide correct solution? This is my code. namespace := getNamespace() podName := "maxscale-0" config, err := rest.InClusterConfig() if err != nil { log.Fatal(err) } clientset, err := kubernetes.NewForConfig(config) if err != nil { log.Fatal(err) } req := clientset.CoreV1().Pods(namespace).Exec(podName, &corev1.PodExecOptions{ Command: []string{"sh", "-c", "grep -oP '\"name\": \"\\K[^\"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json"}, }) // Set up a stream to capture the output execStream, err := req.Stream() if err != nil { fmt.Println(err) os.Exit(1) } // Print the output buf := new(bytes.Buffer) buf.ReadFrom(execStream) fmt.Println(buf.String()) The error I got is clientset.CoreV1().Pods(namespace).Exec undefined (type "k8s.io/client-go/kubernetes/typed/core/v1".PodInterface has no field or method Exec)
The question (title) is misleading. There are 2 parts: how to containerize the app (in this case a Flutter app), and how to deploy the app on the k8s cluster. To deal with the first part, you have a Dockerfile. There is room for improvement, but I think this Dockerfile should work. Then you need to build a container image; please refer to the official documentation. Finally, you need to push the created container image to some repository. (We could skip this pushing stage, but to keep things simple I am suggesting pushing the image.) For the second part, you should be familiar with basic Kubernetes concepts. You can run the container from a previously built container image with the help of the k8s Pod object. To access the application, you need one more k8s object, and that is the Service (LoadBalancer or NodePort type). I know things are a bit complex at the initial levels, but please follow a good course/book. I have gone through the blog post you shared, and it talks only about the first part and not the second part; you will have a container image at the end of that blog post. I suggest going through the free playground offered by Killer Shell if you don't want to set up a k8s cluster on your own, which is again another learning curve. Skip the first tile on this page since it is just a playground; from the second tile onwards they have enough material. Improvements for the edited question: server.sh: maintaining a startup script is quite standard practice if you have complex logic to start the process. We can skip this file, but in that case a few steps will be added to the Dockerfile. kubectl get pods does not show you images, but it will show you running pods in the cluster (in the default namespace). Not sure how you ran and connected to the cluster, but try to add the output of the command. A few pointers to improve the Dockerfile: Use a small base image footprint. ubuntu:xx has many packages pre-installed and maybe you don't need all of them; Ubuntu has slim images, or try to find a Flutter image. Try to reduce RUN statements. You can club 2-3 commands into one, which will reduce the layers in the image. Instead of RUN git clone, you should clone the code before docker build and COPY/ADD the code into the container image. In this way you can control which files you need to add to the image, and you also don't need to have the git tool installed in the container image. RUN ["chmod", "+x", "/app/server/server.sh"] and RUN mkdir are both not needed at all if you write the Dockerfile smartly. Dockerfiles should be clean, crisp, and precise. PS: Sorry, but this is not a classroom section. I know this is a bit of a complex thing for beginners, but please try to learn from some good sources/books.
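To make the two parts concrete, a hedged sketch (registry, image name, and tag are placeholders):

# Part 1: build and push the image from the Dockerfile in the question
docker build -t registry.example.com/flutter-web:0.1.0 .
docker push registry.example.com/flutter-web:0.1.0

# Part 2: expose the Deployment from the question with a reachable Service
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  type: NodePort            # or LoadBalancer on a cloud cluster
  selector:
    app: client
  ports:
    - port: 80
      targetPort: 5000      # server.sh in the question listens on 5000, not 3000

The Deployment's image field would then reference registry.example.com/flutter-web:0.1.0 instead of the bare front name.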
###Instruction: I am just wondering to know how should I create a docker file for a Flutter app then deploy it on a Kubernetes cluster? I found the following Dockerfile and server.sh script from this website but I am not sure if this a correct way of doing it? # Install Operating system and dependencies FROM ubuntu:22.04 RUN apt-get update RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 RUN apt-get clean # download Flutter SDK from Flutter Github repo RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter # Set flutter environment path ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}" # Run flutter doctor RUN flutter doctor # Enable flutter web RUN flutter channel master RUN flutter upgrade RUN flutter config --enable-web # Copy files to container and build RUN mkdir /app/ COPY . /app/ WORKDIR /app/ RUN flutter build web # Record the exposed port EXPOSE 5000 # make server startup script executable and start the web server RUN ["chmod", "+x", "/app/server/server.sh"] ENTRYPOINT [ "/app/server/server.sh"] And: #!/bin/bash # Set the port PORT=5000 # Stop any program currently running on the set port echo 'preparing port' $PORT '...' fuser -k 5000/tcp # switch directories cd build/web/ # Start the server echo 'Server starting on port' $PORT '...' python3 -m http.server $PORT I did all the steps and it seems it works fine but as long as I use skaffold I don't know how/where to put the following command to automate this step as well (I have already ran this command manually): docker run -i -p 8080:5000 -td flutter_docker I still like to know was the above files, proper/official way to doing that or there is a better way of it? EDIT: I created the following deployment & service file to put the deploy the created image on Kubernetes local Kind cluster but when I run kubectl get pods I can not find this image but I find it by doing docker images. Why this happens and how can I put in on a Kubernetes pod instead of docker images? apiVersion: apps/v1 kind: Deployment metadata: name: client-depl spec: replicas: 1 selector: matchLabels: app: client template: metadata: labels: app: client spec: containers: - name: client image: front --- apiVersion: v1 kind: Service metadata: name: client-srv spec: selector: app: client ports: - name: client protocol: TCP port: 3000 targetPort: 3000 ### Response: The question (title) is misleading. There are 2 parts. How to containerize the app (in this case flutter app). How to deploy the app on the k8s cluster. To deal with the first part, You have Dockerfile. There is room for improvement but I think this Dockerfile should work. Then you need to build a container image. Please refer to the official documentation. Finally, you need to push this created container image to some repository. (We may skip this pushing stage but to make things simple I am suggesting pushing the image) For the second part, you should be familiar with basic Kubernetes concepts. You can run the container from a previously built container image with the help of the k8s Pod object. To access the application, you need one more k8s object and that is the Service (Load balancer or Node port type). I know things are a bit complex (at initial levels) but please follow a good course/book I have gone through the blog post you shared, and this talks only about the first part and not the second part. You will have a container image at the end of this blog post. 
I suggest going through the free playground offered by killer shell, if you don't want to set up a k8s cluster on your own, that is again another learning curve. Skip the first tile on this page this is just a playground, but from the second tile, they have enough material. Improvements for Edited Question: server.sh: maintaining a startup script is quite standard practice if you have complex logic to start the process. We can skip this file but in that case, a few steps will be added to Dockerfile. kubectl get pods does not show you images but it will show you running pods in the cluster (in default namespace). Not sure how you ran and connected to the cluster. But try to add output of the command. few pointers to impve dockerfile: Use a small base image footprint. Ubuntu: xx has many packages pre-installed, maybe you don't need all of them. Ubuntu has slim images also or try to find a flutter image. Try to reduce Run statements. you can club 2-3 commands in one. this will reduce layers in the image. instead of RUN git clone, you should clone code before docker build and copy/add code in the container image. In this way, you can control which files you need to add to the image. You also don't require to have a git tool installed in the container image. RUN ["chmod", "+x", "/app/server/server.sh"] and RUN mkdir both statements are not needed at all if you write Dockerfile smartly. Dockerfiles should be clean, crisp, and precise. PS: Sorry but this is not a classroom section. I know this is a bit complex thing for beginners. But please try to learn from some good sources/books.
I am just wondering to know how should I create a docker file for a Flutter app then deploy it on a Kubernetes cluster? I found the following Dockerfile and server.sh script from this website but I am not sure if this a correct way of doing it? # Install Operating system and dependencies FROM ubuntu:22.04 RUN apt-get update RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 RUN apt-get clean # download Flutter SDK from Flutter Github repo RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter # Set flutter environment path ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}" # Run flutter doctor RUN flutter doctor # Enable flutter web RUN flutter channel master RUN flutter upgrade RUN flutter config --enable-web # Copy files to container and build RUN mkdir /app/ COPY . /app/ WORKDIR /app/ RUN flutter build web # Record the exposed port EXPOSE 5000 # make server startup script executable and start the web server RUN ["chmod", "+x", "/app/server/server.sh"] ENTRYPOINT [ "/app/server/server.sh"] And: #!/bin/bash # Set the port PORT=5000 # Stop any program currently running on the set port echo 'preparing port' $PORT '...' fuser -k 5000/tcp # switch directories cd build/web/ # Start the server echo 'Server starting on port' $PORT '...' python3 -m http.server $PORT I did all the steps and it seems it works fine but as long as I use skaffold I don't know how/where to put the following command to automate this step as well (I have already ran this command manually): docker run -i -p 8080:5000 -td flutter_docker I still like to know was the above files, proper/official way to doing that or there is a better way of it? EDIT: I created the following deployment & service file to put the deploy the created image on Kubernetes local Kind cluster but when I run kubectl get pods I can not find this image but I find it by doing docker images. Why this happens and how can I put in on a Kubernetes pod instead of docker images? apiVersion: apps/v1 kind: Deployment metadata: name: client-depl spec: replicas: 1 selector: matchLabels: app: client template: metadata: labels: app: client spec: containers: - name: client image: front --- apiVersion: v1 kind: Service metadata: name: client-srv spec: selector: app: client ports: - name: client protocol: TCP port: 3000 targetPort: 3000
You can get the node IP like this: kubectl get node NAME STATUS ROLES AGE VERSION nodeName Ready <none> 15h v1.26.0 Then, with the nodeName: kubectl describe node nodeName That gives you a description of the node, and you should look for this section: Addresses: InternalIP: 10.165.39.165 Hostname: master Ping it to verify the network. Find your hosts file on the Mac and make an entry like: 10.165.39.165 test.local This lets you access the cluster with a domain name. Ping it to verify. You cannot access a ClusterIP service from outside the cluster. To access your pod you have several possibilities: If your service is of type ClusterIP, you can create a temporary connection from your host with a port forward: kubectl port-forward svc/yourservicename localport:podport (I would recommend this.) Or create a service of type NodePort. Then kubectl get svc -o wide shows you the NodePort (between 30000 and 32767). You can then access the Pod via test.local:nodePort or IPaddress:nodePort. Note: If you deployed in a namespace other than default, add -n yournamespace to the kubectl commands. Update: If you want to start colima with an IP address, first find an available address on your local network. You can get your network settings with ifconfig; the network should be the same as that of your Internet router. Look for the subnet mask, most likely 255.255.255.0; the value to pass is then --network-address xxx.xxx.xxx.xxx/24 (if the subnet mask were 255.255.0.0 it would be /16, but I don't think that is the case if you are connecting from home; inside a company network, however, that is possible). Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.
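For instance, for a ClusterIP service (the service name and ports are placeholders):

# forward local port 8080 to port 80 of the service
kubectl port-forward svc/my-service 8080:80
# then open http://localhost:8080 in the browser

# for a NodePort service, look up the assigned port instead
kubectl get svc my-service -o wide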
###Instruction: I'm running colima with kubernetes like: colima start --kuberenetes I created a few running pods, and I want to see access the through the browsers. But I don't know what is the colima IP (or kubernetes node IP). help appreciated ### Response: You can get the nodeIp so: kubectl get node NAME STATUS ROLES AGE VERSION nodeName Ready <none> 15h v1.26.0 Then with the nodeName: kubectl describe node nodeName That gives you a descrition of the node and you should look for this section: Addresses: InternalIP: 10.165.39.165 Hostname: master Ping it to verify the network. Find your host file on Mac and make an entry like: 10.165.39.165 test.local This let you access the cluster with a domain name. Ping it to verify. You can not access from outside the cluster a ClusterIp. To access your pod you have several possibilities. if your service is type ClusterIp, you can create a temporary connection from your host with a port forward. kubectl port-forward svc/yourservicename localport:podport (i would raccomend this) create a service type: NodePort Then kubectl get svc -o wide Shows you the NodePort: between(30000-32000). You can access now the Pod by: test.local:nodePort or Ipaddress:NodePort. Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands. Update: if you want to start colima with an ipAddress, first find one of your local network which is available. Your network setting you can get with: ifconfig find the network. Should be the same of that of your Internet router. Look for the subnet. Most likely 255.255.255.0. The value to pass then: --network-address xxx.xxx.xxx.xxx/24 In case the subnet is 255.255.0.0 then /16. But i dont think, if you are connect from home. Inside a company however this is possible. Again check with ping and follow the steps from begining to verify the kubernetes node configuration.
I'm running colima with Kubernetes like this: colima start --kubernetes I created a few running pods, and I want to access them through the browser. But I don't know what the colima IP (or Kubernetes node IP) is. Help appreciated.
In Kubernetes, a Pod (or multiple Pods backing the same service) runs under a ServiceAccount; that is the way it is designed. rest.InClusterConfig reads the token mounted into the Pod for that ServiceAccount, so you choose the account by setting it on the Pod rather than overriding it in code. This ServiceAccount can be a specific one that you create; you don't have to use the default ServiceAccount in a Namespace.
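A minimal sketch, assuming a hypothetical ServiceAccount named my-api-reader (bind whatever RBAC Role/ClusterRole it needs separately):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-api-reader
  namespace: my-namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-api-reader   # rest.InClusterConfig() uses this account's mounted token
      containers:
        - name: my-app
          image: my-app:1.0.0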
###Instruction: Is it possible to specify or change the service account to be used when accessing the kube api from within the cluster using rest.InClusterConfig in golang? It seems to use the default service account (or the service account the pod running is under) but i want to use another service account. I am aware that i can use BuildConfigFromFlags and use the configs from a config file that may be tied to a service account, but i wanted to see if it is possible to override the service account with rest.InClusterConfig ### Response: In Kubernetes, a Pod (or multiple for the same service) has a ServiceAccount. That is the way it is designed. This ServiceAccount can be a specific that you create, you don't have to use a default ServiceAccount in a Namespace.
Is it possible to specify or change the service account to be used when accessing the kube API from within the cluster using rest.InClusterConfig in Golang? It seems to use the default service account (or the service account the pod is running under), but I want to use another service account. I am aware that I can use BuildConfigFromFlags and use the config from a kubeconfig file that may be tied to a service account, but I wanted to see if it is possible to override the service account with rest.InClusterConfig.
Kindly check whether you have a stacked etcd datastore as part of your k8s cluster. etcd requires a majority of members (quorum, i.e. floor(n/2) + 1) to be up; with 3 nodes it needs at least 2 running and can therefore tolerate only one failure. So in your case, with 2 of the 3 masters down, quorum is lost and the cluster is non-operational.
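To confirm, you can query etcd from a surviving master; a hedged example (the certificate paths are placeholders and differ per distribution, so substitute the ones your RKE2 install actually uses):

etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<etcd-ca.crt> --cert=<etcd-client.crt> --key=<etcd-client.key> \
  endpoint health --cluster

etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<etcd-ca.crt> --cert=<etcd-client.crt> --key=<etcd-client.key> \
  member list -w table

With quorum lost, these calls will fail or time out until at least two of the three etcd members are back.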
###Instruction: Node IP Role OS 192.x.x.11 Master 1 RHEL8 192.x.x.12 Master 2 RHEL8 192.x.x.13 Master 3 RHEL8 192.x.x.16 VIP Use-Cases No of Masters Ready or Running Expected Actual 3 Masters Ingress Created with VIP IP and ping to VIP should work VIP is working 2 Masters Ingress Created with VIP IP and ping to VIP should work VIP is working 1 Master Ingress Created with VIP IP and ping to VIP should work VIP is not working, Kubectl is not responding I have Created a RKE2 HA Cluster with kube-vip and the cluster is working fine only when at least 2 masters are in Running, but I want to test a use case where only 1 master is available the VIP should be able to ping and any ingress created with VIP address should work. In my case when 2 masters are down I'm facing an issue with kube-vip-ds pod, when i check the logs using crictl command I'm getting the below error can someone suggest to me how to reslove this issue. E0412 12:32:20.733320 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: etcdserver: request timed out E0412 12:32:20.733715 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: etcdserver: request timed out E0412 12:32:25.812202 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field E0412 12:32:25.830219 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field E0412 12:33:27.204128 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock) E0412 12:33:27.504957 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock) E0412 12:34:29.346104 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock) E0412 12:34:29.354454 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock) Thanks. ### Response: Kindly check if you have stacked up etcd datastore as part of your k8s cluster. etcd for its quorum requires at least 2 masters to be running and in that case failure toleration is n-1, for 3 nodes it shall tolerate only one failure..so in your case as 2 masters are down your cluster is non-operational
Node IP Role OS 192.x.x.11 Master 1 RHEL8 192.x.x.12 Master 2 RHEL8 192.x.x.13 Master 3 RHEL8 192.x.x.16 VIP Use-Cases No of Masters Ready or Running Expected Actual 3 Masters Ingress Created with VIP IP and ping to VIP should work VIP is working 2 Masters Ingress Created with VIP IP and ping to VIP should work VIP is working 1 Master Ingress Created with VIP IP and ping to VIP should work VIP is not working, Kubectl is not responding I have Created a RKE2 HA Cluster with kube-vip and the cluster is working fine only when at least 2 masters are in Running, but I want to test a use case where only 1 master is available the VIP should be able to ping and any ingress created with VIP address should work. In my case when 2 masters are down I'm facing an issue with kube-vip-ds pod, when i check the logs using crictl command I'm getting the below error can someone suggest to me how to reslove this issue. E0412 12:32:20.733320 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: etcdserver: request timed out E0412 12:32:20.733715 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: etcdserver: request timed out E0412 12:32:25.812202 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field E0412 12:32:25.830219 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field E0412 12:33:27.204128 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock) E0412 12:33:27.504957 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock) E0412 12:34:29.346104 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock) E0412 12:34:29.354454 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock) Thanks.
Using the Secrets Store CSI Driver, you can configure the SecretProviderClass to use a workload identity by setting the clientID in the SecretProviderClass. You'll need to use the client ID of your user-assigned managed identity and change the usePodIdentity and useVMManagedIdentity settings to false. With this approach, you don't need to add any additional code in your app to retrieve the secrets. Instead, you can mount a secrets store (using the CSI driver) as a volume mount in your pod and have secrets loaded as environment variables, which is documented here. This doc will walk you through setting it up on Azure, but at a high level here is what you need to do: Register the EnableWorkloadIdentityPreview feature using the Azure CLI Create an AKS cluster using the Azure CLI with the azure-keyvault-secrets-provider add-on enabled and the --enable-oidc-issuer and --enable-workload-identity flags set Create an Azure Key Vault and set your secrets Create an Azure user-assigned managed identity and set an access policy on the key vault for the managed identity's client ID Connect to the AKS cluster and create a Kubernetes ServiceAccount with the annotations and labels that enable it for Azure workload identity Create an Azure identity federated credential for the managed identity using the AKS cluster's OIDC issuer URL and the Kubernetes ServiceAccount as the subject Create a Kubernetes SecretProviderClass using clientID to use workload identity, adding a secretObjects block to enable syncing objects as environment variables using the Kubernetes secret store Create a Kubernetes Deployment with a label to use workload identity, the serviceAccountName set to the service account you created above, a volume using CSI and the secret provider class you created above, a volumeMount, and finally environment variables in your container using the valueFrom and secretKeyRef syntax to mount from your secret object store. Hope that helps.
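Tied back to the SecretProviderClass from the question, the relevant change is roughly the following sketch (the clientID, key vault name, and tenant values are placeholders; the object name MongoUsername is the one from the error message):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-workload-identity
spec:
  provider: azure
  parameters:
    clientID: "<client-id-of-the-user-assigned-managed-identity>"
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    keyvaultName: "<keyvault-name>"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: MongoUsername
          objectType: secret

The Deployment's pod template additionally needs the azure.workload.identity/use: "true" label and serviceAccountName pointing at the federated ServiceAccount described in the steps above.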
###Instruction: I have a docker container python app deployed on a kubernetes cluster on Azure (I also tried on a container app). I'm trying to connect this app to Azure key vault to fetch some secrets. I created a managed identity and assigned it to both but the python app always fails to find the managed identity to even attempt connecting to the key vault. The Managed Identity role assignments: Key Vault Contributor -> on the key vault Managed Identity Operator -> Managed Identity Azure Kubernetes Service Contributor Role, Azure Kubernetes Service Cluster User Role, Managed Identity Operator -> on the resource group that includes the cluster Also on the key vault Access policies I added the Managed Identity and gave it access to all key, secrets, and certs permissions (for now) Python code: credential = ManagedIdentityCredential() vault_client = SecretClient(vault_url=key_vault_uri, credential=credential) retrieved_secret = vault_client.get_secret(secret_name) I keep getting the error: azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8" Content: no azure identity found for request clientID So at some point I attempted to add the managed identity clientID in the cluster secrets and load it from there and still got the same error: Python code: def get_kube_secret(self, secret_name): kube_config.load_incluster_config() v1_secrets = kube_client.CoreV1Api() string_secret = str(v1_secrets.read_namespaced_secret(secret_name, "redacted_namespace_name").data).replace("'", "\"") json_secret = json.loads(string_secret) return json_secret def decode_base64_string(self, encoded_string): decoded_secret = base64.b64decode(encoded_string.strip()) decoded_secret = decoded_secret.decode('UTF-8') return decoded_secret managed_identity_client_id_secret = self.get_kube_secret('managed-identity-credential')['clientId'] managed_identity_client_id = self.decode_base64_string(managed_identity_client_id_secret) Update: I also attempted to use the secret store CSI driver, but I have a feeling I'm missing a step there. Should the python code be updated to be able to use the secret store CSI driver? 
# This is a SecretProviderClass using user-assigned identity to access the key vault apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: azure-kvname-user-msi spec: provider: azure parameters: usePodIdentity: "false" useVMManagedIdentity: "true" # Set to true for using managed identity userAssignedIdentityID: "$CLIENT_ID" # Set the clientID of the user-assigned managed identity to use vmmanagedidentityclientid: "$CLIENT_ID" keyvaultName: "$KEYVAULT_NAME" # Set to the name of your key vault cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud objects: "" tenantId: "$AZURE_TENANT_ID" Deployment Yaml apiVersion: apps/v1 kind: Deployment metadata: name: backend namespace: redacted_namespace labels: app: backend spec: replicas: 1 selector: matchLabels: app: backend template: metadata: labels: app: backend spec: containers: - name: backend image: redacted_image ports: - name: http containerPort: 80 - name: https containerPort: 443 imagePullPolicy: Always resources: # You must specify requests for CPU to autoscale # based on CPU utilization requests: cpu: "250m" env: - name: test-secrets valueFrom: secretKeyRef: name: test-secrets key: test-secrets volumeMounts: - name: test-secrets mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: test-secrets csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "azure-kvname-user-msi" dnsPolicy: ClusterFirst Update 16/01/2023 I followed the steps in the answers and the linked docs to the letter, even contacted Azure support and followed it step by step with them on the phone and the result is still the following error: "failed to process mount request" err="failed to get objectType:secret, objectName:MongoUsername, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://<RedactedVaultName>.vault.azure.net/secrets/<RedactedSecretName>/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=<RedactedClientId>&resource=https%3A%2F%2Fvault.azure.net" ### Response: Using the Secrets Store CSI Driver, you can configure the SecretProviderClass to use a workload identity by setting the clientID in the SecretProviderClass. You'll need to use the client ID of your user assigned managed identity and change the usePodIdentity and useVMManagedIdentity setting to false. With this approach, you don't need to add any additional code in your app to retrieve the secrets. Instead, you can mount a secrets store (using CSI driver) as a volume mount in your pod and have secrets loaded as environment variables which is documented here. 
This doc will walk you through setting it up on Azure, but at a high level here is what you need to do:
1. Register the EnableWorkloadIdentityPreview feature using the Azure CLI.
2. Create an AKS cluster using the Azure CLI with the azure-keyvault-secrets-provider add-on enabled and the --enable-oidc-issuer and --enable-workload-identity flags set.
3. Create an Azure Key Vault and set your secrets.
4. Create an Azure user-assigned managed identity and set an access policy on the key vault for the managed identity's client ID.
5. Connect to the AKS cluster and create a Kubernetes ServiceAccount with the annotations and labels that enable Azure workload identity.
6. Create an Azure federated identity credential for the managed identity, using the AKS cluster's OIDC issuer URL and the Kubernetes ServiceAccount as the subject.
7. Create a Kubernetes SecretProviderClass that uses clientID for workload identity and adds a secretObjects block to enable syncing objects as environment variables via a Kubernetes Secret.
8. Create a Kubernetes Deployment with the label to use workload identity, the serviceAccountName set to the service account you created above, a volume using the CSI driver and the secret provider class you created above, a volumeMount, and finally environment variables in your container using the valueFrom and secretKeyRef syntax to read from the synced Secret.
Hope that helps.
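For step 5, the ServiceAccount is just annotated with the managed identity's client ID and labeled for workload identity. A rough sketch follows; the account name is a placeholder and the exact annotation/label keys should be checked against the workload identity docs for your AKS version:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa                       # placeholder; referenced by serviceAccountName in the Deployment
  namespace: redacted_namespace
  annotations:
    azure.workload.identity/client-id: "<client ID of the user-assigned managed identity>"
  labels:
    azure.workload.identity/use: "true"            # opts this workload in to the workload identity webhook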
I have a docker container python app deployed on a kubernetes cluster on Azure (I also tried on a container app). I'm trying to connect this app to Azure key vault to fetch some secrets. I created a managed identity and assigned it to both but the python app always fails to find the managed identity to even attempt connecting to the key vault. The Managed Identity role assignments: Key Vault Contributor -> on the key vault Managed Identity Operator -> Managed Identity Azure Kubernetes Service Contributor Role, Azure Kubernetes Service Cluster User Role, Managed Identity Operator -> on the resource group that includes the cluster Also on the key vault Access policies I added the Managed Identity and gave it access to all key, secrets, and certs permissions (for now) Python code: credential = ManagedIdentityCredential() vault_client = SecretClient(vault_url=key_vault_uri, credential=credential) retrieved_secret = vault_client.get_secret(secret_name) I keep getting the error: azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8" Content: no azure identity found for request clientID So at some point I attempted to add the managed identity clientID in the cluster secrets and load it from there and still got the same error: Python code: def get_kube_secret(self, secret_name): kube_config.load_incluster_config() v1_secrets = kube_client.CoreV1Api() string_secret = str(v1_secrets.read_namespaced_secret(secret_name, "redacted_namespace_name").data).replace("'", "\"") json_secret = json.loads(string_secret) return json_secret def decode_base64_string(self, encoded_string): decoded_secret = base64.b64decode(encoded_string.strip()) decoded_secret = decoded_secret.decode('UTF-8') return decoded_secret managed_identity_client_id_secret = self.get_kube_secret('managed-identity-credential')['clientId'] managed_identity_client_id = self.decode_base64_string(managed_identity_client_id_secret) Update: I also attempted to use the secret store CSI driver, but I have a feeling I'm missing a step there. Should the python code be updated to be able to use the secret store CSI driver? 
# This is a SecretProviderClass using user-assigned identity to access the key vault apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: azure-kvname-user-msi spec: provider: azure parameters: usePodIdentity: "false" useVMManagedIdentity: "true" # Set to true for using managed identity userAssignedIdentityID: "$CLIENT_ID" # Set the clientID of the user-assigned managed identity to use vmmanagedidentityclientid: "$CLIENT_ID" keyvaultName: "$KEYVAULT_NAME" # Set to the name of your key vault cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud objects: "" tenantId: "$AZURE_TENANT_ID" Deployment Yaml apiVersion: apps/v1 kind: Deployment metadata: name: backend namespace: redacted_namespace labels: app: backend spec: replicas: 1 selector: matchLabels: app: backend template: metadata: labels: app: backend spec: containers: - name: backend image: redacted_image ports: - name: http containerPort: 80 - name: https containerPort: 443 imagePullPolicy: Always resources: # You must specify requests for CPU to autoscale # based on CPU utilization requests: cpu: "250m" env: - name: test-secrets valueFrom: secretKeyRef: name: test-secrets key: test-secrets volumeMounts: - name: test-secrets mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: test-secrets csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "azure-kvname-user-msi" dnsPolicy: ClusterFirst Update 16/01/2023 I followed the steps in the answers and the linked docs to the letter, even contacted Azure support and followed it step by step with them on the phone and the result is still the following error: "failed to process mount request" err="failed to get objectType:secret, objectName:MongoUsername, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://<RedactedVaultName>.vault.azure.net/secrets/<RedactedSecretName>/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=<RedactedClientId>&resource=https%3A%2F%2Fvault.azure.net"
MetalLB doesn't receive and forward any traffic, so "from MetalLB to my ingress" doesn't really make sense. MetalLB just configures Kubernetes services with an external IP and tells your surrounding infrastructure where to find it. Still, with your setup there will be double load-balancing: traffic reaches your cluster and is load-balanced between your nginx pods; nginx handles the request and forwards it to the application, which results in a second load-balancing. But this makes total sense, because if you're using an ingress controller, you don't want all incoming traffic to go through the same pod. Using an ingress controller with MetalLB can be done and can improve stability while performing updates on your application, but it's not required. MetalLB is a solution to implement Kubernetes services of type LoadBalancer when there is no cloud provider to do that for you. So if you don't need a layer 7 load-balancing mechanism, then instead of using a service of type ClusterIP with an ingress controller you can just use a service of type LoadBalancer. MetalLB will give that service an external IP from your pool and announce it to its peers. In that case, when traffic reaches the cluster it will only be load-balanced once.
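To illustrate that second option, a minimal sketch of such a service might look like this; the app name, ports, and the optional address-pool annotation are assumptions, not taken from the question:

apiVersion: v1
kind: Service
metadata:
  name: my-app                                     # placeholder name
  annotations:
    metallb.universe.tf/address-pool: default      # optional: pick a specific MetalLB address pool
spec:
  type: LoadBalancer
  selector:
    app: my-app                                    # placeholder label of the application pods
  ports:
    - port: 80
      targetPort: 8080                             # assumed container port

MetalLB assigns an external IP from the configured pool and announces it, and traffic hitting that IP is then load-balanced across the matching pods once.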
###Instruction: I use MetalLB and Nginx-ingress controller to provide internet access to my apps. I see that in most configurations, the service is set to ClusterIP, as the ingress will send traffic there. My question is: does this end up with double load balancing, that is, one from MetalLB to my ingress, and another from my ingress to the pods via ClusterIP? If so, is this how it is supposed to be, or is there a better way? ### Response: MetalLB doesn't receive and forward any traffic, so "from MetalLB to my ingress" doesn't really make sense. MetalLB just configures Kubernetes services with an external IP and tells your surrounding infrastructure where to find it. Still, with your setup there will be double load-balancing: traffic reaches your cluster and is load-balanced between your nginx pods; nginx handles the request and forwards it to the application, which results in a second load-balancing. But this makes total sense, because if you're using an ingress controller, you don't want all incoming traffic to go through the same pod. Using an ingress controller with MetalLB can be done and can improve stability while performing updates on your application, but it's not required. MetalLB is a solution to implement Kubernetes services of type LoadBalancer when there is no cloud provider to do that for you. So if you don't need a layer 7 load-balancing mechanism, then instead of using a service of type ClusterIP with an ingress controller you can just use a service of type LoadBalancer. MetalLB will give that service an external IP from your pool and announce it to its peers. In that case, when traffic reaches the cluster it will only be load-balanced once.
I use MetalLB and Nginx-ingress controller to provide internet access to my apps. I see that in most configurations, the service is set to ClusterIP, as the ingress will send traffic there. My question is: does this end up with double load balancing, that is, one from MetalLB to my ingress, and another from my ingress to the pods via ClusterIP? If so, is this how it is supposed to be, or is there a better way?
Try the queries below, which should help in your case:

avg ( avg_over_time(container_cpu_usage_seconds_total:rate5m[30d]) ) by (pod_name)

The above query determines the average CPU usage of a given pod over the past 30 days.

avg ( avg_over_time(container_memory_usage_bytes[30d]) ) by (pod_name)

The above query determines the average memory usage of a given pod over the past 30 days.

In these queries, avg calculates the average of the sample values in the input series, grouped by pod_name. avg_over_time returns the average value of all points over the specified interval, so the respective queries give you CPU and memory usage averaged over that interval. For more info follow this doc.
###Instruction: We have our web api service running in OpenShift for the past few months When we deployed this to OpenShift, initially we have given basic request and limits for memory and CPU. Sometime when the resource limit crossed its threshold we had to increase the limit We have several services deployed and we have given some random request and limits for the Pod. we are trying to figure out a way to provide resource limits and request based on the past few months that it is running on OpenShift My idea is to look at the last few months on requests what is POD is receiving and come up with a value to requests and limits I am thinking PROMQL can help me to provide this value, can someone help me with a query to determine average resource and limits based on past 4 to 5 weeks of requests on the POD ? ### Response: Try the below queries which are helpful in your case : avg ( avg_over_time(container_cpu_usage_seconds_total:rate5m[30d]) ) by (pod_name) The above query is used to determine the average CPU usage of a certain pod for the past 30 days. avg ( avg_over_time(container_memory_usage_seconds_total:rate5m[30d]) ) by (pod_name) The above query is used to determine the average memory usage of a certain pod for the past 30 days. In the query avg is used to calculate the average of the sample values in the input series, grouped by their [pod_name]. avg_over_time is used for getting the average value of all points in the specified interval, we can get the metrics like cpu and memory usage for the specified interval by using the respective queries. For more info follow this doc.
We have had our web API service running in OpenShift for the past few months. When we deployed it to OpenShift, we initially gave it basic requests and limits for memory and CPU. Sometimes, when usage crossed the resource limit, we had to increase the limit. We have several services deployed, and we have given the Pods somewhat arbitrary requests and limits. We are trying to figure out a way to set resource requests and limits based on the past few months of running on OpenShift. My idea is to look at the last few months of load the POD has been receiving and come up with values for requests and limits. I am thinking PromQL can help me provide this value; can someone help me with a query to determine average resource usage and limits based on the past 4 to 5 weeks of load on the POD?
Looks like you are encountering the problem with the removal of the CRI v1alpha2 API and containerd 1.5 support in K8s 1.26. Possible workarounds:

You can solve this by removing the containerd package that came with Ubuntu and installing the containerd.io package from the Docker repositories, which gives you a 1.6 release. Then re-generate the containerd config (including the cgroup changes) and restart while still on 1.25.5. Then you are able to complete the upgrade to 1.26. From there it was a simple do-release-upgrade to the latest Ubuntu.

1) In the Docker repos, there are packages for containerd 1.6 and above. So you can also add the Docker repos, and install containerd.io from there:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install containerd.io

2) Manually upgrade containerd to 1.6 or above by downloading and replacing the binaries:

wget https://github.com/containerd/containerd/releases/download/v1.6.12/containerd-1.6.12-linux-amd64.tar.gz
tar xvf containerd-1.6.12-linux-amd64.tar.gz
systemctl stop containerd
cd bin
cp * /usr/bin/
systemctl start containerd

3) The one listed in the link above - running an older version of the kubelet (1.25):

apt remove --purge kubelet
apt install -y kubeadm kubelet=1.25.5-00

Please go through the similar ServerFault answers for detailed step-by-step information.

EDIT:

4) A third-party replacement, cri-dockerd, is available. The cri-dockerd adapter lets you use Docker Engine through the Container Runtime Interface. If you already use cri-dockerd, you aren't affected by the dockershim removal. Before you begin, check whether your nodes use the dockershim.
###Instruction: I am trying to set up a Kubernetes master node. Every time I try to start kubelet I am getting the error message: command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set I tries to set up the container runtime endpoint with the following command: sudo kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock But when I do, I get the following log with a failing command: I0116 09:43:02.562785 4142 server.go:412] "Kubelet version" kubeletVersion="v1.26.0" I0116 09:43:02.563067 4142 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0116 09:43:02.563458 4142 server.go:575] "Standalone mode, no API client" I0116 09:43:02.566523 4142 server.go:463] "No api server defined - no events will be sent to API server" I0116 09:43:02.566664 4142 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" I0116 09:43:02.567002 4142 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] I0116 09:43:02.567130 4142 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} I0116 09:43:02.567232 4142 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" I0116 09:43:02.567305 4142 container_manager_linux.go:308] "Creating device plugin manager" I0116 09:43:02.567449 4142 state_mem.go:36] "Initialized new in-memory state store" E0116 09:43:02.570133 4142 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService" Can anyone help me with this? I have already tried resetting kubeadm because I read in a forum that could cause this problem. This did not solve my problem. Outside of that, I read that going back to an older version of Kubernetes will work. (But I did not try that. I want to use version 1.26 which is the newest). And I cannot imagine Kubernetes making a version where containerd doesn't work. So where is the fix? ### Response: Looks like you are encountering the problem with Removal of the CRI v1alpha2 API and containerd 1.5 support in K8s 1.26. Possible workarounds: You can solve this by removing the containerd package that came with Ubuntu and installing the containerd.io package from docker repositories. Then you get a 1.6 release. Then re-generate the container's config including the group changes and restart on 1.25-05. 
Then you are able to complete the upgrade to 1.26. From there it was a simple do-release-upgrade to the latest Ubuntu.

1) In the Docker repos, there are packages for containerd 1.6 and above. So you can also add the Docker repos, and install containerd.io from there:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install containerd.io

2) Manually upgrade containerd to 1.6 or above by downloading and replacing the binaries:

wget https://github.com/containerd/containerd/releases/download/v1.6.12/containerd-1.6.12-linux-amd64.tar.gz
tar xvf containerd-1.6.12-linux-amd64.tar.gz
systemctl stop containerd
cd bin
cp * /usr/bin/
systemctl start containerd

3) The one listed in the link above - running an older version of the kubelet (1.25):

apt remove --purge kubelet
apt install -y kubeadm kubelet=1.25.5-00

Please go through the similar ServerFault answers for detailed step-by-step information.

EDIT:

4) A third-party replacement, cri-dockerd, is available. The cri-dockerd adapter lets you use Docker Engine through the Container Runtime Interface. If you already use cri-dockerd, you aren't affected by the dockershim removal. Before you begin, check whether your nodes use the dockershim.
I am trying to set up a Kubernetes master node. Every time I try to start kubelet I am getting the error message: command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set I tries to set up the container runtime endpoint with the following command: sudo kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock But when I do, I get the following log with a failing command: I0116 09:43:02.562785 4142 server.go:412] "Kubelet version" kubeletVersion="v1.26.0" I0116 09:43:02.563067 4142 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0116 09:43:02.563458 4142 server.go:575] "Standalone mode, no API client" I0116 09:43:02.566523 4142 server.go:463] "No api server defined - no events will be sent to API server" I0116 09:43:02.566664 4142 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" I0116 09:43:02.567002 4142 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] I0116 09:43:02.567130 4142 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} I0116 09:43:02.567232 4142 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" I0116 09:43:02.567305 4142 container_manager_linux.go:308] "Creating device plugin manager" I0116 09:43:02.567449 4142 state_mem.go:36] "Initialized new in-memory state store" E0116 09:43:02.570133 4142 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService" Can anyone help me with this? I have already tried resetting kubeadm because I read in a forum that could cause this problem. This did not solve my problem. Outside of that, I read that going back to an older version of Kubernetes will work. (But I did not try that. I want to use version 1.26 which is the newest). And I cannot imagine Kubernetes making a version where containerd doesn't work. So where is the fix?
If you want to trigger an already existing CronJob, you need to provide ownerReference for the existing CronJob in Job: // Get already existing CronJob CronJob cronJob = kubernetesClient.batch().v1() .cronjobs() .inNamespace(namespace) .withName(cronJobName) .get(); // Create new Job object referencing CronJob Job newJobToCreate = new JobBuilder() .withNewMetadata() .withName(jobName) .addNewOwnerReference() .withApiVersion("batch/v1") .withKind("CronJob") .withName(cronJob.getMetadata().getName()) .withUid(cronJob.getMetadata().getUid()) .endOwnerReference() .addToAnnotations("cronjob.kubernetes.io/instantiate", "manual") .endMetadata() .withSpec(cronJob.getSpec().getJobTemplate().getSpec()) .build(); // Apply job object to Kubernetes Cluster kubernetesClient.batch().v1() .jobs() .inNamespace(namespace) .resource(newJobToCreate) .create();
###Instruction: I'm trying to trigger cronjob manually(not scheduled) using fabric8 library but getting the following error: Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1:443/apis/batch/v1/ namespaces/engineering/jobs. Message: Job.batch "app-chat-manual-947171" is invalid: spec.template.spec.containers[0].name: Re quired value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.template.spec.co ntainers[0].name, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=batch, kind=Job, name=ap p-chat-manual-947171, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Job.batch "app-chat-man ual-947171" is invalid: spec.template.spec.containers[0].name: Required value, metadata=ListMeta(_continue=null, remainingItemCount= null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}). my code is running at the cluster: maven dependency: <dependency> <groupId>io.fabric8</groupId> <artifactId>kubernetes-client</artifactId> <version>6.3.1</version> </dependency> java code: public static void triggerCronjob(String cronjobName, String applicableNamespace) { KubernetesClient kubernetesClient = new KubernetesClientBuilder().build(); final String podName = String.format("%s-manual-%s", cronjobName.length() > 38 ? cronjobName.substring(0, 38) : cronjobName, new Random().nextInt(999999)); System.out.println("triggerCronjob method invoked, applicableNamespace: " + applicableNamespace + ", cronjobName: " + cronjobName + ", podName: " + podName); Job job = new JobBuilder() .withApiVersion("batch/v1") .withNewMetadata() .withName(podName) .endMetadata() .withNewSpec() .withBackoffLimit(4) .withNewTemplate() .withNewSpec() .addNewContainer() .withName(podName) .withImage("perl") .withCommand("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)") .endContainer() .withRestartPolicy("Never") .endSpec() .endTemplate() .endSpec().build(); kubernetesClient.batch().v1().jobs().inNamespace(applicableNamespace).createOrReplace(job); kubernetesClient.close(); System.out.println("CronJob triggered: applicableNamespace: " + applicableNamespace + ", cronjob name: " + cronjobName); } the code executed at the kubernetes cluster, but not form the application, it's an external program that's running in the cluster. my goal is to trigger given job in a given namespace. ### Response: If you want to trigger an already existing CronJob, you need to provide ownerReference for the existing CronJob in Job: // Get already existing CronJob CronJob cronJob = kubernetesClient.batch().v1() .cronjobs() .inNamespace(namespace) .withName(cronJobName) .get(); // Create new Job object referencing CronJob Job newJobToCreate = new JobBuilder() .withNewMetadata() .withName(jobName) .addNewOwnerReference() .withApiVersion("batch/v1") .withKind("CronJob") .withName(cronJob.getMetadata().getName()) .withUid(cronJob.getMetadata().getUid()) .endOwnerReference() .addToAnnotations("cronjob.kubernetes.io/instantiate", "manual") .endMetadata() .withSpec(cronJob.getSpec().getJobTemplate().getSpec()) .build(); // Apply job object to Kubernetes Cluster kubernetesClient.batch().v1() .jobs() .inNamespace(namespace) .resource(newJobToCreate) .create();
I'm trying to trigger cronjob manually(not scheduled) using fabric8 library but getting the following error: Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1:443/apis/batch/v1/ namespaces/engineering/jobs. Message: Job.batch "app-chat-manual-947171" is invalid: spec.template.spec.containers[0].name: Re quired value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.template.spec.co ntainers[0].name, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=batch, kind=Job, name=ap p-chat-manual-947171, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Job.batch "app-chat-man ual-947171" is invalid: spec.template.spec.containers[0].name: Required value, metadata=ListMeta(_continue=null, remainingItemCount= null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}). my code is running at the cluster: maven dependency: <dependency> <groupId>io.fabric8</groupId> <artifactId>kubernetes-client</artifactId> <version>6.3.1</version> </dependency> java code: public static void triggerCronjob(String cronjobName, String applicableNamespace) { KubernetesClient kubernetesClient = new KubernetesClientBuilder().build(); final String podName = String.format("%s-manual-%s", cronjobName.length() > 38 ? cronjobName.substring(0, 38) : cronjobName, new Random().nextInt(999999)); System.out.println("triggerCronjob method invoked, applicableNamespace: " + applicableNamespace + ", cronjobName: " + cronjobName + ", podName: " + podName); Job job = new JobBuilder() .withApiVersion("batch/v1") .withNewMetadata() .withName(podName) .endMetadata() .withNewSpec() .withBackoffLimit(4) .withNewTemplate() .withNewSpec() .addNewContainer() .withName(podName) .withImage("perl") .withCommand("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)") .endContainer() .withRestartPolicy("Never") .endSpec() .endTemplate() .endSpec().build(); kubernetesClient.batch().v1().jobs().inNamespace(applicableNamespace).createOrReplace(job); kubernetesClient.close(); System.out.println("CronJob triggered: applicableNamespace: " + applicableNamespace + ", cronjob name: " + cronjobName); } the code executed at the kubernetes cluster, but not form the application, it's an external program that's running in the cluster. my goal is to trigger given job in a given namespace.
Since version 1.24 of NumPy, np.object (deprecated since 1.20) has been removed and needs to be replaced with the built-in object (cf. the numpy release notes). You either need to update this in your code, or another package you're using needs to be updated (not possible to say which without more information). One (dirty) workaround for now would be to pin your numpy version to the last release still supporting np.object: pip install numpy==1.23.4
###Instruction: I am getting the below error when running an mlflow app: raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'object' Can someone help me with this? ### Response: Since version 1.24 of NumPy, np.object (deprecated since 1.20) has been removed and needs to be replaced with the built-in object (cf. the numpy release notes). You either need to update this in your code, or another package you're using needs to be updated (not possible to say which without more information). One (dirty) workaround for now would be to pin your numpy version to the last release still supporting np.object: pip install numpy==1.23.4
I am getting below error when running mlflow app raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'object' Can someone help me with this
You can use try: test_encode = yamlencode(try(local.test_decode.spec.names, "some_default_value"))
###Instruction: I have a terraform code as given below locals { file_path = format("%s-%s", var.test1, var.test2) test_decode = yamldecode((data.github_repository_file.test.content)) } data "github_repository_file" "test" { repository = "test-repo" branch = "develop" file = "${local.file_path}/local/test.yaml" } test_encode = ${yamlencode(local.test_decode.spec.names)} This is working fine when a ".spec.names" attribute present in the test.yaml file. Since we are selecting the test.yaml based on local.file_path some times attribute .spec.names might not present in the test.yaml and the plan failing with "Error: Unsupported attribute". How to check ".spec.names" attribute present in the test.yaml? Updating the question to add yaml example Yaml with names attribute apiVersion: helm.toolkit.gitops.io/v2beta1 kind: HelmRelease metadata: name: "test" namespace: "test-system" spec: chart: spec: chart: "test-environment" version: "0.1.10" names: key1: "value1" key2: "value2" key3: "value3" key4: "value4" YAML without names attribute apiVersion: helm.toolkit.gitops.io/v2beta1 kind: HelmRelease metadata: name: "test" namespace: "test-system" spec: chart: spec: chart: "test-environment" version: "0.1.10" ### Response: You can use try: test_encode = yamlencode(try(local.test_decode.spec.names, "some_default_value"))
I have a terraform code as given below locals { file_path = format("%s-%s", var.test1, var.test2) test_decode = yamldecode((data.github_repository_file.test.content)) } data "github_repository_file" "test" { repository = "test-repo" branch = "develop" file = "${local.file_path}/local/test.yaml" } test_encode = ${yamlencode(local.test_decode.spec.names)} This is working fine when a ".spec.names" attribute present in the test.yaml file. Since we are selecting the test.yaml based on local.file_path some times attribute .spec.names might not present in the test.yaml and the plan failing with "Error: Unsupported attribute". How to check ".spec.names" attribute present in the test.yaml? Updating the question to add yaml example Yaml with names attribute apiVersion: helm.toolkit.gitops.io/v2beta1 kind: HelmRelease metadata: name: "test" namespace: "test-system" spec: chart: spec: chart: "test-environment" version: "0.1.10" names: key1: "value1" key2: "value2" key3: "value3" key4: "value4" YAML without names attribute apiVersion: helm.toolkit.gitops.io/v2beta1 kind: HelmRelease metadata: name: "test" namespace: "test-system" spec: chart: spec: chart: "test-environment" version: "0.1.10"
You can keep your service as ClusterIP and use an Ingress. With an Ingress and an ingress controller, you can set up and expose only the desired path outside of the cluster. For the Ingress, you can install the Nginx ingress controller and create the Ingress resource. Read more about the nginx ingress controller setup and Ingress resource setup.
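As a sketch only (the service name, port, and health path below are assumptions, since the question doesn't name them), an Ingress rule that exposes just the health-check path could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-only                                # placeholder name
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /healthz                         # assumed health-check path; only this path is exposed
            pathType: Prefix
            backend:
              service:
                name: my-api                       # placeholder for the ClusterIP Service of the container
                port:
                  number: 8080                     # assumed service port

Because no rule matches the other API paths, they remain reachable only through the ClusterIP service inside the cluster.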
###Instruction: I am new to Kubernetes and, if I am not wrong, a service can be exposed inside the cluster using ClusterIP, and to the outside world using NodePort or LoadBalancer types. But my requirement is that I have a single container that has a few REST APIs. I want one API (the basic health check API) to be exposed to the outside, and the rest of the APIs to be available only within the cluster (accessible by other nodes). How can I achieve this? ### Response: You can keep your service as ClusterIP and use an Ingress. With an Ingress and an ingress controller, you can set up and expose only the desired path outside of the cluster. For the Ingress, you can install the Nginx ingress controller and create the Ingress resource. Read more about the nginx ingress controller setup and Ingress resource setup.
I am new to Kubernetes and if I am not wrong, a service can be exposed inside the cluster using ClusterIP, and to the outside world using NodePort or LoadBalancer types. But my requirement is that I have a single container that has few REST APIs. I want that one API (the basic health check API) should be exposed to the outside, and the rest of the APIs should be available only within the cluster (accessible by other nodes). How can I achieve this?
You are trying to configure your discovery client with the Kubernetes API server URL, which is incorrect. Your client application needs to be connected to the Spring Cloud Kubernetes Discovery Server. It's an independent application that works like a proxy between your client Spring Boot apps and Kubernetes. You can find its images here: https://hub.docker.com/r/springcloud/spring-cloud-kubernetes-discoveryserver/tags and it should be deployed to Kubernetes via a YAML manifest. Then you can set spring.cloud.kubernetes.discovery.discovery-server-url to this discovery server's URL. That URL will most likely come from a Kubernetes Service that you create for the discovery server application. Please find the full deployment YAML and the related documentation here: https://spring.io/blog/2021/10/26/new-features-for-spring-cloud-kubernetes-in-spring-cloud-2021-0-0-m3 Please let us know how that goes.
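As a rough sketch only (the name, namespace, label, and ports here are assumptions; the Spring blog post linked above contains the authoritative deployment and service YAML), the discovery server would sit behind a normal cluster-internal Service:

apiVersion: v1
kind: Service
metadata:
  name: spring-cloud-kubernetes-discoveryserver    # assumed name
  namespace: default                               # assumed namespace
spec:
  selector:
    app: spring-cloud-kubernetes-discoveryserver   # assumed label on the discovery server pods
  ports:
    - port: 80
      targetPort: 8761                             # assumed container port of the discovery server

The client app would then set spring.cloud.kubernetes.discovery.discovery-server-url to something like http://spring-cloud-kubernetes-discoveryserver.default.svc.cluster.local (adjusting namespace and port to whatever your Service actually exposes), instead of the kubernetes.docker.internal:6443 API server address.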
###Instruction: I am just setting two simple services on Mac using minikube I have the service set up and I can access it via ingress / minikube tunnel . So i know the service works I am using Spring Boot 3, with the so i need to specify the spring-cloud-starter-kubernetes-all package. This means I need to specify a url for spring.cloud.kubernetes.discovery.discovery-server-url When i try to do the simple call to discoveryClient.getServices() I get the error "Connection refused https://kubernetes.docker.internal:6443/apps" "apps" is my second service It is refusing connection to the value of spring.cloud.kubernetes.discovery.discovery-server-url At the moment i have this set to spring.cloud.kubernetes.discovery.discovery-server-url=https://kubernetes.docker.internal:6443 I am assuming this is incorrect and I need some help as to what is the correct url to set this to / or the correct place to find this. I thought this would be the internal url. ### Response: You are trying to configure your discovery client with the Kubernetes API server URL, which is incorrect. Your client application needs to be connected to Spring Cloud Kubernetes Discovery Server. It's an independent application that will work like a proxy between your client SpringBoot apps and Kubernetes. You can find its images here: https://hub.docker.com/r/springcloud/spring-cloud-kubernetes-discoveryserver/tags And it should be deployed to Kubernetes via yaml file. Then you can configure spring.cloud.kubernetes.discovery.discovery-server-url with this discovery server URL. That URL will most likely come from a Kubernetes service that you will create for the discovery server application. Please, find the full deployment YAML and the related documentation here: https://spring.io/blog/2021/10/26/new-features-for-spring-cloud-kubernetes-in-spring-cloud-2021-0-0-m3 Please, let us know how that goes
I am just setting two simple services on Mac using minikube I have the service set up and I can access it via ingress / minikube tunnel . So i know the service works I am using Spring Boot 3, with the so i need to specify the spring-cloud-starter-kubernetes-all package. This means I need to specify a url for spring.cloud.kubernetes.discovery.discovery-server-url When i try to do the simple call to discoveryClient.getServices() I get the error "Connection refused https://kubernetes.docker.internal:6443/apps" "apps" is my second service It is refusing connection to the value of spring.cloud.kubernetes.discovery.discovery-server-url At the moment i have this set to spring.cloud.kubernetes.discovery.discovery-server-url=https://kubernetes.docker.internal:6443 I am assuming this is incorrect and I need some help as to what is the correct url to set this to / or the correct place to find this. I thought this would be the internal url.
There is no built-in option; however, you can write a Cloud Function and trigger it with Cloud Scheduler to scale the GKE nodes down on a time basis. I had the same requirement, so I wrote a Cloud Function script. I tried two methods:

Method 1: Cloud Scheduler publishes a message to Pub/Sub --> which is processed by a Cloud Function --> the Cloud Function scales the GKE nodes
Method 2: Cloud Scheduler sends an HTTP request to a Cloud Function --> the Cloud Function scales the GKE nodes based on the payload

/* HTTP function: gets data from the request and resizes the GKE cluster node pool accordingly */
const container = require("@google-cloud/container");
const client = new container.v1.ClusterManagerClient();

exports.helloHTTP = async (req, res) => {
  console.log(`Request Body`, req.body);
  const request = {
    projectId: req.body.projectId,
    zone: req.body.zone,
    clusterId: req.body.cluster_id,
    nodePoolId: req.body.node_pool_id,
    nodeCount: req.body.node_count,
  };
  const result = await client.setNodePoolSize(request);
  const operation = result[0];
  console.log(operation);
  res.status(200).send(); // end the response so the function doesn't hang
};

GitHub repo & follow my article for more details: Medium article
###Instruction: I have a kubernetes cluster with a node pool. I enabled autoscaling. I defined min & max values of nodes in node pool options. I observed that I need those nodes during working hours only. During non working hours nodes are mostly idle. I want to save cost by shutting down nodes if they are idle for say - 30 mins. Is there any option on node pool to configure node based on time period. I know I can set minimum node to 0 but in that case I have to wait for new node to spin. Atul Sureka ### Response: There is no default option however you can write the cloud function and trigger it with the scheduler to scale down the GKE nodes on a time basis. i had the same requirement so written cloud function script. i tried two methods Method 1 : Cloud Scheduler publish a message to Pub/sub --> Which processed by Cloud Function --> Cloud Function scale the GKE nodes Method 2 : Cloud Scheduler send HTTP request to Cloud function --> Cloud Function scale the GKE nodes based on paylob /* HTTP function get data from event and resize the GKE cluster pools based on data */ const container = require("@google-cloud/container"); const client = new container.v1.ClusterManagerClient(); exports.helloHTTP = async (req, res) => { console.log(`Request Body`, req.body); const request = { projectId: req.body.projectId, zone: req.body.zone, clusterId: req.body.cluster_id, nodePoolId: req.body.node_pool_id, nodeCount: req.body.node_count, }; const result = await client.setNodePoolSize(request); const operation = result[0]; console.log(operation); res.status(200); }; GitHub repo & follow my article for more details Medium article
I have a kubernetes cluster with a node pool. I enabled autoscaling. I defined min & max values of nodes in node pool options. I observed that I need those nodes during working hours only. During non working hours nodes are mostly idle. I want to save cost by shutting down nodes if they are idle for say - 30 mins. Is there any option on node pool to configure node based on time period. I know I can set minimum node to 0 but in that case I have to wait for new node to spin. Atul Sureka
You can purge old records by running: airflow db clean [-h] --clean-before-timestamp CLEAN_BEFORE_TIMESTAMP [--dry-run] [--skip-archive] [-t TABLES] [-v] [-y] (cli reference) It is a quite common setup to include this command in a DAG that runs periodically.
###Instruction: I am running airflow using postgres. There was a phenomenon that the web server was slow during operation. It was a problem caused by data continuing to accumulate in dag_run and log of the db table (it became faster by accessing postgres and deleting data directly). Are there any airflow options to clean the db periodically? If there is no such option, we will try to delete the data directly using the dag script. And I think it's strange that the web server slows down because there is a lot of data. Does the web server get all the data when opening another window? ### Response: You can purge old records by running: airflow db clean [-h] --clean-before-timestamp CLEAN_BEFORE_TIMESTAMP [--dry-run] [--skip-archive] [-t TABLES] [-v] [-y] (cli reference) It is a quite common setup to include this command in a DAG that runs periodically.
I am running airflow using postgres. There was a phenomenon that the web server was slow during operation. It was a problem caused by data continuing to accumulate in dag_run and log of the db table (it became faster by accessing postgres and deleting data directly). Are there any airflow options to clean the db periodically? If there is no such option, we will try to delete the data directly using the dag script. And I think it's strange that the web server slows down because there is a lot of data. Does the web server get all the data when opening another window?
You have 4 nodes in the cluster; generally one is the master node and app pods are normally not scheduled on it, so 3 worker nodes are left. At the same time, your deployment's node selector points at the master and the PersistentVolume's node affinity points at server1, so the scheduler can't place the pod on the worker nodes either, and it stays stuck in Pending. Also check the PVC; most likely it won't be able to get bound.

Update

Remove the taint from the master/control-plane node:

kubectl taint node server1 key=value:NoSchedule-

Or keep the taint and add a toleration for it alongside the node selector in the pod spec:

spec:
  nodeSelector:
    kubernetes.io/hostname: "server1"   # if the taint has not been removed, add the toleration below; otherwise the nodeSelector alone is fine
  tolerations:
    - key: "key"                        # matches the node's taint key=value:NoSchedule
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
###Instruction: I have deployment an application, But pod always in pending state. $ kubectl get nodes NAME STATUS ROLES AGE VERSION server1 Ready control-plane 8d v1.24.9 server2 Ready worker1 8d v1.24.9 server3 Ready worker2 8d v1.24.9 server4 Ready worker3 8d v1.24.9 $ kubectl get all -n jenkins NAME READY STATUS RESTARTS AGE pod/jenkins-6dc9f97c7-ttp64 0/1 Pending 0 7m42s $ kubectl describe pods jenkins-6dc9f97c7-ttp64 -n jenkins Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 5m42s default-scheduler 0/4 nodes are available: 3 node(s) had volume node affinity conflict, 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling. The event history confirms a FailedScheduling error is the reason. my deployment.yml has forced to assign the pod into master node. spec: nodeSelector: node-role.kubernetes.io/master: "" tolerations: - key: node-role.kubernetes.io/master operator: Exists Since from Kubernetes version 1.20+ onwards node-role.kubernetes.io/master is deprecated in favor of node-role.kubernetes.io/control-plane i have updated like below. However still pod showing as pending. spec: nodeSelector: node-role.kubernetes.io/control-plane: "" tolerations: - key: node-role.kubernetes.io/control-plane PersistentVolume.yml side i have below content. ... ..... .......... local: path: /ksdata/apps/nodejs/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1 Label details:- $ kubectl get nodes --show-labels NAME STATUS ROLES AGE VERSION LABELS server1 Ready control-plane 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers= server2 Ready worker1 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=worker server3 Ready worker2 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=worker server4 Ready worker3 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux,node-role.kubernetes.io/worker3=worker $ kubectl describe node | egrep -i taint Taints: key=value:NoSchedule Taints: <none> Taints: <none> Taints: <none> ### Response: You have 4 nodes in the cluster, so generally one is master node, and on top of that app pods not schedule so 3 nodes are left. While with the worker node, your deployment has of node affinity set so due to that it can't schedule a pod on that node and it's stuck in a pending state. Check the PVC mostly it won't be able to get created Update Remove taint form master or control-plane kubectl taint node server1 key=value:NoSchedule- Toleration to set on master spec: nodeSelector: kubernetes.io/hostname: "server1" #if have tain and not removed try toleration else fine with nodeselector tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule
I have deployment an application, But pod always in pending state. $ kubectl get nodes NAME STATUS ROLES AGE VERSION server1 Ready control-plane 8d v1.24.9 server2 Ready worker1 8d v1.24.9 server3 Ready worker2 8d v1.24.9 server4 Ready worker3 8d v1.24.9 $ kubectl get all -n jenkins NAME READY STATUS RESTARTS AGE pod/jenkins-6dc9f97c7-ttp64 0/1 Pending 0 7m42s $ kubectl describe pods jenkins-6dc9f97c7-ttp64 -n jenkins Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 5m42s default-scheduler 0/4 nodes are available: 3 node(s) had volume node affinity conflict, 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling. The event history confirms a FailedScheduling error is the reason. my deployment.yml has forced to assign the pod into master node. spec: nodeSelector: node-role.kubernetes.io/master: "" tolerations: - key: node-role.kubernetes.io/master operator: Exists Since from Kubernetes version 1.20+ onwards node-role.kubernetes.io/master is deprecated in favor of node-role.kubernetes.io/control-plane i have updated like below. However still pod showing as pending. spec: nodeSelector: node-role.kubernetes.io/control-plane: "" tolerations: - key: node-role.kubernetes.io/control-plane PersistentVolume.yml side i have below content. ... ..... .......... local: path: /ksdata/apps/nodejs/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1 Label details:- $ kubectl get nodes --show-labels NAME STATUS ROLES AGE VERSION LABELS server1 Ready control-plane 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers= server2 Ready worker1 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=worker server3 Ready worker2 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=worker server4 Ready worker3 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux,node-role.kubernetes.io/worker3=worker $ kubectl describe node | egrep -i taint Taints: key=value:NoSchedule Taints: <none> Taints: <none> Taints: <none>
It looks like the Cloud Code for IntelliJ configuration restricts the deployment to the XYZ module (via the skaffoldModules option in workspace.xml), while the Cloud Code for VS Code configuration does not restrict the deployment to that module.
###Instruction: I am trying to use VSCode Cloud Studio plugin to deploy and debug a project in Kubernetes. When I use intellij and Cloud Studio plugin there, everything works perfect. My MongoDB is persistent with each deployment. When I use VSCode and Cloud Studio there, MongoDB is not persistent anymore. I would appreciate any tips to make it work in VSCode too. When I deploy via intellij it uses the same persistent volume claim. When I deploy via VSCode it creates a new persistent volume claim everytime. Here is the launch.json for VSCode: { "configurations": [ { "name": "Kubernetes: Run/Debug", "type": "cloudcode.kubernetes", "request": "launch", "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml", "watch": false, "cleanUp": false, "portForward": true, "imageRegistry": "XYZ", "debug": [ { "image": "XYZ", "containerName": "XYZ", "sourceFileMap": { "${workspaceFolder}": "/root/" } } ] } ] } Here is the workspace.xml from intellij: <?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="ChangeListManager"> <list default="true" id="b5a077d4-323a-4042-8c4a-3bdd2d997e47" name="Changes" comment="" /> <option name="SHOW_DIALOG" value="false" /> <option name="HIGHLIGHT_CONFLICTS" value="true" /> <option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" /> <option name="LAST_RESOLUTION" value="IGNORE" /> </component> <component name="Git.Settings"> <option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" /> </component> <component name="MarkdownSettingsMigration"> <option name="stateVersion" value="1" /> </component> <component name="ProjectId" id="2KV2OUqPUEf43q5Aj0UCGkKKm10" /> <component name="ProjectViewState"> <option name="hideEmptyMiddlePackages" value="true" /> <option name="showLibraryContents" value="true" /> </component> <component name="PropertiesComponent"> <property name="RunOnceActivity.OpenProjectViewOnStart" value="true" /> <property name="RunOnceActivity.ShowReadmeOnStart" value="true" /> <property name="WebServerToolWindowFactoryState" value="false" /> <property name="com.google.cloudcode.ide_session_index" value="20230118_0001" /> <property name="last_opened_file_path" value="$PROJECT_DIR$" /> <property name="nodejs_package_manager_path" value="npm" /> <property name="settings.editor.selected.configurable" value="preferences.pluginManager" /> <property name="ts.external.directory.path" value="C:\Program Files\JetBrains\IntelliJ IDEA 2021.3.2\plugins\JavaScriptLanguage\jsLanguageServicesImpl\external" /> </component> <component name="RunDashboard"> <option name="excludedTypes"> <set> <option value="gcp-app-engine-local-run" /> </set> </option> </component> <component name="RunManager"> <configuration name="Develop on Kubernetes" type="google-container-tools-skaffold-run-config" factoryName="google-container-tools-skaffold-run-config-dev" show_console_on_std_err="false" show_console_on_std_out="false"> <option name="allowRunningInParallel" value="false" /> <option name="buildEnvironment" value="Local" /> <option name="cleanupDeployments" value="false" /> <option name="deployToCurrentContext" value="true" /> <option name="deployToMinikube" value="false" /> <option name="envVariables" /> <option name="imageRepositoryOverride" /> <option name="kubernetesContext" /> <option name="mappings"> <list /> </option> <option name="moduleDeploymentType" value="DEPLOY_MODULE_SUBSET" /> <option name="projectPathOnTarget" /> <option name="resourceDeletionTimeoutMins" value="2" /> <option name="selectedOptions"> <list /> </option> <option 
name="skaffoldConfigurationFilePath" value="$PROJECT_DIR$/skaffold.yaml" /> <option name="skaffoldModules"> <list> <option value="XYZ" /> </list> </option> <option name="skaffoldNamespace" /> <option name="skaffoldProfile" /> <option name="skaffoldWatchMode" value="ON_DEMAND" /> <option name="statusCheck" value="true" /> <option name="verbosity" value="WARN" /> <method v="2" /> </configuration> </component> <component name="SpellCheckerSettings" RuntimeDictionaries="0" Folders="0" CustomDictionaries="0" DefaultDictionary="application-level" UseSingleDictionary="true" transferred="true" /> <component name="TaskManager"> <task active="true" id="Default" summary="Default task"> <changelist id="b5a077d4-323a-4042-8c4a-3bdd2d997e47" name="Changes" comment="" /> <created>1674045398429</created> <option name="number" value="Default" /> <option name="presentableId" value="Default" /> <updated>1674045398429</updated> <workItem from="1674045401219" duration="2543000" /> </task> <servers /> </component> <component name="TypeScriptGeneratedFilesManager"> <option name="version" value="3" /> </component> </project> All other files are the same for the project of course. ### Response: It looks like the Cloud Code for IntelliJ configuration is restricting the deployment to the XYZ module, but not in the Cloud Code for VS Code configuration.
I am trying to use VSCode Cloud Studio plugin to deploy and debug a project in Kubernetes. When I use intellij and Cloud Studio plugin there, everything works perfect. My MongoDB is persistent with each deployment. When I use VSCode and Cloud Studio there, MongoDB is not persistent anymore. I would appreciate any tips to make it work in VSCode too. When I deploy via intellij it uses the same persistent volume claim. When I deploy via VSCode it creates a new persistent volume claim everytime. Here is the launch.json for VSCode: { "configurations": [ { "name": "Kubernetes: Run/Debug", "type": "cloudcode.kubernetes", "request": "launch", "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml", "watch": false, "cleanUp": false, "portForward": true, "imageRegistry": "XYZ", "debug": [ { "image": "XYZ", "containerName": "XYZ", "sourceFileMap": { "${workspaceFolder}": "/root/" } } ] } ] } Here is the workspace.xml from intellij: <?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="ChangeListManager"> <list default="true" id="b5a077d4-323a-4042-8c4a-3bdd2d997e47" name="Changes" comment="" /> <option name="SHOW_DIALOG" value="false" /> <option name="HIGHLIGHT_CONFLICTS" value="true" /> <option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" /> <option name="LAST_RESOLUTION" value="IGNORE" /> </component> <component name="Git.Settings"> <option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" /> </component> <component name="MarkdownSettingsMigration"> <option name="stateVersion" value="1" /> </component> <component name="ProjectId" id="2KV2OUqPUEf43q5Aj0UCGkKKm10" /> <component name="ProjectViewState"> <option name="hideEmptyMiddlePackages" value="true" /> <option name="showLibraryContents" value="true" /> </component> <component name="PropertiesComponent"> <property name="RunOnceActivity.OpenProjectViewOnStart" value="true" /> <property name="RunOnceActivity.ShowReadmeOnStart" value="true" /> <property name="WebServerToolWindowFactoryState" value="false" /> <property name="com.google.cloudcode.ide_session_index" value="20230118_0001" /> <property name="last_opened_file_path" value="$PROJECT_DIR$" /> <property name="nodejs_package_manager_path" value="npm" /> <property name="settings.editor.selected.configurable" value="preferences.pluginManager" /> <property name="ts.external.directory.path" value="C:\Program Files\JetBrains\IntelliJ IDEA 2021.3.2\plugins\JavaScriptLanguage\jsLanguageServicesImpl\external" /> </component> <component name="RunDashboard"> <option name="excludedTypes"> <set> <option value="gcp-app-engine-local-run" /> </set> </option> </component> <component name="RunManager"> <configuration name="Develop on Kubernetes" type="google-container-tools-skaffold-run-config" factoryName="google-container-tools-skaffold-run-config-dev" show_console_on_std_err="false" show_console_on_std_out="false"> <option name="allowRunningInParallel" value="false" /> <option name="buildEnvironment" value="Local" /> <option name="cleanupDeployments" value="false" /> <option name="deployToCurrentContext" value="true" /> <option name="deployToMinikube" value="false" /> <option name="envVariables" /> <option name="imageRepositoryOverride" /> <option name="kubernetesContext" /> <option name="mappings"> <list /> </option> <option name="moduleDeploymentType" value="DEPLOY_MODULE_SUBSET" /> <option name="projectPathOnTarget" /> <option name="resourceDeletionTimeoutMins" value="2" /> <option name="selectedOptions"> <list /> </option> <option name="skaffoldConfigurationFilePath" 
value="$PROJECT_DIR$/skaffold.yaml" /> <option name="skaffoldModules"> <list> <option value="XYZ" /> </list> </option> <option name="skaffoldNamespace" /> <option name="skaffoldProfile" /> <option name="skaffoldWatchMode" value="ON_DEMAND" /> <option name="statusCheck" value="true" /> <option name="verbosity" value="WARN" /> <method v="2" /> </configuration> </component> <component name="SpellCheckerSettings" RuntimeDictionaries="0" Folders="0" CustomDictionaries="0" DefaultDictionary="application-level" UseSingleDictionary="true" transferred="true" /> <component name="TaskManager"> <task active="true" id="Default" summary="Default task"> <changelist id="b5a077d4-323a-4042-8c4a-3bdd2d997e47" name="Changes" comment="" /> <created>1674045398429</created> <option name="number" value="Default" /> <option name="presentableId" value="Default" /> <updated>1674045398429</updated> <workItem from="1674045401219" duration="2543000" /> </task> <servers /> </component> <component name="TypeScriptGeneratedFilesManager"> <option name="version" value="3" /> </component> </project> All other files are the same for the project of course.
You can't use Helm to modify existing objects. Helm works by running its templating engine to construct complete Kubernetes manifests, and then submits them to the cluster. This process assumes that the objects are wholly owned by Helm: they don't already exist, and nothing other than helm upgrade will modify them. The error you're getting here is in fact because the Node objects already exist; Kubernetes creates them when the actual nodes (physical machines, cloud instances, VMs) get created and are joined to the cluster. You can't modify these using Helm.
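If the goal is simply to put that label on every node, plain kubectl outside of Helm is usually the simplest route; the node names below are placeholders for whatever kubectl get nodes returns in your cluster.
# list the nodes, then label them individually
kubectl get nodes
kubectl label nodes 10.x.x.1 10.x.x.2 10.x.x.3 content-strange=true
# or label every node in the cluster at once
kubectl label nodes --all content-strange=true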
###Instruction: I am trying to add labels to the nodes using helm chart, however getting error while deploying. yaml template apiVersion: v1 kind: Node metadata: name: {{ index (lookup "v1" "Node" "" "").items 0 "metadata" "name" }} labels: content-strange: "true" name: {{ index (lookup "v1" "Node" "" "").items 1 "metadata" "name" }} labels: content-strange: "true" name: {{ index (lookup "v1" "Node" "" "").items 2 "metadata" "name" }} labels: content-strange: "true" Error helm install famous famous.1.1.tgz -n famous-ns1 Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: Node "10.x.x.x" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "famous"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "famous-ns1" ### Response: You can't use Helm to modify existing objects. Helm works by running its templating engine to construct complete Kubernetes manifests, and them submits them to the cluster. This process assumes that an object is wholly owned by Helm, and these objects don't already exist, and nothing other than helm upgrade will modify them. The error you're getting here is in fact because the Node objects already exist; Kubernetes creates them when the actual nodes (physical machines, cloud instances, VMs) get created and are joined to the cluster. You can't modify these using Helm.
I am trying to add labels to the nodes using helm chart, however getting error while deploying. yaml template apiVersion: v1 kind: Node metadata: name: {{ index (lookup "v1" "Node" "" "").items 0 "metadata" "name" }} labels: content-strange: "true" name: {{ index (lookup "v1" "Node" "" "").items 1 "metadata" "name" }} labels: content-strange: "true" name: {{ index (lookup "v1" "Node" "" "").items 2 "metadata" "name" }} labels: content-strange: "true" Error helm install famous famous.1.1.tgz -n famous-ns1 Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: Node "10.x.x.x" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "famous"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "famous-ns1"
You may want to have a look at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary Let's say you have a normal Ingress: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-backend spec: ingressClassName: nginx rules: - host: XXX http: paths: - path: / pathType: Prefix backend: service: name: backend port: number: 80 Set the header name and value for your desired backend on a second Ingress, with canary enabled. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-backend-header annotations: nginx.ingress.kubernetes.io/canary: "true" nginx.ingress.kubernetes.io/canary-by-header: sample-header nginx.ingress.kubernetes.io/canary-by-header-value: abc spec: ingressClassName: nginx rules: - host: XXX http: paths: - path: / pathType: Prefix backend: service: name: backend-with-header port: number: 80 Now, every request with sample-header: abc routes to the second ingress/service. Any other value, e.g. sample-header: test, will route to the first ingress/service.
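To sanity-check the routing once both Ingresses are applied, you can send requests with and without the header. The host XXX is the placeholder from the manifests above, and <ingress-ip> stands for your ingress controller's address.
curl -H "Host: XXX" -H "sample-header: abc" http://<ingress-ip>/    # routed to backend-with-header
curl -H "Host: XXX" -H "sample-header: test" http://<ingress-ip>/   # routed to backend
curl -H "Host: XXX" http://<ingress-ip>/                            # no header, routed to backend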
###Instruction: OAUTH2 is used for authentication and the OAUTH2 proxy is deployed in Kubernetes. When a request is received by the NGINX Ingress controller, it always routes the traffic to OAUTH proxy. The requirement is when the request contains a specific header (For example: abc) then those requests should be routed directly to the backend. Those shouldn't be routed to OAUTH proxy. Can this be done using some sort of an annotation in NGINX Ingress controller? Can we by pass those traffic going to OAUTH2? ### Response: You may want to have a look at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary Let's say you have a normal Ingress: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-backend spec: ingressClassName: nginx rules: - host: XXX http: paths: - path: / pathType: Prefix backend: service: name: backend port: number: 80 Set the header name and value for your desired backend on a second Ingress, with canary enabled. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-backend-header annotations: nginx.ingress.kubernetes.io/canary: "true" nginx.ingress.kubernetes.io/canary-by-header: sample-header nginx.ingress.kubernetes.io/canary-by-header-value: abc spec: ingressClassName: nginx rules: - host: XXX http: paths: - path: / pathType: Prefix backend: service: name: backend-with-header port: number: 80 Now, every request with sample-header: abc routes to the second ingress/service. Any other value, e. g. sample-header: test, will route to the first ingress/service.
OAUTH2 is used for authentication and the OAUTH2 proxy is deployed in Kubernetes. When a request is received by the NGINX Ingress controller, it always routes the traffic to OAUTH proxy. The requirement is when the request contains a specific header (For example: abc) then those requests should be routed directly to the backend. Those shouldn't be routed to OAUTH proxy. Can this be done using some sort of an annotation in NGINX Ingress controller? Can we by pass those traffic going to OAUTH2?
There are a few things you can try to get your Go service to work with service accounts on Kubernetes: Verify that your Go service is properly configured to use the Kubernetes service account. This can be done by checking that the service account is correctly mounted as a volume in the pod definition and that the service is able to read the credentials from the volume. Make sure that the AWS SDK for Go you are using (https://github.com/aws/aws-sdk-go) is configured to use the correct credentials. The SDK supports several methods for providing credentials, including environment variables, a shared credentials file, and IAM roles. You can try using the k8s.io/client-go library instead of the AWS SDK for Go; this will help you use the Kubernetes service account to authenticate with the Kubernetes API and obtain the required credentials for the AWS SDK. If you are using the Kubernetes service account to authenticate with an external service such as AWS, you may also need to configure an IAM role that allows the service account to access the necessary resources. Double check that your Go service is correctly using the Kubernetes service account token and is passing it along as an authentication token to the AWS SDK. You can also try to use the k8s.io/client-go library to get the secret and use it in your Go code.
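For the IAM-role route on EKS specifically, the usual setup (IAM Roles for Service Accounts) is to annotate the service account with the role ARN and reference it from the pod, so the SDK's default credential chain can pick up the injected web identity token instead of falling back to the node role. This is only a sketch: the service account and image names are hypothetical, and the role ARN is the one from your error message.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                      # hypothetical name
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxx:role/EKSServiceAccount-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app-sa    # injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE for the SDK
      containers:
      - name: app
        image: example/app:latest   # hypothetical image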
###Instruction: Actually, I use kubernetes service accounts mostly with NodeJS, and this works fine, but I have this one service made in Go and I can't seem to make it work with service accounts (I know that the service account is correctly configured because I tested it with a pod). I'm using this lib https://github.com/aws/aws-sdk-go Up till now I tried this: sess := session.Must(session.NewSession()) creds := stscreds.NewCredentials(sess, os.Getenv("AWS_ROLE_ARN")) svc := s3.New(sess, &aws.Config{Credentials: creds}) And also this (just in case): region := os.Getenv("AMAZON_REGION") sess := session.Must(session.NewSession(&aws.Config{Region: &region})) svc := s3.New(sess) for the first case I got the following error: AccessDenied: User: arn:aws:sts::xxxxxxxx:assumed-role/staging-worker-node/i-0xxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxx:role/EKSServiceAccount-app and for the second case, I got a generic permission error. I read the docs and tried a few things more (that may not be relevant here), but I can't see to make it work, maybe because I don't have much experience with golang. ### Response: There are a few things you can try to get your Go service to work with service accounts on Kubernetes: Verify that your Go service is properly configured to use the Kubernetes service account. This can be done by checking that the service account is correctly mounted as a volume in the pod definition and that the service is able to read the credentials from the volume. Make sure that the AWS SDK for Go you are using (https://github.com/aws/aws-sdk-go) is configured to use the correct credentials. The SDK supports several methods for providing credentials, including environment variables, shared credentials file, and IAM roles. You can try using the k8s.io/client-go library instead of the AWS SDK for Go, this will help you to use the Kubernetes service account to authenticate with the Kubernetes API and obtain the required credentials for the AWS SDK. If you are using the Kubernetes service account to authenticate with an external service such as AWS, you may also need to configure an IAM role that allows the service account to access the necessary resources. Double check that your Go service is correctly using the Kubernetes service account token and is passing it along as an authentication token to the AWS SDK. You can also try to use the k8s.io/client-go library to get the secret and use it in your go code.
Actually, I use kubernetes service accounts mostly with NodeJS, and this works fine, but I have this one service made in Go and I can't seem to make it work with service accounts (I know that the service account is correctly configured because I tested it with a pod). I'm using this lib https://github.com/aws/aws-sdk-go Up till now I tried this: sess := session.Must(session.NewSession()) creds := stscreds.NewCredentials(sess, os.Getenv("AWS_ROLE_ARN")) svc := s3.New(sess, &aws.Config{Credentials: creds}) And also this (just in case): region := os.Getenv("AMAZON_REGION") sess := session.Must(session.NewSession(&aws.Config{Region: &region})) svc := s3.New(sess) for the first case I got the following error: AccessDenied: User: arn:aws:sts::xxxxxxxx:assumed-role/staging-worker-node/i-0xxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxx:role/EKSServiceAccount-app and for the second case, I got a generic permission error. I read the docs and tried a few things more (that may not be relevant here), but I can't see to make it work, maybe because I don't have much experience with golang.
The default limit of 110 pods per node is merely a compromise of Kubernetes, I think, not a technical limit. Some vendors have additional limitations. For example, on Azure, there's a limit on the number of IP addresses you can assign to a node. So if your Kubernetes cluster is configured to assign an IP address from an Azure VNet to each pod, the limit is 30 (see https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node). On IBM Cloud, if you use IBM Block Storage for persistent volumes, they will be mounted as 'Secondary volumes' on your node, and you can only have 12 of those per node, so that's a limit of 12 pods with persistent volumes. It sucks when you hit that limit when scaling up the first time :-( On other vendors or with other storage classes, this limit is larger: https://kubernetes.io/docs/concepts/storage/storage-limits/
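You can check which pod limit actually applies to a node by looking at its allocatable capacity; these kubectl invocations are generic and not tied to any particular vendor, and <node-name> is a placeholder.
# pod capacity for all nodes
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods
# pod capacity for a single node
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'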
###Instruction: Why is there a pod limit in Kubernetes? It makes intuitive sense to me that there'll be some limitation, but I'm curious to know the specific botteleneck which warrants the limit. ### Response: The default limit of 110 pods per node is merely a compromise of Kubernetes, I think, not a technical limit. Some vendors have additional limitations. For example, on Azure, there's a limit on the number of IP addresses you can assign to a node. So if your Kubernetes cluster is configured to assign a IP address from Azure VNet to each pod, the limit is 30 (See https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node). On IBM Cloud, if you use IBM Block Storage for persistent volumes, they will be mounted as 'Secondary volumes' on your node, and you can only have 12 of those per node, so that's limit of 12 pods with persistent volumes. It sucks when you hit that limit when scaling up the first time :-( On other vendors or with other storage classes, this limit is larger: https://kubernetes.io/docs/concepts/storage/storage-limits/
Why is there a pod limit in Kubernetes? It makes intuitive sense to me that there'll be some limitation, but I'm curious to know the specific bottleneck which warrants the limit.
when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? This is currently how Helm works; here's the issue opened for discussion, and there are several workarounds provided there as well.
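One of the workarounds discussed there is to mark the secret so Helm never deletes it, using the helm.sh/resource-policy annotation. A minimal sketch of what that looks like in a chart template; the secret name and data are placeholders, not taken from the dagster chart.
apiVersion: v1
kind: Secret
metadata:
  name: dagster-secret                  # hypothetical name
  annotations:
    "helm.sh/resource-policy": keep     # Helm will not delete this object on upgrade or uninstall
type: Opaque
stringData:
  some-key: some-value                  # placeholder data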
###Instruction: when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? The example I am using is dagster. When installing with: helm install dagster dagster/dagster \ --namespace dagster \ --create-namespace everything starts up fine and secrets are created. When updating the image and tag and performing an upgrade with: helm upgrade -f charts/dagster-user-deployments/values.yaml dagster ./charts/dagster-user-deployments -n dagster the image is upgraded, but all secrets are deleted. Why would/ could this happen? After running the upgrade command, I expect secrets to still be in place, and the new image to be pulled and run. ### Response: when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? This is currently how helm works, here's the issue opened for discussion, there are several workarounds provided here as well.
when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? The example I am using is dagster. When installing with: helm install dagster dagster/dagster \ --namespace dagster \ --create-namespace everything starts up fine and secrets are created. When updating the image and tag and performing an upgrade with: helm upgrade -f charts/dagster-user-deployments/values.yaml dagster ./charts/dagster-user-deployments -n dagster the image is upgraded, but all secrets are deleted. Why would/ could this happen? After running the upgrade command, I expect secrets to still be in place, and the new image to be pulled and run.
In simple words, if you are running GKE with the GCE ingress controller or a Service of type LoadBalancer, you have to use the BackendConfig or FrontendConfig resources to configure the load balancer. Suppose you are looking for an HTTP > HTTPS redirect or an SSL policy: it's not suggested to set/configure that from the GCP console; instead, use the YAML config. Backend Doc ref & Frontend Doc ref
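As a sketch of what that YAML config looks like for the HTTP-to-HTTPS redirect case, using GKE's FrontendConfig CRD; the Ingress and Service names here are placeholders.
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: http-to-https
spec:
  redirectToHttps:
    enabled: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                                            # hypothetical ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: http-to-https   # attach the FrontendConfig above
spec:
  defaultBackend:
    service:
      name: my-service                                        # hypothetical backend service
      port:
        number: 80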
###Instruction: I am new to GKE Can please help to understand What does this statement mean . You cannot manually configure LoadBalancer features using the Google Cloud SDK or the Google Cloud console. You must use BackendConfig or FrontendConfig Kubernetes resources. ### Response: In simple words, if you are running the GKE with GCE ingress controller, serviceType LoadBalancer yaml, you have to use the BackendConfig or FrontendConfig to configure the Load Balancer. BackendConfig or FrontendConfig is used to configure the Load Balancer. Suppose you are looking for HTTP > HTTPS redirect, SSL policy it's not suggested to set/configure the from GCP console instead use the YAML config. Backend Doc ref & Frontend Doc ref
I am new to GKE. Can you please help me understand what this statement means? You cannot manually configure LoadBalancer features using the Google Cloud SDK or the Google Cloud console. You must use BackendConfig or FrontendConfig Kubernetes resources.
In contrast to a Kubernetes Deployment, where pods are easily replaceable, each pod in a StatefulSet is given a name and treated individually. Pods with distinct identities are necessary for stateful applications. This implies that if any pod perishes, it will be apparent right away. StatefulSets act as controllers but do not generate ReplicaSets; rather, they generate pods with distinctive names that follow a predefined pattern. The ordinal index appears in the DNS name of a pod. A distinct persistent volume claim (PVC) is created for each pod, and each replica in a StatefulSet has its own state. For instance, a StatefulSet with four replicas generates four pods, each of which has its own volume, or four PVCs. StatefulSets require a headless service to return the IPs of the associated pods and enable direct interaction with them. The headless service has a DNS name but no cluster IP, and it has to be created separately. The major components of a StatefulSet are the set itself, the persistent volume and the headless service. That all being said, people do deploy stateful applications with Deployments; usually they mount a RWX PV into the pods so all "frontends" share the same backend. This is quite common in CNCF projects.
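A minimal sketch of that pairing (headless Service plus StatefulSet with per-replica storage); the names, image and storage size are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None          # headless: no cluster IP, DNS returns the individual pod IPs
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web         # the headless service above
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    # one PVC per replica: data-web-0, data-web-1, data-web-2
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi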
###Instruction: I know why use StatefulSet for stateful applications. (e.g. DB or something) In most cases, I can see like "You want to deploy stateful app to k8s? Use StatefulSet!" However, I couldn't see like "You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet" ever. Even nobody says "I don't recommend to use StatefulSet for stateless app", many stateless apps is deployed through Deployment, like it is the standard. The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app. Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps? I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong. I'm really curious about this question. P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs. I googled "When not to use StatefulSet", "when Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", or something more questions. I also see the k8s docs about StatefulSet either. ### Response: In contrast to a Kubernetes Deployment, where pods are easily replaceable, each pod in a StatefulSet is given a name and treated individually. Pods with distinct identities are necessary for stateful applications. This implies that if any pod perishes, it will be apparent right away. StatefulSets act as controllers but do not generate ReplicaSets; rather, they generate pods with distinctive names that follow a predefined pattern. The ordinal index appears in the DNS name of a pod. A distinct persistent volume claim (PVC) is created for each pod, and each replica in a StatefulSet has its own state. For instance, a StatefulSet with four replicas generates four pods, each of which has its own volume, or four PVCs. StatefulSets require a headless service to return the IPs of the associated pods and enable direct interaction with them. The headless service has a service IP but no IP address and has to be created separately.The major components of a StatefulSet are the set itself, the persistent volume and the headless service. That all being said, people deploy Stateful Applications with Deployments, usually they mount a RWX PV into the pods so all "frontends" share the same backend. Quite common in CNCF projects.
I know why use StatefulSet for stateful applications. (e.g. DB or something) In most cases, I can see like "You want to deploy stateful app to k8s? Use StatefulSet!" However, I couldn't see like "You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet" ever. Even nobody says "I don't recommend to use StatefulSet for stateless app", many stateless apps is deployed through Deployment, like it is the standard. The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app. Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps? I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong. I'm really curious about this question. P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs. I googled "When not to use StatefulSet", "when Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", or something more questions. I also see the k8s docs about StatefulSet either.
You are using a ClusterIP service (line 7). The Kubernetes ClusterIP service is not made to allow you to access a pod from outside the cluster; ClusterIP just provides a stable, unchanging IP for other internal services to reach your pod. You should instead use a LoadBalancer service. Cf. https://stackoverflow.com/a/48281728/8398523 for the differences.
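A sketch of the same Service switched to type LoadBalancer, keeping the names and selector from your manifest; everything else in your deployment stays as it is.
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-testing-service
  namespace: testing
  # optional on AKS, to get a private IP on the VNet instead of a public one:
  # annotations:
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer        # Azure provisions a load balancer and external IP for this service
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-db-testing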
###Instruction: Hi I've deployed single MySQL db instance in Azure via the YAML file in Azure Kubernetes service. I can get into the container via CLI when I'm inside my cluster. I would like to connect with db instance via external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out it's possible via correctly exposing DB instance by Service configuration. My deployment of single instance MySQL DB instance is: apiVersion: v1 kind: Service metadata: name: mysql-db-testing-service namespace: testing spec: type: ClusterIP ports: - port: 3306 #targetPort: 3306 selector: app: mysql-db-testing --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-db-testing namespace: testing spec: selector: matchLabels: app: mysql-db-testing replicas: 1 strategy: type: Recreate template: metadata: labels: app: mysql-db-testing spec: containers: - name: mysql-db-container-testing image: mysql:8.0.31 env: - name: MYSQL_ROOT_PASSWORD value: test12345 ports: - containerPort: 3306 name: mysql-port volumeMounts: - mountPath: "/var/lib/mysql" name: mysql-persistent-storage volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: azure-managed-disk-pvc-mysql-testing nodeSelector: env: preprod As I've mentioned I can get to the container via CLI: Console output regarding the working pod with db looks like: Console output regarding the service: Is there something missing in my deployment YAML file or maybe there are missing some fields? How can I expose db to the outside world? I would be grateful for help. ### Response: You are using ClusterIP service(line 7). The kubernetes ClusterIP service is not made to allow you to access a pod outside of the cluster. ClusterIP just provide a way to have a not changing IP for other internal services to access your pod. You should use instead Loadbalanacer. Cf https://stackoverflow.com/a/48281728/8398523 for differences
Hi I've deployed single MySQL db instance in Azure via the YAML file in Azure Kubernetes service. I can get into the container via CLI when I'm inside my cluster. I would like to connect with db instance via external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out it's possible via correctly exposing DB instance by Service configuration. My deployment of single instance MySQL DB instance is: apiVersion: v1 kind: Service metadata: name: mysql-db-testing-service namespace: testing spec: type: ClusterIP ports: - port: 3306 #targetPort: 3306 selector: app: mysql-db-testing --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-db-testing namespace: testing spec: selector: matchLabels: app: mysql-db-testing replicas: 1 strategy: type: Recreate template: metadata: labels: app: mysql-db-testing spec: containers: - name: mysql-db-container-testing image: mysql:8.0.31 env: - name: MYSQL_ROOT_PASSWORD value: test12345 ports: - containerPort: 3306 name: mysql-port volumeMounts: - mountPath: "/var/lib/mysql" name: mysql-persistent-storage volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: azure-managed-disk-pvc-mysql-testing nodeSelector: env: preprod As I've mentioned I can get to the container via CLI: Console output regarding the working pod with db looks like: Console output regarding the service: Is there something missing in my deployment YAML file or maybe there are missing some fields? How can I expose db to the outside world? I would be grateful for help.
Different Priorities What happens when a Node becomes unreachable in a cluster? Deployment - Stateless apps You want to maximize availability. As soon as Kubernetes detects that there are fewer than the desired number of replicas running in your cluster, the controllers spawn new replicas of it. Since these apps are stateless, this is very easy for the Kubernetes controllers to do. StatefulSet - Stateful apps You want to maximize availability - but you must also ensure data consistency (the state). To ensure data consistency, each replica has its own unique ID, and there are never multiple replicas of this ID, e.g. it is unique. This means that you cannot spawn up a new replica unless you are sure that the replica on the unreachable Node is terminated (e.g. stops using the Persistent Volume). Conclusion Both Deployment and StatefulSet try to maximize the availability - but StatefulSet cannot sacrifice data consistency (e.g. your state), so it cannot act as fast as Deployment (stateless) apps can. These priorities do not only apply when a Node becomes unreachable, but at all times, e.g. also during upgrades and deployments.
###Instruction: I know why use StatefulSet for stateful applications. (e.g. DB or something) In most cases, I can see like "You want to deploy stateful app to k8s? Use StatefulSet!" However, I couldn't see like "You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet" ever. Even nobody says "I don't recommend to use StatefulSet for stateless app", many stateless apps is deployed through Deployment, like it is the standard. The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app. Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps? I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong. I'm really curious about this question. P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs. I googled "When not to use StatefulSet", "when Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", or something more questions. I also see the k8s docs about StatefulSet either. ### Response: Different Priorities What happens when a Node becomes unreachable in a cluster? Deployment - Stateless apps You want to maximize availability. As soon as Kubernetes detects that there are fewer than the desired number of replicas running in your cluster, the controllers spawn new replicas of it. Since these apps are stateless, it is very easy to do for the Kubernetes controllers. StatefulSet - Stateful apps You want to maximize availability - but not you must ensure data consistency (the state). To ensure data consistency, each replica has its own unique ID, and there are never multiple replicas of this ID, e.g. it is unique. This means that you cannot spawn up a new replica, unless that you are sure that the replica on the unreachable Node are terminated (e.g. stops using the Persistent Volume). Conclusion Both Deployment and StatefulSet try to maximize the availability - but StatefulSet cannot sacrifice data consistency (e.g. your state), so it cannot act as fast as Deployment (stateless) apps can. These priorities does not only happens when a Node becomes unreachable, but at all times, e.g. also during upgrades and deployments.
I know why use StatefulSet for stateful applications. (e.g. DB or something) In most cases, I can see like "You want to deploy stateful app to k8s? Use StatefulSet!" However, I couldn't see like "You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet" ever. Even nobody says "I don't recommend to use StatefulSet for stateless app", many stateless apps is deployed through Deployment, like it is the standard. The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app. Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps? I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong. I'm really curious about this question. P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs. I googled "When not to use StatefulSet", "when Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", or something more questions. I also see the k8s docs about StatefulSet either.
@janeosaka, you are right. This problem occurs when you have either 1) a resource crunch or 2) a network issue. It seems the leader election API call is timing out because the Kube API server had a resource crunch, which increased the latency of the API calls. 1) Resource crunch (increase the CPU and memory of the nodes): it seems that this is the expected behavior. When the leader election fails, the controller is not able to renew the lease, and by design the controller is restarted to ensure that a single controller is active at a time. LeaseDuration and RenewDeadline (RenewDeadline is the duration that the acting master will retry) are configurable in controller-runtime. Another approach you may consider is to leverage API Priority & Fairness to increase the chances of success of the calls made to the API by your controller, if it is not itself at the origin of the API overload. 2) Network issue: if it is a network issue, the lost leader election is a symptom that the host has network problems, not a cause. Check whether the issue resolves after restarting the SDN pod. Note that "sdn-controller" and "sdn" are very different things: if restarting an sdn pod fixed things, then the sdn-controller error you noticed was not the actual problem.
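If you go the tuning route on kube-controller-manager itself (rather than a controller-runtime based operator), the leader-election timings can be raised via flags in the static pod manifest. This is only an excerpt of the command section, and the values are illustrative, not recommendations.
# excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
    - --leader-elect-lease-duration=60s   # default 15s
    - --leader-elect-renew-deadline=40s   # default 10s; must stay below the lease duration
    - --leader-elect-retry-period=10s     # default 2s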
###Instruction: I have a production cluster is currently running on K8s version 1.19.9, where the kube-scheduler and kube-controller-manager failed to have leader elections. The leader is able to acquire the first lease, however it then cannot renew/reacquire the lease, this has caused other pods to constantly in the loop of electing leaders as none of them could stay on long enough to process anything/stay on long enough to do anything meaningful and they time out, where another pod will take the new lease; this happens from node to node. Here are the logs: E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost Detailed Docker logs: Flag --port has been deprecated, see --secure-port instead. I1201 22:14:10.374271 1 serving.go:331] Generated self-signed cert in-memory I1201 22:14:10.735495 1 controllermanager.go:175] Version: v1.19.9+vmware.1 I1201 22:14:10.736289 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt I1201 22:14:10.736302 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I1201 22:14:10.736684 1 secure_serving.go:197] Serving securely on 0.0.0.0:10257 I1201 22:14:10.736747 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager... I1201 22:14:10.736868 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1201 22:14:20.737137 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded E1201 22:14:32.803658 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded E1201 22:14:44.842075 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded E1201 22:15:13.386932 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: context deadline exceeded I1201 22:15:44.818571 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager I1201 22:15:44.818755 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="master001_1d360610-1111-xxxx-aaaa-9999 became leader" I1201 22:15:44.818790 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="master001_1d360610-1111-xxxx-aaaa-9999 became leader" E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: 
timed out waiting for the condition F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000fb20d0, 0x4c, 0xc6) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6a57fa0, 0xc000000003, 0x0, 0x0, 0xc000472070, 0x68d5705, 0x14, 0x125, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191 My duct tape recovery method was to shutdown the other candidates and disable leader elections --leader-elect=false. We manually set a leader and let it stay on for a while, then reactivated leader elections after. This has seemed to work as intended again, the leases are renewing normally after. Could it be possible that the api-server may be too overwhelmed to expend any resources(?), because the elections have failed due to timeout? Was wondering if anyone has ever encountered such an issue. ### Response: @janeosaka, you are right This problem occurs when you have a 1)resource crunch or 2)network issue. It seems the leader election API call is getting timeout as Kube API Server had a resource crunch and it has increased the latency of API calls. 1)Resource Crunch : (Increasing the CPU and Memory of the nodes) It seems that it is the expected behavior. When the leader election fails the controller is not able to renew the lease and per design the controller is restarted to ensure that a single controller is active at a time. LeaseDuration and RenewDeadline (RenewDeadline is the duration that the acting master will retry), are configurable in controller-runtime. Another approach you may consider is to leverage API Priority & Fairness to increase the chances of success of the calls made to the API by your controller if it is not at the origin of the API overload. 2)Network Issue : If it is a network issue : (The leader election lost is a symptom that the host has network problems, not a cause). Check the issue may resolve after restarting the SDN pod "sdn-controller" and "sdn" are very different things. If restarting an sdn pod fixed things, then the sdn-controller the error you noticed was not the actual problem.
I have a production cluster is currently running on K8s version 1.19.9, where the kube-scheduler and kube-controller-manager failed to have leader elections. The leader is able to acquire the first lease, however it then cannot renew/reacquire the lease, this has caused other pods to constantly in the loop of electing leaders as none of them could stay on long enough to process anything/stay on long enough to do anything meaningful and they time out, where another pod will take the new lease; this happens from node to node. Here are the logs: E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost Detailed Docker logs: Flag --port has been deprecated, see --secure-port instead. I1201 22:14:10.374271 1 serving.go:331] Generated self-signed cert in-memory I1201 22:14:10.735495 1 controllermanager.go:175] Version: v1.19.9+vmware.1 I1201 22:14:10.736289 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt I1201 22:14:10.736302 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I1201 22:14:10.736684 1 secure_serving.go:197] Serving securely on 0.0.0.0:10257 I1201 22:14:10.736747 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager... I1201 22:14:10.736868 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1201 22:14:20.737137 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded E1201 22:14:32.803658 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded E1201 22:14:44.842075 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded E1201 22:15:13.386932 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: context deadline exceeded I1201 22:15:44.818571 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager I1201 22:15:44.818755 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="master001_1d360610-1111-xxxx-aaaa-9999 became leader" I1201 22:15:44.818790 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="master001_1d360610-1111-xxxx-aaaa-9999 became leader" E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting 
for the condition F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000fb20d0, 0x4c, 0xc6) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6a57fa0, 0xc000000003, 0x0, 0x0, 0xc000472070, 0x68d5705, 0x14, 0x125, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191 My duct tape recovery method was to shutdown the other candidates and disable leader elections --leader-elect=false. We manually set a leader and let it stay on for a while, then reactivated leader elections after. This has seemed to work as intended again, the leases are renewing normally after. Could it be possible that the api-server may be too overwhelmed to expend any resources(?), because the elections have failed due to timeout? Was wondering if anyone has ever encountered such an issue.
You will need to create a new set of certificates and keys in order to update the certificates used by Docker Desktop for macOS, and then add them to the Kubernetes configuration file. Create a certificate signing request (CSR) first, then use the CSR to create new certificates and keys. Once they are in place in the appropriate directory structure, update the Kubernetes configuration file to point to the new certificates and keys. Finally, restart your Kubernetes cluster so the new certificates and keys take effect. If you are using the minikube command-line tool instead: the first step in updating the certificates is to delete the existing cluster with the minikube delete command. After the cluster has been deleted, the minikube start command creates a new cluster with fresh certificates and writes the updated cluster configuration to your kubeconfig. Also check your Kubernetes version: if you are using an older version, upgrade it to the latest one. The Kubernetes version can be upgraded after a Docker Desktop update; however, when a new Kubernetes version is added to Docker Desktop, you need to reset the current cluster in order to use the newest version.
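Before resetting anything, you can confirm that the client certificate really is expired by inspecting the one embedded in your kubeconfig. This assumes the kubeconfig user created by Docker Desktop is named docker-desktop, which is the usual default but may differ in your setup.
kubectl config view --raw -o jsonpath='{.users[?(@.name=="docker-desktop")].user.client-certificate-data}' \
  | base64 --decode | openssl x509 -noout -enddate
# prints the expiry, e.g.: notAfter=Jan 20 10:00:00 2023 GMT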
###Instruction: I am using kubernetes with docker desktop on MacOS Monterey. I have problem with starting kubernetes, because 1 year passed and my kubernetes certificates are invalid. How can I renew them ? Error message: Error: Kubernetes cluster unreachable: Get "https://kubernetes.docker.internal:6443/version": EOF I tried to install kubeadm but I think it is only suitable if I use minikube. Edit: I am using Mac with M1 chip. ### Response: You will need to create a new set of certificates and keys in order to update the certificates used by Docker Desktop for MacOS. After that, you will need to add the new certificates and keys to the Kubernetes configuration file. Create a certificate signing request (CSR) first, then use the CSR to create new certificates and keys. The Kubernetes configuration file needs to be updated to point to the new certificates and keys after they have been obtained in the appropriate directory structure. Finally, in order for the brand-new certificates and keys to take effect, you will need to restart your Kubernetes cluster. Using the minikube command-line tool.Use the minikube delete command to get rid of the existing cluster is the first step in updating the certificates. The minikube start command can be used to create a new cluster with the updated certificates after the cluster has been deleted. Finally, save the cluster configuration file with the most recent certificates by employing the minikube get-kube-config command. Check for the kubernetes version if you are using an older version upgrade it to the latest version,the Kubernetes version can be upgraded after a Docker Desktop update. However, when a new Kubernetes version is added to Docker Desktop, the user needs to reset its current cluster in order to use the newest version.
I am using kubernetes with docker desktop on MacOS Monterey. I have problem with starting kubernetes, because 1 year passed and my kubernetes certificates are invalid. How can I renew them ? Error message: Error: Kubernetes cluster unreachable: Get "https://kubernetes.docker.internal:6443/version": EOF I tried to install kubeadm but I think it is only suitable if I use minikube. Edit: I am using Mac with M1 chip.
You can use GCS Fuse and store the files in an object storage bucket directly; that way it would be a little easier to manage, if it works for you. However, if you want to go with your idea, you have to use the ReadWriteMany access mode (Read More), so two Pods can attach to one PVC: one writes and the other one reads. You can also use EFS or NFS file systems, GKE with Filestore from GCP, or MinIO / GlusterFS. Ref answer glusterfs I Created a PVC and attached it a deployment but it is writing logs and configurations to it. volumeMounts: - name: file mountPath: /var/data readOnly: true You can set the mode when mounting the file or directory and make it read-only with readOnly: true.
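Putting that together, a trimmed Deployment sketch with the PVC mounted read-only; the deployment name, image and claim name are placeholders for your existing resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-files          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress-files
  template:
    metadata:
      labels:
        app: nginx-ingress-files
    spec:
      containers:
      - name: nginx
        image: nginx:1.25            # placeholder image
        volumeMounts:
        - name: file
          mountPath: /var/data
          readOnly: true             # the container cannot write logs or config back to the PVC
      volumes:
      - name: file
        persistentVolumeClaim:
          claimName: my-existing-pvc # placeholder for your PVC
          readOnly: true             # also request read-only at the volume level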
###Instruction: I have a requirement to store files in a PVC and attach that PVC to a Nginx ingress controller deployment so that application can access the files which are present in PVC but at the same application should not write back logs and configurations to PVC. Can you please let me know how can i achieve it? I Created a PVC and attached it a deployment but it is writing logs and configurations to it. ### Response: You can use the GCP fuse and store files to S3 directly that way it would be a little easy to manage if it's works for you. However, if you want to go with your Idea you have to implement the ReadWriteMany (Read More). So two POD can attach to one PVC and one write and the other one read. You can also use the EFS, or NFS file systems GKE with Filestore from GCP or MinIO, GlusterFS. Ref answer glusterfs I Created a PVC and attached it a deployment but it is writing logs and configurations to it. volumeMounts: - name: file mountPath: /var/data readOnly: true You can set the mode when mounting the file or directory and set it as read-only mode also readOnly: true.
I have a requirement to store files in a PVC and attach that PVC to a Nginx ingress controller deployment so that application can access the files which are present in PVC but at the same application should not write back logs and configurations to PVC. Can you please let me know how can i achieve it? I Created a PVC and attached it a deployment but it is writing logs and configurations to it.
This works for me; I'm using minikube. When checking the minikube status by running the command minikube status, you'll probably get something like this: E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got: 127.0.0.1:55900, want: 127.0.0.1:49736 type: Control Plane host: Running kubelet: Stopped apiserver: Stopped kubeconfig: Misconfigured To fix it, I just followed these steps: minikube update-context minikube start
###Instruction: I have setup the Kubernetes cluster with Kubespray Once I restart the node and check the status of the node I am getting as below $ kubectl get nodes The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port? Environment: OS : CentOS 7 Kubespray kubelet version: 1.22.3 Need your help on this. Regards, Zain ### Response: This work for me, I'm using minukube, When checking the minikube status by running the command minikube status you'll probably get something like that E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got: 127.0.0.1:55900, want: 127.0.0.1:49736 type: Control Plane host: Running kubelet: Stopped apiserver: Stopped kubeconfig: Misconfigured To fix it, I just followed the next steps: minikube update-context minukube start
I have setup the Kubernetes cluster with Kubespray Once I restart the node and check the status of the node I am getting as below $ kubectl get nodes The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port? Environment: OS : CentOS 7 Kubespray kubelet version: 1.22.3 Need your help on this. Regards, Zain
It turned out that it is possible. There are 2 ways that you could solve this problem. Push based Start GitHub Codespace with minikube installed in it Install and configure GitHub's self-hosted runner in GitHub Codespaces Configure and start GitHub's self-hosted runner in GitHub Codespaces Run your GitHub Actions on self-hosted runners jobs: build: runs-on: labels: - self-hosted - self-hosted-runner-label I've successfully implemented the above flow in this repository. Pull based Start GitHub Codespace with minikube installed in it Install ArgoCD in minikube Point ArgoCD towards your GitHub repository Use GitHub Actions to generate new k8s manifest files
###Instruction: Context I've installed minikube in github codespaces, and that works fantastic! With this setup I'm able to port forward any application running in minikube and reach it with the url generated by github codespaces. Problem I'd like to use github actions to deploy an app into the minikube cluster that runs in github codespaces. Question Is it possible, if so the how to do it? ### Response: It toured out that it is possible. There are 2 ways that you could solve this problem. Push based Start GitHub codespace with minikube installed in it Install and configure GitHub's self hosted runner in GitHub Codespaces Configure and start GitHub's self hosted runner in GitHub Codespace Run your GitHub's Actions on self hosted runners jobs: build: runs-on: labels: - self-hosted - self-hosted-runner-label I've successful implement above flow in this repository. Pull based Start GitHub Codespace with minikube installed in it Install ArgoCD in minikube Point ArgoCD towards your GitHub repository Use GitHub Actions to generate new k8s manifests files
Context I've installed minikube in github codespaces, and that works fantastic! With this setup I'm able to port forward any application running in minikube and reach it with the url generated by github codespaces. Problem I'd like to use github actions to deploy an app into the minikube cluster that runs in github codespaces. Question Is it possible, if so the how to do it?
It's also a good thing to check your cluster and node-pool operations. Check the cluster operations in Cloud Shell by running the command: gcloud container operations list Check the age of the nodes with the command kubectl get nodes Check and analyze how your deployment reacts to operations such as cluster upgrade, node-pool upgrade & node-pool auto-repair. You can check Cloud Logging for cluster upgrades or node-pool upgrades using the queries below: Please note you have to add your cluster and node-pool name in the queries. Control plane (master) upgraded: resource.type="gke_cluster" log_id("cloudaudit.googleapis.com/activity") protoPayload.methodName:("UpdateCluster" OR "UpdateClusterInternal") (protoPayload.metadata.operationType="UPGRADE_MASTER" OR protoPayload.response.operationType="UPGRADE_MASTER") resource.labels.cluster_name="" Node-pool upgraded: resource.type="gke_nodepool" log_id("cloudaudit.googleapis.com/activity") protoPayload.methodName:("UpdateNodePool" OR "UpdateClusterInternal") protoPayload.metadata.operationType="UPGRADE_NODES" resource.labels.cluster_name="" resource.labels.nodepool_name=""
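A hedged sketch of running the same checks from the CLI; the cluster name, freshness window and result limit are placeholder assumptions:
# Recent cluster / node-pool operations (upgrades, repairs)
gcloud container operations list
# Node age is a quick hint that an upgrade or auto-repair recreated nodes
kubectl get nodes
# The audit-log query can also be run from the CLI instead of the Logging console
gcloud logging read 'resource.type="gke_cluster" log_id("cloudaudit.googleapis.com/activity") protoPayload.metadata.operationType="UPGRADE_MASTER" resource.labels.cluster_name="my-cluster"' --freshness=30d --limit=10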
###Instruction: From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in google cloud where I can get that information? or a kubectl command to run? It happens every couple of months or so. maybe less frequently than that. ### Response: It's also a good thing to check your cluster and node-pool operations. Check the cluster operation in cloud shell and run the command: gcloud container operations list Check the age of the nodes with the command kubectl get nodes Check and analyze your deployment on how it reacts to operations such as cluster upgrade, node-pool upgrade & node-pool auto-repair. You can check the cloud logging if your cluster upgrade or node-pool upgrades using queries below: Please note you have to add your cluster and node-pool name in the queries. Control plane (master) upgraded: resource.type="gke_cluster" log_id("cloudaudit.googleapis.com/activity") protoPayload.methodName:("UpdateCluster" OR "UpdateClusterInternal") (protoPayload.metadata.operationType="UPGRADE_MASTER" OR protoPayload.response.operationType="UPGRADE_MASTER") resource.labels.cluster_name="" Node-pool upgraded resource.type="gke_nodepool" log_id("cloudaudit.googleapis.com/activity") protoPayload.methodName:("UpdateNodePool" OR "UpdateClusterInternal") protoPayload.metadata.operationType="UPGRADE_NODES" resource.labels.cluster_name="" resource.labels.nodepool_name=""
From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in google cloud where I can get that information? or a kubectl command to run? It happens every couple of months or so. maybe less frequently than that.
The behavior you are describing is most probably because the ingress and ingress-dns addons are currently only supported on Linux systems when using the Docker driver, as mentioned in the Known Issues section of the minikube documentation. Minikube supports ingress differently on Mac and Linux. On Linux, ingress is fully supported and therefore does not need minikube tunnel. On Mac there is an open issue caused by a networking limitation. The documentation's statement that the minikube ingress addon is not supported seems misleading, if not incorrect: it's just supported differently (and not as well). Please go through Ingress DNS and similar SO answers for more information.
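A minimal sketch of the usual macOS workaround, assuming the host name from the question (localkube.com) and the default ingress addon:
# Map the ingress host to localhost rather than the minikube IP
echo "127.0.0.1 localkube.com" | sudo tee -a /etc/hosts
# Keep this running in its own terminal; it forwards ingress traffic to 127.0.0.1
sudo minikube tunnel
# In another terminal the Ingress rule should now answer
curl http://localkube.com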
###Instruction: MacOS Big Sur 11.6.8 minikube version: v1.28.0 Following several tutorials on ingress and attempting to get it working locally. Everything appears to work: manual minikube service foo works, kubectl get ingress shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutes with the same results. I boiled it down to the simplest replication from the tutorial at kubernetes.io : # kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 # kubectl expose deployment web --type=NodePort --port=8080 # kubectl get service web (ensure it's a node port) # minikube service web --url (test url) # kubectl apply -f ingress_hello_world.yaml # curl localkube.com apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: localkube.com http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 Manual service works: >minikube service web --url http://127.0.0.1:50111 ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. >curl http://127.0.0.1:50111 Hello, world! Version: 1.0.0 Hostname: web-84fb9498c7-hnphb Ingress looks good: >minikube addons list | grep ingress | ingress | minikube | enabled βœ… | Kubernetes | >kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress nginx localkube.com 192.168.49.2 80 15m ping resolves the address mapped in /etc/hosts: >ping localkube.com PING localkube.com (192.168.49.2): 56 data bytes I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working. ### Response: The behavior you are describing is most probably because the ingress and ingress-dns addons are currently only supported on Linux Systems when using the Docker driver, as mentioned in the Known Issues section of the minikube documentation. Minikube supports ingress differently on the Mac and Linux. On Linux the ingress is fully supported and therefore does not need the use of minikube tunnel. On Mac there is an open issue due to a network issue. The documentation states that the minikube ingress addon is not supported, seems misleading if not incorrect. It's just supported differently (and not as well). Please go through Ingress DNS and similar SO for more information.
MacOS Big Sur 11.6.8 minikube version: v1.28.0 Following several tutorials on ingress and attempting to get it working locally. Everything appears to work: manual minikube service foo works, kubectl get ingress shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutes with the same results. I boiled it down to the simplest replication from the tutorial at kubernetes.io : # kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 # kubectl expose deployment web --type=NodePort --port=8080 # kubectl get service web (ensure it's a node port) # minikube service web --url (test url) # kubectl apply -f ingress_hello_world.yaml # curl localkube.com apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: localkube.com http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 Manual service works: >minikube service web --url http://127.0.0.1:50111 ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. >curl http://127.0.0.1:50111 Hello, world! Version: 1.0.0 Hostname: web-84fb9498c7-hnphb Ingress looks good: >minikube addons list | grep ingress | ingress | minikube | enabled βœ… | Kubernetes | >kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress nginx localkube.com 192.168.49.2 80 15m ping resolves the address mapped in /etc/hosts: >ping localkube.com PING localkube.com (192.168.49.2): 56 data bytes I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working.
Based on Veera's answer, I looked into the ingress issue with macOS and minikube tunnel. To save others the hassle, here is how I resolved the issue: ingress doesn't seem to work on macOS (the different pages say "with docker" but I had the same outcome with other drivers like hyperkit. the issue seems to be IP / networking related. You can not get to the minikube IP from your local workstation. If you first run minikube ssh you can ping and curl the minikube IP and the domain name you mapped to that IP in /etc/hosts. However, this does not help trying to access the service from a browser. the solution is to map the domain names to 127.0.0.1 in /etc/hosts (instead of the ingress assigned IP) and use ingress components to control the domain-name -> service mappings as before... then starting a tunnel with sudo minikube tunnel will keep a base tunnel open, and create tunneling for any existing or new ingress components. This combined with the ingress rules will mimic host header style connecting to any domain resolving to the local host. Here is a full example of a working solution on mac. Dump this to a file named ingress_hello_world.yaml and follow the commented instructions to achieve a simple ingress solution that routes 2 domains to 2 different services (note this will work with pretty much any internal service, and can be a ClusterIP instead of NodePort): apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: ingressClassName: nginx rules: - host: test1.com http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 - host: test2.com http: paths: - path: / pathType: Prefix backend: service: name: web2 port: number: 8080 # Instructions: # start minikube if not already # >minikube start --vm-driver=docker # # enable ingress if not already # >minikube addons enable ingress # >minikube addons list | grep "ingress " # | ingress | minikube | enabled βœ… | Kubernetes | # # >kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 # deployment.apps/web created # # >kubectl expose deployment web --type=NodePort --port=8080 # service/web exposed # # >kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0 # deployment.apps/web2 created # # >kubectl expose deployment web2 --port=8080 --type=NodePort # service/web2 exposed # # >kubectl get service | grep web # web NodePort 10.101.19.188 <none> 8080:31631/TCP 21m # web2 NodePort 10.102.52.139 <none> 8080:30590/TCP 40s # # >minikube service web --url # http://127.0.0.1:51813 # ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. # # ------ in another console ------ # >curl http://127.0.0.1:51813 # ^---- this must match the port from the output above # Hello, world! # Version: 1.0.0 <---- will show version 2.0.0 for web2 # Hostname: web-84fb9498c7-7bjtg # -------------------------------- # ctrl+c to kill tunnel in original tab, repeat with web2 if desired # # ------ In another console ------ # >sudo minikube tunnel # βœ… Tunnel successfully started # # (leave open, will show the following when you start an ingress component) # Starting tunnel for service example-ingress. 
# -------------------------------- # # >kubectl apply -f ingress_hello_world.yaml # ingress.networking.k8s.io/example-ingress created # # >kubectl get ingress example-ingress --watch # NAME CLASS HOSTS ADDRESS PORTS AGE # example-ingress nginx test1.com,test2.com 80 15s # example-ingress nginx test1.com,test2.com 192.168.49.2 80 29s # wait for this to be populated ----^ # # >cat /etc/hosts | grep test # 127.0.0.1 test1.com # 127.0.0.1 test2.com # ^---- set this to localhost ip # # >ping test1.com # PING test1.com (127.0.0.1): 56 data bytes # # >curl test1.com # Hello, world! # Version: 1.0.0 # Hostname: web-84fb9498c7-w6bkc # # >curl test2.com # Hello, world! # Version: 2.0.0 # Hostname: web2-7df4dcf77b-66g5b # ------- Cleanup: # stop tunnel # # >kubectl delete -f ingress_hello_world.yaml # ingress.networking.k8s.io "example-ingress" deleted # # >kubectl delete service web # service "web" deleted # # >kubectl delete service web2 # service "web2" deleted # # >kubectl delete deployment web # deployment.apps "web" deleted # # >kubectl delete deployment web2 # deployment.apps "web2" deleted
###Instruction: MacOS Big Sur 11.6.8 minikube version: v1.28.0 Following several tutorials on ingress and attempting to get it working locally. Everything appears to work: manual minikube service foo works, kubectl get ingress shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutes with the same results. I boiled it down to the simplest replication from the tutorial at kubernetes.io : # kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 # kubectl expose deployment web --type=NodePort --port=8080 # kubectl get service web (ensure it's a node port) # minikube service web --url (test url) # kubectl apply -f ingress_hello_world.yaml # curl localkube.com apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: localkube.com http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 Manual service works: >minikube service web --url http://127.0.0.1:50111 ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. >curl http://127.0.0.1:50111 Hello, world! Version: 1.0.0 Hostname: web-84fb9498c7-hnphb Ingress looks good: >minikube addons list | grep ingress | ingress | minikube | enabled βœ… | Kubernetes | >kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress nginx localkube.com 192.168.49.2 80 15m ping resolves the address mapped in /etc/hosts: >ping localkube.com PING localkube.com (192.168.49.2): 56 data bytes I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working. ### Response: Based on Veera's answer, I looked into the ingress issue with macOS and minikube tunnel. To save others the hassle, here is how I resolved the issue: ingress doesn't seem to work on macOS (the different pages say "with docker" but I had the same outcome with other drivers like hyperkit. the issue seems to be IP / networking related. You can not get to the minikube IP from your local workstation. If you first run minikube ssh you can ping and curl the minikube IP and the domain name you mapped to that IP in /etc/hosts. However, this does not help trying to access the service from a browser. the solution is to map the domain names to 127.0.0.1 in /etc/hosts (instead of the ingress assigned IP) and use ingress components to control the domain-name -> service mappings as before... then starting a tunnel with sudo minikube tunnel will keep a base tunnel open, and create tunneling for any existing or new ingress components. This combined with the ingress rules will mimic host header style connecting to any domain resolving to the local host. Here is a full example of a working solution on mac. 
Dump this to a file named ingress_hello_world.yaml and follow the commented instructions to achieve a simple ingress solution that routes 2 domains to 2 different services (note this will work with pretty much any internal service, and can be a ClusterIP instead of NodePort): apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: ingressClassName: nginx rules: - host: test1.com http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 - host: test2.com http: paths: - path: / pathType: Prefix backend: service: name: web2 port: number: 8080 # Instructions: # start minikube if not already # >minikube start --vm-driver=docker # # enable ingress if not already # >minikube addons enable ingress # >minikube addons list | grep "ingress " # | ingress | minikube | enabled βœ… | Kubernetes | # # >kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 # deployment.apps/web created # # >kubectl expose deployment web --type=NodePort --port=8080 # service/web exposed # # >kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0 # deployment.apps/web2 created # # >kubectl expose deployment web2 --port=8080 --type=NodePort # service/web2 exposed # # >kubectl get service | grep web # web NodePort 10.101.19.188 <none> 8080:31631/TCP 21m # web2 NodePort 10.102.52.139 <none> 8080:30590/TCP 40s # # >minikube service web --url # http://127.0.0.1:51813 # ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. # # ------ in another console ------ # >curl http://127.0.0.1:51813 # ^---- this must match the port from the output above # Hello, world! # Version: 1.0.0 <---- will show version 2.0.0 for web2 # Hostname: web-84fb9498c7-7bjtg # -------------------------------- # ctrl+c to kill tunnel in original tab, repeat with web2 if desired # # ------ In another console ------ # >sudo minikube tunnel # βœ… Tunnel successfully started # # (leave open, will show the following when you start an ingress component) # Starting tunnel for service example-ingress. # -------------------------------- # # >kubectl apply -f ingress_hello_world.yaml # ingress.networking.k8s.io/example-ingress created # # >kubectl get ingress example-ingress --watch # NAME CLASS HOSTS ADDRESS PORTS AGE # example-ingress nginx test1.com,test2.com 80 15s # example-ingress nginx test1.com,test2.com 192.168.49.2 80 29s # wait for this to be populated ----^ # # >cat /etc/hosts | grep test # 127.0.0.1 test1.com # 127.0.0.1 test2.com # ^---- set this to localhost ip # # >ping test1.com # PING test1.com (127.0.0.1): 56 data bytes # # >curl test1.com # Hello, world! # Version: 1.0.0 # Hostname: web-84fb9498c7-w6bkc # # >curl test2.com # Hello, world! # Version: 2.0.0 # Hostname: web2-7df4dcf77b-66g5b # ------- Cleanup: # stop tunnel # # >kubectl delete -f ingress_hello_world.yaml # ingress.networking.k8s.io "example-ingress" deleted # # >kubectl delete service web # service "web" deleted # # >kubectl delete service web2 # service "web2" deleted # # >kubectl delete deployment web # deployment.apps "web" deleted # # >kubectl delete deployment web2 # deployment.apps "web2" deleted
MacOS Big Sur 11.6.8 minikube version: v1.28.0 Following several tutorials on ingress and attempting to get it working locally. Everything appears to work: manual minikube service foo works, kubectl get ingress shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutes with the same results. I boiled it down to the simplest replication from the tutorial at kubernetes.io : # kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 # kubectl expose deployment web --type=NodePort --port=8080 # kubectl get service web (ensure it's a node port) # minikube service web --url (test url) # kubectl apply -f ingress_hello_world.yaml # curl localkube.com apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: localkube.com http: paths: - path: / pathType: Prefix backend: service: name: web port: number: 8080 Manual service works: >minikube service web --url http://127.0.0.1:50111 ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it. >curl http://127.0.0.1:50111 Hello, world! Version: 1.0.0 Hostname: web-84fb9498c7-hnphb Ingress looks good: >minikube addons list | grep ingress | ingress | minikube | enabled βœ… | Kubernetes | >kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress nginx localkube.com 192.168.49.2 80 15m ping resolves the address mapped in /etc/hosts: >ping localkube.com PING localkube.com (192.168.49.2): 56 data bytes I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working.
If it is a public cluster, where each node in the cluster has a public IP address, the outgoing IP will be the address of the node the pod is running on. If it is a private cluster, you can deploy a NAT gateway for all the nodes and specify static IP addresses. You can use this Terraform module for a private cluster: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster plus a NAT gateway from here: https://cloud.google.com/nat/docs/gke-example#terraform
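If you prefer gcloud over Terraform, here is a hedged sketch of the same idea: reserve a static address and attach it to a Cloud NAT gateway so private nodes egress through a known IP. The region, network and resource names are placeholders:
# Reserve a static external IP for egress
gcloud compute addresses create k8s-egress-ip --region=europe-west1
# Cloud NAT needs a Cloud Router in the same region and network
gcloud compute routers create k8s-nat-router --network=my-vpc --region=europe-west1
# NAT gateway that uses only the reserved IP
gcloud compute routers nats create k8s-nat \
  --router=k8s-nat-router --region=europe-west1 \
  --nat-external-ip-pool=k8s-egress-ip \
  --nat-all-subnet-ip-ranges
Outgoing requests from pods on private nodes will then reach Google APIs from k8s-egress-ip, which is the address you can allow-list.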
###Instruction: Pod A is on ClusterIP service type, so incoming requests from external resources are not allowed. Pod A executes outgoing requests to 3rd party services (Such as Google APIs). And I want to specify the IP address that this request is coming from on google for security reasons. Is there a way to find the IP address this pod uses for outgoing HTTP requests? ### Response: If it is a public cluster where each node in the cluster has an ip address the public ip will be the address of the node the pod is on. If it is a private cluster you can deploy a nat gateway for all the nodes and specify static ip addresses. you can use this terraform module for a private cluster: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster Plus a nat gateway from here https://cloud.google.com/nat/docs/gke-example#terraform
Pod A is on ClusterIP service type, so incoming requests from external resources are not allowed. Pod A executes outgoing requests to 3rd party services (Such as Google APIs). And I want to specify the IP address that this request is coming from on google for security reasons. Is there a way to find the IP address this pod uses for outgoing HTTP requests?
To pull from a private container registry on GitLab you must first create a Deploy Token, similar to how a pipeline or similar "service" would access it. Go to the repository, then go to Settings -> Repository -> Deploy Tokens. Give the deploy token a name and a username (it says optional, but we'll be able to use this custom username with the token) and make sure it has read_registry access. That is all it needs to pull from the registry. If you later need to push then you would need write_registry. Once you click Create deploy token it will show you the token; be sure to copy it, as you won't see it again. Now just recreate your secret in your k8s cluster. kubectl create secret docker-registry regcred --docker-server=<private gitlab registry> --docker-username=<deploy token username> --docker-password=<deploy token> Make sure to apply the secret to the same namespace as your deployment that is pulling the image. See the docs: https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token
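A quick, hedged way to sanity-check the credentials before blaming the cluster, reusing the registry and image names from the question:
# Confirm the secret exists in the Deployment's namespace and actually contains the deploy token
kubectl get secret regcred -n default -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
# Verify the deploy token can pull at all, outside Kubernetes
docker login gitlab.domain.tld:5050 -u <deploy-token-username> -p <deploy-token>
docker pull gitlab.domain.tld:5050/group/appproject:1.0
If the docker pull works but the pod still gets 403, the secret in the cluster most likely still holds the old personal-token credentials.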
###Instruction: I got problem with connecting my k3s cluster to GitLab Docker Registry. On cluster I got created secret in default namespace like this kubectl create secret docker-registry regcred --docker-server=https://gitlab.domain.tld:5050 --docker-username=USERNAME --docker-email=EMAIL --docker-password=TOKEN Then in Deployment config I got this secret included, my config: apiVersion: apps/v1 kind: Deployment metadata: name: app labels: app.kubernetes.io/name: "app" app.kubernetes.io/version: "1.0" namespace: default spec: template: metadata: labels: app: app spec: imagePullSecrets: - name: regcred containers: - image: gitlab.domain.tld:5050/group/appproject:1.0 name: app imagePullPolicy: Always ports: - containerPort: 80 But the created pod is still unable to pull this image. There is still error message of: failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Can you help me, where the error may be? If I try connect to this GitLab registry via secrets above on local docker, it working fine, docker login is right, also a pulling of this image. Thanks ### Response: To pull from a private container registry on Gitlab you must first create a Deploy Token similar to how the pipeline or similar "service" would access it. Go to the repository then go to Settings -> Repository -> Deploy Tokens Give the deploy token a name, and a username(it says optional but we'll be able to use this custom username with the token) and make sure it has read_registry access. That is all it needs to pull from the registry. If you later need to push then you would need write_registry. Once you click create deploy token it will show you the token be sure to copy it as you won't see it again. Now just recreate your secret in your k8s cluster. kubectl create secret docker-registry regcred --docker-server=<private gitlab registry> --docker-username=<deploy token username> --docker-password=<deploy token> Make sure to apply the secret to the same namespace as your deployment that is pulling the image. [See Docs] https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token
I got problem with connecting my k3s cluster to GitLab Docker Registry. On cluster I got created secret in default namespace like this kubectl create secret docker-registry regcred --docker-server=https://gitlab.domain.tld:5050 --docker-username=USERNAME --docker-email=EMAIL --docker-password=TOKEN Then in Deployment config I got this secret included, my config: apiVersion: apps/v1 kind: Deployment metadata: name: app labels: app.kubernetes.io/name: "app" app.kubernetes.io/version: "1.0" namespace: default spec: template: metadata: labels: app: app spec: imagePullSecrets: - name: regcred containers: - image: gitlab.domain.tld:5050/group/appproject:1.0 name: app imagePullPolicy: Always ports: - containerPort: 80 But the created pod is still unable to pull this image. There is still error message of: failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Can you help me, where the error may be? If I try connect to this GitLab registry via secrets above on local docker, it working fine, docker login is right, also a pulling of this image. Thanks
You have used type: ClusterIP, so it won't expose MySQL outside the cluster. Ideally, your microservices running in the cluster will be able to access it, but you cannot use it externally. To expose the service we generally have to use type: LoadBalancer. It will directly expose your MySQL service to the internet, and from your local workbench you can connect to the DB running on K8s. If you really don't want to expose the MySQL service directly to the internet, you can deploy Adminer. Traffic will then flow like: internet > adminer > internal communication > MySQL service > MySQL pod. YAML file to deploy Adminer and get the UI output directly in the browser; it will ask for the MySQL DB username, password and host (mysql-db-testing-service.testing.svc.cluster.local) to connect apiVersion: apps/v1 kind: Deployment metadata: name: adminer labels: app: adminer spec: selector: matchLabels: app: adminer template: metadata: labels: app: adminer spec: containers: - name: adminer image: adminer:4.6.3 ports: - containerPort: 8080 env: - name: ADMINER_DESIGN value: "pappu687" --- apiVersion: v1 kind: Service metadata: name: adminer-svc spec: type: ClusterIP (internal to cluster) / LoadBalancer (exposed to internet) selector: app: adminer ports: - protocol: TCP port: 8080 targetPort: 8080 Port-forward for local access or use service type: LoadBalancer kubectl port-forward svc/adminer-svc 8080:8080 Open localhost:8080 in browser
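If all you need is occasional access from MySQL Workbench on your laptop, a hedged alternative that avoids exposing anything is a plain port-forward to the existing ClusterIP service (names taken from the question):
# Forward the in-cluster service to your workstation
kubectl port-forward svc/mysql-db-testing-service 3306:3306 -n testing
# In another terminal (or in Workbench) connect to 127.0.0.1:3306
mysql -h 127.0.0.1 -P 3306 -u root -p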
###Instruction: Hi I've deployed single MySQL db instance in Azure via the YAML file in Azure Kubernetes service. I can get into the container via CLI when I'm inside my cluster. I would like to connect with db instance via external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out it's possible via correctly exposing DB instance by Service configuration. My deployment of single instance MySQL DB instance is: apiVersion: v1 kind: Service metadata: name: mysql-db-testing-service namespace: testing spec: type: ClusterIP ports: - port: 3306 #targetPort: 3306 selector: app: mysql-db-testing --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-db-testing namespace: testing spec: selector: matchLabels: app: mysql-db-testing replicas: 1 strategy: type: Recreate template: metadata: labels: app: mysql-db-testing spec: containers: - name: mysql-db-container-testing image: mysql:8.0.31 env: - name: MYSQL_ROOT_PASSWORD value: test12345 ports: - containerPort: 3306 name: mysql-port volumeMounts: - mountPath: "/var/lib/mysql" name: mysql-persistent-storage volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: azure-managed-disk-pvc-mysql-testing nodeSelector: env: preprod As I've mentioned I can get to the container via CLI: Console output regarding the working pod with db looks like: Console output regarding the service: Is there something missing in my deployment YAML file or maybe there are missing some fields? How can I expose db to the outside world? I would be grateful for help. ### Response: You have used the type: ClusterIP so it won't expose the MYSQL outside the cluster ideally, your Microservices running in the cluster will be able to access it however you can not use it externally. To expose the service we generally have to use the type: LoadBalancer. It will directly expose your MySQL service internet and from your local workbench, you can connect to DB running on K8s. If you really don't want to expose the MySQL service directly to internet you can deploy the adminer. So traffic will flow like internet > adminer > internal communication > MySQL service > MySQL POD YAML file to deploy and get the UI output directly in the browser, it will ask of MySQL DB username, password, Host (mysql-db-testing-service.testing.svc.cluster.local) to connect apiVersion: apps/v1 kind: Deployment metadata: name: adminer labels: app: adminer spec: selector: matchLabels: app: adminer template: metadata: labels: app: adminer spec: containers: - name: adminer image: adminer:4.6.3 ports: - containerPort: 8080 env: - name: ADMINER_DESIGN value: "pappu687" --- apiVersion: v1 kind: Service metadata: name: adminer-svc spec: type: ClusterIP(Internally to cluster)/LoadBalancer (Expose to internet) selector: app: adminer ports: - protocol: TCP port: 8080 targetPort: 8080 Port-forward for local access or use service type: LoadBalancer kubectl port-forward svc/adminer-svc 8080:8080 Open localhost:8080 in browser
Hi I've deployed single MySQL db instance in Azure via the YAML file in Azure Kubernetes service. I can get into the container via CLI when I'm inside my cluster. I would like to connect with db instance via external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out it's possible via correctly exposing DB instance by Service configuration. My deployment of single instance MySQL DB instance is: apiVersion: v1 kind: Service metadata: name: mysql-db-testing-service namespace: testing spec: type: ClusterIP ports: - port: 3306 #targetPort: 3306 selector: app: mysql-db-testing --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-db-testing namespace: testing spec: selector: matchLabels: app: mysql-db-testing replicas: 1 strategy: type: Recreate template: metadata: labels: app: mysql-db-testing spec: containers: - name: mysql-db-container-testing image: mysql:8.0.31 env: - name: MYSQL_ROOT_PASSWORD value: test12345 ports: - containerPort: 3306 name: mysql-port volumeMounts: - mountPath: "/var/lib/mysql" name: mysql-persistent-storage volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: azure-managed-disk-pvc-mysql-testing nodeSelector: env: preprod As I've mentioned I can get to the container via CLI: Console output regarding the working pod with db looks like: Console output regarding the service: Is there something missing in my deployment YAML file or maybe there are missing some fields? How can I expose db to the outside world? I would be grateful for help.
An interesting approach would be to output the JSON and extract the relevant fields. This is useful if you want to analyse the output afterwards. You may want to add the namespace in case you have multiple. Single command: kubectl get endpoints -o jsonpath="{range .items[*]}{.metadata.name},{.subsets[*].addresses[*].ip}{'\n'}{end}" Bash script #!/usr/bin/env bash kubectl get endpoints -o jsonpath="{range .items[*]}{.metadata.name},{.subsets[*].addresses[*].ip}{'\n'}{end}" | while IFS=, read -r service endpoints; do echo "$service"; for endpoint in $endpoints; do echo "$endpoint"; done; done
###Instruction: It seems a silly question, but how do I expand the "+ 1 more..." from the endpoints list? Particularly I want to see all the endpoints of a service and the following command hides some endpoints behind the "+ 1 more...". kubectl describe service <name> I tried this: kubectl get endpoints -o wide It still doesn't show the hidden one. Thanks ### Response: An interesting approach would be to output the json and extract the relevant fields. This is useful if you want to analyse the output after. You may want to add the namespace in case you have multiple. Single command: kubectl get endpoints -o jsonpath="{range .items[*]}{.metadata.name},{.subsets[*].addresses[*].ip}{'\n'}{end}" Bash script #!/usr/bin/env bash for service in `kubectl get endpoints -o jsonpath="{range .items[*]}{.metadata.name},{.subsets[*].addresses[*].ip}{'\n'}{end}"`; do echo $service | awk -F, {'print $1'} for endpoint in $(echo $services | awk -F, {'print $2'}); do echo $endpoint done done
It seems a silly question, but how do I expand the "+ 1 more..." from the endpoints list? Particularly I want to see all the endpoints of a service and the following command hides some endpoints behind the "+ 1 more...". kubectl describe service <name> I tried this: kubectl get endpoints -o wide It still doesn't show the hidden one. Thanks
To illustrate Nataraj Medayhal's answer, you can find an example based on a ConfigMap in devlinx9/k8s_tomcat_custer. The ConfigMap is used to control the configuration of Tomcat; in this case we added the cluster configuration. Save the following text in a file configmap-tomcat.yaml apiVersion: v1 kind: ConfigMap metadata: name: testconfig data: server.xml: | <?xml version="1.0" encoding="UTF-8"?> <Server port="8005" shutdown="SHUTDOWN"> <Listener className="org.apache.catalina.startup.VersionLoggerListener" /> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> ... </Server> Create the configMap: kubectl apply -f configmap-tomcat.yaml -n {namespace}
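To actually get that file in front of Tomcat, here is a hedged sketch of mounting the ConfigMap over the default server.xml; the mount path assumes the official tomcat image layout (/usr/local/tomcat/conf) and the image tag is a placeholder:
# Mount the ConfigMap entry over the default server.xml via subPath
kubectl apply -n {namespace} -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: tomcat-config
              mountPath: /usr/local/tomcat/conf/server.xml
              subPath: server.xml
      volumes:
        - name: tomcat-config
          configMap:
            name: testconfig
EOF
Tomcat reads server.xml only at startup, so changing the ConfigMap afterwards requires restarting the pods (for example with kubectl rollout restart deployment/tomcat).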
###Instruction: When deploying the app, certain environment-specific settings need to be applied to the server.xml, which cannot be applied when the container is built. Has anyone tried using a volume_mounted config file, and where would I tell tomcat the location of this custom config? ### Response: To illustrate Nataraj Medayhal, you can find an example based on configMap on devlinx9/k8s_tomcat_custer The configMap is used to control the configuration of tomcat, in this we added the cluster configuration, save the following text in a file configmap-tomcat.yaml apiVersion: v1 kind: ConfigMap metadata: name: testconfig data: server.xml: | <?xml version="1.0" encoding="UTF-8"?> <Server port="8005" shutdown="SHUTDOWN"> <Listener className="org.apache.catalina.startup.VersionLoggerListener" /> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> ... </Server> Create the configMap: kubectl apply -f configmap-tomcat.yaml -n {namespace}
When deploying the app, certain environment-specific settings need to be applied to the server.xml, which cannot be applied when the container is built. Has anyone tried using a volume_mounted config file, and where would I tell tomcat the location of this custom config?
This isn't a direct answer, but it ended up being too long for a comment and I think it's relevant. If you were to apply your manifests using kustomize (aka kubectl apply -k), then you get the following behavior: ConfigMaps are generated with a content-based hash appended to their name Kustomize substitutes the generated name into your Deployment This means the Deployment is only modified when the content of the ConfigMap changes, causing an implicit re-deploy of the pods managed by the Deployment. This largely gets you the behavior you want, but it would require some changes to your deployment pipeline.
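A minimal sketch of that kustomize setup; the file names (deployment.yml, application.properties) and the ConfigMap name are assumptions, not something from the original pipeline:
# kustomization.yaml next to your manifests
cat > kustomization.yaml <<'EOF'
resources:
  - deployment.yml
configMapGenerator:
  - name: test-service-config
    files:
      - application.properties
EOF
# Kustomize appends a content hash to the generated ConfigMap name and rewrites the
# reference inside deployment.yml, so pods roll only when the config content changes
kubectl apply -k .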
###Instruction: I have a devops pipeline divided in three steps: kubectl apply -f configmap.yml kubectl apply -f deployment.yml kubectl rollout restart deployment/test-service I think that when the configmap.yml changes the rollout restart step is useful. But when only the deployment.yml changes, I'm worried that the "extra" rollout restart step is not useful and should be avoided. Should I execute rollout restart only when the configmap.yml changes or should I don't care about? ### Response: This isn't a direct answer, but it ended up being too long for a comment and I think it's relevant. If you were to apply your manifests using kustomize (aka kubectl apply -k), then you get the following behavior: ConfigMaps are generated with a content-based hash appended to their name Kustomize substitutes the generated name into your Deployment This means the Deployment is only modified when the content of the ConfigMap changes, causing an implicit re-deploy of the pods managed by the Deployment. This largely gets you the behavior you want, but it would require some changes to your deployment pipeline.
I have a devops pipeline divided in three steps: kubectl apply -f configmap.yml kubectl apply -f deployment.yml kubectl rollout restart deployment/test-service I think that when the configmap.yml changes the rollout restart step is useful. But when only the deployment.yml changes, I'm worried that the "extra" rollout restart step is not useful and should be avoided. Should I execute rollout restart only when the configmap.yml changes or should I don't care about?
Based on the description, the --force flag should do the trick. --force force resource updates through a replacement strategy However, there are some issues with it as mentioned in this GitHub issue.
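A short, hedged sketch; the release and chart names are placeholders, and the helm-diff plugin step is optional but useful for previewing what the upgrade will change:
# Preview the differences the upgrade would apply (optional plugin)
helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade my-release ./my-chart
# Upgrade using replacement instead of a three-way merge patch, discarding manual edits
helm upgrade my-release ./my-chart --force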
###Instruction: I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs here (search for 'Upgrades where live state has changed') we can see that helm v2 would purge changes that were introduced to a template via external actors. Whereas v3 is very clever and will merge externally introduced changes alongside template changes as long as they dont conflict. So how do we in helm v3 run a upgrade that purges any manual template changes that may have been introduced? ### Response: Based on the description, the --force flag should do the trick. --force force resource updates through a replacement strategy However, there are some issues with it as mentioned in this GitHub issue.
I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs here (search for 'Upgrades where live state has changed') we can see that helm v2 would purge changes that were introduced to a template via external actors. Whereas v3 is very clever and will merge externally introduced changes alongside template changes as long as they dont conflict. So how do we in helm v3 run a upgrade that purges any manual template changes that may have been introduced?
After looking at the yaml files I realized that Prometheus doesn't have the permissions to read all namespaces. And after looking at the repository customization examples I found the solution: https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md Hope this helps someone else in the future.
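In kube-prometheus this is normally done through the jsonnet customization in the linked doc, but what it boils down to is RBAC in the extra namespace. A hedged sketch, assuming the default kube-prometheus ServiceAccount name (prometheus-k8s) in the monitoring namespace:
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: other-namespace
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: other-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: monitoring
EOF
After that, ServiceMonitors in other-namespace (or selected via namespaceSelector) should start showing up under /service-discovery.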
###Instruction: I just installed prometheus operator as indicated here: https://github.com/prometheus-operator/kube-prometheus: kubectl apply --server-side -f manifests/setup kubectl wait \ --for condition=Established \ --all CustomResourceDefinition \ --namespace=monitoring kubectl apply -f manifests/ After that I just tried to setup my own service monitor for grafana as follows: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: in1-grafana-service-monitor namespace: monitoring spec: selector: matchLabels: app.kubernetes.io/name: grafana endpoints: - port: http interval: 10s This monitor works just fine and I can see it in the Prometheus /targets and /service-discovery. The fact is that when I want to create this same service monitor but outside the "monitoring" namespace it just not appears neither in /targets or in /service-discovery. My setup for this service monitor is as follows: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: out1-grafana-service-monitor namespace: other-namespace spec: selector: matchLabels: app.kubernetes.io/name: grafana namespaceSelector: any: true endpoints: - port: http interval: 10s How can I make Prometheus operator to scrape service monitors (and services) outside the monitoring namespace? I checked the output of kubectl get prom -Ao yaml and it just displays an empty list: [...] serviceMonitorNamespaceSelector: {} serviceMonitorSelector: {} [...] Any help will be appreciated. Thank you. I expect that the service monitor outside the monitoring namespace works as I need it for other service (Not for Grafana). ### Response: After looking at the yaml files I realized that Prometheus doesn't have the permissions to read all namespaces. And after looking at the repository customization examples I found the solution: https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md Hope this helps someone else in the future.
I just installed prometheus operator as indicated here: https://github.com/prometheus-operator/kube-prometheus: kubectl apply --server-side -f manifests/setup kubectl wait \ --for condition=Established \ --all CustomResourceDefinition \ --namespace=monitoring kubectl apply -f manifests/ After that I just tried to setup my own service monitor for grafana as follows: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: in1-grafana-service-monitor namespace: monitoring spec: selector: matchLabels: app.kubernetes.io/name: grafana endpoints: - port: http interval: 10s This monitor works just fine and I can see it in the Prometheus /targets and /service-discovery. The fact is that when I want to create this same service monitor but outside the "monitoring" namespace it just not appears neither in /targets or in /service-discovery. My setup for this service monitor is as follows: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: out1-grafana-service-monitor namespace: other-namespace spec: selector: matchLabels: app.kubernetes.io/name: grafana namespaceSelector: any: true endpoints: - port: http interval: 10s How can I make Prometheus operator to scrape service monitors (and services) outside the monitoring namespace? I checked the output of kubectl get prom -Ao yaml and it just displays an empty list: [...] serviceMonitorNamespaceSelector: {} serviceMonitorSelector: {} [...] Any help will be appreciated. Thank you. I expect that the service monitor outside the monitoring namespace works as I need it for other service (Not for Grafana).
Once understood the issue is fairly simple. The metallb setup described above works as it is supposed to. However, the Kubernetes setup does not. Most likely due to bad network configuration. Understanding the error The key to understanding what is going on is the following error: Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host Part of the applied metallb manifest is going to deploy a so-called ValidatingWebhookConfiguration. In the case of metallb this validating webhook will force the kube-apiserver to: send metallb-related objects like IPAddressPool to the webhook whenever someone creates or updates such an object wait for the webhook to perform some checks on the object (e.g. validate that CIDRs and IPs are valid and not something like 481.9.141.12.27) and finally receive an answer from the webhook whether or not that object satisfies metallb's requirements and is allowed to be created (persisted to etcd) The error above pretty clearly suggests that the first out of the three outlined steps is failing. Debugging To fix this error one has to debug the current setup, particularly the connection from the kube-apiserver to webhook-service.metallb-system.svc:443. There is a wide range of possible network misconfigurations that could lead to the error. However, with the information available to us it is most likely going to be an error with the configured CNI. Knowing that here is some help and a bit of guidance regarding the further debugging process: Since the kube-apiserver is hardened by default it won't be possible to execute a shell into it. For that reason one should deploy a debug application with the same network configuration as the kube-apiserver onto one of the control-plane nodes. This can be achieved by executing the following command: kubectl debug -n kube-system node/<control-plane-node> -it --image=nicolaka/netshoot Using common tools one can now reproduce the error inside the interactive shell. The following command is expected to fail (in a similar fashion to the kube-apiserver): curl -m 10 -k https://<webhook-service-ip>:443/ Given above error message it should fail due to bad routing on the node. To check the routing table execute the following command: routel Does someone know why the webhook cannot be called? The output should show multiple CIDR ranges configured one of which is supposed to include the IP queried earlier. Most likely the CIDR range in question will either be missing or a bad gateway configured which leads to the no route to host error. It is the CNIs job to update routing tables on all nodes and ensure that nodes can reach these addresses so adding or editing new Kubernetes related entries to the routing table manually is not recommended. Further debugging is dependent on the exact setup. Depending on the setup and CNI of choice kube-proxy may or may not be involved in the issue as well. However, inspecting the CNI configuration and logs is a good next step. Some bonus information Some CNIs require the user to pay more attention to certain features and configuration as there can be issues involved otherwise. Here are some popular CNIs that fall into this category: Calico (see here) Weave (see here) Kube-Router (see here)
###Instruction: I'm setting up an on-premise kubernetes cluster with kubeadm. Here is the Kubernestes version clientVersion: buildDate: "2022-10-12T10:57:26Z" compiler: gc gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506 gitTreeState: clean gitVersion: v1.25.3 goVersion: go1.19.2 major: "1" minor: "25" platform: linux/amd64 kustomizeVersion: v4.5.7 serverVersion: buildDate: "2022-10-12T10:49:09Z" compiler: gc gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506 gitTreeState: clean gitVersion: v1.25.3 goVersion: go1.19.2 major: "1" minor: "25" platform: linux/amd64 I have installed metallb version 0.13.7 kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml Everything is running $ kubectl get all -n metallb-system NAME READY STATUS RESTARTS AGE pod/controller-84d6d4db45-l2r55 1/1 Running 0 35s pod/speaker-48qn4 1/1 Running 0 35s pod/speaker-ds8hh 1/1 Running 0 35s pod/speaker-pfbcp 1/1 Running 0 35s pod/speaker-st7n2 1/1 Running 0 35s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 35s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/speaker 4 4 4 4 4 kubernetes.io/os=linux 35s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/controller 1/1 1 1 35s NAME DESIRED CURRENT READY AGE replicaset.apps/controller-84d6d4db45 1 1 1 35s But when i try to apply an IPaddressPool CRD i get an error kubectl apply -f ipaddresspool.yaml ipaddresspool.yaml file content apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.168.2.100-192.168.2.199 The error is a fail to call the validation webhook no route to host Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host Here is the same error with line brakes Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host The IP -address is correct NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 18m I have also tried installing metallb v 0.13.7 using helm but with the same result Does someone know why the webhook cannot be called? EDIT As an answer to Thomas question, here is the description for webhook-service. NOTE that this is from another cluster with the same problem because I deleted the last cluster so the IP is not the same as last time $ kubectl describe svc webhook-service -n metallb-system Name: webhook-service Namespace: metallb-system Labels: <none> Annotations: <none> Selector: component=controller Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.157.72 IPs: 10.105.157.72 Port: <unset> 443/TCP TargetPort: 9443/TCP Endpoints: 172.17.0.3:9443 Session Affinity: None Events: <none> ### Response: Once understood the issue is fairly simple. The metallb setup described above works as it is supposed to. However, the Kubernetes setup does not. 
Most likely due to bad network configuration. Understanding the error The key to understanding what is going on is the following error: Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host Part of the applied metallb manifest is going to deploy a so-called ValidatingWebhookConfiguration. In the case of metallb this validating webhook will force the kube-apiserver to: send metallb-related objects like IPAddressPool to the webhook whenever someone creates or updates such an object wait for the webhook to perform some checks on the object (e.g. validate that CIDRs and IPs are valid and not something like 481.9.141.12.27) and finally receive an answer from the webhook whether or not that object satisfies metallb's requirements and is allowed to be created (persisted to etcd) The error above pretty clearly suggests that the first out of the three outlined steps is failing. Debugging To fix this error one has to debug the current setup, particularly the connection from the kube-apiserver to webhook-service.metallb-system.svc:443. There is a wide range of possible network misconfigurations that could lead to the error. However, with the information available to us it is most likely going to be an error with the configured CNI. Knowing that here is some help and a bit of guidance regarding the further debugging process: Since the kube-apiserver is hardened by default it won't be possible to execute a shell into it. For that reason one should deploy a debug application with the same network configuration as the kube-apiserver onto one of the control-plane nodes. This can be achieved by executing the following command: kubectl debug -n kube-system node/<control-plane-node> -it --image=nicolaka/netshoot Using common tools one can now reproduce the error inside the interactive shell. The following command is expected to fail (in a similar fashion to the kube-apiserver): curl -m 10 -k https://<webhook-service-ip>:443/ Given above error message it should fail due to bad routing on the node. To check the routing table execute the following command: routel Does someone know why the webhook cannot be called? The output should show multiple CIDR ranges configured one of which is supposed to include the IP queried earlier. Most likely the CIDR range in question will either be missing or a bad gateway configured which leads to the no route to host error. It is the CNIs job to update routing tables on all nodes and ensure that nodes can reach these addresses so adding or editing new Kubernetes related entries to the routing table manually is not recommended. Further debugging is dependent on the exact setup. Depending on the setup and CNI of choice kube-proxy may or may not be involved in the issue as well. However, inspecting the CNI configuration and logs is a good next step. Some bonus information Some CNIs require the user to pay more attention to certain features and configuration as there can be issues involved otherwise. Here are some popular CNIs that fall into this category: Calico (see here) Weave (see here) Kube-Router (see here)
I'm setting up an on-premise kubernetes cluster with kubeadm. Here is the Kubernestes version clientVersion: buildDate: "2022-10-12T10:57:26Z" compiler: gc gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506 gitTreeState: clean gitVersion: v1.25.3 goVersion: go1.19.2 major: "1" minor: "25" platform: linux/amd64 kustomizeVersion: v4.5.7 serverVersion: buildDate: "2022-10-12T10:49:09Z" compiler: gc gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506 gitTreeState: clean gitVersion: v1.25.3 goVersion: go1.19.2 major: "1" minor: "25" platform: linux/amd64 I have installed metallb version 0.13.7 kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml Everything is running $ kubectl get all -n metallb-system NAME READY STATUS RESTARTS AGE pod/controller-84d6d4db45-l2r55 1/1 Running 0 35s pod/speaker-48qn4 1/1 Running 0 35s pod/speaker-ds8hh 1/1 Running 0 35s pod/speaker-pfbcp 1/1 Running 0 35s pod/speaker-st7n2 1/1 Running 0 35s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 35s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/speaker 4 4 4 4 4 kubernetes.io/os=linux 35s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/controller 1/1 1 1 35s NAME DESIRED CURRENT READY AGE replicaset.apps/controller-84d6d4db45 1 1 1 35s But when i try to apply an IPaddressPool CRD i get an error kubectl apply -f ipaddresspool.yaml ipaddresspool.yaml file content apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.168.2.100-192.168.2.199 The error is a fail to call the validation webhook no route to host Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host Here is the same error with line brakes Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host The IP -address is correct NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 18m I have also tried installing metallb v 0.13.7 using helm but with the same result Does someone know why the webhook cannot be called? EDIT As an answer to Thomas question, here is the description for webhook-service. NOTE that this is from another cluster with the same problem because I deleted the last cluster so the IP is not the same as last time $ kubectl describe svc webhook-service -n metallb-system Name: webhook-service Namespace: metallb-system Labels: <none> Annotations: <none> Selector: component=controller Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.157.72 IPs: 10.105.157.72 Port: <unset> 443/TCP TargetPort: 9443/TCP Endpoints: 172.17.0.3:9443 Session Affinity: None Events: <none>
You need to deploy an ingress controller in your cluster so that it gives you an entrypoint where your applications can be accessed. Traditionally, in a cloud native environment it would automatically provision a LoadBalancer for you that will read the rules you define inside your Ingress object and route your request to the appropriate service. One of the most commonly used ingress controllers is the Nginx Ingress Controller. There are multiple ways you can use to deploy it (manifests, helm, operators). In the case of bare-metal clusters, there are multiple considerations, which you can read about here. MetalLB is still in beta stage, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service that will be accessible across all your nodes in the cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.
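If you go the NodePort route, a minimal sketch could look like the commands below. It assumes the upstream community ingress-nginx manifests, which install into the ingress-nginx namespace with a service named ingress-nginx-controller; adjust the names if your install differs.

# inspect the controller service created by the install
kubectl -n ingress-nginx get svc ingress-nginx-controller

# on bare metal a LoadBalancer service stays <pending>, so switch it to NodePort
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec": {"type": "NodePort"}}'

# read the node port assigned to HTTP, then point DNS (or a test /etc/hosts entry)
# at any node IP and browse to http://<node-ip>:<node-port>
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'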
###Instruction: Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser. For that, I have created a deployment and service for NGINX as shown below, As per my search, I found that we have the options below to expose to the outside world: MetalLB, Ingress NGINX, some Helm resources. I would like to know all these 3 (or any more) approaches in such a way that it helps me to learn new things. GOAL Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. How can I make my service have its own public IP to access from outside the cluster? ### Response: You need to deploy an ingress controller in your cluster so that it gives you an entrypoint where your applications can be accessed. Traditionally, in a cloud native environment it would automatically provision a LoadBalancer for you that will read the rules you define inside your Ingress object and route your request to the appropriate service. One of the most commonly used ingress controllers is the Nginx Ingress Controller. There are multiple ways you can use to deploy it (manifests, helm, operators). In the case of bare-metal clusters, there are multiple considerations, which you can read about here. MetalLB is still in beta stage, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service that will be accessible across all your nodes in the cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.
Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser. For that, I have created a deployment and service for NGINX as shown below, As per my search, I found that we have the options below to expose to the outside world: MetalLB, Ingress NGINX, some Helm resources. I would like to know all these 3 (or any more) approaches in such a way that it helps me to learn new things. GOAL Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. How can I make my service have its own public IP to access from outside the cluster?
This is because the ingress gateway selector when installed with Helm is istio: ingress, instead of istio: ingressgateway when installed with istioctl. If you modify the Gateway to reflect this, then it should work: apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: bookinfo-gateway namespace: default spec: selector: istio: ingress ... One way to show this (without knowing this issue previously) is with istioctl analyze: $ istioctl analyze Error [IST0101] (Gateway default/bookinfo-gateway) Referenced selector not found: "istio=ingressgateway" Error: Analyzers found issues when analyzing namespace: default. See https://istio.io/v1.16/docs/reference/config/analysis for more information about causes and resolutions.
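To double-check what the selector should be on your own cluster rather than guessing, you can read the labels straight off the gateway pods the Helm chart created; the istio-ingress namespace and release name below are the ones used in the question.

kubectl -n istio-ingress get pods --show-labels

# or only the pod-template labels of the gateway Deployment
kubectl -n istio-ingress get deploy istio-ingress \
  -o jsonpath='{.spec.template.metadata.labels}'

# the Gateway's spec.selector must be a subset of these labels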
###Instruction: I have created an AKS cluster using the following Terraform code resource "azurerm_virtual_network" "test" { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } subnet { name = "appgwsubnet" address_prefix = var.app_gateway_subnet_address_prefix } tags = var.tags } data "azurerm_subnet" "kubesubnet" { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource "azurerm_kubernetes_cluster" "k8s" { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = "agentpool" node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = "azure" dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } oms_agent { log_analytics_workspace_id = module.log_analytics_workspace[0].id } timeouts { create = "20m" delete = "20m" } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } resource "azurerm_role_assignment" "ra1" { scope = data.azurerm_subnet.kubesubnet.id role_definition_name = "Network Contributor" principal_id = local.client_objectid depends_on = [data.azurerm_subnet.kubesubnet] } and followed the below steps to install the ISTIO as per the ISTIO documentation #Prerequisites helm repo add istio https://istio-release.storage.googleapis.com/charts helm repo update #create namespace kubectl create namespace istio-system # helm install istio-base and istiod helm install istio-base istio/base -n istio-system helm install istiod istio/istiod -n istio-system --wait # Check the installation status helm status istiod -n istio-system #create namespace and enable istio-injection for envoy proxy containers kubectl create namespace istio-ingress kubectl label namespace istio-ingress istio-injection=enabled ## helm install istio-ingress for traffic management helm install istio-ingress istio/gateway -n istio-ingress --wait ## Mark the default namespace as istio-injection=enabled kubectl label namespace default istio-injection=enabled ## Install the App and Gateway kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml # Check the Services, Pods and Gateway kubectl get services kubectl get pods kubectl get gateway # Ensure the app is running kubectl 
exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>" and it is responding as shown below # Check the $INGRESS_NAME="istio-ingress" $INGRESS_NS="istio-ingress" kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS" it returns the external IP as shown below However, I am not able to access the application Also I am getting an error while trying to run the following commands to find the ports kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}' ### Response: This is because the ingress gateway selector when installed with Helm is istio: ingress, instead of istio: ingressgateway when installed with istioctl. If you modify the Gateway to reflect this, then it should work: apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: bookinfo-gateway namespace: default spec: selector: istio: ingress ... One way to show this (without knowing this issue previously) is with istioctl analyze: $ istioctl analyze Error [IST0101] (Gateway default/bookinfo-gateway) Referenced selector not found: "istio=ingressgateway" Error: Analyzers found issues when analyzing namespace: default. See https://istio.io/v1.16/docs/reference/config/analysis for more information about causes and resolutions.
I have created an AKS cluster using the following Terraform code resource "azurerm_virtual_network" "test" { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } subnet { name = "appgwsubnet" address_prefix = var.app_gateway_subnet_address_prefix } tags = var.tags } data "azurerm_subnet" "kubesubnet" { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource "azurerm_kubernetes_cluster" "k8s" { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = "agentpool" node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = "azure" dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } oms_agent { log_analytics_workspace_id = module.log_analytics_workspace[0].id } timeouts { create = "20m" delete = "20m" } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } resource "azurerm_role_assignment" "ra1" { scope = data.azurerm_subnet.kubesubnet.id role_definition_name = "Network Contributor" principal_id = local.client_objectid depends_on = [data.azurerm_subnet.kubesubnet] } and followed the below steps to install the ISTIO as per the ISTIO documentation #Prerequisites helm repo add istio https://istio-release.storage.googleapis.com/charts helm repo update #create namespace kubectl create namespace istio-system # helm install istio-base and istiod helm install istio-base istio/base -n istio-system helm install istiod istio/istiod -n istio-system --wait # Check the installation status helm status istiod -n istio-system #create namespace and enable istio-injection for envoy proxy containers kubectl create namespace istio-ingress kubectl label namespace istio-ingress istio-injection=enabled ## helm install istio-ingress for traffic management helm install istio-ingress istio/gateway -n istio-ingress --wait ## Mark the default namespace as istio-injection=enabled kubectl label namespace default istio-injection=enabled ## Install the App and Gateway kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml # Check the Services, Pods and Gateway kubectl get services kubectl get pods kubectl get gateway # Ensure the app is running kubectl exec "$(kubectl get 
pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>" and it is responding as shown below # Check the $INGRESS_NAME="istio-ingress" $INGRESS_NS="istio-ingress" kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS" it returns the external IP as shown below However, I am not able to access the application Also I am getting an error while trying to run the following commands to find the ports kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}'
Reading the kernel documentation on this leads me to believe this is not possible. And I don't think this is a desirable behavior. Let's just think about the following scenario: You have a machine with 64GB of physical memory with 10GB of those used. Then you start a process with a "physical" memory limit of 500MB. If this memory limit is reached the kernel would start swapping and the process would stall even though there is enough memory available to service the memory requests of the process. The memory limit you specify on the container is actually not a physical memory limit, but a virtual memory limit with overcommit allowed. This means your process can allocate as much memory as it wants (until you reach the overcommit limit), but it gets killed as soon as it tries to use too much memory.
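For reference, a minimal sketch of the limit discussed above (the pod and container names are made up for the example): once the container crosses the limit it is OOM-killed rather than swapped, and the terminated reason shows up in its status.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: memory-limited-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "4Gi"
EOF

# after an OOM kill, the last terminated state records the reason (OOMKilled)
kubectl get pod memory-limited-demo \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'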
###Instruction: In Kubernetes, is it possible to enforce virtual memory (physical page swapping to disk) on a pod/container with memory requests and limits set? For instance, as per the Kubernetes documentation, β€œif you set a memory limit of 4GiB for a container, the kubelet (and container runtime) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example: when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out of memory (OOM) error.” Hence, is it possible to configure the pod (and hence linux kernel) to enforce virtual memory (that is paging and memory swapping ) on the specified physical memory limits of the pod (4GiB) instead of OOM error? am I missing something? ### Response: Reading the kernel documentation on this leads me to believe this is not possible. And I don't think this is a desirable behavior. Let's just think about the following scenario: You have a machine with 64GB of physical memory with 10GB of those used. Then you start a process with a "physical" memory limit of 500MB. If this memory limit is reached the kernel would start swapping and the process would stall even though there is enough memory available to service the memory requests of the process. The memory limit you specify on the container is actually not a physical memory limit, but a virtual memory limit with overcommit allowed. This means your process can allocate as much memory as it wants (until you reach the overcommit limit), but it gets killed as soon as it tries to use too much memory.
In Kubernetes, is it possible to enforce virtual memory (physical page swapping to disk) on a pod/container with memory requests and limits set? For instance, as per the Kubernetes documentation, β€œif you set a memory limit of 4GiB for a container, the kubelet (and container runtime) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example: when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out of memory (OOM) error.” Hence, is it possible to configure the pod (and hence linux kernel) to enforce virtual memory (that is paging and memory swapping ) on the specified physical memory limits of the pod (4GiB) instead of OOM error? am I missing something?
This is because you have hit the general concern of the istio- prefix getting stripped: with the step-by-step Helm installation, the istio-ingress release name is stripped down to the label istio: ingress, so the Gateway's istio: ingressgateway selector no longer matches anything. Either install the gateway under a name such as istio-ingressgateway so that the resulting label matches the selector, or change the Gateway's selector to match the label that was actually applied.
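One way to see which label the chart really applied, without guessing, is to render the installed release and compare it against the selector the Gateway uses; the release and namespace names below match the helm install command in the question.

helm -n istio-ingress get manifest istio-ingress | grep -A3 'labels:'

# compare with the selector configured on the Gateway resource
kubectl get gateway bookinfo-gateway -o jsonpath='{.spec.selector}'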
###Instruction: I have created an AKS cluster using the following Terraform code resource "azurerm_virtual_network" "test" { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } subnet { name = "appgwsubnet" address_prefix = var.app_gateway_subnet_address_prefix } tags = var.tags } data "azurerm_subnet" "kubesubnet" { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource "azurerm_kubernetes_cluster" "k8s" { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = "agentpool" node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = "azure" dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } oms_agent { log_analytics_workspace_id = module.log_analytics_workspace[0].id } timeouts { create = "20m" delete = "20m" } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } resource "azurerm_role_assignment" "ra1" { scope = data.azurerm_subnet.kubesubnet.id role_definition_name = "Network Contributor" principal_id = local.client_objectid depends_on = [data.azurerm_subnet.kubesubnet] } and followed the below steps to install the ISTIO as per the ISTIO documentation #Prerequisites helm repo add istio https://istio-release.storage.googleapis.com/charts helm repo update #create namespace kubectl create namespace istio-system # helm install istio-base and istiod helm install istio-base istio/base -n istio-system helm install istiod istio/istiod -n istio-system --wait # Check the installation status helm status istiod -n istio-system #create namespace and enable istio-injection for envoy proxy containers kubectl create namespace istio-ingress kubectl label namespace istio-ingress istio-injection=enabled ## helm install istio-ingress for traffic management helm install istio-ingress istio/gateway -n istio-ingress --wait ## Mark the default namespace as istio-injection=enabled kubectl label namespace default istio-injection=enabled ## Install the App and Gateway kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml # Check the Services, Pods and Gateway kubectl get services kubectl get pods kubectl get gateway # Ensure the app is running kubectl 
exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>" and it is responding as shown below # Check the $INGRESS_NAME="istio-ingress" $INGRESS_NS="istio-ingress" kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS" it returns the external IP as shown below However, I am not able to access the application Also I am getting an error while trying to run the following commands to find the ports kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}' ### Response: This is because you have hit general concerns of istio- prefix get striped, from the steps by steps installation with istio-ingress will stripe with ingress, so if you using istio-ingressgateway that could match with app selector , or change the app selector to match with it.
I have created an AKS cluster using the following Terraform code resource "azurerm_virtual_network" "test" { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } subnet { name = "appgwsubnet" address_prefix = var.app_gateway_subnet_address_prefix } tags = var.tags } data "azurerm_subnet" "kubesubnet" { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource "azurerm_kubernetes_cluster" "k8s" { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = "agentpool" node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = "azure" dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } oms_agent { log_analytics_workspace_id = module.log_analytics_workspace[0].id } timeouts { create = "20m" delete = "20m" } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } resource "azurerm_role_assignment" "ra1" { scope = data.azurerm_subnet.kubesubnet.id role_definition_name = "Network Contributor" principal_id = local.client_objectid depends_on = [data.azurerm_subnet.kubesubnet] } and followed the below steps to install the ISTIO as per the ISTIO documentation #Prerequisites helm repo add istio https://istio-release.storage.googleapis.com/charts helm repo update #create namespace kubectl create namespace istio-system # helm install istio-base and istiod helm install istio-base istio/base -n istio-system helm install istiod istio/istiod -n istio-system --wait # Check the installation status helm status istiod -n istio-system #create namespace and enable istio-injection for envoy proxy containers kubectl create namespace istio-ingress kubectl label namespace istio-ingress istio-injection=enabled ## helm install istio-ingress for traffic management helm install istio-ingress istio/gateway -n istio-ingress --wait ## Mark the default namespace as istio-injection=enabled kubectl label namespace default istio-injection=enabled ## Install the App and Gateway kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml # Check the Services, Pods and Gateway kubectl get services kubectl get pods kubectl get gateway # Ensure the app is running kubectl exec "$(kubectl get 
pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>" and it is responding as shown below # Check the $INGRESS_NAME="istio-ingress" $INGRESS_NS="istio-ingress" kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS" it returns the external IP as shown below However, I am not able to access the application Also I am getting an error while trying to run the following commands to find the ports kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}' kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}'
This error comes from client-go when there are no resources available in external.metrics.k8s.io/v1beta1; in client-go, it gets all ServerGroups. When KEDA is not installed, external.metrics.k8s.io/v1beta1 is not part of the ServerGroups, hence it's not called and therefore there is no issue. But when KEDA is installed, it creates an APIService $ kubectl get apiservice | grep keda-metrics v1beta1.external.metrics.k8s.io keda/keda-metrics-apiserver True 20m But it doesn't create any external.metrics.k8s.io resources $ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq . { "kind": "APIResourceList", "apiVersion": "v1", "groupVersion": "external.metrics.k8s.io/v1beta1", "resources": [] } Since there are no resources, client-go throws an error. The workaround is registering a dummy resource in the empty resource group. Refer to this GitHub link for more detailed information.
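A quick way to confirm you are in exactly this state (API service registered but nothing behind it) with plain kubectl; jq is only there for readability.

# the APIService is registered and reports Available
kubectl get apiservice v1beta1.external.metrics.k8s.io

# but the resource list behind it stays empty until an external metric is registered
kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq .

# list the ScaledObjects that exist so far
kubectl get scaledobjects -A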
###Instruction: I am getting the below error after installing KEDA in my k8s cluster and creating some scaled objects... Whatever command I am running, e.g. "kubectl get pods", I am getting a response with the below error message. How to get rid of the below error message? E0125 11:45:32.766448 316 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 ### Response: This error comes from client-go when there are no resources available in external.metrics.k8s.io/v1beta1; in client-go, it gets all ServerGroups. When KEDA is not installed, external.metrics.k8s.io/v1beta1 is not part of the ServerGroups, hence it's not called and therefore there is no issue. But when KEDA is installed, it creates an APIService $ kubectl get apiservice | grep keda-metrics v1beta1.external.metrics.k8s.io keda/keda-metrics-apiserver True 20m But it doesn't create any external.metrics.k8s.io resources $ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq . { "kind": "APIResourceList", "apiVersion": "v1", "groupVersion": "external.metrics.k8s.io/v1beta1", "resources": [] } Since there are no resources, client-go throws an error. The workaround is registering a dummy resource in the empty resource group. Refer to this GitHub link for more detailed information.
I am getting the below error after installing KEDA in my k8s cluster and creating some scaled objects... Whatever command I am running, e.g. "kubectl get pods", I am getting a response with the below error message. How to get rid of the below error message? E0125 11:45:32.766448 316 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will give a local network IP address to the service. Then you can do port mapping (configuration in the router) of incoming traffic on port 80 and port 443 to your external service IP address. I have done a similar setup; you can check it out in detail here: https://developerdiary.me/lets-build-low-budget-aws-at-home/
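In case a concrete starting point helps, a minimal layer-2 MetalLB configuration for such a setup could look like the sketch below; the pool name and the address range are examples and must be adapted to your LAN.

kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - home-pool
EOF

# LoadBalancer services should now receive an EXTERNAL-IP from the pool
kubectl get svc -A | grep LoadBalancer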
###Instruction: Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser. For that, I have created a deployment and service for NGINX as shown below, As per my search, I found that we have the options below to expose to the outside world: MetalLB, Ingress NGINX, some Helm resources. I would like to know all these 3 (or any more) approaches in such a way that it helps me to learn new things. GOAL Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. How can I make my service have its own public IP to access from outside the cluster? ### Response: You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will give a local network IP address to the service. Then you can do port mapping (configuration in the router) of incoming traffic on port 80 and port 443 to your external service IP address. I have done a similar setup; you can check it out in detail here: https://developerdiary.me/lets-build-low-budget-aws-at-home/
Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser. For that, I have created a deployment and service for NGINX as shown below, As per my search, I found that we have the options below to expose to the outside world: MetalLB, Ingress NGINX, some Helm resources. I would like to know all these 3 (or any more) approaches in such a way that it helps me to learn new things. GOAL Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. How can I make my service have its own public IP to access from outside the cluster?
The "kubectl describe " function can be found in the command-line interface (CLI) of Kubernetes, specifically in the "kubectl" tool. "kubectl" is used to manage and interact with a Kubernetes cluster and its resources. enter image description here
###Instruction: I am studying the "kubectl describe" source code at https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go However, I still could not figure out how "kubectl describe [CRD]" works (as in which function/functions are called). I am a Go newbie, so would like to get some pointers please. Thanks. I have read the describePod function and understand how it works more or less, but still could not figure out how "kubectl describe [CRD]" works. ### Response: The "kubectl describe" functionality can be found in the command-line interface (CLI) of Kubernetes, specifically in the "kubectl" tool. "kubectl" is used to manage and interact with a Kubernetes cluster and its resources.
I am studying "kubectl describe" sourcecodes at https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go However, I still could not figure out how "kubectl decsribe [CRD]" works (as in which function/functions are called). I am a Go newbie, so would like to get some pointers please. Thanks. I have read describePod function and understand how it works more or less, but still could not figure out how "kubectl describe [CRD]" works.
The problem was the host name set on the Ingress
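For anyone debugging the same 404: a quick way to confirm a host mismatch is to send the expected Host header explicitly and compare the responses; the host below is the one from the Ingress in the question and the IP is a placeholder for your controller's address.

# without the Host header the request hits the default backend -> 404
curl -i http://<ingress-controller-ip>/posts

# with the host from the Ingress rule it should reach the backend service
curl -i -H "Host: mysite.com" http://<ingress-controller-ip>/posts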
###Instruction: I have an ingress for my application: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: myapi-ingress annotations: nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: ingressClassName: nginx rules: - host: mysite.com http: paths: - path: "/posts" pathType: Prefix backend: service: name: myservice port: number: 80 When I run kubectl describe ing myapi-ingress, I can see that the ingress is stuck in sync state: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 26m (x2 over 27m) nginx-ingress-controller Scheduled for sync PS. Before this happens, I tried to install another ingress for internal usage under another namespace and ingressclassname. I'm getting 404 when I try to hit this endpoint. Nothing in the logs. What is the problem? ### Response: The problem was the host name set on the Ingress
I have an ingress for my application: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: myapi-ingress annotations: nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: ingressClassName: nginx rules: - host: mysite.com http: paths: - path: "/posts" pathType: Prefix backend: service: name: myservice port: number: 80 When I run kubectl describe ing myapi-ingress, I can see that the ingress is stuck in sync state: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 26m (x2 over 27m) nginx-ingress-controller Scheduled for sync PS. Before this happens, I tried to install another ingress for internal usage under another namespace and ingressclassname. I'm getting 404 when I try to hit this endpoint. Nothing in the logs. What is the problem?
I was using Spring Boot 3 but was missing: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency>
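A quick sanity check before rebuilding the image and redeploying (assuming a Maven wrapper in the project; the deployment name is the one from the question's manifest): verify the servlet API is actually on the classpath once spring-boot-starter-web is added.

./mvnw dependency:tree -Dincludes=jakarta.servlet
# jakarta.servlet:jakarta.servlet-api should now appear in the tree

# rebuild the jar/image and restart the pods
./mvnw clean package
kubectl rollout restart deployment/msvc-auth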
###Instruction: Using Kubernetes, exactly the kubectl apply -f ./auth.yaml statement, i'm trying to run a Authorization Server in a pod, but when I check out the logs, this show me the following error: . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.6.13) 2022-12-07 01:33:30.099 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : Starting MsvcAuthApplication v1.0-SNAPSHOT using Java 18.0.2.1 on msvc-auth-7d696f776d-hpk99 with PID 1 (/app/msvc-auth-1.0-SNAPSHOT.jar started by root in /app) 2022-12-07 01:33:30.203 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : The following 1 profile is active: "kubernetes" 2022-12-07 01:33:48.711 INFO 1 --- [ main] o.s.c.k.client.KubernetesClientUtils : Created API client in the cluster. 2022-12-07 01:33:48.913 INFO 1 --- [ main] o.s.c.a.ConfigurationClassPostProcessor : Cannot enhance @Configuration bean definition 'org.springframework.cloud.kubernetes.client.KubernetesClientAutoConfiguration' since its singleton instance has been created too early. The typical cause is a non-static @Bean method with a BeanDefinitionRegistryPostProcessor return type: Consider declaring such methods as 'static'. 2022-12-07 01:33:49.794 INFO 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=9e09a67e-4528-373e-99ad-3031c15d14ab 2022-12-07 01:33:50.922 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration' of type [io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.113 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration' of type [org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.184 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.187 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'loadBalancerClientsDefaultsMappingsProvider' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration$$Lambda$420/0x0000000800f30898] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.205 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'defaultsBindHandlerAdvisor' of type [org.springframework.cloud.commons.config.DefaultsBindHandlerAdvisor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.311 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'kubernetes.manifests-io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties' of type 
[io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.412 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.419 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.489 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'reactorDeferringLoadBalancerExchangeFilterFunction' of type [org.springframework.cloud.client.loadbalancer.reactive.DeferringLoadBalancerExchangeFilterFunction] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:58.301 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9000 (http) 2022-12-07 01:33:58.393 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-12-07 01:33:58.393 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.68] 2022-12-07 01:33:58.795 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-12-07 01:33:58.796 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 26917 ms 2022-12-07 01:34:01.099 WARN 1 --- [ main] o.s.security.core.userdetails.User : User.withDefaultPasswordEncoder() is considered unsafe for production and is only intended for sample applications. 2022-12-07 01:34:02.385 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest 2022-12-07 01:34:02.413 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat] 2022-12-07 01:34:02.677 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener : Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled. 
2022-12-07 01:34:02.991 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:658) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.23.jar!/:5.3.23] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.23.jar!/:5.3.23] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:745) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:420) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1317) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) ~[spring-boot-2.6.13.jar!/:2.6.13] at 
org.villamzr.springcloud.msvc.auth.MsvcAuthApplication.main(MsvcAuthApplication.java:12) ~[classes!/:1.0-SNAPSHOT] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na] at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[spring-beans-5.3.23.jar!/:5.3.23] ... 25 common frames omitted Caused by: java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer.getEndpointsMatcher(OAuth2AuthorizationServerConfigurer.java:235) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0] at org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(OAuth2AuthorizationServerConfiguration.java:63) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0] at org.villamzr.springcloud.msvc.auth.SecurityConfig.authorizationServerSecurityFilterChain(SecurityConfig.java:51) ~[classes!/:1.0-SNAPSHOT] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.CGLIB$authorizationServerSecurityFilterChain$1(<generated>) ~[classes!/:1.0-SNAPSHOT] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf$$FastClassBySpringCGLIB$$a983a242.invoke(<generated>) ~[classes!/:1.0-SNAPSHOT] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.3.23.jar!/:5.3.23] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.3.23.jar!/:5.3.23] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.authorizationServerSecurityFilterChain(<generated>) ~[classes!/:1.0-SNAPSHOT] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.3.23.jar!/:5.3.23] ... 
26 common frames omitted Caused by: java.lang.ClassNotFoundException: jakarta.servlet.http.HttpServletRequest at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[na:na] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588) ~[na:na] at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) ~[na:na] ... 37 common frames omitted This is the auth.yaml configuration. apiVersion: apps/v1 kind: Deployment metadata: name: msvc-auth spec: replicas: 1 selector: matchLabels: app: msvc-auth template: metadata: labels: app: msvc-auth spec: containers: - image: villamzr/auth:latest name: msvc-auth ports: - containerPort: 9000 env: - name: LB_USUARIOS_URI valueFrom: configMapKeyRef: name: msvc-usuarios key: lb_usuarios_uri --- apiVersion: v1 kind: Service metadata: name: msvc-auth spec: type: LoadBalancer ports: - port: 9000 protocol: TCP targetPort: 9000 selector: app: msvc-auth this one is the pom.xml of the microservice <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.villamzr.springcloud.msvc</groupId> <artifactId>curso-kubernetes</artifactId> <version>1.0-SNAPSHOT</version> </parent> <groupId>org.villamzr.springcloud.msvc.auth</groupId> <artifactId>msvc-auth</artifactId> <name>msvc-auth</name> <description>Demo project for Spring Boot</description> <properties> <java.version>18</java.version> <spring-cloud.version>2021.0.5</spring-cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-client</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-authorization-server</artifactId> <version>1.0.0</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-client</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-client-loadbalancer</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.projectreactor</groupId> <artifactId>reactor-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> 
<artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> and this one is the Securityconfig package org.villamzr.springcloud.msvc.auth; import com.nimbusds.jose.jwk.JWKSet; import com.nimbusds.jose.jwk.RSAKey; import com.nimbusds.jose.jwk.source.ImmutableJWKSet; import com.nimbusds.jose.jwk.source.JWKSource; import com.nimbusds.jose.proc.SecurityContext; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.annotation.Order; import org.springframework.core.env.Environment; import org.springframework.security.config.Customizer; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configurers.oauth2.server.resource.OAuth2ResourceServerConfigurer; import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity; import org.springframework.security.core.userdetails.User; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.oauth2.core.AuthorizationGrantType; import org.springframework.security.oauth2.core.ClientAuthenticationMethod; import org.springframework.security.oauth2.core.oidc.OidcScopes; import org.springframework.security.oauth2.jwt.JwtDecoder; import org.springframework.security.oauth2.server.authorization.client.InMemoryRegisteredClientRepository; import org.springframework.security.oauth2.server.authorization.client.RegisteredClient; import org.springframework.security.oauth2.server.authorization.client.RegisteredClientRepository; import org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration; import org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer; import org.springframework.security.oauth2.server.authorization.settings.AuthorizationServerSettings; import org.springframework.security.oauth2.server.authorization.settings.ClientSettings; import org.springframework.security.provisioning.InMemoryUserDetailsManager; import org.springframework.security.web.SecurityFilterChain; import org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint; import java.security.KeyPair; import java.security.KeyPairGenerator; import java.security.interfaces.RSAPrivateKey; import java.security.interfaces.RSAPublicKey; import java.util.UUID; @Configuration public class SecurityConfig { @Autowired private Environment env; @Bean @Order(1) public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http) throws Exception { OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(http); http.getConfigurer(OAuth2AuthorizationServerConfigurer.class) .oidc(Customizer.withDefaults()); // Enable OpenID Connect 1.0 http // Redirect to the login page when not authenticated from the // authorization endpoint .exceptionHandling((exceptions) -> exceptions .authenticationEntryPoint( new LoginUrlAuthenticationEntryPoint("/login")) ) // Accept access tokens for User Info and/or Client Registration .oauth2ResourceServer(OAuth2ResourceServerConfigurer::jwt); return http.build(); } @Bean @Order(2) public SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception { http 
.authorizeHttpRequests((authorize) -> authorize .anyRequest().authenticated() ) // Form login handles the redirect to the login page from the // authorization server filter chain .formLogin(Customizer.withDefaults()); return http.build(); } @Bean public UserDetailsService userDetailsService() { UserDetails userDetails = User.withDefaultPasswordEncoder() .username("admin") .password("12345") .roles("USER") .build(); return new InMemoryUserDetailsManager(userDetails); } @Bean public RegisteredClientRepository registeredClientRepository() { RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString()) .clientId("usuarios-client") .clientSecret("{noop}12345") .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC) .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE) .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN) .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS) .redirectUri(env.getProperty("LB_USUARIOS_URI")+"/login/oauth2/code/msvc-usuarios-client") .redirectUri(env.getProperty("LB_USUARIOS_URI")+"/authorized") .scope(OidcScopes.OPENID) .scope(OidcScopes.PROFILE) .scope("read") .scope("write") .clientSettings(ClientSettings.builder().requireAuthorizationConsent(true).build()) .build(); return new InMemoryRegisteredClientRepository(registeredClient); } @Bean public JWKSource<SecurityContext> jwkSource() { KeyPair keyPair = generateRsaKey(); RSAPublicKey publicKey = (RSAPublicKey) keyPair.getPublic(); RSAPrivateKey privateKey = (RSAPrivateKey) keyPair.getPrivate(); RSAKey rsaKey = new RSAKey.Builder(publicKey) .privateKey(privateKey) .keyID(UUID.randomUUID().toString()) .build(); JWKSet jwkSet = new JWKSet(rsaKey); return new ImmutableJWKSet<>(jwkSet); } private static KeyPair generateRsaKey() { KeyPair keyPair; try { KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA"); keyPairGenerator.initialize(2048); keyPair = keyPairGenerator.generateKeyPair(); } catch (Exception ex) { throw new IllegalStateException(ex); } return keyPair; } @Bean public JwtDecoder jwtDecoder(JWKSource<SecurityContext> jwkSource) { return OAuth2AuthorizationServerConfiguration.jwtDecoder(jwkSource); } @Bean public AuthorizationServerSettings authorizationServerSettings() { return AuthorizationServerSettings.builder().build(); } } SOLUTIONS I TESTED BUT IT DOWS NOT WORK I changed the tomcat server version to 10.x I added the jakarta-api dependency to pom.xml of microservice, with 3.x, 5.x and 6.x versions I added the @EnableWebSecurity NOTES I'm using java 18 I'm using Oauth 2.1 and authorization server 1.0.0 ### Response: I was using Spring Boot 3 but was missing: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency>
Using Kubernetes, exactly the kubectl apply -f ./auth.yaml statement, i'm trying to run a Authorization Server in a pod, but when I check out the logs, this show me the following error: . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.6.13) 2022-12-07 01:33:30.099 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : Starting MsvcAuthApplication v1.0-SNAPSHOT using Java 18.0.2.1 on msvc-auth-7d696f776d-hpk99 with PID 1 (/app/msvc-auth-1.0-SNAPSHOT.jar started by root in /app) 2022-12-07 01:33:30.203 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : The following 1 profile is active: "kubernetes" 2022-12-07 01:33:48.711 INFO 1 --- [ main] o.s.c.k.client.KubernetesClientUtils : Created API client in the cluster. 2022-12-07 01:33:48.913 INFO 1 --- [ main] o.s.c.a.ConfigurationClassPostProcessor : Cannot enhance @Configuration bean definition 'org.springframework.cloud.kubernetes.client.KubernetesClientAutoConfiguration' since its singleton instance has been created too early. The typical cause is a non-static @Bean method with a BeanDefinitionRegistryPostProcessor return type: Consider declaring such methods as 'static'. 2022-12-07 01:33:49.794 INFO 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=9e09a67e-4528-373e-99ad-3031c15d14ab 2022-12-07 01:33:50.922 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration' of type [io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.113 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration' of type [org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.184 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.187 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'loadBalancerClientsDefaultsMappingsProvider' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration$$Lambda$420/0x0000000800f30898] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.205 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'defaultsBindHandlerAdvisor' of type [org.springframework.cloud.commons.config.DefaultsBindHandlerAdvisor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.311 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'kubernetes.manifests-io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties' of type 
[io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.412 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.419 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:51.489 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'reactorDeferringLoadBalancerExchangeFilterFunction' of type [org.springframework.cloud.client.loadbalancer.reactive.DeferringLoadBalancerExchangeFilterFunction] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2022-12-07 01:33:58.301 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9000 (http) 2022-12-07 01:33:58.393 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2022-12-07 01:33:58.393 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.68] 2022-12-07 01:33:58.795 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2022-12-07 01:33:58.796 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 26917 ms 2022-12-07 01:34:01.099 WARN 1 --- [ main] o.s.security.core.userdetails.User : User.withDefaultPasswordEncoder() is considered unsafe for production and is only intended for sample applications. 2022-12-07 01:34:02.385 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest 2022-12-07 01:34:02.413 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat] 2022-12-07 01:34:02.677 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener : Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled. 
2022-12-07 01:34:02.991 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:658) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.23.jar!/:5.3.23] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.23.jar!/:5.3.23] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:745) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:420) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1317) ~[spring-boot-2.6.13.jar!/:2.6.13] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) ~[spring-boot-2.6.13.jar!/:2.6.13] at 
org.villamzr.springcloud.msvc.auth.MsvcAuthApplication.main(MsvcAuthApplication.java:12) ~[classes!/:1.0-SNAPSHOT] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na] at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.23.jar!/:5.3.23] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[spring-beans-5.3.23.jar!/:5.3.23] ... 25 common frames omitted Caused by: java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest at org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer.getEndpointsMatcher(OAuth2AuthorizationServerConfigurer.java:235) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0] at org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(OAuth2AuthorizationServerConfiguration.java:63) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0] at org.villamzr.springcloud.msvc.auth.SecurityConfig.authorizationServerSecurityFilterChain(SecurityConfig.java:51) ~[classes!/:1.0-SNAPSHOT] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.CGLIB$authorizationServerSecurityFilterChain$1(<generated>) ~[classes!/:1.0-SNAPSHOT] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf$$FastClassBySpringCGLIB$$a983a242.invoke(<generated>) ~[classes!/:1.0-SNAPSHOT] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.3.23.jar!/:5.3.23] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.3.23.jar!/:5.3.23] at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.authorizationServerSecurityFilterChain(<generated>) ~[classes!/:1.0-SNAPSHOT] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.3.23.jar!/:5.3.23] ... 
26 common frames omitted Caused by: java.lang.ClassNotFoundException: jakarta.servlet.http.HttpServletRequest at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[na:na] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588) ~[na:na] at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) ~[na:na] ... 37 common frames omitted This is the auth.yaml configuration. apiVersion: apps/v1 kind: Deployment metadata: name: msvc-auth spec: replicas: 1 selector: matchLabels: app: msvc-auth template: metadata: labels: app: msvc-auth spec: containers: - image: villamzr/auth:latest name: msvc-auth ports: - containerPort: 9000 env: - name: LB_USUARIOS_URI valueFrom: configMapKeyRef: name: msvc-usuarios key: lb_usuarios_uri --- apiVersion: v1 kind: Service metadata: name: msvc-auth spec: type: LoadBalancer ports: - port: 9000 protocol: TCP targetPort: 9000 selector: app: msvc-auth this one is the pom.xml of the microservice <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.villamzr.springcloud.msvc</groupId> <artifactId>curso-kubernetes</artifactId> <version>1.0-SNAPSHOT</version> </parent> <groupId>org.villamzr.springcloud.msvc.auth</groupId> <artifactId>msvc-auth</artifactId> <name>msvc-auth</name> <description>Demo project for Spring Boot</description> <properties> <java.version>18</java.version> <spring-cloud.version>2021.0.5</spring-cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-client</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-authorization-server</artifactId> <version>1.0.0</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-client</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-client-loadbalancer</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.projectreactor</groupId> <artifactId>reactor-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> 
<artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> and this one is the Securityconfig package org.villamzr.springcloud.msvc.auth; import com.nimbusds.jose.jwk.JWKSet; import com.nimbusds.jose.jwk.RSAKey; import com.nimbusds.jose.jwk.source.ImmutableJWKSet; import com.nimbusds.jose.jwk.source.JWKSource; import com.nimbusds.jose.proc.SecurityContext; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.annotation.Order; import org.springframework.core.env.Environment; import org.springframework.security.config.Customizer; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configurers.oauth2.server.resource.OAuth2ResourceServerConfigurer; import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity; import org.springframework.security.core.userdetails.User; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.oauth2.core.AuthorizationGrantType; import org.springframework.security.oauth2.core.ClientAuthenticationMethod; import org.springframework.security.oauth2.core.oidc.OidcScopes; import org.springframework.security.oauth2.jwt.JwtDecoder; import org.springframework.security.oauth2.server.authorization.client.InMemoryRegisteredClientRepository; import org.springframework.security.oauth2.server.authorization.client.RegisteredClient; import org.springframework.security.oauth2.server.authorization.client.RegisteredClientRepository; import org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration; import org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer; import org.springframework.security.oauth2.server.authorization.settings.AuthorizationServerSettings; import org.springframework.security.oauth2.server.authorization.settings.ClientSettings; import org.springframework.security.provisioning.InMemoryUserDetailsManager; import org.springframework.security.web.SecurityFilterChain; import org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint; import java.security.KeyPair; import java.security.KeyPairGenerator; import java.security.interfaces.RSAPrivateKey; import java.security.interfaces.RSAPublicKey; import java.util.UUID; @Configuration public class SecurityConfig { @Autowired private Environment env; @Bean @Order(1) public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http) throws Exception { OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(http); http.getConfigurer(OAuth2AuthorizationServerConfigurer.class) .oidc(Customizer.withDefaults()); // Enable OpenID Connect 1.0 http // Redirect to the login page when not authenticated from the // authorization endpoint .exceptionHandling((exceptions) -> exceptions .authenticationEntryPoint( new LoginUrlAuthenticationEntryPoint("/login")) ) // Accept access tokens for User Info and/or Client Registration .oauth2ResourceServer(OAuth2ResourceServerConfigurer::jwt); return http.build(); } @Bean @Order(2) public SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception { http 
.authorizeHttpRequests((authorize) -> authorize .anyRequest().authenticated() ) // Form login handles the redirect to the login page from the // authorization server filter chain .formLogin(Customizer.withDefaults()); return http.build(); } @Bean public UserDetailsService userDetailsService() { UserDetails userDetails = User.withDefaultPasswordEncoder() .username("admin") .password("12345") .roles("USER") .build(); return new InMemoryUserDetailsManager(userDetails); } @Bean public RegisteredClientRepository registeredClientRepository() { RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString()) .clientId("usuarios-client") .clientSecret("{noop}12345") .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC) .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE) .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN) .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS) .redirectUri(env.getProperty("LB_USUARIOS_URI")+"/login/oauth2/code/msvc-usuarios-client") .redirectUri(env.getProperty("LB_USUARIOS_URI")+"/authorized") .scope(OidcScopes.OPENID) .scope(OidcScopes.PROFILE) .scope("read") .scope("write") .clientSettings(ClientSettings.builder().requireAuthorizationConsent(true).build()) .build(); return new InMemoryRegisteredClientRepository(registeredClient); } @Bean public JWKSource<SecurityContext> jwkSource() { KeyPair keyPair = generateRsaKey(); RSAPublicKey publicKey = (RSAPublicKey) keyPair.getPublic(); RSAPrivateKey privateKey = (RSAPrivateKey) keyPair.getPrivate(); RSAKey rsaKey = new RSAKey.Builder(publicKey) .privateKey(privateKey) .keyID(UUID.randomUUID().toString()) .build(); JWKSet jwkSet = new JWKSet(rsaKey); return new ImmutableJWKSet<>(jwkSet); } private static KeyPair generateRsaKey() { KeyPair keyPair; try { KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA"); keyPairGenerator.initialize(2048); keyPair = keyPairGenerator.generateKeyPair(); } catch (Exception ex) { throw new IllegalStateException(ex); } return keyPair; } @Bean public JwtDecoder jwtDecoder(JWKSource<SecurityContext> jwkSource) { return OAuth2AuthorizationServerConfiguration.jwtDecoder(jwkSource); } @Bean public AuthorizationServerSettings authorizationServerSettings() { return AuthorizationServerSettings.builder().build(); } } SOLUTIONS I TESTED BUT IT DOWS NOT WORK I changed the tomcat server version to 10.x I added the jakarta-api dependency to pom.xml of microservice, with 3.x, 5.x and 6.x versions I added the @EnableWebSecurity NOTES I'm using java 18 I'm using Oauth 2.1 and authorization server 1.0.0
If you want to use shell substitution, then you need to execute your command with a shell. For example:

containers:
  - name: postgres
    image: postgres:latest
    imagePullPolicy: IfNotPresent
    command:
      - /bin/sh
      - -c
      - >
        pg_dump
        --username=postgres
        --no-password
        --format=custom
        --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak
        --host={{ include "dragalia-api.fullname" . }}-postgresql
        --verbose

Also, unrelated to your question, you should pin your postgres image to a specific version (postgres:14) or you'll be in for a rude surprise when :latest is unexpectedly a new major version.
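If it helps to sanity-check the change, here is a rough way to trigger a one-off run and inspect the generated filename; it assumes the rendered CronJob is named postgres-backup in the current namespace and that the backup volume is the same minikube-mounted path shown in the question:

# Fire a single Job off the CronJob spec and wait for it to finish
kubectl create job --from=cronjob/postgres-backup backup-smoke-test
kubectl wait --for=condition=complete job/backup-smoke-test --timeout=180s

# Inspect the backup volume (path as seen on the minikube node in the question)
ls /dragalia-api/db-backup
# expected: something like dragalia-api-2023-01-01_12-00-00.bak (illustrative date),
# rather than a literal $(date ...) in the filename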
###Instruction: I have the following CronJob to run a backup of my database, and I'd like the backup files to be appended with the date: {{- if .Values.postgresqlBackup.enabled }} apiVersion: batch/v1 kind: CronJob metadata: name: postgres-backup spec: schedule: {{ .Values.postgresqlBackup.cron | quote }} jobTemplate: spec: template: spec: containers: - name: postgres image: postgres:latest imagePullPolicy: IfNotPresent command: - pg_dump - --username=postgres - --no-password - --format=custom - --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak - --host={{ include "dragalia-api.fullname" . }}-postgresql - --verbose volumeMounts: - name: data mountPath: /backups env: - name: PGPASSWORD valueFrom: secretKeyRef: name: {{ include "dragalia-api.fullname" . }}-postgresql key: postgres-password optional: false restartPolicy: Never volumes: - name: data persistentVolumeClaim: claimName: {{ include "dragalia-api.fullname" . }}-db-backup {{- end }} The job executes successfully but I am left with files like: docker@minikube:/dragalia-api/db-backup$ ls 'dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak' The entire filename is quoted and the string is not evaluated. How can I make it so that the string is evaluated by the shell? Things I've tried: using backticks: --file=/backups/dragalia-api-1`date +"%Y-%m-%d_%H-%M-%S".bak` : still rendered literally defining a DATE env var and putting ${DATE} in the string: rendered literally escaping the % signs e.g. \%Y: rendered literally passing a multi-line string to sh -c: this caused the job to fail on being unable to connect to the db, so I guess --host was not passed properly The only other thing I can think of is passing in a shell script, but I'd rather not create another resource if possible. Equivalently, since the date information is stored by the filesystem, if there's some other way to pass a unique string into the filename that would work. Anything so that it keeps rolling backups instead of just the one. ### Response: If you want to use shell substitution, then you need to execute your command with a shell. For example: containers: - name: postgres image: postgres:latest imagePullPolicy: IfNotPresent command: - /bin/sh - -c - > pg_dump --username=postgres --no-password --format=custom --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak --host={{ include "dragalia-api.fullname" . }}-postgresql --verbose Also, unrelated to your question, you should pin your postgres image to a specific version (postgres:14) or you'll be in for a rude surprise when :latest is unexpectedly a new major version.
I have the following CronJob to run a backup of my database, and I'd like the backup files to be appended with the date: {{- if .Values.postgresqlBackup.enabled }} apiVersion: batch/v1 kind: CronJob metadata: name: postgres-backup spec: schedule: {{ .Values.postgresqlBackup.cron | quote }} jobTemplate: spec: template: spec: containers: - name: postgres image: postgres:latest imagePullPolicy: IfNotPresent command: - pg_dump - --username=postgres - --no-password - --format=custom - --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak - --host={{ include "dragalia-api.fullname" . }}-postgresql - --verbose volumeMounts: - name: data mountPath: /backups env: - name: PGPASSWORD valueFrom: secretKeyRef: name: {{ include "dragalia-api.fullname" . }}-postgresql key: postgres-password optional: false restartPolicy: Never volumes: - name: data persistentVolumeClaim: claimName: {{ include "dragalia-api.fullname" . }}-db-backup {{- end }} The job executes successfully but I am left with files like: docker@minikube:/dragalia-api/db-backup$ ls 'dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak' The entire filename is quoted and the string is not evaluated. How can I make it so that the string is evaluated by the shell? Things I've tried: using backticks: --file=/backups/dragalia-api-1`date +"%Y-%m-%d_%H-%M-%S".bak` : still rendered literally defining a DATE env var and putting ${DATE} in the string: rendered literally escaping the % signs e.g. \%Y: rendered literally passing a multi-line string to sh -c: this caused the job to fail on being unable to connect to the db, so I guess --host was not passed properly The only other thing I can think of is passing in a shell script, but I'd rather not create another resource if possible. Equivalently, since the date information is stored by the filesystem, if there's some other way to pass a unique string into the filename that would work. Anything so that it keeps rolling backups instead of just the one.
This works for me:

...
authorizerClassName: "kafka.security.authorizer.AclAuthorizer"
auth:
  clientProtocol: sasl
  externalClientProtocol: sasl
  interBrokerProtocol: plaintext
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - yourusername
      clientPasswords:
        - yourpassword
....

I don't know why, but I have to set clientProtocol: sasl; otherwise I get java.io.IOException: /opt/bitnami/kafka/conf/kafka_jaas.conf (No such file or directory). Also, I didn't see the error until I set image.debug: true.

Note: as you can probably see, this will also force authentication between the clients inside the cluster.
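For completeness, a client connecting through the external listener then needs matching SASL/PLAIN settings. A minimal client.properties sketch, assuming the external listener ends up as SASL_PLAINTEXT (use SASL_SSL instead if TLS is enabled on that listener) and reusing the placeholder credentials from the values above:

# Standard Kafka client properties for SASL/PLAIN
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="yourusername" \
    password="yourpassword";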
###Instruction: Goal: I want to use the bitnami/kafka helm chart with SASL enabled with the PLAIN mechanism for only the external client. (The client-broker, broker-broker, and broker-zookeeper connections can stay on the PLAINTEXT mechanism.)

What I have done: I've set the following parameters in the values.yaml file:

superUsers: User:adminuser
auth.externalClientProtocol: sasl
auth.sasl.jaas.clientUsers:
  - adminuser
  - otheruser
auth.sasl.jaas.clientPasswords:
  - adminuserpass
  - otheruserpass
auth.sasl.jaas.interBrokerUser: adminuser

and left the other parameters as they are. But it doesn't seem to be enough: the broker container goes into a BackOff state when I try to install the chart.

Question#1: Aren't these configuration parameters enough for setting up what I'm trying to achieve? Won't these create a JAAS config file for me? According to the Kafka documentation (Kafka_SASL), I have to pass a JAAS config file for the broker. It can be done with the sasl.jaas.config configuration parameter. For me it should be something like this:

listener.name.EXTERNAL.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="adminuser" \
  password="adminuserpass" \
  user_adminuser="adminuserpass" \
  user_otheruser="otheruserpass";

But there doesn't seem to be any sasl.jaas.config available in bitnami/kafka's values.yaml.

Question#2: How can I pass these JAAS config values if the answer to question#1 is NO? Should I use config or extraEnvVars for this? Thanks!

### Response: This works for me:

...
authorizerClassName: "kafka.security.authorizer.AclAuthorizer"
auth:
  clientProtocol: sasl
  externalClientProtocol: sasl
  interBrokerProtocol: plaintext
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - yourusername
      clientPasswords:
        - yourpassword
....

I don't know why, but I have to set clientProtocol: sasl; otherwise I get java.io.IOException: /opt/bitnami/kafka/conf/kafka_jaas.conf (No such file or directory). Also, I didn't see the error until I set image.debug: true.

Note: as you can probably see, this will also force authentication between the clients inside the cluster.
Goal: I want to use the bitnami/kafka helm chart with SASL enabled with the PLAIN mechanism for only the external client. (The client-broker, broker-broker, and broker-zookeeper connections can stay on the PLAINTEXT mechanism.)

What I have done: I've set the following parameters in the values.yaml file:

superUsers: User:adminuser
auth.externalClientProtocol: sasl
auth.sasl.jaas.clientUsers:
  - adminuser
  - otheruser
auth.sasl.jaas.clientPasswords:
  - adminuserpass
  - otheruserpass
auth.sasl.jaas.interBrokerUser: adminuser

and left the other parameters as they are. But it doesn't seem to be enough: the broker container goes into a BackOff state when I try to install the chart.

Question#1: Aren't these configuration parameters enough for setting up what I'm trying to achieve? Won't these create a JAAS config file for me? According to the Kafka documentation (Kafka_SASL), I have to pass a JAAS config file for the broker. It can be done with the sasl.jaas.config configuration parameter. For me it should be something like this:

listener.name.EXTERNAL.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="adminuser" \
  password="adminuserpass" \
  user_adminuser="adminuserpass" \
  user_otheruser="otheruserpass";

But there doesn't seem to be any sasl.jaas.config available in bitnami/kafka's values.yaml.

Question#2: How can I pass these JAAS config values if the answer to question#1 is NO? Should I use config or extraEnvVars for this? Thanks!
I'm not sure what will work exactly for you, but here are a few things you can try to approach this (read carefully, because nginx is confusing):

1. Define a Service for your deployment, and make sure it covers the port routes you want and that the deployment supports them:

apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: web
  labels:
    app: web-app
spec:
  ports:
    - port: 80
      targetPort: 1337
      protocol: TCP
  selector:
    app: web-app

2. Refer to it in the nginx ingress:

rules:
  - host: mycoolwebapp.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web-app
              port:
                number: 80

The catch here is that you can route ALL services via port 80 but use any target port you want, so that you can, say, add 50 ingress hosts/routes over a morning, all routing to port 80, with the only difference between them being the target port in the service.

3. If you are specifically unhappy with ports 80 and 443, you are welcome to edit the ingress-nginx-controller (the Service, because as I said, nginx is confusing).

4. Alternatively, you can find an example of the ingress-nginx-controller Service on the web, customize it, apply it, and then connect the ingress to it... but I advise against this, because if nginx doesn't like anything in your custom service, it's easier to just reinstall the whole helm release and try again.
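To make point 3 slightly more concrete, here is a sketch of what an extra port on the ingress-nginx controller Service could look like. The name, namespace, and selector labels assume a stock Helm install of ingress-nginx, and getting nginx to actually proxy a non-HTTP port usually also involves mapping it in the controller's tcp-services ConfigMap, so treat this as a starting point rather than a complete recipe:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed release/service name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
    - name: https
      port: 443
      targetPort: https
      protocol: TCP
    - name: custom-9898            # the extra port you want exposed
      port: 9898
      targetPort: 9898
      protocol: TCP
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx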
###Instruction: By default, the nginx ingress listens on two ports, 80 and 443. How do I add listening on port 9898? I tried to change it in the daemon set, but nothing came of it, and I don't even know where else to dig.

### Response: I'm not sure what will work exactly for you, but here are a few things you can try to approach this (read carefully, because nginx is confusing):

1. Define a Service for your deployment, and make sure it covers the port routes you want and that the deployment supports them:

apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: web
  labels:
    app: web-app
spec:
  ports:
    - port: 80
      targetPort: 1337
      protocol: TCP
  selector:
    app: web-app

2. Refer to it in the nginx ingress:

rules:
  - host: mycoolwebapp.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web-app
              port:
                number: 80

The catch here is that you can route ALL services via port 80 but use any target port you want, so that you can, say, add 50 ingress hosts/routes over a morning, all routing to port 80, with the only difference between them being the target port in the service.

3. If you are specifically unhappy with ports 80 and 443, you are welcome to edit the ingress-nginx-controller (the Service, because as I said, nginx is confusing).

4. Alternatively, you can find an example of the ingress-nginx-controller Service on the web, customize it, apply it, and then connect the ingress to it... but I advise against this, because if nginx doesn't like anything in your custom service, it's easier to just reinstall the whole helm release and try again.
By default, the nginx ingress listens on two ports, 80 and 443. How do I add listening on port 9898? I tried to change it in the daemon set, but nothing came of it, and I don't even know where else to dig.