When you use an image WITHOUT a tag, Kubernetes assumes you want the latest version of the image, which is identified by the latest tag by default. If you have multiple versions of the same image in your repository with different tags, such as img1:1.0.0, img1:1.1.0, and img1:latest, Kubernetes uses the image with the tag specified in the pod specification. "Local" in the documentation means the individual worker node: each node's kubelet keeps its own image cache, so the pull policy is evaluated per node, not per pod or per cluster. If you use IfNotPresent and an image with the specified tag is already present on the worker node, Kubernetes uses that image to start the container, even if a newer version with the same tag is available in the repository. If you use Always, Kubernetes always contacts the repository for the image with the specified tag, even if a cached copy is already present on the worker node. This is useful if you want to ensure that your containers always run the latest version of the image. Consider a scenario where a container is running on a worker node with img1:latest, and the container then restarts or is rescheduled onto another worker node where the same tag points to an older version of the image: IfNotPresent will use the local image present on that worker node, while Always will attempt to download the latest version of the image from the repository. Note, however, that the behavior of Always is based on the digest of the image, not the tag. The digest is a unique identifier for a specific version of an image, derived from the image's content. When you specify Always, Kubernetes resolves the tag to a digest in the repository and compares it against the digests cached on the worker node. If they match, Kubernetes uses the cached copy of the image on the node; if they differ, Kubernetes downloads the new version from the repository and uses it to start the container.
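For illustration, here is a minimal pod manifest that sets the policy explicitly; the pod name and image are placeholders carried over from the example above, so adjust them to your registry before applying.

# Apply a pod whose tag is re-resolved against the registry every time it (re)starts.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: img1-always
spec:
  containers:
  - name: app
    image: img1:latest
    imagePullPolicy: Always   # switch to IfNotPresent to prefer the node-local cache
EOF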
I was checking the Kubernetes documentation for pulling images. In that, I saw two policies, IfNotPresent and Always. For "Always" it's stated that If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container. I am unable to understand what "local" means here. Is it a node, pod, or cluster? What is the difference between Always and IfNotPresent if it is at the node level? It's very confusing.
You won't be able to use the current ManagedCertificate CRD to generate wildcard certificates. Wildcard certificates are only supported by Google Cloud Certificate Manager. ManagedCertificate actually creates the older Google Managed SSL Certificates which do not support wildcards. Currently, the only supported ways to create wildcard certificates would be via CLI (gcloud), API or Terraform. There is currently no CRD for them available directly in GKE. Also, note that Certificate Manager only integrates with the Gateway API and not with Ingress.
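As a rough sketch of that gcloud flow (assuming the Certificate Manager API is enabled; example.com, the resource names, and the exact flag spellings are placeholders that may need adjusting for your gcloud version):

# Create a DNS authorization for the parent domain, then read back the CNAME record
# that must be added to your DNS zone to prove ownership.
gcloud certificate-manager dns-authorizations create example-dns-auth \
    --domain="example.com"
gcloud certificate-manager dns-authorizations describe example-dns-auth

# Issue a Google-managed certificate covering the apex domain and the wildcard.
gcloud certificate-manager certificates create wildcard-cert \
    --domains="example.com,*.example.com" \
    --dns-authorizations=example-dns-auth

To consume the certificate from GKE you would then reference it through a certificate map attached to a Gateway, since, as noted above, Certificate Manager integrates with the Gateway API rather than Ingress.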
By default, creating a managed certificate object on GKE creates a managed certificate of type "Load Balancer Authorization". How can I create one with DNS authorization through GKE? https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert spec: domains: - DOMAIN_NAME1 - DOMAIN_NAME2 I want to add wildcard domains and this is only possible with DNS authorization. How to generate Google-managed certificates for wildcard hostnames in GCP?
I'm not sure if you have whitelisted the IP of the Spark cluster's NAT on the Kafka GKE side; it sounds like the request is being routed out to the internet through the NAT and then back in to Kafka. Since both clusters are in the same VPC, what you can do is expose the Kafka service on GKE with an internal (private) load balancer. Once the Service in GKE has a private LB attached, you will be able to reach Kafka from anywhere in the VPC without using the NAT. apiVersion: v1 kind: Service metadata: name: ilb-svc annotations: networking.gke.io/load-balancer-type: "Internal" spec: type: LoadBalancer externalTrafficPolicy: Cluster selector: app: ilb-deployment ports: - name: tcp-port protocol: TCP port: 8080 targetPort: 8080 If you are using an Ingress controller such as Nginx to expose Kafka with a TCP port proxy (static port), you can attach the internal LB to the Nginx ingress instead of attaching it to the Kafka service.
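Once the Service above is applied, a quick way to confirm the private address and point Spark at it (the jsonpath and port are illustrative; use your broker's advertised listener port in practice):

# Wait for the internal load balancer to be provisioned, then read its private IP.
kubectl get svc ilb-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Use that IP (with your Kafka listener port, e.g. 9093 for SSL) as the value of
# kafka.bootstrap.servers in the Spark job instead of the brokers' public IPs.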
I have a Kafka cluster on GKE, and I'm using Apache Spark on Dataproc to access the Kafka cluster. The Dataproc cluster is a private cluster, i.e. --no-address is specified when creating the Dataproc cluster, which means it does not have a public IP (required due to compliance). Also the Kafka cluster is SSL enabled. Note: GKE & Dataproc are in the same VPC/project & region We have a NAT created, which is allowing the Spark on Dataproc to access Kafka on GKE (using the public IP on Kafka brokers). Without the NAT, Spark is not able to connect to Kafka on GKE - even though they are on the same VPC & region. We would like to get rid of the NAT to reduce costs, any ideas on how to architect/instrument this? Here is the script used for creating the Dataproc cluster: # --no-address - enables it to be a private TYPE=n1-highmem-4 CNAME=<cluster-name> BUCKET=<gcp-bucket> REGION=us-east1 ZONE=us-east1-b IMG_VERSION=2.0-ubuntu18 PROJECT=versa-sml-googl NUM_WORKER=3 gcloud beta dataproc clusters create $CNAME \ --enable-component-gateway \ --bucket $BUCKET \ --region $REGION \ --zone $ZONE \ --no-address --master-machine-type $TYPE \ --master-boot-disk-size 100 \ --master-boot-disk-type pd-ssd \ --num-workers $NUM_WORKER \ --worker-machine-type $TYPE \ --worker-boot-disk-type pd-ssd \ --worker-boot-disk-size 500 \ --image-version $IMG_VERSION \ --scopes 'https://www.googleapis.com/auth/cloud-platform' \ --project $PROJECT \ --initialization-actions 'gs://dataproc-spark-configs/pip_install_versions.sh' \ --metadata 'gcs-connector-version=2.0.0' \ --properties 'dataproc:dataproc.logging.stackdriver.job.driver.enable=true,dataproc:dataproc.conscrypt.provider.enable=false,dataproc:job.history.to-gcs.enabled=true,spark:spark.dynamicAllocation.enabled=true,spark:spark.eventLog.enabled=true,spark:spark.eventLog.dir=gs://dataproc-spark-logs/eventlogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/jobhistorylogs'
Actually, I was trying to find out which plugins are used by default too. The Kubernetes version I was using is 1.20.6, and I found it pretty hard to get the answer from the source code. However, you can export the configuration used by the running scheduler instance by adding an argument to kube-scheduler.yaml: --write-config-to=/path/to/hostpath/config/file Caveats: --v should be >= 2, and if --write-config-to succeeds the scheduler exits with code 0, so remove this argument again after you have exported the config file.
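A rough sketch of that workflow on a kubeadm control-plane node, assuming your kube-scheduler build still exposes the --write-config-to flag (check kube-scheduler --help for your version); the backup path and dump filename are arbitrary:

# Back up the static pod manifest, then add the export flags to the scheduler command.
sudo cp /etc/kubernetes/manifests/kube-scheduler.yaml /root/kube-scheduler.yaml.bak
sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml
#   under spec.containers[0].command add:
#     - --write-config-to=/etc/kubernetes/scheduler-config-dump.yaml
#     - --v=2
#   and make sure a hostPath volume covering /etc/kubernetes is mounted so the dump
#   file ends up on the node; the kubelet restarts the static pod automatically.

# Inspect the effective profiles/plugins, then restore the original manifest so the
# scheduler starts normally again.
sudo cat /etc/kubernetes/scheduler-config-dump.yaml
sudo cp /root/kube-scheduler.yaml.bak /etc/kubernetes/manifests/kube-scheduler.yaml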
The version of Kubernetes in use is v1.26.3. I have cloned the Kubernetes source code to peruse the scheduler logic. Upon inspection of the framework.go file, I have observed the existence of a frameworkImpl struct. I am curious as to how I can ascertain which plugins have been registered. I am curious as to how I can ascertain which plugins have been registered // NewFramework initializes plugins given the configuration and the registry. func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh <-chan struct{}, opts ...Option) (framework.Framework, error) { options := defaultFrameworkOptions(stopCh) for _, opt := range opts { opt(&options) } f := &frameworkImpl{ registry: r, snapshotSharedLister: options.snapshotSharedLister, scorePluginWeight: make(map[string]int), waitingPods: newWaitingPodsMap(), clientSet: options.clientSet, kubeConfig: options.kubeConfig, eventRecorder: options.eventRecorder, informerFactory: options.informerFactory, metricsRecorder: options.metricsRecorder, extenders: options.extenders, PodNominator: options.podNominator, parallelizer: options.parallelizer, } if profile == nil { return f, nil } f.profileName = profile.SchedulerName f.percentageOfNodesToScore = profile.PercentageOfNodesToScore if profile.Plugins == nil { return f, nil } // get needed plugins from config pg := f.pluginsNeeded(profile.Plugins) pluginConfig := make(map[string]runtime.Object, len(profile.PluginConfig)) for i := range profile.PluginConfig { name := profile.PluginConfig[i].Name if _, ok := pluginConfig[name]; ok { return nil, fmt.Errorf("repeated config for plugin %s", name) } pluginConfig[name] = profile.PluginConfig[i].Args } outputProfile := config.KubeSchedulerProfile{ SchedulerName: f.profileName, PercentageOfNodesToScore: f.percentageOfNodesToScore, Plugins: profile.Plugins, PluginConfig: make([]config.PluginConfig, 0, len(pg)), } pluginsMap := make(map[string]framework.Plugin) for name, factory := range r { // initialize only needed plugins. if !pg.Has(name) { continue } args := pluginConfig[name] if args != nil { outputProfile.PluginConfig = append(outputProfile.PluginConfig, config.PluginConfig{ Name: name, Args: args, }) } p, err := factory(args, f) if err != nil { return nil, fmt.Errorf("initializing plugin %q: %w", name, err) } pluginsMap[name] = p // Update ClusterEventMap in place. fillEventToPluginMap(p, options.clusterEventMap) } // initialize plugins per individual extension points for _, e := range f.getExtensionPoints(profile.Plugins) { if err := updatePluginList(e.slicePtr, *e.plugins, pluginsMap); err != nil { return nil, err } } // initialize multiPoint plugins to their expanded extension points if len(profile.Plugins.MultiPoint.Enabled) > 0 { if err := f.expandMultiPointPlugins(profile, pluginsMap); err != nil { return nil, err } } if len(f.queueSortPlugins) != 1 { return nil, fmt.Errorf("only one queue sort plugin required for profile with scheduler name %q, but got %d", profile.SchedulerName, len(f.queueSortPlugins)) } if len(f.bindPlugins) == 0 { return nil, fmt.Errorf("at least one bind plugin is needed for profile with scheduler name %q", profile.SchedulerName) } if err := getScoreWeights(f, pluginsMap, append(profile.Plugins.Score.Enabled, profile.Plugins.MultiPoint.Enabled...)); err != nil { return nil, err } // Verifying the score weights again since Plugin.Name() could return a different // value from the one used in the configuration. 
for _, scorePlugin := range f.scorePlugins { if f.scorePluginWeight[scorePlugin.Name()] == 0 { return nil, fmt.Errorf("score plugin %q is not configured with weight", scorePlugin.Name()) } } if options.captureProfile != nil { if len(outputProfile.PluginConfig) != 0 { sort.Slice(outputProfile.PluginConfig, func(i, j int) bool { return outputProfile.PluginConfig[i].Name < outputProfile.PluginConfig[j].Name }) } else { outputProfile.PluginConfig = nil } options.captureProfile(outputProfile) } return f, nil } I am curious as to how I can ascertain which plugins have been registered
AFAIK, there is no way to add labels to your alerts based on a condition without rewriting all of the rules. The best solution for your exact question is to create separate alerts for each environment/team/condition and just add static labels. Something along the lines of: - alert: many_restarts_data expr: increase(kube_pod_container_status_restarts_total{job="kube-state-metrics",namespace=~".*",pod!~"app-test-.*", container=~".*test.*"}[30m]) > 2 labels: team: data - alert: many_restarts_data expr: increase(kube_pod_container_status_restarts_total{job="kube-state-metrics",namespace=~".*",pod!~"app-test-.*", container=~".*prod.*"}[30m]) > 2 labels: team: sre But this multiplies the number of alerts by the number of teams. I would argue a much easier solution is to use the routing capabilities of Alertmanager (or PagerDuty, if it provides similar functionality). This way you define, in the Alertmanager configuration, the criteria for which alerts with which labels get routed to which teams, and it works independently of how the alerts themselves are created. routes: - matchers: - container =~ ".*test.*" - severity =~ ".*test.*" - alertname =~ "my_alert_1|my_alert_2" receiver: team-data - matchers: - container =~ ".*prod.*" - severity =~ ".*prod.*" - alertname =~ "my_alert_1|my_alert_2" receiver: team-sre
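For context, those routes sit under the top-level route block of alertmanager.yml, and every receiver they name must also be defined; a minimal sketch with placeholder receiver names:

# Minimal alertmanager.yml skeleton showing where the team routes live.
cat <<'EOF' > alertmanager.yml
route:
  receiver: default            # fallback when no child route matches
  routes:
    - matchers:
        - container =~ ".*test.*"
      receiver: team-data
    - matchers:
        - container =~ ".*prod.*"
      receiver: team-sre
receivers:
  - name: default
  - name: team-data
  - name: team-sre
EOF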
I'm working with Prometheus alerts, and I would like to dynamically add a 'team' label to all of my alerts based on a regex pattern. I have an example alert: expr: label_replace(label_replace(increase(kube_pod_container_status_restarts_total{job="kube-state-metrics",namespace=~".*",pod!~"app-test-.*"}[30m]) > 2, "team", "data", "container", ".*test.*"), "team", "data", "pod", ".*test.*") This example alert adds the 'team' label with the value 'data' for metrics matching the regex pattern ".*test.*" in the 'container' and 'pod' labels. However, I want to apply this logic to all of my alerts, not just this specific one. Is there a way to do this dynamically in Prometheus or Alertmanager? Any guidance would be appreciated. I tried using the label_replace function in the expression of the alert, and it worked as expected for the specific alert mentioned above. I was expecting to find a way to apply this label addition to all of my alerts without having to modify each alert expression individually. Is there a way to achieve this? Any help or guidance would be greatly appreciated.
"this can be limited by enabling authentication on Kafka" Yes: you would need to enable client authentication and encryption (TLS/SSL with client certificates, or SASL/Kerberos) between Kafka and any client. This way, you distribute certificates and key pairs only to trusted clients, while untrusted code is unable to connect without a valid key pair. The encryption also prevents untrusted code from packet-sniffing the network traffic local to that container/pod/host. None of this requires or involves Kubernetes. https://kafka.apache.org/documentation/#security https://docs.confluent.io/platform/current/kafka/encryption.html#encrypt-with-tls Beyond that, run the untrusted code in its container under a limited-access (non-root) user account, and apply the regular security best practices you should be following in containers anyway. If you need finer-grained network policies inside the container, install iptables, for example.
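For illustration, this is the kind of TLS client configuration only the trusted consumer container would hold; the broker address, store paths, and passwords are placeholders:

# Client-side TLS settings for the trusted Kafka consumer. Keep the keystore/truststore
# files and their passwords mounted only into the consumer container, never into the
# untrusted sidecar.
cat <<'EOF' > client-ssl.properties
bootstrap.servers=kafka.internal.example:9093
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/etc/kafka/secrets/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
EOF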
Currently, network policies in Kubernetes allow you to control egress and ingress on the pod. However, if two containers run in the same pod, there is no way to have distinct network policies for each container. I am trying to implement a Kafka consumer, which is able to read messages from a broker which is hosted in our private subnet and then dispatch this request to the side container, which runs untrusted code designed by random users on the web. Since there is no way to restrict container communication with a policy, this untrusted code can reach our Kafka broker. I understand that this can be limited by enabling authentication on Kafka. However, the service would still be exposed to an untrusted container. Is there any way to stop this from happening? We have explored Kata containers, Istio + Envoy and Cilium, none of which seem to help solve this problem.
On 1.23.5, even after restarting the kubelet through systemctl restart kubelet, my kube-controller-manager still had errors (kubectl -n kube-system logs kube-controller-manager-kubeadm - the name will vary according to your node name): error retrieving resource lock kube-system/kube-controller-manager: Unauthorized Only after following this documentation was I able to fix it: mkdir tmp mv /etc/kubernetes/manifests/* tmp/ # Wait until pods stop running (in my case, single master, kubectl stopped working) mv tmp/* /etc/kubernetes/manifests/ Before running the commands above some pods worked, but my cluster was highly inconsistent: wrong endpoints for services, and deleted pods from daemonsets would not restart, which is how I suspected the controller was not healthy.
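To confirm the control plane actually picked up the renewed certificates after moving the manifests back, something along these lines (output format varies by kubeadm version):

# Check the renewed certificate expiry dates and that the control-plane pods are back.
kubeadm certs check-expiration
kubectl -n kube-system get pods -o wide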
Today, my kubernetes (v1.21) cluster certificate expired (after 1 year). I used this command to renew the certificates: kubeadm certs renew all The logs show that the kube-apiserver/etcd should be restarted: Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates. I have tried to restart this way: [root@k8smasterone ~]# systemctl restart kube-apiserver Failed to restart kube-apiserver.service: Unit not found. What should I do to restart all the components properly? I also tried to find the Kubernetes API server pods, but could not find any such pods.
It looks like kubectl lost connection to the cluster. You can set the cluster context by following the official GCP troubleshooting doc GCP kubectl command times out; try the two solutions below: Solution 1: kubectl cannot communicate with the cluster control plane, or the context doesn't exist: To resolve your issue, verify the context where the cluster is set: go to $HOME/.kube/config or run kubectl config view to verify that the config file contains the cluster context and the external IP address of the control plane (there is a possibility that the server listed there is old or no longer reachable). Then set the cluster credentials: gcloud container clusters get-credentials CLUSTER_NAME \ --region=COMPUTE_REGION \ --project=PROJECT_ID Note: for zonal clusters, use --zone=COMPUTE_ZONE. The above command will automatically update the default cluster for kubectl. In case you don't know the correct cluster name and zone, use gcloud container clusters list. After completing the above steps, please try to create the namespace again and let me know the outcome. Solution 2: the source IP is not allowed in the "Control plane authorized networks" cluster config: If the cluster is a private GKE cluster, ensure that the outgoing IP of the machine you are attempting to connect from is included in the list of existing authorized networks. You can find your existing authorized networks by running the following command: gcloud container clusters describe CLUSTER_NAME \ --region=COMPUTE_REGION --project=PROJECT_ID \ --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])" In GKE, there is a feature called "Control plane authorized networks". The main purpose of this parameter is to let the user specify CIDR ranges and allow IP addresses in those ranges to access the GKE cluster endpoints. You can also use the GCP console to check the allowed IP CIDRs under "Control plane authorized networks" in the GKE cluster details. Other scenarios: a. For other common causes, refer to Uli Köhler's blog post on TechOverflow, How to fix kubectl Unable to connect to the server: dial tcp …:443: i/o timeout. b. If you're using docker's default bridge network as your GKE endpoint, disable the bridge network to avoid the network conflict. c. Also, check if there are recent updates to Windows/Docker. Refer to Dzmitry Paulouski's answer Kubernetes on Windows Error: Unable to connect to the server: dial tcp, which may help to resolve your issue. d. If the issue is with the config file itself, restart Docker Desktop so it restores a fresh config file, or add/replace the file at $HOME/.kube/config yourself.
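If your workstation's IP turns out to be missing from that list, you can add it; a sketch using the same placeholders as above (note that this flag replaces the whole list, so include any CIDRs that are already authorized):

# Look up the current public IP of this machine and authorize it on the control plane.
MY_IP=$(curl -s https://ifconfig.me)
gcloud container clusters update CLUSTER_NAME \
    --region=COMPUTE_REGION --project=PROJECT_ID \
    --enable-master-authorized-networks \
    --master-authorized-networks="${MY_IP}/32,EXISTING_CIDR_1,EXISTING_CIDR_2"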
So I was going to set up a GKE cluster and interact with it using kubectl. But when I tried to apply the namespace, it just threw an error. I've added my IP to Control plane authorized networks in the GKE dashboard. I'm using a Windows 10 machine, my kube/config: apiVersion: v1 clusters: - cluster: certificate-authority-data: [REDACTED] server: https://34.66.200.196 name: gke_project-307907_us-central1_test contexts: - context: cluster: gke_project-307907_us-central1_test user: gke_project-307907_us-central1_test name: gke_project-307907_us-central1_test current-context: gke_project-307907_us-central1_test kind: Config preferences: {} users: - name: gke_project-307907_us-central1_test user: exec: apiVersion: client.authentication.k8s.io/v1beta1 command: gke-gcloud-auth-plugin.exe installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke provideClusterInfo: true I tried to apply the namespace but: ╰─ kubectl apply -f k8s/canary/namespace.yaml Unable to connect to the server: dial tcp 34.66.200.196:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
ping doesn't work with a service's cluster IP, because it is a virtual IP: it only exists as iptables/IPVS rules programmed by kube-proxy that forward the service's declared TCP/UDP ports, so ICMP packets sent to that address are simply never answered. You should be able to ping a specific pod, but not a service. ping sends packets using the very low-level ICMP protocol, but Nginx serves HTTP, which runs over TCP instead, so test the service with a TCP/HTTP request rather than ping (see the example below).
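To verify the service instead, make an HTTP request to it from a throwaway pod; the service name and port come from the manifests in the question:

# ICMP to the ClusterIP is never answered, but an HTTP request to the service port is.
kubectl run test-client -it --rm --image=busybox -- \
    wget -qO- http://basicping-service:80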
I have made an nginx deployment which will be tagged by a ClusterIP service via a selector. Then I entered a new pod that is not related to that deployment or service, and from within that pod I try to ping the IP of the ClusterIP service, hoping it would reach the nginx deployment, but it's not receiving the ping response. The nginx deployment I made was with this manifest. apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx spec: selector: matchLabels: run: my-nginx replicas: 1 template: metadata: labels: run: my-nginx spec: containers: - name: my-nginx image: nginx ports: - containerPort: 80 Then, the service I created was with this manifest: apiVersion: v1 kind: Service metadata: name: basicping-service labels: run: my-nginx spec: ports: - port: 80 protocol: TCP selector: run: my-nginx I made sure the service got created by running kubectl get svc and it did; the IP is 10.98.91.185 And then I created a new pod completely unrelated to this deployment & service. kubectl run -it --rm --image=ubuntu bash From within it, I pinged a sandbox server called pingtest.net just to see that it was able to send requests and receive responses, and it did. So finally, I tried pinging the basicping-service created previously by pinging the IP of the service; I did this by running ping 10.98.91.185 And here is the problem. It does send pings but doesn't receive the responses back, even after several minutes. It was my understanding that the ping should have received a response. But is my understanding of services incorrect? Or should it have worked but there is an error? Just for more documentation, the my-nginx deployment is running, and the pod as well, and there seems to be nothing wrong with the nginx running in it. I checked this by running kubectl describe on the deploy & pod, and also by checking the pod's logs; it's running nginx correctly apparently. Also, after running kubectl describe svc basicping-service it does show the nginx pod's IP address with port 80 as the endpoint
The error implies that the kubelogin executable could not be located. You need to install kubelogin via the Azure CLI using az aks install-cli, and then it works as expected. Refer to the GitHub repo for the installation process. I tried the same requirement in my environment, and it worked for me as follows. az aks get-credentials --resource-group caroline --name sampleaks1 kubectl get pods Once you have the AKS credentials, running kubectl get pods will prompt you for Azure Kubernetes Service authentication with AAD. Just run kubectl in your shell to verify that it is installed successfully. If the issue still persists, delete all the cache or any unused folders inside ~/.kube/ and re-run the aks get-credentials command with the --admin flag added at the end. Refer to this doc by @Geert Baeke for more related information. Check the kube config version and upgrade if required.
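A minimal sketch of the install-and-convert flow, assuming you authenticate with the Azure CLI (kubelogin supports other login modes as well):

# Install kubelogin (and kubectl) via the Azure CLI, convert the kubeconfig so its
# exec plugin reuses the existing az login session, then test.
az aks install-cli
kubelogin convert-kubeconfig -l azurecli
kubectl get pods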
I created a new config file for Kubernetes from Azure in PowerShell with az aks get-credentials --resource-group <RGName> --name <ClusterName>. Got a message that Merged "cluster_name" as current context in C:\michu\.kube\config. I copied this file into the default .kube\config location and now when I try to run any command, e.g. kubectl get pods, I am receiving: Unable to connect to the server: getting credentials: exec: executable kubelogin not found It looks like you are trying to use a client-go credential plugin that is not installed. To learn more about this feature, consult the documentation available at: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins What is wrong here? I just realized that when I type kubectl config get-contexts then I can see my cluster_name and I can even switch to it by kubectl config use-context cluster_name and the message is correct: Switched to context cluster_name but then still all other commands end with Unable to connect to the server: getting credentials: exec: executable kubelogin not found
I was able to figure it out. You can use a personal access token, issued by whatever git service hosts the private repo, as the password, along with the repo's username. I just stored the personal access token as an Opaque secret in my Airflow K8s cluster and referenced that in my git-sync sidecar container yaml definition, which I included in my Airflow yaml deployment definition. containers: - name: git-sync image: registry.k8s.io/git-sync/git-sync:v3.6.5 args: - "-wait=60" - "-repo=<repo>" - "-branch=master" - "-root=/opt/airflow/dags" - "-username=<username>" - "-password-file=/etc/git-secret/token" volumeMounts: - name: git-secret mountPath: /etc/git-secret readOnly: true - name: dags-data mountPath: /opt/airflow/dags volumes: - name: dags-data emptyDir: {} - name: git-secret secret: secretName: github-token
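The secret referenced above can be created along these lines; the namespace and the token value are placeholders, and the key has to be named token so that /etc/git-secret/token exists inside the sidecar:

# Store the personal access token in the Opaque secret mounted by the git-sync sidecar.
kubectl create secret generic github-token \
    --namespace airflow \
    --from-literal=token='<personal-access-token>'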
I have an Airflow environment (v2.4.3) on Kubernetes and I want to sync it with a private git repo so that any changes I make to DAGs in my master branch get automatically picked up by my Airflow environment. According to the Airflow documentation, I can use a git-sync sidecar along with an SSH key added to my private git repo and Airflow env to make it work. However, given that I am constantly creating new private repos and Airflow environments, I am wondering if there is a simpler way of connecting my private git repos to their respective Airflow environments. If I have a webapp managing my Airflow environments and have access to an OAuth token from GitHub after signing into my account (or any other git service), could I use that to connect an Airflow environment and sync changes to any git repo of my choice under my account?
I figured it out. It needed a bit more configuration in launch.json: specifically, a debug section with sourceFileMap. Here is the final result, if anybody else has trouble making it work: { "configurations": [ { "name": "Kubernetes: Run/Debug", "type": "cloudcode.kubernetes", "request": "launch", "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml", "watch": false, "cleanUp": false, "portForward": true, "imageRegistry": "zzz.common.repositories.zzz.zzz", "debug": [ { "image": "zzz.common.repositories.zzz.zzz/zzz-lcs/k8s", "containerName": "lcs", "sourceFileMap": { "${workspaceFolder}\\lcs\\dist": "/app/dist" } } ] }, { "type": "node", "request": "launch", "name": "Local Debug", "runtimeExecutable": "node", "runtimeArgs": ["--nolazy", "-r", "ts-node/register/transpile-only"], "args": ["${workspaceRoot}/lcs/src/index.ts"], "cwd": "${workspaceRoot}/lcs", "internalConsoleOptions": "openOnSessionStart", "env": { "NODE_ENV": "development" }, "skipFiles": ["<node_internals>/**", "node_modules/**"] } ] } Note that my files are located locally inside /lcs/app/src and my build puts them in ./lcs/dist
I am trying to figure out if it is possible to use Cloud Code in VSCode to debug a TypeScript Node application in Kubernetes remotely? I used Cloud Code regularly to debug JS Node applications in Kubernetes, but I am having hard time to configure launch.json to do the same for TypeScript Node app. Sources for that is non-existent and at this point I am not even sure if this is possible. Here is the launch.json file I configured. Also you can see a setting for local debugging which works fine: { "configurations": [ { "name": "Kubernetes: Run/Debug", "type": "cloudcode.kubernetes", "request": "launch", "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml", "watch": false, "cleanUp": false, "portForward": true, "imageRegistry": "zzz.common.repositories.zzz.zzz" }, { "type": "node", "request": "launch", "name": "Local Debug", "runtimeExecutable": "node", "runtimeArgs": ["--nolazy", "-r", "ts-node/register/transpile-only"], "args": ["${workspaceRoot}/lcs/src/index.ts"], "cwd": "${workspaceRoot}/lcs", "internalConsoleOptions": "openOnSessionStart", "env": { "NODE_ENV": "development" }, "skipFiles": ["<node_internals>/**", "node_modules/**"] } ]} In my tsconfig.json I have "sourceMap": true. I assume I need to map my dist folder in Kubernetes (that is where I compile TS files to JS files) to src folder (which contains original TS files). Problem is I couldn't find any documentation to do that. Here is my docker file so you can see that I am putting the compiled files into ./dist folder: FROM node:19-alpine as builder WORKDIR /app COPY package*.json ./ COPY tsconfig.json ./ RUN npm install COPY . . RUN npm run build FROM node:19-alpine WORKDIR /app COPY package*.json ./ RUN npm install --production COPY --from=builder /app/dist ./dist EXPOSE 3000 CMD [ "node", "./dist/index.js" ]
It is (or can be) a part of the grafana helm chart, from what I remember from when I used it in the past. That configuration needs to go into the values.yaml when using that helm chart. Specifically here to enable/configure dashboardProviders and here to provision the dashboard using its dashboard id from the grafana website. You can also refer to some documentation here. Hope it helps. Update: Using the below config I was able to import the MinIO Dashboard (the one OP tried to import): dashboardProviders: dashboardproviders.yaml: apiVersion: 1 providers: - name: 'default' orgId: 1 folder: 'default' type: file disableDeletion: true editable: true options: path: /var/lib/grafana/dashboards/standard dashboards: default: MinIO: gnetId: 13502 revision: 2 datasource: prometheus Of course I don't have the prometheus data source, hence the warning sign(s).
###Instruction: I have a helm chart that deploys a kube-prometheus stack (prometheus, grafana, node-exporter), there are some json files (dashboards) in the grafana configuration, they are transferred to the grafana pod via configMap (common practice). I have a task to optimize this configuration to add grafana dashboards via their id from grafana.com and not to use json files (as they are very big). I know how to create a folder in grafana and specify the prometheus resource, but I don't understand how to export dashboards by id. To create a folder I have a file (yaml) which is added via configmap to the directory /etc/grafana/provisioning/dashboards - name: 'default' org_id: 1 folder: 'my-dashboards' type: 'file' options: folder: '/var/lib/grafana/dashboards' How to do it and in which file I need to insert this configuration to make it work. I will be grateful for help. I tried to create configurations I found on github (dashboardProviders), but it only creates a folder (the code I specified), without dashboards UPD: what i have now apiVersion: 1 providers: # <string> an unique provider name. Required - name: 'prometheus' # <int> Org id. Default to 1 orgId: 1 # <string> name of the dashboard folder. folder: 'my-dashboards' # <string> folder UID. will be automatically generated if not specified folderUid: '' # <string> provider type. Default to 'file' type: file # <bool> disable dashboard deletion disableDeletion: false # <int> how often Grafana will scan for changed dashboards updateIntervalSeconds: 10 # <bool> allow updating provisioned dashboards from the UI allowUiUpdates: false options: # <string, required> path to dashboard files on disk. Required when using the 'file' type path: /var/lib/grafana/dashboards # <bool> use folder names from filesystem to create folders in Grafana foldersFromFilesStructure: true dashboards: default: MinIO: gnetId: 13502 revision: 2 datasource: prometheus but it still doesn't work...why? ### Response: It is (or can be) a part of the grafana helm chart, from what I remember when used it in the past. That configuration needs to go into the values.yaml when using that helm chart. Specifically here to enable/configure dashboardProviders and here to provision the dashboard using dashboard id from the grafana website. Can also refer to some documentation here. Hope it helps. Update: Using the below config I was able to import the MinIO Dashboard (the one OP tried to import): dashboardProviders: dashboardproviders.yaml: apiVersion: 1 providers: - name: 'default' orgId: 1 folder: 'default' type: file disableDeletion: true editable: true options: path: /var/lib/grafana/dashboards/standard dashboards: default: MinIO: gnetId: 13502 revision: 2 datasource: prometheus Ofcourse I don't have the prometheus data source, hence the warning sign(s).
I have a helm chart that deploys a kube-prometheus stack (prometheus, grafana, node-exporter), there are some json files (dashboards) in the grafana configuration, they are transferred to the grafana pod via configMap (common practice). I have a task to optimize this configuration to add grafana dashboards via their id from grafana.com and not to use json files (as they are very big). I know how to create a folder in grafana and specify the prometheus resource, but I don't understand how to export dashboards by id. To create a folder I have a file (yaml) which is added via configmap to the directory /etc/grafana/provisioning/dashboards - name: 'default' org_id: 1 folder: 'my-dashboards' type: 'file' options: folder: '/var/lib/grafana/dashboards' How to do it and in which file I need to insert this configuration to make it work. I will be grateful for help. I tried to create configurations I found on github (dashboardProviders), but it only creates a folder (the code I specified), without dashboards UPD: what i have now apiVersion: 1 providers: # <string> an unique provider name. Required - name: 'prometheus' # <int> Org id. Default to 1 orgId: 1 # <string> name of the dashboard folder. folder: 'my-dashboards' # <string> folder UID. will be automatically generated if not specified folderUid: '' # <string> provider type. Default to 'file' type: file # <bool> disable dashboard deletion disableDeletion: false # <int> how often Grafana will scan for changed dashboards updateIntervalSeconds: 10 # <bool> allow updating provisioned dashboards from the UI allowUiUpdates: false options: # <string, required> path to dashboard files on disk. Required when using the 'file' type path: /var/lib/grafana/dashboards # <bool> use folder names from filesystem to create folders in Grafana foldersFromFilesStructure: true dashboards: default: MinIO: gnetId: 13502 revision: 2 datasource: prometheus but it still doesn't work...why?
A service is basically a load balancer over a deployment (and a deployment is a replica set of multiple pods). So, chances are you want to be listing the deployments instead of the services in this case. Services and deployments are loosely coupled via labels and selectors, so there isn't a great built-in way to go from services to pods directly. I recommend kubectl get deploy -A to view all deployments and their pod counts in all namespaces in this case. Example output from the kube-system namespace in our cluster (since it's not sensitive in any way)... NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system cluster-autoscaler-aws-cluster-autoscaler 1/1 1 1 231d kube-system coredns 2/2 2 2 231d kube-system ebs-csi-controller 2/2 2 2 231d kube-system efs-csi-controller 2/2 2 2 231d kube-system metrics-server 1/1 1 1 231d
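If you do want a per-service figure, one rough workaround is to count the addresses in each Service's Endpoints object. This is only a sketch, run against the current namespace, and it assumes every service has a selector (headless or selector-less services will simply report 0):
for svc in $(kubectl get svc -o jsonpath='{.items[*].metadata.name}'); do
  # Endpoints list the ready pod IPs currently backing the service
  count=$(kubectl get endpoints "$svc" -o jsonpath='{.subsets[*].addresses[*].ip}' | wc -w)
  echo "$svc: $count active pod(s)"
done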
###Instruction: How can I list all Kubernetes services along with the number of active pods associated with each service? Currently, I can list all services with: kubectl get services I would like to add one additional column to the output, which lists active pod count for each service. ### Response: A service is basically a load balancer over a deployment (and a deployment is a replica set of multiple pods). So, chances are you want to be listing the deployments instead of the services in this case. Services and deployments are loosely coupled via tags, so there probably isn't a great way to go from services to pods directly. I recommend kubectl get deploy -A to view all deployments and their pod counts in all namespaces in this case. Example output from the kube-system namespace in our cluster (since it's not sensitive in any way)... NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system cluster-autoscaler-aws-cluster-autoscaler 1/1 1 1 231d kube-system coredns 2/2 2 2 231d kube-system ebs-csi-controller 2/2 2 2 231d kube-system efs-csi-controller 2/2 2 2 231d kube-system metrics-server 1/1 1 1 231d
How can I list all Kubernetes services along with the number of active pods associated with each service? Currently, I can list all services with: kubectl get services I would like to add one additional column to the output, which lists active pod count for each service.
df.rename() introduces some overhead, but you can skip that step by constructing the result directly with the new column names: profile_data = [{new_cols[col]: profiles_df.loc[ix, col] for col in new_cols} for ix in profile_ids] I do not know the answer to your second question.
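If the per-row dictionary comprehension feels awkward, another sketch that stays vectorised is to select once and assign the new column names directly instead of calling rename(); this assumes new_cols is a dict mapping old column names to new ones and that profile_ids are index labels:
subset = profiles_df.loc[profile_ids, list(new_cols)]   # one row/column selection
subset.columns = [new_cols[c] for c in subset.columns]  # rename without DataFrame.rename()
profile_data = subset.to_dict("records")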
###Instruction: When I run locally, in my laptop (using python 3.10 and pandas 1.3.5), the following code, I get 0.031s approximately (ball parking it): profile_data = ( profiles_df[data_cols] .loc[profile_ids] .rename(columns=new_cols) .to_dict("records") ) where data_cols and new_cols are two lists of strings, and profiles_df is a dataframe with mostly string data. However, when I run in it in a pod, using the same python and pandas versions, I get it run in 0.1s approx. The pod has still ample secondary memory (a few GBs) and never reaches its limit, nor does it reach the CPU limits (1 out of 1.5) Is there a way to optimize the above code? What could be causing this difference in performance? ### Response: df.rename() introduces some overhead, but you can skip that step by constructing the result directly with the new column names: profile_data = [{new_cols[col]: profiles_df.loc[ix, col] for col in new_cols} for ix in profile_ids] I do not know the answer to your second question.
When I run locally, in my laptop (using python 3.10 and pandas 1.3.5), the following code, I get 0.031s approximately (ball parking it): profile_data = ( profiles_df[data_cols] .loc[profile_ids] .rename(columns=new_cols) .to_dict("records") ) where data_cols and new_cols are two lists of strings, and profiles_df is a dataframe with mostly string data. However, when I run in it in a pod, using the same python and pandas versions, I get it run in 0.1s approx. The pod has still ample secondary memory (a few GBs) and never reaches its limit, nor does it reach the CPU limits (1 out of 1.5) Is there a way to optimize the above code? What could be causing this difference in performance?
I think you were on the right path using 0 (and presumably 1) to access elements of the S3Catalogs array in your values.yaml file. The following syntax works for me: deployment.yaml: - name: S3Catalogs__Catalogs__CatalogName_0 value: "{{ index .Values.S3Catalogs.Catalogs 0 "CatalogName" }}" - name: S3Catalogs__Catalogs__Url_0 value: "{{ index .Values.S3Catalogs.Catalogs 0 "Url" }}" - name: S3Catalogs__Catalogs__CatalogName_1 value: "{{ index .Values.S3Catalogs.Catalogs 1 "CatalogName" }}" - name: S3Catalogs__Catalogs__Url_1 value: "{{ index .Values.S3Catalogs.Catalogs 1 "Url" }}" values.yaml: S3Catalogs: Catalogs: - CatalogName: BotVoice Url: "http://sandbox5.foo.com" - CatalogName: WrongVoice Url: "http://sandbox5.bar.com" When I do a helm template, following is the result: - name: S3Catalogs__Catalogs__CatalogName_0 value: "BotVoice" - name: S3Catalogs__Catalogs__Url_0 value: "http://sandbox5.foo.com" - name: S3Catalogs__Catalogs__CatalogName_1 value: "WrongVoice" - name: S3Catalogs__Catalogs__Url_1 value: "http://sandbox5.bar.com" Is this something that'll work out for you?
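If you would rather not hard-code the 0/1 indexes in the template, a small variation of the same idea is to loop over the array with range. This is only a sketch, and the exact environment-variable naming (here the CatalogName_N/Url_N pattern from above) should match whatever your application actually expects:
{{- range $i, $catalog := .Values.S3Catalogs.Catalogs }}
- name: S3Catalogs__Catalogs__CatalogName_{{ $i }}
  value: {{ $catalog.CatalogName | quote }}
- name: S3Catalogs__Catalogs__Url_{{ $i }}
  value: {{ $catalog.Url | quote }}
{{- end }}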
###Instruction: values.yaml S3Catalogs: Catalogs: - CatalogName: BotVoice Url: "http://sandbox5.*.com" - CatalogName: WrongVoice Url: "http://sandbox5.*.com" the structure specified above needs to be filled in somehow in deployment.yaml tried different variants, with brackets [0] does not accept. - name: S3Catalogs__Catalogs value: "{{ .Values.S3Catalogs.Catalogs }}" - name: S3Catalogs__Catalogs__Url value: "{{ .Values.S3Catalogs.Catalogs.Url}}" that's not how it works ### Response: I think you were on the right path using 0 (and presumably 1) to access elements of the S3Catalogs array in your values.yaml file. The following syntax works for me: deployment.yaml: - name: S3Catalogs__Catalogs__CatalogName_0 value: "{{ index .Values.S3Catalogs.Catalogs 0 "CatalogName" }}" - name: S3Catalogs__Catalogs__Url_0 value: "{{ index .Values.S3Catalogs.Catalogs 0 "Url" }}" - name: S3Catalogs__Catalogs__CatalogName_1 value: "{{ index .Values.S3Catalogs.Catalogs 1 "CatalogName" }}" - name: S3Catalogs__Catalogs__Url_1 value: "{{ index .Values.S3Catalogs.Catalogs 1 "Url" }}" values.yaml: S3Catalogs: Catalogs: - CatalogName: BotVoice Url: "http://sandbox5.foo.com" - CatalogName: WrongVoice Url: "http://sandbox5.bar.com" When I do a helm template, following is the result: - name: S3Catalogs__Catalogs__CatalogName_0 value: "BotVoice" - name: S3Catalogs__Catalogs__Url_0 value: "http://sandbox5.foo.com" - name: S3Catalogs__Catalogs__CatalogName_1 value: "WrongVoice" - name: S3Catalogs__Catalogs__Url_1 value: "http://sandbox5.bar.com" Is this something that'll work out for you?
values.yaml S3Catalogs: Catalogs: - CatalogName: BotVoice Url: "http://sandbox5.*.com" - CatalogName: WrongVoice Url: "http://sandbox5.*.com" the structure specified above needs to be filled in somehow in deployment.yaml tried different variants, with brackets [0] does not accept. - name: S3Catalogs__Catalogs value: "{{ .Values.S3Catalogs.Catalogs }}" - name: S3Catalogs__Catalogs__Url value: "{{ .Values.S3Catalogs.Catalogs.Url}}" that's not how it works
Change your bash script to this: #!/bin/bash cat test.txt | xargs -I {} bash -c {} Works like a treat: {"apiVersion":"v1","data":{"api-key":"abc123","database-url":"postgresql://localhost/mydatabase"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config","namespace":"default"}} name: example-config {"apiVersion":"v1","data":{"api-key":"abc123","database-url":"postgresql://localhost/mydatabase"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config2","namespace":"default"}} name: example-config2 {"apiVersion":"v1","data":{"api-key":"abc123","database-url":"postgresql://localhost/mydatabase"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config3","namespace":"default"}} name: example-config3 Got the idea from here. I wasn't sure if it'd work, but it did :) Hope it helps.
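If you prefer to keep the original while-read loop instead of xargs, a minimal sketch is to let a subshell re-parse each line so pipes and quotes survive; this assumes every line in test.txt is already a complete command, so no extra kubectl prefix is added:
#!/bin/bash
in_file=test.txt
while IFS= read -r test_case; do
  [ -z "$test_case" ] && continue   # skip blank lines
  bash -c "$test_case"              # re-parse the line so | and quotes work
done < "$in_file"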
###Instruction: I have a test.txt file that has bunch of kubernetes commands. It looks like: kubectl get cm -o yaml | grep "test" kubectl get pods kubectl describe pod xxx . . bash script read the test.txt and loop thru each command should executes. However, when I run the script, it gives the following error. Any ideas how to fix this? Error from server (NotFound): configmaps "|" not found Error from server (NotFound): configmaps "grep" not found Error from server (NotFound): configmaps """ not found Here is the script: in_file=test.txt # Loop over each line while read -r test_case; do "kubectl -n ${namespace} ${test_case}" done < "$in_file" ### Response: Change your bash script to this: #!/bin/bash cat test.txt | xargs -I {} bash -c {} Works like a treat: {"apiVersion":"v1","data":{"api-key":"abc123","database-url":"postgresql://localhost/mydatabase"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config","namespace":"default"}} name: example-config {"apiVersion":"v1","data":{"api-key":"abc123","database-url":"postgresql://localhost/mydatabase"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config2","namespace":"default"}} name: example-config2 {"apiVersion":"v1","data":{"api-key":"abc123","database-url":"postgresql://localhost/mydatabase"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"example-config3","namespace":"default"}} name: example-config3 Got the idea from here. I wasn't sure if it'd work, but it did :) Hope it helps.
I have a test.txt file that has bunch of kubernetes commands. It looks like: kubectl get cm -o yaml | grep "test" kubectl get pods kubectl describe pod xxx . . bash script read the test.txt and loop thru each command should executes. However, when I run the script, it gives the following error. Any ideas how to fix this? Error from server (NotFound): configmaps "|" not found Error from server (NotFound): configmaps "grep" not found Error from server (NotFound): configmaps """ not found Here is the script: in_file=test.txt # Loop over each line while read -r test_case; do "kubectl -n ${namespace} ${test_case}" done < "$in_file"
You need to set the flag in the config-features ConfigMap. That ConfigMap should already have one key, named _example. You'll need to add your key above or below the example. The _example key is ignored by Knative, but provides a place for documentation for system administrators.
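As a sketch, the change could be applied with a patch like the one below; the flag name is taken from the serving-core.yml example quoted in the question, so double-check the key against your Knative version before applying:
kubectl patch configmap config-features -n knative-serving \
  --type merge \
  -p '{"data":{"kubernetes.podspec-persistent-volume-claim":"enabled"}}'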
###Instruction: I am trying to deploy service on eks with knative installed on eks, here eks configured with fargate-profile. I want to mount efs to my service for that i have created StorageClass,PersistentVolume and PersistentVolumeClaim below are the respective yml storageclass.yml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: my-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap fileSystemId: fs-1234 directoryPerms: "775" reclaimPolicy: Retain efs-pv.yml kind: PersistentVolume metadata: name: my-pv spec: capacity: storage: 100Gi volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: my-sc csi: driver: efs.csi.aws.com volumeHandle: fs-1234 efs-pvc.yml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-pvc spec: accessModes: - ReadWriteMany storageClassName: my-sc resources: requests: storage: 100Gi all the sc, pv and pvc get created successfully Now I am trying to use my-pvc in my Service my-service.yml like below apiVersion: serving.knative.dev/v1 kind: Service metadata: name: myservice annotations: serving.knative.dev/efs-storage-class: my-sc spec: template: spec: containers: - env: - name: key value: val image: image:latest ports: - containerPort: 8080 volumeMounts: - mountPath: /app/data name: data readOnly: true # warning were given for not setting readOnly=true volumes: - name: data persistentVolumeClaim: claimName: my-pvc readOnly: true metadata: annotations: autoscaling.knative.dev/metric: concurrency autoscaling.knative.dev/target: '1' autoscaling.knative.dev/minScale: '1' autoscaling.knative.dev/maxScale: '5' autoscaling.knative.dev/scaleDownDelay: 60s autoscaling.knative.dev/window: 600s when I try to run following command kubectl apply -f my-service.yml it gives validation error, here the complete error Error from server (BadRequest): error when creating "my-service.yml": admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: Persistent volume claim support is disabled, but found persistent volume claim my-pvc: must not set the field(s): spec.template.spec.volumes[0].persistentVolumeClaim Error simply means PVC is disabled, in order to use PVC I need to enable PVC. But How and where(in configMap?)? in In serving-core.yml i found kubernetes.podspec-persistent-volume-claim: "Disabled" which is example. So I have tried to add configMap (Just guess) with kubernetes.podspec-persistent-volume-claim like below: apiVersion: v1 kind: ConfigMap metadata: namespace: knative-serving name: kubernetes.podspec-persistent-volume-claim data: kubernetes.podspec-persistent-volume-claim: enabled But nothing changed only configmap got created, Any help will be appreciated ### Response: You need to set the flag in the config-features ConfigMap. That ConfigMap should already have one key, named _example. You'll need to add your key above or below the example. The _example key is ignored by Knative, but provides a place for documentation for system administrators.
I am trying to deploy service on eks with knative installed on eks, here eks configured with fargate-profile. I want to mount efs to my service for that i have created StorageClass,PersistentVolume and PersistentVolumeClaim below are the respective yml storageclass.yml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: my-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap fileSystemId: fs-1234 directoryPerms: "775" reclaimPolicy: Retain efs-pv.yml kind: PersistentVolume metadata: name: my-pv spec: capacity: storage: 100Gi volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: my-sc csi: driver: efs.csi.aws.com volumeHandle: fs-1234 efs-pvc.yml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-pvc spec: accessModes: - ReadWriteMany storageClassName: my-sc resources: requests: storage: 100Gi all the sc, pv and pvc get created successfully Now I am trying to use my-pvc in my Service my-service.yml like below apiVersion: serving.knative.dev/v1 kind: Service metadata: name: myservice annotations: serving.knative.dev/efs-storage-class: my-sc spec: template: spec: containers: - env: - name: key value: val image: image:latest ports: - containerPort: 8080 volumeMounts: - mountPath: /app/data name: data readOnly: true # warning were given for not setting readOnly=true volumes: - name: data persistentVolumeClaim: claimName: my-pvc readOnly: true metadata: annotations: autoscaling.knative.dev/metric: concurrency autoscaling.knative.dev/target: '1' autoscaling.knative.dev/minScale: '1' autoscaling.knative.dev/maxScale: '5' autoscaling.knative.dev/scaleDownDelay: 60s autoscaling.knative.dev/window: 600s when I try to run following command kubectl apply -f my-service.yml it gives validation error, here the complete error Error from server (BadRequest): error when creating "my-service.yml": admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: Persistent volume claim support is disabled, but found persistent volume claim my-pvc: must not set the field(s): spec.template.spec.volumes[0].persistentVolumeClaim Error simply means PVC is disabled, in order to use PVC I need to enable PVC. But How and where(in configMap?)? in In serving-core.yml i found kubernetes.podspec-persistent-volume-claim: "Disabled" which is example. So I have tried to add configMap (Just guess) with kubernetes.podspec-persistent-volume-claim like below: apiVersion: v1 kind: ConfigMap metadata: namespace: knative-serving name: kubernetes.podspec-persistent-volume-claim data: kubernetes.podspec-persistent-volume-claim: enabled But nothing changed only configmap got created, Any help will be appreciated
k3s uses traefik, not nginx, so those annotations aren't doing anything... The referenced blog assumes you are using nginx instead. Restart your k3s cluster, but provide the --disable traefik option (--no-deploy-traefik on older k3s releases), and install the nginx ingress controller. Otherwise, you will need to refer to the Traefik ingress docs for the matching annotations it uses for SSL passthrough. Keep in mind that Kafka is not an HTTP/S service, so you should not be using ports 80/443 to communicate with it.
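A rough sketch of installing the nginx ingress controller with SSL passthrough switched on (it is off by default); the chart values here are an assumption, so check them against the ingress-nginx chart version you use:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.extraArgs.enable-ssl-passthrough=true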
###Instruction: I have a local k3s Kubernetes cluster created by multipass. I am trying to set up Kafka using Ingress way based on this tutorial to make it accessible for clients running outside of Kubernetes. Here are my steps: First get my cluster IP by ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION west-master Ready control-plane,master 15m v1.26.3+k3s1 ➜ kubectl get node west-master -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME west-master Ready control-plane,master 16m v1.26.3+k3s1 192.168.205.5 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic containerd://1.6.19-k3s1 ➜ kubectl cluster-info Kubernetes control plane is running at https://192.168.205.5:6443 CoreDNS is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy Metrics-server is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy which is 192.168.205.5 (Is this correct IP address for the one I used in my-kafka-persistent.yaml I used in next step?). Then I deploy my Kafka by: kubectl create namespace hm-kafka kubectl apply --filename="https://strimzi.io/install/latest?namespace=hm-kafka" --namespace=hm-kafka kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka my-kafka-persistent.yaml (based on kafka-persistent.yaml): --- apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: hm-kafka spec: kafka: version: 3.4.0 replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: ingress tls: true configuration: bootstrap: host: kafka-bootstrap.192.168.205.5.nip.io brokers: - broker: 0 host: kafka-broker-0.192.168.205.5.nip.io - broker: 1 host: kafka-broker-1.192.168.205.5.nip.io - broker: 2 host: kafka-broker-2.192.168.205.5.nip.io config: offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.4" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 3 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} After deployment: Pods Services Ingresses Also, for each Ingress, I can see SSL passthrough in the annotations: Then I succeed following the tutorial creating the truststore. 
➜ kubectl get secret hm-kafka-cluster-ca-cert \ --namespace=hm-kafka \ --output=jsonpath="{.data.ca\.crt}" \ | base64 -d \ > ca.crt ➜ keytool -importcert \ -trustcacerts \ -alias root \ -file ca.crt \ -keystore kafka-truststore.jks \ -storepass my_passw0rd \ -noprompt Certificate was added to keystore However, when I try to produce data, I met this issue: ➜ kafka-console-producer \ --broker-list kafka-bootstrap.192.168.205.5.nip.io:443 \ --producer-property security.protocol=SSL \ --producer-property ssl.truststore.password=my_passw0rd \ --producer-property ssl.truststore.location=kafka-truststore.jks \ --topic my-topic >[2023-04-14 15:57:06,047] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,047] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,200] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,201] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,691] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,691] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) Any guide would be appreciate, thanks! UPDATE 1 Thanks @OneCricketeer pointing the issue! As I am using multipass on macOS, I can provide INSTALL_K3S_EXEC="server --disable traefik", so the updated command to create k3s cluster is: multipass launch --name=west-master --cpus=4 --memory=16g --disk=128g multipass exec west-master -- \ bash -c 'curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" K3S_KUBECONFIG_MODE="644" sh -' I actually switched to Rancher Desktop as it is also using k3s and easy to disable Traefik which can be set up in the UI. UPDATE 2 Regarding how to deploy ingress-nginx and how to resolve another issue "ingress does not contain a valid IngressClass" I met, I posted at Strimzi Kafka brokers not be created because of "ingress does not contain a valid IngressClass" ### Response: k3s uses traefik, not nginx, so those annotations aren't doing anything... The referenced blog assumes you are using nginx instead Restart your k3s cluster, but provide --no-deploy-traefik option, and install nginx ingress controller Otherwise, you will need to refer to Traefik ingress docs on what matching annotations it will use for SSL passthrough. Keeping in mind, Kafka is not an HTTP/S service, so you should not be using ports 80/443 to communicate with it.
I have a local k3s Kubernetes cluster created by multipass. I am trying to set up Kafka using Ingress way based on this tutorial to make it accessible for clients running outside of Kubernetes. Here are my steps: First get my cluster IP by ➜ kubectl get nodes NAME STATUS ROLES AGE VERSION west-master Ready control-plane,master 15m v1.26.3+k3s1 ➜ kubectl get node west-master -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME west-master Ready control-plane,master 16m v1.26.3+k3s1 192.168.205.5 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic containerd://1.6.19-k3s1 ➜ kubectl cluster-info Kubernetes control plane is running at https://192.168.205.5:6443 CoreDNS is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy Metrics-server is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy which is 192.168.205.5 (Is this correct IP address for the one I used in my-kafka-persistent.yaml I used in next step?). Then I deploy my Kafka by: kubectl create namespace hm-kafka kubectl apply --filename="https://strimzi.io/install/latest?namespace=hm-kafka" --namespace=hm-kafka kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka my-kafka-persistent.yaml (based on kafka-persistent.yaml): --- apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: hm-kafka spec: kafka: version: 3.4.0 replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: ingress tls: true configuration: bootstrap: host: kafka-bootstrap.192.168.205.5.nip.io brokers: - broker: 0 host: kafka-broker-0.192.168.205.5.nip.io - broker: 1 host: kafka-broker-1.192.168.205.5.nip.io - broker: 2 host: kafka-broker-2.192.168.205.5.nip.io config: offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.4" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 3 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} After deployment: Pods Services Ingresses Also, for each Ingress, I can see SSL passthrough in the annotations: Then I succeed following the tutorial creating the truststore. 
➜ kubectl get secret hm-kafka-cluster-ca-cert \ --namespace=hm-kafka \ --output=jsonpath="{.data.ca\.crt}" \ | base64 -d \ > ca.crt ➜ keytool -importcert \ -trustcacerts \ -alias root \ -file ca.crt \ -keystore kafka-truststore.jks \ -storepass my_passw0rd \ -noprompt Certificate was added to keystore However, when I try to produce data, I met this issue: ➜ kafka-console-producer \ --broker-list kafka-bootstrap.192.168.205.5.nip.io:443 \ --producer-property security.protocol=SSL \ --producer-property ssl.truststore.password=my_passw0rd \ --producer-property ssl.truststore.location=kafka-truststore.jks \ --topic my-topic >[2023-04-14 15:57:06,047] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,047] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,200] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,201] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,691] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,691] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) Any guide would be appreciate, thanks! UPDATE 1 Thanks @OneCricketeer pointing the issue! As I am using multipass on macOS, I can provide INSTALL_K3S_EXEC="server --disable traefik", so the updated command to create k3s cluster is: multipass launch --name=west-master --cpus=4 --memory=16g --disk=128g multipass exec west-master -- \ bash -c 'curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" K3S_KUBECONFIG_MODE="644" sh -' I actually switched to Rancher Desktop as it is also using k3s and easy to disable Traefik which can be set up in the UI. UPDATE 2 Regarding how to deploy ingress-nginx and how to resolve another issue "ingress does not contain a valid IngressClass" I met, I posted at Strimzi Kafka brokers not be created because of "ingress does not contain a valid IngressClass"
To allow clients to access the individual brokers, Strimzi needs to create one type: LoadBalancer service for every broker. In addition, it creates one more service targeting all brokers for bootstrapping (that can be disabled). This normally results in Kubernetes provisioning one load balancer per service -> so there is no need to do any complicated hacks with port numbers. If your environment does not support load balancers like this, maybe the best thing is to try one of the other mechanisms for exposing the Kafka cluster. For example NodePorts or Kubernetes Nginx Ingress. You can also do this your self if you want: Configure a regular internal listener instead of the type: loadbalancer one Use the advertisedHost and advertisedPort options to configure the hostname and port you want to use for each broker Create your own services with any configuration (e.g. port) you want (make sure the services have some custom name and do not have the Strimzi labels to avoid having them deleted or overwritten by Strimzi)
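A rough sketch of that do-it-yourself option (hostnames and ports below are placeholders, not values from the question): use a type: internal listener and advertise the addresses your own Services will expose:
listeners:
  - name: external
    port: 9094
    type: internal
    tls: false
    configuration:
      brokers:
        - broker: 0
          advertisedHost: broker-0.example.com
          advertisedPort: 9095
        - broker: 1
          advertisedHost: broker-1.example.com
          advertisedPort: 9096
        - broker: 2
          advertisedHost: broker-2.example.com
          advertisedPort: 9097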
###Instruction: I have a local k3s Kubernetes cluster created by Rancher Desktop. I am trying to set up Kafka using load balancer based on this tutorial to make it accessible for clients running outside of Kubernetes. Here are my steps: kubectl create namespace hm-kafka kubectl apply --filename="https://strimzi.io/install/latest?namespace=hm-kafka" --namespace=hm-kafka kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka my-kafka-persistent.yaml (based on kafka-persistent.yaml): apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: hm-kafka spec: kafka: version: 3.4.0 replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: loadbalancer tls: false configuration: brokers: - broker: 0 advertisedPort: 9095 - broker: 1 advertisedPort: 9096 - broker: 2 advertisedPort: 9097 config: offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.4" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 3 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} When I deploy, I got this issue Pods: Services: This is because advertisedPort option in my YAML file doesn’t really change the port used in the load balancer itself. It changes only the port number used in the advertised.listeners Kafka broker configuration parameter. And they all try to use the port 9094. I am wondering is there a way I can change load balancer port so that bootstrap and brokers have different ports? Will it help make Kafka work? Thanks! Also, I read another similar question and its answer, does it mean which port to use depends on k3s Kubernetes cluster created by Rancher Desktop and we have no control? Thanks! ### Response: To allow clients to access the individual brokers, Strimzi needs to create one type: LoadBalancer service for every broker. In addition, it creates one more service targeting all brokers for bootstrapping (that can be disabled). This normally results in Kubernetes provisioning one load balancer per service -> so there is no need to do any complicated hacks with port numbers. If your environment does not support load balancers like this, maybe the best thing is to try one of the other mechanisms for exposing the Kafka cluster. For example NodePorts or Kubernetes Nginx Ingress. You can also do this your self if you want: Configure a regular internal listener instead of the type: loadbalancer one Use the advertisedHost and advertisedPort options to configure the hostname and port you want to use for each broker Create your own services with any configuration (e.g. port) you want (make sure the services have some custom name and do not have the Strimzi labels to avoid having them deleted or overwritten by Strimzi)
I have a local k3s Kubernetes cluster created by Rancher Desktop. I am trying to set up Kafka using load balancer based on this tutorial to make it accessible for clients running outside of Kubernetes. Here are my steps: kubectl create namespace hm-kafka kubectl apply --filename="https://strimzi.io/install/latest?namespace=hm-kafka" --namespace=hm-kafka kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka my-kafka-persistent.yaml (based on kafka-persistent.yaml): apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: hm-kafka spec: kafka: version: 3.4.0 replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: loadbalancer tls: false configuration: brokers: - broker: 0 advertisedPort: 9095 - broker: 1 advertisedPort: 9096 - broker: 2 advertisedPort: 9097 config: offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.4" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 3 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} When I deploy, I got this issue Pods: Services: This is because advertisedPort option in my YAML file doesn’t really change the port used in the load balancer itself. It changes only the port number used in the advertised.listeners Kafka broker configuration parameter. And they all try to use the port 9094. I am wondering is there a way I can change load balancer port so that bootstrap and brokers have different ports? Will it help make Kafka work? Thanks! Also, I read another similar question and its answer, does it mean which port to use depends on k3s Kubernetes cluster created by Rancher Desktop and we have no control? Thanks!
What I'm gonna suggest is along the lines of what Michał has already done, but with a small tweak that might work for you. So, your template can look something like this: {{- if .Values.apply_template.true }} ... {{- end }} and add this to your values.yaml: apply_template: true: true false: false Now, as you mentioned: "to skip those templates that start with "y" and "z"", you can run this simple cmd that would replace true with false in the templates that start with y and z. find . -type f \( -name "y*.yaml" -o -name "z*.yaml" \) -print0 | xargs -0 sed -i '' "s/true/false/g" && helm template . You will still have to change your templates (one time) to add the if statement, but beyond that, you can control which templates you wanna skip going forward. I have only very quickly tested this on a couple of temp templates (and it works fine), but I hope this gives you some headway to achieve what you wanna do. You might have to tweak the cmd a bit given that I'm running it on OSX - in case you are not, I guess you might have to get rid of the single quotes ''.
###Instruction: I'm reading helm documentation, The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes. I have lots of different templates in my template folder, I'm looking for a way to skip those templates that start with "y" and "z" and don't send it to Kubernetes, is there any way I can achieve that? I want to be flexible let's say if statementProvider is x, skip all manifests starting with y and z and do not send it to Kubernetes. I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it: {{- define "statement.resource"}} {{- $statementProvider := lower ( $.Values.statementProvider ) -}} {{- $statementFiles := list -}} {{- range $path, $bytes := .Files.Glob "templates/**" }} {{- if eq $statementProvider "x" -}} {{- if not (or (hasPrefix $path "y") (hasPrefix $path "z")) -}} {{- $statementFiles = append $statementFiles $path -}} {{- end }} {{- $statementFiles -}} {{- end }} {{- end }} {{- end }} ### Response: What I'm gonna suggest is along the lines of what Michał has already done, but with a small tweak that might work for you. So, your template can look something like this: {{- if .Values.apply_template.true }} ... {{- end }} and add this to your values.yaml: apply_template: true: true false: false Now, as you mentioned: "to skip those templates that start with "y" and "z"", you can run this simple cmd that would replace true with false in the templates that start with y and z. find . -type f \( -name "y*.yaml" -o -name "z*.yaml" \) -print0 | xargs -0 sed -i '' "s/true/false/g" | helm template . You will still have to change your templates (one time) to add the if statement, but beyond that, you can control which templates you wanna skip going forward. I have only very quickly tested this on a couple of temp templates (and it works fine), but I hope this gives you some headway to achieve what you wanna do. You might have to tweak the cmd a bit given that I'm running it on OSX - in case you are not (I guess you might have to git rid of the single quotes ''.
I'm reading helm documentation, The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes. I have lots of different templates in my template folder, I'm looking for a way to skip those templates that start with "y" and "z" and don't send it to Kubernetes, is there any way I can achieve that? I want to be flexible let's say if statementProvider is x, skip all manifests starting with y and z and do not send it to Kubernetes. I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it: {{- define "statement.resource"}} {{- $statementProvider := lower ( $.Values.statementProvider ) -}} {{- $statementFiles := list -}} {{- range $path, $bytes := .Files.Glob "templates/**" }} {{- if eq $statementProvider "x" -}} {{- if not (or (hasPrefix $path "y") (hasPrefix $path "z")) -}} {{- $statementFiles = append $statementFiles $path -}} {{- end }} {{- $statementFiles -}} {{- end }} {{- end }} {{- end }}
According to this git issue, the kubernetes_service_account problem has been successfully fixed using this manifest. I found this alternative solution in which the terraform resource kubernetes_manifest is used to manually generate the service accounts along with their secret. Can you try the main.tf file and let me know if this works? For more information, follow this issue.
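Another option, building on the kubernetes_secret_v1 block that is already commented out in your module (this is only a sketch and not tested against that setup): since Kubernetes 1.24 no longer auto-creates the token Secret, create it explicitly instead of relying on default_secret_name:
resource "kubernetes_secret_v1" "sa_token" {
  metadata {
    name      = "${kubernetes_service_account.main[0].metadata[0].name}-token"
    namespace = var.namespace
    annotations = {
      "kubernetes.io/service-account.name" = kubernetes_service_account.main[0].metadata[0].name
    }
  }
  type                           = "kubernetes.io/service-account-token"
  wait_for_service_account_token = true
}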
###Instruction: I'm using Terraform workload-identity module , to create Kubernetes service account in Google Cloud. When i apply the changes, I'm getting below warning. "default_secret_name" is no longer applicable for Kubernetes v1.24.0 and above │ │ with module.app-workload-identity.kubernetes_service_account_v1.main, │ on ../../modules/workload-identity/main.tf line 57, in resource "kubernetes_service_account_v1" "main": │ 57: resource "kubernetes_service_account_v1" "main" { │ │ Starting from version 1.24.0 Kubernetes does not automatically generate a token for service accounts, in this case, "default_secret_name" will be │ empty Workload-Identity main.tf locals { service_account_tmp = var.google_service_account_email== "" ? "projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com" : var.google_service_account_email service_id = "projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com" k8s_sa_gcp_derived_name = "serviceAccount:${var.project_id}.svc.id.goog[${var.namespace}/${local.output_k8s_name}]" gcp_sa_email = var.google_service_account_email # This will cause terraform to block returning outputs until the service account is created k8s_given_name = var.k8s_sa_name != null ? var.k8s_sa_name : var.name output_k8s_name = var.use_existing_k8s_sa ? local.k8s_given_name : kubernetes_service_account.main[0].metadata[0].name output_k8s_namespace = var.use_existing_k8s_sa ? var.namespace : kubernetes_service_account.main[0].metadata[0].namespace } # resource "google_service_account" "cluster_service_account" { # GCP service account ids must be < 30 chars matching regex ^[a-z](?:[-a-z0-9]{4,28}[a-z0-9])$ # KSA do not have this naming restriction. # account_id = substr(var.name, 0, 30) # display_name = substr("GCP SA bound to K8S SA ${local.k8s_given_name}", 0, 100) # project = var.project_id # } resource "kubernetes_namespace" "k8s_namespace" { metadata { name = var.namespace } } # resource "kubernetes_secret_v1" "main" { # metadata { # name = var.name # namespace = var.namespace # annotations = { # "kubernetes.io/service-account.name" = kubernetes_service_account_v1.main.metadata.0.name # "kubernetes.io/service-account.namespace" = kubernetes_service_account_v1.main.metadata.0.namespace # } # generate_name = "${kubernetes_service_account_v1.main.metadata.0.name}-token-" # } # type = "kubernetes.io/service-account-token" # wait_for_service_account_token = true #} resource "kubernetes_service_account" "main" { count = var.use_existing_k8s_sa ? 
0 : 1 metadata { name = var.name namespace = var.namespace annotations = { "iam.gke.io/gcp-service-account" = var.google_service_account_email } } } module "annotate-sa" { source = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper" version = "~> 2.0.2" enabled = var.use_existing_k8s_sa && var.annotate_k8s_sa skip_download = true cluster_name = var.cluster_name cluster_location = var.location project_id = var.project_id kubectl_create_command = "kubectl annotate --overwrite sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account=${local.gcp_sa_email}" kubectl_destroy_command = "kubectl annotate sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account-" } resource "google_service_account_iam_member" "main" { service_account_id = local.service_id role = "roles/iam.workloadIdentityUser" member = local.k8s_sa_gcp_derived_name } As per the this documentation , I have tried to add the resource "kubernetes_secret_v1" to create a service account token. But still getting the same warning message. ### Response: From this git issue kubernetes_service_account issue has been successfully fixed using this manifest. I found this alternative solution where changes are made using the terraform resource kubernetes_manifest to manually generate the service accounts along with their secret. Can you try the main.tf file and let me know if this works. For more information follow this Issue.
I'm using Terraform workload-identity module , to create Kubernetes service account in Google Cloud. When i apply the changes, I'm getting below warning. "default_secret_name" is no longer applicable for Kubernetes v1.24.0 and above │ │ with module.app-workload-identity.kubernetes_service_account_v1.main, │ on ../../modules/workload-identity/main.tf line 57, in resource "kubernetes_service_account_v1" "main": │ 57: resource "kubernetes_service_account_v1" "main" { │ │ Starting from version 1.24.0 Kubernetes does not automatically generate a token for service accounts, in this case, "default_secret_name" will be │ empty Workload-Identity main.tf locals { service_account_tmp = var.google_service_account_email== "" ? "projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com" : var.google_service_account_email service_id = "projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com" k8s_sa_gcp_derived_name = "serviceAccount:${var.project_id}.svc.id.goog[${var.namespace}/${local.output_k8s_name}]" gcp_sa_email = var.google_service_account_email # This will cause terraform to block returning outputs until the service account is created k8s_given_name = var.k8s_sa_name != null ? var.k8s_sa_name : var.name output_k8s_name = var.use_existing_k8s_sa ? local.k8s_given_name : kubernetes_service_account.main[0].metadata[0].name output_k8s_namespace = var.use_existing_k8s_sa ? var.namespace : kubernetes_service_account.main[0].metadata[0].namespace } # resource "google_service_account" "cluster_service_account" { # GCP service account ids must be < 30 chars matching regex ^[a-z](?:[-a-z0-9]{4,28}[a-z0-9])$ # KSA do not have this naming restriction. # account_id = substr(var.name, 0, 30) # display_name = substr("GCP SA bound to K8S SA ${local.k8s_given_name}", 0, 100) # project = var.project_id # } resource "kubernetes_namespace" "k8s_namespace" { metadata { name = var.namespace } } # resource "kubernetes_secret_v1" "main" { # metadata { # name = var.name # namespace = var.namespace # annotations = { # "kubernetes.io/service-account.name" = kubernetes_service_account_v1.main.metadata.0.name # "kubernetes.io/service-account.namespace" = kubernetes_service_account_v1.main.metadata.0.namespace # } # generate_name = "${kubernetes_service_account_v1.main.metadata.0.name}-token-" # } # type = "kubernetes.io/service-account-token" # wait_for_service_account_token = true #} resource "kubernetes_service_account" "main" { count = var.use_existing_k8s_sa ? 
0 : 1 metadata { name = var.name namespace = var.namespace annotations = { "iam.gke.io/gcp-service-account" = var.google_service_account_email } } } module "annotate-sa" { source = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper" version = "~> 2.0.2" enabled = var.use_existing_k8s_sa && var.annotate_k8s_sa skip_download = true cluster_name = var.cluster_name cluster_location = var.location project_id = var.project_id kubectl_create_command = "kubectl annotate --overwrite sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account=${local.gcp_sa_email}" kubectl_destroy_command = "kubectl annotate sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account-" } resource "google_service_account_iam_member" "main" { service_account_id = local.service_id role = "roles/iam.workloadIdentityUser" member = local.k8s_sa_gcp_derived_name } As per the this documentation , I have tried to add the resource "kubernetes_secret_v1" to create a service account token. But still getting the same warning message.
A CORS error is related to the fact that you are issuing a request (fetch, XHR, ...) from a Javascript origin to another service, the destination/backend, and the origin is not allowed to do so according to the headers provided by that destination service. It means that your service, the origin, the one deployed in AKS, is trying to contact another service. In order to avoid the CORS errors, that destination service needs to provide an Access-Control-Allow-Origin header that at least includes the host of the origin service, in your example: Access-Control-Allow-Origin: https://prodazure.thesolarlabs.com Changing an SSL certificate by itself shouldn't be the cause of the CORS error: as mentioned, adapt your destination/backend service's CORS configuration to point to the new host of the origin service, if you changed it, and make sure you are configuring Access-Control-Allow-Origin for the right HTTP scheme, https in this case, as well.
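If the destination service also sits behind an NGINX ingress on AKS, one way to add those headers without touching application code is the controller's CORS annotations on that ingress (a sketch; only applicable if you are actually using ingress-nginx):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://prodazure.thesolarlabs.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, OPTIONS"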
###Instruction: I have updated the SSL certificates on my AKS service. But I am getting CORS error. New certificates are getting reflected though. Can someone provide a solution for it. Here is the TLS part of my ingress file. spec: tls: - hosts: - "prodazure.thesolarlabs.com" secretName: tls-secret ### Response: A CORS error is related to the fact that you are issuing a request (fetch, XHR, ...) from a Javascript origin to another service, the destination/backend, and the origin service is not allowed to according to the headers provided by that destination service. It means that your service, the origin, the one deployed in AKS, is trying contacting another service. In order to avoid the CORS errors, that service needs to provide an Access-Control-Allow-Origin that at least includes the host for the origin service, in your example: Access-Control-Allow-Origin: https://prodazure.thesolarlabs.com Changing a SSL certificate by itself shouldn't be the cause of the CORS error: please, as mentioned, adapt your destination/backend service CORS configuration to point to the new host of the origin service, if you changed it, and be sure that you are configuring Access-Control-Allow-Origin for the right HTTP scheme, https in this case, as well.
I have updated the SSL certificates on my AKS service. But I am getting CORS error. New certificates are getting reflected though. Can someone provide a solution for it. Here is the TLS part of my ingress file. spec: tls: - hosts: - "prodazure.thesolarlabs.com" secretName: tls-secret
I faced a similar issue: I have a wildcard certificate from Certificate Manager and wanted to use it for multiple k8s services for ingress with tls (in my case all services were grpc, but it doesn't matter, it would work the same for https). First of all, GKE does not support Certificate Manager (at least not yet). Details here This is a workaround that was suggested in the GitHub issue and worked for me: Create a placeholder ManagedCertificate object: apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: labels: < set labels here as needed> name: placeholder-certificate spec: domains: - foo.example.com Ingress: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: networking.gke.io/managed-certificates: "placeholder-certificate" # connecting to dummy ssl, switched via deployment scripts to wildcard ssl kubernetes.io/ingress.class: "gce" spec: defaultBackend: service: name: some-service port: number: XXXX This will make GKE set up a target HTTPS proxy, which can then be updated to point to a certificate map containing your certificate: kubectl describe ingress XXXX (you will need to wait for it to become available; it takes a little time for everything to launch) find the annotation for ingress.kubernetes.io/https-target-proxy update the target proxy: gcloud compute target-https-proxies update <target proxy> --certificate-map=<map name>
###Instruction: I have already a google managed SSL certificate created (with dns verification option). I want to use same certificate in my istio-ingress for SSL. Is there any possible annotations available ? We can create ManagedCertificate resource in GKE, but it is uses the loadbalancer verification option which does not support wildcard certificate. What to do if I want to create certificate like (*.example.com) and attached it with istio-ingress or gke ingress ? ### Response: I faced a similar issue: have a wildcard certificate from Certificate Manager wanted to use it for multiple k8 services for ingress with tls (in my case all services were grpc, but it doesn't matter would work the same for https) First of all, GKE does not support Certificate Manager (at least not yet). Details here This is a workaround that was suggested in the GitHub issue and worked for me: Create a placeholder ManagedCertificate object: apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: labels: < set labels here as needed> name: placeholder-certificate spec: domains: - foo.example.com Ingress: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: networking.gke.io/managed-certificates: "placeholder-certificate" # connecting to dummy ssl, switched via deployment scripts to wildcard ssl kubernetes.io/ingress.class: "gce" spec: defaultBackend: service: name: some-service port: number: XXXX This will make GKE setup target https proxy, which can be updated to point to a certificate map of your certificate: kubectl describe ingress XXXX (you will need to wait for it become available, takes little time for everything to launch) find the annotation for ingress.kubernetes.io/https-target-proxy update the target proxy: gcloud compute target-https-proxies update <target proxy> --certificate-map=<map name>
I have already a google managed SSL certificate created (with dns verification option). I want to use same certificate in my istio-ingress for SSL. Is there any possible annotations available ? We can create ManagedCertificate resource in GKE, but it is uses the loadbalancer verification option which does not support wildcard certificate. What to do if I want to create certificate like (*.example.com) and attached it with istio-ingress or gke ingress ?
If the container is not assigned a PVC, it writes to the node's ephemeral storage by default. To ensure that the container does not use more than the allocated storage, set the ephemeral-storage limit field spec.containers[].resources.limits.ephemeral-storage. The example below is taken from this official kubernetes doc; you can set limits on your containers in the same way. containers: - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: "2Gi" limits: ephemeral-storage: "4Gi" As @larks suggested, this document contains a detailed explanation and more methods which will help you, and you can also use resource quotas for storage to set limits. For more information you can also refer to the Resource Management for Pods and Containers doc.
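To answer the question's second part -- inspecting how much storage is already used -- one possible approach (a sketch; the pod and node names are placeholders) is to check disk usage from inside the container and to query the kubelet's stats summary, which reports per-pod ephemeral-storage usage:

# disk usage as seen from inside the container (placeholder pod name)
kubectl exec -it redis-0 -- df -h /

# per-pod ephemeral-storage usage reported by the kubelet (placeholder node name)
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" \
  | jq '.pods[] | {pod: .podRef.name, storage: .["ephemeral-storage"]}'

The same summary's node section also reports image filesystem usage, which can help confirm or rule out the theory that image layers are what is filling the disk.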
###Instruction: I am familiar with Kubernetes documentation that describes how to setup limits for PVC. However, what if the container is not assigned PVC? Suppose a Kubernetes container that simply defines: - image: 'redis:7' name: redis ... I keep writing data to this Redis instance. How do I set a quota to ensure that the container does not use more than allocated storage? How to inspect how much storage is already used? I have tried setting ResourceQuota for ephemeral resources such as: apiVersion: v1 kind: ResourceQuota metadata: labels: # {{ include "app.resource_labels" . | indent 4 }} name: '{{ .Release.Name }}' spec: hard: configmaps: 10 limits.cpu: 4 limits.ephemeral-storage: 1Gi limits.memory: 10Gi pods: 30 secrets: 5 services: 20 However, when inspecting quota, it always says 0 for ephemeral-storage. kubectl describe quota Name: gaia-review-contra-resource-quota-c79e5b3c Namespace: gaia-review-c79e5b3c Resource Used Hard -------- ---- ---- configmaps 2 10 limits.cpu 21 4 limits.ephemeral-storage 0 1Gi limits.memory 25576Mi 10Gi pods 16 30 secrets 4 5 services 8 20 Therefore, I suspect that something else is not working as it should or I am looking at the wrong place. Meanwhile, the VMs that are running these pods is experiencing disk pressure. My next best theory is that it is actually the Docker image layers that are filling the disk space, but I am unsure how to confirm that or why resources are not being freed. ### Response: If the container is not assigned a PVC, it writes to the node's ephemeral storage by default. To ensure that the container does not use more than the allocated storage, set the ephemeral-storage limit field spec.containers[].resources.limits.ephemeral-storage. The example below is taken from this official kubernetes doc; you can set limits on your containers in the same way. containers: - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: "2Gi" limits: ephemeral-storage: "4Gi" As @larks suggested, this document contains a detailed explanation and more methods which will help you, and you can also use resource quotas for storage to set limits. For more information you can also refer to the Resource Management for Pods and Containers doc.
I am familiar with Kubernetes documentation that describes how to setup limits for PVC. However, what if the container is not assigned PVC? Suppose a Kubernetes container that simply defines: - image: 'redis:7' name: redis ... I keep writing data to this Redis instance. How do I set a quota to ensure that the container does not use more than allocated storage? How to inspect how much storage is already used? I have tried setting ResourceQuota for ephemeral resources such as: apiVersion: v1 kind: ResourceQuota metadata: labels: # {{ include "app.resource_labels" . | indent 4 }} name: '{{ .Release.Name }}' spec: hard: configmaps: 10 limits.cpu: 4 limits.ephemeral-storage: 1Gi limits.memory: 10Gi pods: 30 secrets: 5 services: 20 However, when inspecting quota, it always says 0 for ephemeral-storage. kubectl describe quota Name: gaia-review-contra-resource-quota-c79e5b3c Namespace: gaia-review-c79e5b3c Resource Used Hard -------- ---- ---- configmaps 2 10 limits.cpu 21 4 limits.ephemeral-storage 0 1Gi limits.memory 25576Mi 10Gi pods 16 30 secrets 4 5 services 8 20 Therefore, I suspect that something else is not working as it should or I am looking at the wrong place. Meanwhile, the VMs that are running these pods is experiencing disk pressure. My next best theory is that it is actually the Docker image layers that are filling the disk space, but I am unsure how to confirm that or why resources are not being freed.
Install the AWS Load Balancer Controller and use it to create an NLB for your NGINX ingress service. NLB is more performant than the Classic Load Balancer.
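As a sketch of what that looks like in practice (assuming ingress-nginx is exposed through a Service of type LoadBalancer; the names and selector below are typical ingress-nginx defaults, adjust to your install), the AWS Load Balancer Controller provisions an NLB when the Service carries annotations like these:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

With the "ip" target type the NLB forwards straight to the pod IPs in your private subnets, which removes the extra node-port hop and generally performs better than a Classic Load Balancer.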
###Instruction: EKS Slowness: All, I have an EKS cluster running and I am using nginx Ingress controller with Classic LB. However, while accessing the LB, it is too dead slow ( Not all the requests, but only a couple of requests ). I have the LB in the public subnet and my EC2 Instances are in the private subnet. And my public subnets also have the elb tag as well. Is there anything that I am missing? ### Response: Install the AWS Load Balancer Controller and use it to create an NLB for your NGINX ingress service. NLB is more performant than the Classic Load Balancer.
EKS Slowness: All, I have an EKS cluster running and I am using nginx Ingress controller with Classic LB. However, while accessing the LB, it is too dead slow ( Not all the requests, but only a couple of requests ). I have the LB in the public subnet and my EC2 Instances are in the private subnet. And my public subnets also have the elb tag as well. Is there anything that I am missing?
I don't think it is possible to get a consolidated list of all permissions that you have in a k8s cluster. Looking at the help & examples for the kubectl auth can-i command it needs a VERB which is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. and a TYPE which is a Kubernetes resource. The result of kubectl auth can-i '*' '*' is understandably no as it checks if you can do everything in your current namespace - which evidently is not true in your case and thus the no response. You will need to run kubectl auth can-i for each resource and each action that you want to check (of course you would be able to use -A or --all-namespaces flag for all namespaces) but I would imagine you can write a script to run a loop on this command for all/some resources to check permissions for all/some actions. Update: If you'd really want to, you could run a script to loop through certain kubectl cmds to achieve this. The script would (could) look like: #!/bin/bash for namespace in $(kubectl get namespaces | awk '{ print $1 }' | tail -n +2); do echo "Current namespace: $namespace" for resource in $(kubectl api-resources --verbs=list --namespaced -o name); do for action in get list watch create update patch delete; do echo "- $action $resource" kubectl auth can-i "$action" "$resource" --namespace="$namespace" done done done Note: There could be variations of how you do this in bash. I'll explain a bit: Firstly, we're fetching all the namespaces, printing only the 1st column, and -n +2 basically means “start passing through on the second line of output”. Next, we're fetching most (if not all) resource types available in your cluster. Picked up from here. Iterating through a few common actions supported in K8s. Found most (if not all) on this page. And then just run kubectl auth can-i cmd on each resource, action, and namespace we got in the earlier commands. While it works, it takes a lot of time as we have a lot of namespaces and you'd have to go through a lot of output to find something or you would end up redirecting the output to a file and searching for fields. Realistically, you would wanna run it only on a few namespaces at a time to check on a few actions on a few resources. Something like this: #!/bin/bash for namespace in default calico-system; do echo "Current namespace: $namespace" for resource in pods deployments.apps; do for action in get create; do echo "- $action $resource" kubectl auth can-i "$action" "$resource" --namespace="$namespace" done done done It is a lot faster and would help you easily find what you're looking for. Sample output: Current namespace: default - get pods yes - create pods no - get deployments.apps yes - create deployments.apps no Current namespace: calico-system - get pods yes - create pods no - get deployments.apps yes - create deployments.apps no Hope it helps!
###Instruction: I am able to list, get pods, secrets, deployments and many other resources. I have cluster wide access for many resources. Is there some way to see the consolidated list of all permissions that I have in a kubernetes cluster? The result of kubectl auth can-i '*' '*' is no though. ### Response: I don't think it is possible to get a consolidated list of all permissions that you have in a k8s cluster. Looking at the help & examples for the kubectl auth can-i command it needs a VERB which is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. and a TYPE which is a Kubernetes resource. The result of kubectl auth can-i '*' '*' is understandably no as it checks if you can do everything in your current namespace - which evidently is not true in your case and thus the no response. You will need to run kubectl auth can-i for each resource and each action that you want to check (of course you would be able to use -A or --all-namespaces flag for all namespaces) but I would imagine you can write a script to run a loop on this command for all/some resources to check permissions for all/some actions. Update: If you'd really want to, you could run a script to loop through certain kubectl cmds to achieve this. The script would (could) look like: #!/bin/bash for namespace in $(kubectl get namespaces | awk '{ print $1 }' | tail -n +2); do echo "Current namespace: $namespace" for resource in $(kubectl api-resources --verbs=list --namespaced -o name); do for action in get list watch create update patch delete; do echo "- $action $resource" kubectl auth can-i "$action" "$resource" --namespace="$namespace" done done done Note: There could be variations of how you do this in bash. I'll explain a bit: Firstly, we're fetching all the namespaces, printing only the 1st column, and -n +2 basically means “start passing through on the second line of output”. Next, we're fetching most (if not all) resource types available in your cluster. Picked up from here. Iterating through a few common actions supported in K8s. Found most (if not all) on this page. And then just run kubectl auth can-i cmd on each resource, action, and namespace we got in the earlier commands. While it works, it takes a lot of time as we have a lot of namespaces and you'd have to go through a lot of output to find something or you would end up redirecting the output to a file and searching for fields. Realistically, you would wanna run it only on a few namespaces at a time to check on a few actions on a few resources. Something like this: #!/bin/bash for namespace in default calico-system; do echo "Current namespace: $namespace" for resource in pods deployments.apps; do for action in get create; do echo "- $action $resource" kubectl auth can-i "$action" "$resource" --namespace="$namespace" done done done It is a lot faster and would help you easily find what you're looking for. Sample output: Current namespace: default - get pods yes - create pods no - get deployments.apps yes - create deployments.apps no Current namespace: calico-system - get pods yes - create pods no - get deployments.apps yes - create deployments.apps no Hope it helps!
I am able to list, get pods, secrets, deployments and many other resources. I have cluster wide access for many resources. Is there some way to see the consolidated list of all permissions that I have in a kubernetes cluster? The result of kubectl auth can-i '*' '*' is no though.
Solved: I just forced the Dockerfile to use dask version 2023.1.0, which fixed the problem by matching the operator's dask version.
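For reference, a sketch of how the question's Dockerfile could pin the client to the same version the scheduler and workers report (the exact pins are an assumption based on the versions shown in the question's client.get_versions() output; adjust them if the operator image moves on):

FROM daskdev/dask
RUN pip3 install flask gunicorn
RUN pip3 install "dask[complete]==2023.1.0" "distributed==2023.1.0"
RUN pip3 install "dask-ml[complete]"

The key point is that dask and distributed in the Flask image match the 2023.1.0 reported by the scheduler/worker pods.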
###Instruction: I'm attempting to deploy a dask application on Kubernetes/Azure. I have a Flask application server that is the client of a Dask scheduler/workers. I installed the Dask operator as described here: helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator This created the scheduler and worker pods, I have them running on Kubernetes without errors. For the Flask application, I have a Docker image with the following Dockerfile: FROM daskdev/dask RUN apt-get -y install python3-pip RUN pip3 install flask RUN pip3 install gunicorn RUN pip3 install "dask[complete]" RUN pip3 install "dask[distributed]" --upgrade RUN pip3 install "dask-ml[complete]" Whenever I try to run a function in the workers using the Client interface, I get this error in the scheduler pod: TypeError: update_graph() got an unexpected keyword argument 'graph_header' It seems to me that the Dask image used to run Flask and the Dask Kubernetes that I installed are not compatible or aligned? How to create an image that includes Dask for the Flask server that can be integrated with the Dask Kubernetes package? I run in Flask client.get_versions(check=True) and this is what I get: {'scheduler': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'workers': {'tcp://10.244.0.3:40749': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.0.4:36757': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.1.7:40561': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}}, 'client': {'host': {'python': '3.8.16.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.16.final.0', 'dask': '2023.4.0', 'distributed': '2023.4.0', 'msgpack': '1.0.5', 'cloudpickle': '2.2.1', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.23.5', 'pandas': '2.0.0', 'lz4': '4.3.2'}}} @ 2023-04-20 
13:33:09.921545"} ### Response: Solved, just forced the Dockerfile to use version 2023.1.0, that fixed the problem and matched the operator dask version.
I'm attempting to deploy a dask application on Kubernetes/Azure. I have a Flask application server that is the client of a Dask scheduler/workers. I installed the Dask operator as described here: helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator This created the scheduler and worker pods, I have them running on Kubernetes without errors. For the Flask application, I have a Docker image with the following Dockerfile: FROM daskdev/dask RUN apt-get -y install python3-pip RUN pip3 install flask RUN pip3 install gunicorn RUN pip3 install "dask[complete]" RUN pip3 install "dask[distributed]" --upgrade RUN pip3 install "dask-ml[complete]" Whenever I try to run a function in the workers using the Client interface, I get this error in the scheduler pod: TypeError: update_graph() got an unexpected keyword argument 'graph_header' It seems to me that the Dask image used to run Flask and the Dask Kubernetes that I installed are not compatible or aligned? How to create an image that includes Dask for the Flask server that can be integrated with the Dask Kubernetes package? I run in Flask client.get_versions(check=True) and this is what I get: {'scheduler': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'workers': {'tcp://10.244.0.3:40749': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.0.4:36757': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.1.7:40561': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}}, 'client': {'host': {'python': '3.8.16.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.16.final.0', 'dask': '2023.4.0', 'distributed': '2023.4.0', 'msgpack': '1.0.5', 'cloudpickle': '2.2.1', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.23.5', 'pandas': '2.0.0', 'lz4': '4.3.2'}}} @ 2023-04-20 
13:33:09.921545"}
I had the same issue. Locally my pytorch model would return a prediction in 25 ms and then on Kubernetes it would take 5 seconds. The problem had to do with how many threads torch had available to use. I'm not 100% sure why this works, but reducing the number of threads sped up performance significantly. Set the following environment variable on your kubernetes pod. OMP_NUM_THREADS=1 After doing that it performed on kubernetes like it did running it locally ~30ms per call. These are my pod limits: cpu limits 1 mem limits: 1500m I was led to discover this from this blog post: https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html
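For illustration, a minimal sketch of setting that variable on the pod (the container name and image are placeholders, and the resource numbers simply mirror the ones from the question):

containers:
- name: model-server                              # hypothetical name
  image: registry.example.com/finbert-api:latest  # hypothetical image
  env:
  - name: OMP_NUM_THREADS
    value: "1"
  resources:
    requests:
      cpu: "1"
    limits:
      cpu: "2"

The same effect can usually also be achieved in code by calling torch.set_num_threads(1) before running inference.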
###Instruction: I would like to make the result of a text classification model (finBERT pytorch model) available through an endpoint that is deployed on Kubernetes. The whole pipeline is working but it's super slow to process (30 seconds for one sentence) when deployed. If I time the same endpoint in local, I'm getting results in 1 or 2 seconds. Running the docker image in local, the endpoint also takes 2 seconds to return a result. When I'm checking the CPU usage of my kubernetes instance while the request is running, it doesn't go above 35% so I'm not sure it's related to a lack of computation power? Did anyone witness such performances issues when making a forward pass to a pytorch model? Any clues on what I should investigate? Any help is greatly appreciated, thank you! I am currently using limits: cpu: "2" requests: cpu: "1" Python : 3.7 Pytorch : 1.8.1 ### Response: I had the same issue. Locally my pytorch model would return a prediction in 25 ms and then on Kubernetes it would take 5 seconds. The problem had to do with how many threads torch had available to use. I'm not 100% sure why this works, but reducing the number of threads sped up performance significantly. Set the following environment variable on your kubernetes pod. OMP_NUM_THREADS=1 After doing that it performed on kubernetes like it did running it locally ~30ms per call. These are my pod limits: cpu limits 1 mem limits: 1500m I was led to discover this from this blog post: https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html
I would like to make the result of a text classification model (finBERT pytorch model) available through an endpoint that is deployed on Kubernetes. The whole pipeline is working but it's super slow to process (30 seconds for one sentence) when deployed. If I time the same endpoint in local, I'm getting results in 1 or 2 seconds. Running the docker image in local, the endpoint also takes 2 seconds to return a result. When I'm checking the CPU usage of my kubernetes instance while the request is running, it doesn't go above 35% so I'm not sure it's related to a lack of computation power? Did anyone witness such performances issues when making a forward pass to a pytorch model? Any clues on what I should investigate? Any help is greatly appreciated, thank you! I am currently using limits: cpu: "2" requests: cpu: "1" Python : 3.7 Pytorch : 1.8.1
The 425 in that message is just the process ID of the sort process; the important part is "Killed", which means that sort was sent a SIGKILL signal. On modern systems without something weird going on, this almost always means you're out of RAM. Linux supports something called "memory overcommit". Due to language runtimes (Java among them!) habitually allocating far more virtual memory than they'll ever need matching physical memory for, the kernel allows allocation requests to succeed even if it doesn't have enough memory to back them. When the application actually tries to access those virtual memory pages, the kernel's fault handler gets invoked to try to find physical memory to map to them. Sometimes it can free up space by deleting unused pages -- discarding block cache contents, or memory pages that are mapped to file contents and thus can be reloaded from that file later. Sometimes, however, there's more memory outstanding than can be freed, and the kernel invokes the "OOM killer" -- where OOM stands for "out of memory" -- to kill some processes and make more free RAM. In the case of sorting a large file specifically: Make sure you're using GNU sort, not a clone like busybox sort. This is because GNU sort, but not all its smaller clones, supports breaking a large stream into pieces, writing those pieces to disk, and then doing a merge sort to reassemble them later; so it can sort files larger than available RAM. Make sure that you have temporary space that is actually disk. If GNU sort tries to conserve RAM by shuffling contents off to disk that is actually RAM itself, that's obviously not going to go well. Use the GNU sort argument -S to limit the amount of memory GNU sort will allocate before shunting data to temporary files on disk. (For example, one can use sort -S 32M to allow 32MB of RAM to be allocated for working space).
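Putting the last three points together, a rough sketch of the adjusted command (the memory cap and temp directory are illustrative; make sure the -T path is backed by real disk in the pod, for example an emptyDir volume that is not memory-backed):

sort -S 64M -T /var/tmp /tmp/keys > /tmp/keys_sorted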
###Instruction: I ran bash script from java code to sort a file of tens of millions of strings of <25 characters. The java application is running in a kubernetes pod. The script failed and the error message is bash: line 1: 425 Killed sort /tmp/keys > /tmp/keys_sorted What does 425 actually mean here? How can I fix it? ### Response: This means that sort was sent a SIGKILL signal. On modern systems without something weird going on, this almost always means you're out of RAM. Linux supports something called "memory overcommit". Due to language runtimes (Java among them!) habitually allocating far more virtual memory than they'll ever need matching physical memory for, the kernel allows allocation requests to succeed even if it doesn't have enough memory to back them. When the application actually tries to access those virtual memory pages, the kernel's fault handler gets invoked to try to find physical memory to map to them. Sometimes it can free up space by deleting unused pages -- discarding block cache contents, or memory pages that are mapped to file contents and thus can be reloaded from that file later. Sometimes, however, there's more memory outstanding than can be free'd, and the kernel invokes the "OOM killer" -- where OOM stands for "out of memory" -- to kill some processes and make more free RAM. In the case of sorting a large file specifically: Make sure you're using GNU sort, not a clone like busybox sort. This is because GNU sort, but not all its smaller clones, supports breaking a large stream into pieces, writing those pieces to disk, and then doing a merge sort to reassemble them later; so it can sort files larger than available RAM. Make sure that you have temporary space that is actually disk. If GNU sort tries to conserve RAM by shuffling contents off to disk that is actually RAM itself, that's obviously not going to go well. Use the GNU sort argument -S to limit the amount of memory GNU sort will allocate before shunting data to temporary files on disk. (For example, one can use sort -S 32M to allow 32MB of RAM to be allocated for working space).
I ran bash script from java code to sort a file of tens of millions of strings of <25 characters. The java application is running in a kubernetes pod. The script failed and the error message is bash: line 1: 425 Killed sort /tmp/keys > /tmp/keys_sorted What does 425 actually mean here? How can I fix it?
Create the structure pod_data_update: | [{% for i in pod_data %} {% for n,p in i.container_name|zip(i.image_path) %} {container_name: {{ n }}, image_path: {{ p }}, pod_name: {{ i.pod_name }}}, {% endfor %} {% endfor %}] , convert it from YAML, and update pod_data - set_fact: pod_data: "{{ pod_data_update|from_yaml }}" gives what you want pod_data: - {container_name: container1, image_path: path1, pod_name: pod1} - {container_name: container2, image_path: path2, pod_name: pod1} - {container_name: container3, image_path: path3, pod_name: pod2} - {container_name: container4, image_path: path4, pod_name: pod2} Example of a complete playbook for testing - hosts: localhost vars: pod_data: - container_name: - container1 - container2 image_path: - path1 - path2 pod_name: pod1 - container_name: - container3 - container4 image_path: - path3 - path4 pod_name: pod2 pod_data_update: | [{% for i in pod_data %} {% for n,p in i.container_name|zip(i.image_path) %} {container_name: {{ n }}, image_path: {{ p }}, pod_name: {{ i.pod_name }}}, {% endfor %} {% endfor %}] tasks: - set_fact: pod_data: "{{ pod_data_update|from_yaml }}" - debug: var: pod_data|to_yaml
###Instruction: I've been trying to find a way to take a data collection that contains three fields where two of the fields can contain multiple items and convert/expand that data to a list where a row is added for every item in one of the fields. The data is about kubernetes pods and the containers and images inside them. I can get the raw data just fine I just can't transform it. the json data is this: - name: condense the raw data set_fact: pod_data: "{{ pod_data|default([]) + [{'pod_name':item.pod_name,'container_name':item.container_name,'image_path':item.image_path }] }}" with_items: "{{ podinfo | json_query('resources[*].{pod_name: metadata.name, container_name: spec.containers[*].name, image_path: spec.containers[*].image }') }}" - debug: var=pod_data "pod_data": [ { "container_name": [ "container1", "container2" ], "image_path": [ "path1", "path2" ], "pod_name": "pod1" }, { "container_name": [ "container3", "container4" ], "image_path": [ "path3", "path4" ], "pod_name": "pod2" } I want to output the data in a 1:1 format like this: "pod_data": [ { "container_name": "container1" , "image_path": "path1" , "pod_name": "pod1" }, { "container_name": "container2" , "image_path": "path2" ], "pod_name": "pod1" }, { "container_name": "container3" , "image_path": "path3" , "pod_name": "pod2" } I'm not good enough with loops to know how to do a nested style loop when there are multiple sub elements in ansible. I could do it in powershell but that doesn't help me here. Is my only choice some sort of include_tasks loop while passing each item and container/image count so I can split it that many times? ### Response: Create the structure pod_data_update: | [{% for i in pod_data %} {% for n,p in i.container_name|zip(i.image_path) %} {container_name: {{ n }}, image_path: {{ p }}, pod_name: {{ i.pod_name }}}, {% endfor %} {% endfor %}] , convert it from YAML, and update pod_data - set_fact: pod_data: "{{ pod_data_update|from_yaml }}" gives what you want pod_data: - {container_name: container1, image_path: path1, pod_name: pod1} - {container_name: container2, image_path: path2, pod_name: pod1} - {container_name: container3, image_path: path3, pod_name: pod2} - {container_name: container4, image_path: path4, pod_name: pod2} Example of a complete playbook for testing - hosts: localhost vars: pod_data: - container_name: - container1 - container2 image_path: - path1 - path2 pod_name: pod1 - container_name: - container3 - container4 image_path: - path3 - path4 pod_name: pod2 pod_data_update: | [{% for i in pod_data %} {% for n,p in i.container_name|zip(i.image_path) %} {container_name: {{ n }}, image_path: {{ p }}, pod_name: {{ i.pod_name }}}, {% endfor %} {% endfor %}] tasks: - set_fact: pod_data: "{{ pod_data_update|from_yaml }}" - debug: var: pod_data|to_yaml
I've been trying to find a way to take a data collection that contains three fields where two of the fields can contain multiple items and convert/expand that data to a list where a row is added for every item in one of the fields. The data is about kubernetes pods and the containers and images inside them. I can get the raw data just fine I just can't transform it. the json data is this: - name: condense the raw data set_fact: pod_data: "{{ pod_data|default([]) + [{'pod_name':item.pod_name,'container_name':item.container_name,'image_path':item.image_path }] }}" with_items: "{{ podinfo | json_query('resources[*].{pod_name: metadata.name, container_name: spec.containers[*].name, image_path: spec.containers[*].image }') }}" - debug: var=pod_data "pod_data": [ { "container_name": [ "container1", "container2" ], "image_path": [ "path1", "path2" ], "pod_name": "pod1" }, { "container_name": [ "container3", "container4" ], "image_path": [ "path3", "path4" ], "pod_name": "pod2" } I want to output the data in a 1:1 format like this: "pod_data": [ { "container_name": "container1" , "image_path": "path1" , "pod_name": "pod1" }, { "container_name": "container2" , "image_path": "path2" ], "pod_name": "pod1" }, { "container_name": "container3" , "image_path": "path3" , "pod_name": "pod2" } I'm not good enough with loops to know how to do a nested style loop when there are multiple sub elements in ansible. I could do it in powershell but that doesn't help me here. Is my only choice some sort of include_tasks loop while passing each item and container/image count so I can split it that many times?
Yes, you can assign permissions to dashboards in Grafana using Helm, as well as through the Grafana web UI or API. To assign permissions using Helm, you can define a custom Grafana dashboard provisioning configuration file in your Helm chart's values.yaml or in a separate YAML file, and specify the appropriate permissions for each dashboard using the datasources, dashboards, and users sections. Here's an example: values.yaml or custom configuration file grafana: provisioning: datasources: - name: <datasource_name> type: <datasource_type> access: proxy <datasource-specific_configurations> # e.g., url, basicAuth, etc. dashboards: - name: <dashboard_name> uid: <dashboard_uid> # unique identifier for the dashboard url: <dashboard_url> # URL of the JSON file for the dashboard permissions: role: <role_name> # role to assign the dashboard to user: <user_name> # user to assign the dashboard to team: <team_name> # team to assign the dashboard to users: - username: <user_name> role: <role_name> In this example, you can specify the datasource configuration, dashboard configuration (including permissions), and user configuration using Helm values. Once you apply the Helm chart, Grafana will provision the dashboards with the specified permissions. Note: Make sure to use the appropriate values for <datasource_name>, <datasource_type>, <dashboard_name>, <dashboard_uid>, <dashboard_url>, <role_name>, <user_name>, and <team_name> in your configuration. Alternatively, you can also assign permissions to dashboards using the Grafana web UI or API. In the web UI, you can go to the dashboard settings, navigate to the "Permissions" tab, and specify the roles, users, or teams that should have access to the dashboard. You can also use the Grafana API to create, update, or delete dashboards with specific permissions using the appropriate API endpoints and payload. Please note that in order to assign permissions to dashboards, you need to have appropriate permissions and roles configured in Grafana. Also, make sure to follow Grafana's documentation and best practices for securing your deployment and managing permissions effectively.
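If you go the API route, a rough sketch of granting a team view-only access to a single dashboard could look like this (the URL, dashboard id, team id and token are placeholders; in Grafana's permission model 1 = View, 2 = Edit, 4 = Admin):

curl -X POST https://grafana.example.com/api/dashboards/id/42/permissions \
  -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"items": [{"teamId": 7, "permission": 1}]}'

Check the HTTP API documentation for the Grafana version you run, since the permissions endpoints have changed across releases.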
###Instruction: When I create a dashboard in Grafana and export it in JSON, the role, user and group permissions I define are not saved with it. I am looking for a way to assign permissions for each dashboard in a Grafana deployment with Helm, in which I already include the dashboards to use. Does anyone know if this is possible? I can't find a way to do it, can it only be done from web or from API? Thanks. ### Response: Yes, you can assign permissions to dashboards in Grafana using Helm, as well as through the Grafana web UI or API. To assign permissions using Helm, you can define a custom Grafana dashboard provisioning configuration file in your Helm chart's values.yaml or in a separate YAML file, and specify the appropriate permissions for each dashboard using the datasources, dashboards, and users sections. Here's an example: values.yaml or custom configuration file grafana: provisioning: datasources: - name: <datasource_name> type: <datasource_type> access: proxy <datasource-specific_configurations> # e.g., url, basicAuth, etc. dashboards: - name: <dashboard_name> uid: <dashboard_uid> # unique identifier for the dashboard url: <dashboard_url> # URL of the JSON file for the dashboard permissions: role: <role_name> # role to assign the dashboard to user: <user_name> # user to assign the dashboard to team: <team_name> # team to assign the dashboard to users: - username: <user_name> role: <role_name> In this example, you can specify the datasource configuration, dashboard configuration (including permissions), and user configuration using Helm values. Once you apply the Helm chart, Grafana will provision the dashboards with the specified permissions. Note: Make sure to use the appropriate values for <datasource_name>, <datasource_type>, <dashboard_name>, <dashboard_uid>, <dashboard_url>, <role_name>, <user_name>, and <team_name> in your configuration. Alternatively, you can also assign permissions to dashboards using the Grafana web UI or API. In the web UI, you can go to the dashboard settings, navigate to the "Permissions" tab, and specify the roles, users, or teams that should have access to the dashboard. You can also use the Grafana API to create, update, or delete dashboards with specific permissions using the appropriate API endpoints and payload. Please note that in order to assign permissions to dashboards, you need to have appropriate permissions and roles configured in Grafana. Also, make sure to follow Grafana's documentation and best practices for securing your deployment and managing permissions effectively.
When I create a dashboard in Grafana and export it in JSON, the role, user and group permissions I define are not saved with it. I am looking for a way to assign permissions for each dashboard in a Grafana deployment with Helm, in which I already include the dashboards to use. Does anyone know if this is possible? I can't find a way to do it, can it only be done from web or from API? Thanks.
"Nodeport is internally implemented on top of ClusterIP" - correct. The port in the Kubernetes Service definition is used to specify the port on which the service will listen for traffic within the Kubernetes cluster. This is the port that will be exposed to other pods in the cluster as an endpoint for the service. When a request is made to this port by a client within the cluster, the traffic will be routed to one of the pods selected by the Service based on its load balancing algorithm. The nodePort is used to expose the service on a port on the node itself, which allows the service to be accessed from outside the cluster.
###Instruction: The nodeport takes in 3 parameters in the service yaml. port targetPort nodePort Since all you wanted was to map a port on the node(nodeport) to the port on the container(targetPort), why do we need to provide the port of the service? Is this because Nodeport is internally implemented on top of ClusterIP? ### Response: "Nodeport is internally implemented on top of ClusterIP" - correct. The port in the Kubernetes Service definition is used to specify the port on which the service will listen for traffic within the Kubernetes cluster. This is the port that will be exposed to other pods in the cluster as an endpoint for the service. When a request is made to this port by a client within the cluster, the traffic will be routed to one of the pods selected by the Service based on its load balancing algorithm. The nodePort is used to expose the service on a port on the node itself, which allows the service to be accessed from outside the cluster.
The nodeport takes in 3 parameters in the service yaml. port targetPort nodePort Since all you wanted was to map a port on the node(nodeport) to the port on the container(targetPort), why do we need to provide the port of the service? Is this because Nodeport is internally implemented on top of ClusterIP?
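To tie the three fields together, a small sketch of a NodePort Service (names and port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # service port: what other pods use via the ClusterIP
    targetPort: 8080  # container port the traffic is forwarded to
    nodePort: 30080   # port opened on each node for traffic from outside

Inside the cluster, clients call web:80; from outside, they call <any-node-ip>:30080; in both cases the traffic ends up on the container's port 8080.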
Adding to GSerg's suggestion: a cluster is a set of nodes running Kubernetes agents, managed by the control plane. You can scale your cluster by adding or removing nodes; the way you do this depends on how your cluster is deployed. The recommended limit is 300000 total containers per cluster. You can find this in this official doc. As per this doc, containerized applications can get complicated, however: in production, many might require hundreds to thousands of separate containers. This is where container runtime environments such as Docker benefit from the use of other tools to orchestrate or manage all the containers in operation. One of the most popular tools for this purpose is Kubernetes, a container orchestrator that recognizes multiple container runtime environments, including Docker. The software that runs containers is called the container runtime. Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface). Usually, you can let your cluster choose a Pod's default container runtime. If you need to use more than one container runtime in your cluster, you can specify a RuntimeClass for a Pod to ensure that Kubernetes runs its containers with a specific runtime. Using RuntimeClass, you can also run multiple Pods with different settings on the same container runtime.
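To make the jump from one container to thousands concrete, a tiny illustration (the deployment name is a placeholder): a Deployment describes a single pod template, and the number of running copies is just a count you raise, either by hand or via an autoscaler:

# scale one Deployment out to 500 identical pods/containers
kubectl scale deployment my-app --replicas=500

# or let Kubernetes adjust it between 5 and 1000 replicas based on CPU usage
kubectl autoscale deployment my-app --min=5 --max=1000 --cpu-percent=70

Add a few more services doing the same, plus sidecars and system pods, and a cluster quickly ends up running thousands of containers.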
###Instruction: I am starting on learning Containers and Kubernetes. When watching tutorials on Conatiners and Kubernetes, often the instructors says "You could have thousands of containers". I am trying to understand how we can end up with thousands of containers? After deploying my single container app, how thousands of container instances are created? I am hope my question makes sense. thanks ### Response: Adding to GSerg's suggestion: a cluster is a set of nodes running Kubernetes agents, managed by the control plane. You can scale your cluster by adding or removing nodes; the way you do this depends on how your cluster is deployed. The recommended limit is 300000 total containers per cluster. You can find this in this official doc. As per this doc, containerized applications can get complicated, however: in production, many might require hundreds to thousands of separate containers. This is where container runtime environments such as Docker benefit from the use of other tools to orchestrate or manage all the containers in operation. One of the most popular tools for this purpose is Kubernetes, a container orchestrator that recognizes multiple container runtime environments, including Docker. The software that runs containers is called the container runtime. Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface). Usually, you can let your cluster choose a Pod's default container runtime. If you need to use more than one container runtime in your cluster, you can specify a RuntimeClass for a Pod to ensure that Kubernetes runs its containers with a specific runtime. Using RuntimeClass, you can also run multiple Pods with different settings on the same container runtime.
I am starting on learning Containers and Kubernetes. When watching tutorials on Conatiners and Kubernetes, often the instructors says "You could have thousands of containers". I am trying to understand how we can end up with thousands of containers? After deploying my single container app, how thousands of container instances are created? I am hope my question makes sense. thanks
Helm includes a fromJson function; in fact, its documentation fairly closely matches your updated setup. So you can use this to read in the file, parse it, and then select the specific key out of it. data: {{- $secrets := .Files.Get "gh-secrets.json" | fromJson -}} {{- range .Values.secrets }} {{ . }}: {{ index $secrets . }} {{- end }} You appear to have some control over the input structure here. If you can wrap the secret values in some top-level key echo '{"secretValues": ${{ toJSON(secrets) }}}' > gh-secrets.json then you can provide this as an extra values file to Helm (valid JSON files are also valid YAML) helm install -f gh-secrets.json ... This file doesn't need to be in the chart directory. Now Helm will parse it for you, and it will be available as .Values.secretValues matching the injected key. (Remember that the range loop resets the value of ., so you will need to save its value outside the loop.) If this technique works for you, it might be possible to use it to directly inject the secret values without the layer of indirection; use something like data: {{ .Values.secretValues | toYaml | indent 2 }}
###Instruction: I am creating a github actions step . . . steps: name: export-secrets run: | echo '${{ toJSON(secrets) }}' >> gh-secrets.json . . . suppose the gh-secrets.json file is as below: # gh-secrets.json {"A": "a1", "B": "b1", "C": "c1"} Then I am going to pull the helm charts and untar them and then move gh-secrets.json into the untar directory. The reason I am doing this is according to helm docs, "Files outside of a helm application subchart, including those of the parent, cannot be accessed" suppose I have these files in the untar directory # values.yaml secrets: - A - B and I have templates/secrets.yaml: apiVersion: v1 kind: Secret metadata: name: {{ .Release.Name }}-secret data: {{- range .Values.secrtes }} {{ . }}: {{ $.Files.Get "gh-secrets.json" | toJson }} {{- end }} this is the result: # Source: mktplc-catalog/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-secret data: A: "'{\"A\": \"a1\", \"B\": \"b1\", \"C\": \"c1\"}'" B: "'{\"A\": \"a1\", \"B\": \"b1\", \"C\": \"c1\"}'" but I want it to be # Source: mktplc-catalog/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-secret data: A: "a1" B: "b1" How can I achieve this? ### Response: Helm includes a fromJson function; in fact, its documentation fairly closely matches your updated setup. So you can use this to read in the file, parse it, and then select the specific key out of it. data: {{- $secrets := .Files.Get "gh-secrets.json" | fromJson -}} {{- range .Values.secrets }} {{ . }}: {{ index $secrets . }} {{- end }} You appear to have some control over the input structure here. If you can wrap the secret values in some top-level key echo '{"secretValues": ${{ toJSON(secrets) }}}' > gh-secrets.json then you can provide this as an extra values file to Helm (valid JSON files are also valid YAML) helm install -f gh-secrets.json ... This file doesn't need to be in the chart directory. Now Helm will parse it for you, and it will be available as .Values.secretValues matching the injected key. (Remember that the range loop resets the value of ., so you will need to save its value outside the loop.) If this technique works for you, it might be possible to use it to directly inject the secret values without the layer of indirection; use something like data: {{ .Values.secretValues | toYaml | indent 2 }}
I am creating a github actions step . . . steps: name: export-secrets run: | echo '${{ toJSON(secrets) }}' >> gh-secrets.json . . . suppose the gh-secrets.json file is as below: # gh-secrets.json {"A": "a1", "B": "b1", "C": "c1"} Then I am going to pull the helm charts and untar them and then move gh-secrets.json into the untar directory. The reason I am doing this is according to helm docs, "Files outside of a helm application subchart, including those of the parent, cannot be accessed" suppose I have these files in the untar directory # values.yaml secrets: - A - B and I have templates/secrets.yaml: apiVersion: v1 kind: Secret metadata: name: {{ .Release.Name }}-secret data: {{- range .Values.secrtes }} {{ . }}: {{ $.Files.Get "gh-secrets.json" | toJson }} {{- end }} this is the result: # Source: mktplc-catalog/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-secret data: A: "'{\"A\": \"a1\", \"B\": \"b1\", \"C\": \"c1\"}'" B: "'{\"A\": \"a1\", \"B\": \"b1\", \"C\": \"c1\"}'" but I want it to be # Source: mktplc-catalog/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-secret data: A: "a1" B: "b1" How can I achieve this?
I don't think kubectl get pods supports field selectors based on the READY column directly. But kubectl provides a way of exporting the resource configuration directly as JSON, -o json. Then, we can use jq to read, parse, and filter the K8s object results from kubectl. In your case, you could use a command like this to filter all pods (excluding the pods from namespaces kube-system, monitoring & rtf) not in ready state: kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system,metadata.namespace!=monitoring,metadata.namespace!=rtf -ojson | jq '.items[] | select(.status.containerStatuses[].ready==false) | .metadata.namespace + "/" + .metadata.name' and/or change ready==false to ready==true in the select() to get the pods that are in a ready state. Have a look at this article for many more such uses of jq with kubectl. Hope it helps.
###Instruction: I'm running this command: # kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system,metadata.namespace!=monitoring,metadata.namespace!=rtf Which gives me output like this: NAMESPACE NAME READY STATUS RESTARTS AGE 123456-1234-1234-1234-123456789 some-app-123456 2/2 Running 0 10m 123456-1234-1234-1234-123456789 some-app-789112 1/2 Running 0 10m I would like to be able to filter on the READY column, but I can't seem to find the right field-selector value. Is this possible? I've tried searching around for a list of available field-selectors, and haven't had any luck. It's possible that one doesn't exist. ### Response: I don't think kubectl get pods supports field selectors based on the READY column directly. But kubectl provides a method exporting the resource configuration (YAML) directly into JSON, -o json. Then, we can use jq to read, parse, and mutate K8s object results from kubectl. In your case, you could use a command like this to filter all pods (excluding the pods from namespaces kube-system, monitoring & rtf) not in ready state: kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system,metadata.namespace!=monitoring,metadata.namespace!=rtf -ojson | jq '.items[] | select(.status.containerStatuses[].ready==false) | .metadata.namespace + "/" + .metadata.name' and/or change ready=true to get the pods in ready state. Have a look at this article for many more such uses of jq with kubectl Hope it helps.
I'm running this command: # kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system,metadata.namespace!=monitoring,metadata.namespace!=rtf Which gives me output like this: NAMESPACE NAME READY STATUS RESTARTS AGE 123456-1234-1234-1234-123456789 some-app-123456 2/2 Running 0 10m 123456-1234-1234-1234-123456789 some-app-789112 1/2 Running 0 10m I would like to be able to filter on the READY column, but I can't seem to find the right field-selector value. Is this possible? I've tried searching around for a list of available field-selectors, and haven't had any luck. It's possible that one doesn't exist.
Unfortunately, there is no built-in way to do this; to get around it, you have to attach the volume to a container or VM and inspect its contents. You can mount the PVC into a pod and then copy the files out with kubectl cp or list them with kubectl exec. A simple approach is to create a throwaway "inspector" pod that mounts the PVC, inspect its contents, and delete the pod afterwards, as explained in this blog by Frank Sauerburger.
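A minimal sketch of such an inspector pod (the PVC name is a placeholder -- replace it with the claim you want to look at):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - name: inspector
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc   # placeholder: your PVC's name

Then, for example, kubectl exec -it pvc-inspector -- ls -la /data to browse, kubectl cp pvc-inspector:/data ./data to copy files out, and kubectl delete pod pvc-inspector when you are done.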
###Instruction: I'm having a pain point with Kubernetes and I hope someone can help me out. I'm trying to see the contents of a PVC (Persistent Volume Claim), but it seems that the only way to do so is to mount the PVC into a container and check what's inside. As you can imagine, this is quite cumbersome and not very practical. I was wondering if there are any tools or solutions that could make this process easier? I've searched around but haven't found anything yet. If anyone has any ideas or suggestions, please let me know! I'm eager to hear your thoughts and experiences. Thank you in advance! ### Response: Unfortunately, there is no built-in way to do this. To get around this, you would have to attach the disk to a container or VM and inspect its contents You need to add the pvc to a pod and copy the files using kubectl cp or list the files using kubectl exec to check the contents of PVC. A simpler way is to create an inspector POD to inspect the contents of PVC and clean up the pod easily as explained in this blog by Frank Sauerburger.
I'm having a pain point with Kubernetes and I hope someone can help me out. I'm trying to see the contents of a PVC (Persistent Volume Claim), but it seems that the only way to do so is to mount the PVC into a container and check what's inside. As you can imagine, this is quite cumbersome and not very practical. I was wondering if there are any tools or solutions that could make this process easier? I've searched around but haven't found anything yet. If anyone has any ideas or suggestions, please let me know! I'm eager to hear your thoughts and experiences. Thank you in advance!
Looks like my issue was that my worker, scheduler, and web server container had different dag volume mounts from the ones I defined for my git-sync container. This is what I had: containers: - name: airflow-scheduler image: <redacted> imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: "AIRFLOW_SERVICE_NAME-env" env: <redacted> resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags subPath: dags - name: dags-data mountPath: /opt/airflow/plugins subPath: plugins - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config And the following edits made it work. I removed the dag subpath and plugins volume mount: containers: - name: airflow-scheduler image: <redacted> imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: "AIRFLOW_SERVICE_NAME-env" env: <redacted> resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config
###Instruction: Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo. So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).   I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo. INFO: detected pid 1, running init handler I0411 20:50:31.009097 12 main.go:401] "level"=0 "msg"="starting up" "pid"=12 "args"=["/git-sync","-wait=60","-repo=https://github.com/jorgeavelar98/AirflowProject.git","-branch=master","-root=/opt/airflow/dags","-username=jorgeavelar98","-password-file=/etc/git-secret/token"] I0411 20:50:31.029064 12 main.go:950] "level"=0 "msg"="cloning repo" "origin"="https://github.com/jorgeavelar98/AirflowProject.git" "path"="/opt/airflow/dags" I0411 20:50:31.031728 12 main.go:956] "level"=0 "msg"="git root exists and is not empty (previous crash?), cleaning up" "path"="/opt/airflow/dags" I0411 20:50:31.894074 12 main.go:760] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.907256 12 main.go:800] "level"=0 "msg"="adding worktree" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "branch"="origin/master" I0411 20:50:31.911039 12 main.go:860] "level"=0 "msg"="reset worktree to hash" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.911065 12 main.go:865] "level"=0 "msg"="updating submodules"   However, despite their being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI This is our scheduler containers/volumes yaml definition for reference. 
We have something similar for workers and webserver containers:         - name: airflow-scheduler           image: <redacted>           imagePullPolicy: IfNotPresent           envFrom:             - configMapRef:                 name: "AIRFLOW_SERVICE_NAME-env"           env:                         <redacted>           resources:              requests:               memory: RESOURCE_MEMORY               cpu: RESOURCE_CPU           volumeMounts:             - name: scripts               mountPath: /home/airflow/scripts             - name: dags-data               mountPath: /opt/airflow/dags               subPath: dags             - name: dags-data               mountPath: /opt/airflow/plugins               subPath: plugins             - name: variables-pools               mountPath: /home/airflow/variables-pools/             - name: airflow-log-config               mountPath: /opt/airflow/config           command:             - "/usr/bin/dumb-init"             - "--"           args:             <redacted>         - name: git-sync           image: registry.k8s.io/git-sync/git-sync:v3.6.5           args:             - "-wait=60"             - "-repo=<repo>"             - "-branch=master"             - "-root=/opt/airflow/dags"             - "-username=<redacted>"             - "-password-file=/etc/git-secret/token"           volumeMounts:             - name: git-secret               mountPath: /etc/git-secret               readOnly: true             - name: dags-data               mountPath: /opt/airflow/dags       volumes:         - name: scripts           configMap:             name: AIRFLOW_SERVICE_NAME-scripts             defaultMode: 493         - name: dags-data           emptyDir: {}         - name: variables-pools           configMap:             name: AIRFLOW_SERVICE_NAME-variables-pools             defaultMode: 493         - name: airflow-log-config           configMap:             name: airflow-log-configmap             defaultMode: 493         - name: git-secret           secret:             secretName: github-token What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated! ### Response: Looks like my issue was that my worker, scheduler, and web server container had different dag volume mounts from the ones I defined for my git-sync container. This is what I had: containers: - name: airflow-scheduler image: <redacted> imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: "AIRFLOW_SERVICE_NAME-env" env: <redacted> resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags subPath: dags - name: dags-data mountPath: /opt/airflow/plugins subPath: plugins - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config And the following edits made it work. I removed the dag subpath and plugins volume mount: containers: - name: airflow-scheduler image: <redacted> imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: "AIRFLOW_SERVICE_NAME-env" env: <redacted> resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config
Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo. So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).   I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo. INFO: detected pid 1, running init handler I0411 20:50:31.009097 12 main.go:401] "level"=0 "msg"="starting up" "pid"=12 "args"=["/git-sync","-wait=60","-repo=https://github.com/jorgeavelar98/AirflowProject.git","-branch=master","-root=/opt/airflow/dags","-username=jorgeavelar98","-password-file=/etc/git-secret/token"] I0411 20:50:31.029064 12 main.go:950] "level"=0 "msg"="cloning repo" "origin"="https://github.com/jorgeavelar98/AirflowProject.git" "path"="/opt/airflow/dags" I0411 20:50:31.031728 12 main.go:956] "level"=0 "msg"="git root exists and is not empty (previous crash?), cleaning up" "path"="/opt/airflow/dags" I0411 20:50:31.894074 12 main.go:760] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.907256 12 main.go:800] "level"=0 "msg"="adding worktree" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "branch"="origin/master" I0411 20:50:31.911039 12 main.go:860] "level"=0 "msg"="reset worktree to hash" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.911065 12 main.go:865] "level"=0 "msg"="updating submodules"   However, despite their being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI This is our scheduler containers/volumes yaml definition for reference. 
We have something similar for workers and webserver containers:         - name: airflow-scheduler           image: <redacted>           imagePullPolicy: IfNotPresent           envFrom:             - configMapRef:                 name: "AIRFLOW_SERVICE_NAME-env"           env:                         <redacted>           resources:              requests:               memory: RESOURCE_MEMORY               cpu: RESOURCE_CPU           volumeMounts:             - name: scripts               mountPath: /home/airflow/scripts             - name: dags-data               mountPath: /opt/airflow/dags               subPath: dags             - name: dags-data               mountPath: /opt/airflow/plugins               subPath: plugins             - name: variables-pools               mountPath: /home/airflow/variables-pools/             - name: airflow-log-config               mountPath: /opt/airflow/config           command:             - "/usr/bin/dumb-init"             - "--"           args:             <redacted>         - name: git-sync           image: registry.k8s.io/git-sync/git-sync:v3.6.5           args:             - "-wait=60"             - "-repo=<repo>"             - "-branch=master"             - "-root=/opt/airflow/dags"             - "-username=<redacted>"             - "-password-file=/etc/git-secret/token"           volumeMounts:             - name: git-secret               mountPath: /etc/git-secret               readOnly: true             - name: dags-data               mountPath: /opt/airflow/dags       volumes:         - name: scripts           configMap:             name: AIRFLOW_SERVICE_NAME-scripts             defaultMode: 493         - name: dags-data           emptyDir: {}         - name: variables-pools           configMap:             name: AIRFLOW_SERVICE_NAME-variables-pools             defaultMode: 493         - name: airflow-log-config           configMap:             name: airflow-log-configmap             defaultMode: 493         - name: git-secret           secret:             secretName: github-token What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated!
Container name: the container name is just the identifier you give the container. It is useful for debugging a Pod when it is failing, or when a single Pod runs multiple containers. Get the logs of a specific container: kubectl logs <pod-name> -c <container-name>
Hostname: by default only a Service gets a DNS record, not a Pod, so you cannot reach a specific Pod by DNS name unless you are using a StatefulSet. hostname: busybox-2 subdomain: busybox-subdomain With hostname and subdomain you can give the Pod its own DNS mapping (together with a headless Service named after the subdomain). FQDN stands for fully qualified domain name: you will be able to reach that Pod at hostname.subdomain.namespace-name.svc.cluster.local, e.g. curl hostname.subdomain.namespace-name.svc.cluster.local
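As a rough sketch of the hostname/subdomain mechanism described above (names such as busybox-subdomain are only examples; note that a headless Service whose name matches the subdomain is what makes the per-Pod DNS record work):

apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain
spec:
  clusterIP: None        # headless Service, name matches the Pod's subdomain
  selector:
    app: busybox
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-2-pod
  labels:
    app: busybox
spec:
  hostname: busybox-2
  subdomain: busybox-subdomain
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]

The Pod is then reachable at busybox-2.busybox-subdomain.<namespace>.svc.cluster.local.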
###Instruction: In a Deployment of Kubernetes, what is the difference between spec.containers.name and spec.hostname? spec: containers: - env: - name: CONFIG_HOME value: /config image: my-image:1.0.0.0-SNAPSHOT name: name-a ports: - containerPort: 8080 ... hostname: name-b ### Response: Container name Container name it's just the name you are providing to identify it. it is useful to debug the POD when failing or running multiple containers inside single POD. Get logs of specific container kubectl logs <pod-name> -c <container-name> Hostname Only service get the DNS name record mapping not POD so you can't call the specific POD over DNS name unless you are using statefulset. hostname: busybox-2 subdomain: busybox-subdomain hostname and subdomain you can use the give DNS mapping to POD. FQDN fully qualified domain name. You will be able to request that POD over the hostname.subdomain.namespace-name.svc.cluster.local you can curl the hostname.subdomain.namespace-name.svc.cluster.local
In a Deployment of Kubernetes, what is the difference between spec.containers.name and spec.hostname? spec: containers: - env: - name: CONFIG_HOME value: /config image: my-image:1.0.0.0-SNAPSHOT name: name-a ports: - containerPort: 8080 ... hostname: name-b
How do you know if your application needs to be configured to use workload identity or not? That is hard to say in general; it depends on whether your application uses any GCP service and therefore needs to authenticate to Google. If it does not, and only runs simple APIs/tasks, you can run it without workload identity: just drain the node and migrate to the new pool. If it authenticates with OAuth, a service-account key, or another method such as storing credentials in an environment variable, that will keep working, but the best practice is to move to Workload Identity for security reasons.
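A quick, non-authoritative way to check which of these cases applies is to look for Google credentials in the workload spec; the deployment name, label and grep patterns below are just examples:

# Look for service-account key files or credential env vars in a workload
kubectl get deployment <your-deployment> -o yaml | grep -iE 'GOOGLE_APPLICATION_CREDENTIALS|gserviceaccount|credentials'

# Check which Kubernetes service account the pods run as
kubectl get pods -l app=<your-app> -o jsonpath='{.items[*].spec.serviceAccountName}'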
###Instruction: This doc https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity explains that if you enable workload identity on a running cluster and create a new node on that cluster which would have workload identity automatically enabled ,then you should configure your application to use workload identity before moving the pods over to the new node pool. How do you know if your application needs to be configured to use workload identity or not? I'm guessing it depends on what services your application currently makes use of and how they currently authenticate? What would the consequences be of not configuring them to use workload identity before moving them to the new node pool where workload identity is enabled? Some context. I'm busy enabling GKE backup, but to get this running I need to enable workload identity on the cluster and then obviously on the node pool too. ### Response: How do you know if your application needs to be configured to use workload identity or not? That will be hard to say, it depends on your application if it's using any GCP service or needs Auth or not. If it's not using and running simple API/Tasks you can run it without workload identity just drain node and migrate to new pool. If you are using the Oauth, Key or another method, storing value in Env variable to Authenticate to Google service will work however best practice is to go with Workload identity for security reasons.
This doc https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity explains that if you enable workload identity on a running cluster and create a new node on that cluster which would have workload identity automatically enabled ,then you should configure your application to use workload identity before moving the pods over to the new node pool. How do you know if your application needs to be configured to use workload identity or not? I'm guessing it depends on what services your application currently makes use of and how they currently authenticate? What would the consequences be of not configuring them to use workload identity before moving them to the new node pool where workload identity is enabled? Some context. I'm busy enabling GKE backup, but to get this running I need to enable workload identity on the cluster and then obviously on the node pool too.
There are a couple of things that decide the impact, depending on how the applications authenticate to Google:
No authentication: the pods can simply be moved.
Authenticating with service-account JSON key files: the pods can be moved, but it is recommended to switch to workload identity for security reasons.
Authenticating with the default Compute Engine service account: this will not keep working; you will probably need to set up workload identity first to provide a more appropriate service account for the pods.
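If you end up in the last case and need to set up workload identity, the binding follows the pattern in the GKE workload identity docs linked in the question; GSA_NAME, PROJECT_ID, NAMESPACE and KSA_NAME are placeholders:

# Allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Annotate the Kubernetes service account so GKE uses that Google service account
kubectl annotate serviceaccount KSA_NAME \
  --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com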
###Instruction: This doc https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity explains that if you enable workload identity on a running cluster and create a new node on that cluster which would have workload identity automatically enabled ,then you should configure your application to use workload identity before moving the pods over to the new node pool. How do you know if your application needs to be configured to use workload identity or not? I'm guessing it depends on what services your application currently makes use of and how they currently authenticate? What would the consequences be of not configuring them to use workload identity before moving them to the new node pool where workload identity is enabled? Some context. I'm busy enabling GKE backup, but to get this running I need to enable workload identity on the cluster and then obviously on the node pool too. ### Response: There are a couple of things that should decide the impact. Depending on how the applications authenticate to google. No authentication: can just be moved Authenticating with service account json files: can be moved, but recommended to switch to workload identity for security reasons Authenticating with the default compute engine service account: Not possible, you will probably need to setup workload identity first to provide a more applicable SA for the pods
This doc https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity explains that if you enable workload identity on a running cluster and create a new node on that cluster which would have workload identity automatically enabled ,then you should configure your application to use workload identity before moving the pods over to the new node pool. How do you know if your application needs to be configured to use workload identity or not? I'm guessing it depends on what services your application currently makes use of and how they currently authenticate? What would the consequences be of not configuring them to use workload identity before moving them to the new node pool where workload identity is enabled? Some context. I'm busy enabling GKE backup, but to get this running I need to enable workload identity on the cluster and then obviously on the node pool too.
Your problem could be probably related to the directory structure you are defining across the different containers. It is unclear in your question but, according to your containers definitions, your git repository should contain at least dags and plugins as top level directories: / ├─ dags/ ├─ plugins/ This structure resembles a typical airflow folder structure: I assume, that is the one you configured. Then, please, try using this slightly modified version of your Kubernetes configuration: containers: - name: airflow-scheduler image: <redacted> imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: "AIRFLOW_SERVICE_NAME-env" env: <redacted> resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags subPath: dags - name: dags-data mountPath: /opt/airflow/plugins subPath: plugins - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config command: - "/usr/bin/dumb-init" - "--" args: <redacted> - name: git-sync image: registry.k8s.io/git-sync/git-sync:v3.6.5 args: - "-wait=60" - "-repo=<repo>" - "-branch=master" - "-root=/opt/airflow" - "-username=<redacted>" - "-password-file=/etc/git-secret/token" volumeMounts: - name: git-secret mountPath: /etc/git-secret readOnly: true - name: dags-data mountPath: /opt volumes: - name: scripts configMap: name: AIRFLOW_SERVICE_NAME-scripts defaultMode: 493 - name: dags-data emptyDir: {} - name: variables-pools configMap: name: AIRFLOW_SERVICE_NAME-variables-pools defaultMode: 493 - name: airflow-log-config configMap: name: airflow-log-configmap defaultMode: 493 - name: git-secret secret: secretName: github-token Note that we basically changed the root argument of the git-sync container removing /dags. If it doesn't work, please, try including and tweaking the value of the --dest git-sync flag, I think it could be of help as well.
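If tweaking --dest does turn out to be necessary, one possible shape is sketched below. This is purely illustrative (it assumes the repo's DAG files live at its top level and keeps the shared volume at /opt/airflow, letting git-sync maintain the dags entry inside it):

- name: git-sync
  image: registry.k8s.io/git-sync/git-sync:v3.6.5
  args:
    - "-wait=60"
    - "-repo=<repo>"
    - "-branch=master"
    - "-root=/opt/airflow"
    - "-dest=dags"            # git-sync keeps /opt/airflow/dags as a symlink to the checked-out revision
    - "-username=<redacted>"
    - "-password-file=/etc/git-secret/token"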
###Instruction: Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo. So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).   I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo. INFO: detected pid 1, running init handler I0411 20:50:31.009097 12 main.go:401] "level"=0 "msg"="starting up" "pid"=12 "args"=["/git-sync","-wait=60","-repo=https://github.com/jorgeavelar98/AirflowProject.git","-branch=master","-root=/opt/airflow/dags","-username=jorgeavelar98","-password-file=/etc/git-secret/token"] I0411 20:50:31.029064 12 main.go:950] "level"=0 "msg"="cloning repo" "origin"="https://github.com/jorgeavelar98/AirflowProject.git" "path"="/opt/airflow/dags" I0411 20:50:31.031728 12 main.go:956] "level"=0 "msg"="git root exists and is not empty (previous crash?), cleaning up" "path"="/opt/airflow/dags" I0411 20:50:31.894074 12 main.go:760] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.907256 12 main.go:800] "level"=0 "msg"="adding worktree" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "branch"="origin/master" I0411 20:50:31.911039 12 main.go:860] "level"=0 "msg"="reset worktree to hash" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.911065 12 main.go:865] "level"=0 "msg"="updating submodules"   However, despite their being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI This is our scheduler containers/volumes yaml definition for reference. 
We have something similar for workers and webserver containers:         - name: airflow-scheduler           image: <redacted>           imagePullPolicy: IfNotPresent           envFrom:             - configMapRef:                 name: "AIRFLOW_SERVICE_NAME-env"           env:                         <redacted>           resources:              requests:               memory: RESOURCE_MEMORY               cpu: RESOURCE_CPU           volumeMounts:             - name: scripts               mountPath: /home/airflow/scripts             - name: dags-data               mountPath: /opt/airflow/dags               subPath: dags             - name: dags-data               mountPath: /opt/airflow/plugins               subPath: plugins             - name: variables-pools               mountPath: /home/airflow/variables-pools/             - name: airflow-log-config               mountPath: /opt/airflow/config           command:             - "/usr/bin/dumb-init"             - "--"           args:             <redacted>         - name: git-sync           image: registry.k8s.io/git-sync/git-sync:v3.6.5           args:             - "-wait=60"             - "-repo=<repo>"             - "-branch=master"             - "-root=/opt/airflow/dags"             - "-username=<redacted>"             - "-password-file=/etc/git-secret/token"           volumeMounts:             - name: git-secret               mountPath: /etc/git-secret               readOnly: true             - name: dags-data               mountPath: /opt/airflow/dags       volumes:         - name: scripts           configMap:             name: AIRFLOW_SERVICE_NAME-scripts             defaultMode: 493         - name: dags-data           emptyDir: {}         - name: variables-pools           configMap:             name: AIRFLOW_SERVICE_NAME-variables-pools             defaultMode: 493         - name: airflow-log-config           configMap:             name: airflow-log-configmap             defaultMode: 493         - name: git-secret           secret:             secretName: github-token What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated! ### Response: Your problem could be probably related to the directory structure you are defining across the different containers. It is unclear in your question but, according to your containers definitions, your git repository should contain at least dags and plugins as top level directories: / ├─ dags/ ├─ plugins/ This structure resembles a typical airflow folder structure: I assume, that is the one you configured. 
Then, please, try using this slightly modified version of your Kubernetes configuration: containers: - name: airflow-scheduler image: <redacted> imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: "AIRFLOW_SERVICE_NAME-env" env: <redacted> resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags subPath: dags - name: dags-data mountPath: /opt/airflow/plugins subPath: plugins - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config command: - "/usr/bin/dumb-init" - "--" args: <redacted> - name: git-sync image: registry.k8s.io/git-sync/git-sync:v3.6.5 args: - "-wait=60" - "-repo=<repo>" - "-branch=master" - "-root=/opt/airflow" - "-username=<redacted>" - "-password-file=/etc/git-secret/token" volumeMounts: - name: git-secret mountPath: /etc/git-secret readOnly: true - name: dags-data mountPath: /opt volumes: - name: scripts configMap: name: AIRFLOW_SERVICE_NAME-scripts defaultMode: 493 - name: dags-data emptyDir: {} - name: variables-pools configMap: name: AIRFLOW_SERVICE_NAME-variables-pools defaultMode: 493 - name: airflow-log-config configMap: name: airflow-log-configmap defaultMode: 493 - name: git-secret secret: secretName: github-token Note that we basically changed the root argument of the git-sync container removing /dags. If it doesn't work, please, try including and tweaking the value of the --dest git-sync flag, I think it could be of help as well.
Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo. So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).   I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo. INFO: detected pid 1, running init handler I0411 20:50:31.009097 12 main.go:401] "level"=0 "msg"="starting up" "pid"=12 "args"=["/git-sync","-wait=60","-repo=https://github.com/jorgeavelar98/AirflowProject.git","-branch=master","-root=/opt/airflow/dags","-username=jorgeavelar98","-password-file=/etc/git-secret/token"] I0411 20:50:31.029064 12 main.go:950] "level"=0 "msg"="cloning repo" "origin"="https://github.com/jorgeavelar98/AirflowProject.git" "path"="/opt/airflow/dags" I0411 20:50:31.031728 12 main.go:956] "level"=0 "msg"="git root exists and is not empty (previous crash?), cleaning up" "path"="/opt/airflow/dags" I0411 20:50:31.894074 12 main.go:760] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.907256 12 main.go:800] "level"=0 "msg"="adding worktree" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "branch"="origin/master" I0411 20:50:31.911039 12 main.go:860] "level"=0 "msg"="reset worktree to hash" "path"="/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" "hash"="18d3c8e19fb9049b7bfca9cfd8fbadc032507e03" I0411 20:50:31.911065 12 main.go:865] "level"=0 "msg"="updating submodules"   However, despite their being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI This is our scheduler containers/volumes yaml definition for reference. 
We have something similar for workers and webserver containers:         - name: airflow-scheduler           image: <redacted>           imagePullPolicy: IfNotPresent           envFrom:             - configMapRef:                 name: "AIRFLOW_SERVICE_NAME-env"           env:                         <redacted>           resources:              requests:               memory: RESOURCE_MEMORY               cpu: RESOURCE_CPU           volumeMounts:             - name: scripts               mountPath: /home/airflow/scripts             - name: dags-data               mountPath: /opt/airflow/dags               subPath: dags             - name: dags-data               mountPath: /opt/airflow/plugins               subPath: plugins             - name: variables-pools               mountPath: /home/airflow/variables-pools/             - name: airflow-log-config               mountPath: /opt/airflow/config           command:             - "/usr/bin/dumb-init"             - "--"           args:             <redacted>         - name: git-sync           image: registry.k8s.io/git-sync/git-sync:v3.6.5           args:             - "-wait=60"             - "-repo=<repo>"             - "-branch=master"             - "-root=/opt/airflow/dags"             - "-username=<redacted>"             - "-password-file=/etc/git-secret/token"           volumeMounts:             - name: git-secret               mountPath: /etc/git-secret               readOnly: true             - name: dags-data               mountPath: /opt/airflow/dags       volumes:         - name: scripts           configMap:             name: AIRFLOW_SERVICE_NAME-scripts             defaultMode: 493         - name: dags-data           emptyDir: {}         - name: variables-pools           configMap:             name: AIRFLOW_SERVICE_NAME-variables-pools             defaultMode: 493         - name: airflow-log-config           configMap:             name: airflow-log-configmap             defaultMode: 493         - name: git-secret           secret:             secretName: github-token What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated!
I was able to resolve the issue with these 3 steps:
Step 1: Update your Fluent Bit image to the latest image: fluent/fluent-bit:2.1.1 (you can get the deployment file from here).
Step 2: Add "Suppress_Type_Name On" to output-elasticsearch.conf: output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} HTTP_User ${FLUENT_ELASTICSEARCH_USER} HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD} Logstash_Format On Replace_Dots On Retry_Limit False Suppress_Type_Name On
Step 3: Delete the Fluent Bit pods and reapply the DaemonSet: kubectl delete -f fluentbit-ds.yaml kubectl apply -f fluentbit-ds.yaml
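To confirm the fix took effect, tail the Fluent Bit logs again and check that the _type / HTTP 400 errors are gone. The logging namespace and label below are the defaults from the referenced manifests and may differ in your setup:

kubectl get pods -n logging -l k8s-app=fluent-bit-logging
kubectl logs -n logging -l k8s-app=fluent-bit-logging --tail=50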
###Instruction: i wan't to send pod log to ELK, buat after deploying fluentd i get Error, i got from tutorial Official Fluentd documentation EKS Version 1.22 i put Suppress_Type_Name On, it's not solved this issue [2022/06/20 16:23:07] [error] [output:es:es.0] HTTP status=400 URI=/_bulk, response: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"}],"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"},"status":400} my configmap fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log.containers. Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On Replace_Dots On Retry_Limit False ### Response: I was able to resolve the issue by these 3 steps, step 1. You need to update your fluentbit image to latest image: fluent/fluent-bit:2.1.1 You can get the deployment file from here step 2: Add "Suppress_Type_Name On" to output-elasticsearch.conf output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} HTTP_User ${FLUENT_ELASTICSEARCH_USER} HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD} Logstash_Format On Replace_Dots On Retry_Limit False Suppress_Type_Name On Step 3. Delete the fluentbit pods and reapply it kubectl delete -f fluentbit-ds.yaml kubectl apply -f fluentbit-ds.yaml
i wan't to send pod log to ELK, buat after deploying fluentd i get Error, i got from tutorial Official Fluentd documentation EKS Version 1.22 i put Suppress_Type_Name On, it's not solved this issue [2022/06/20 16:23:07] [error] [output:es:es.0] HTTP status=400 URI=/_bulk, response: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"}],"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"},"status":400} my configmap fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log.containers. Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On Replace_Dots On Retry_Limit False
Your sample: ... "resources":{"requests":{ "ephemeral-storage":"116Mi"}, # <-- incorrect ... Try: ... "resources":{ "requests":{ "ephemeral-storage": "116Mi", "memory": "50Mi", "cpu": "50m" }, ... Here's a working example that runs a pod for 5m. Do kubectl describe pod busybox and check the Requests section: kubectl run busybox --image busybox --overrides ' { "apiVersion": "v1", "spec": { "restartPolicy": "Never", "containers": [ { "name": "busybox", "image": "busybox", "command": ["ash", "-c", "sleep 300"], "resources": { "requests":{ "ephemeral-storage": "116Mi", "memory": "50Mi", "cpu": "50m" }}}]}}'
###Instruction: I'm invoking Kaniko (I Docker image that can build Docker images) successfully in this way (EKS environment): cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --stdin --rm --restart=Never --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --env=AWS_SDK_LOAD_CONFIG=true -- --destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-54-DV-6525-kube --context tar://stdin --label commit=8e3a236f702c689891a50a60acf7e05658fa3939 --label build_url=Sin-Jenkins This works ok, except when there is no enough ephemeral storage available. Now I want to specify limits, like ephemereal-storage. As the --limits option has been removed in recent versions of Kubernetes, I have to use the --overrides, and I have to change many things. Here is how I do it: cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --restart=Never -i --rm --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --overrides='{"apiVersion":"v1", "spec": {"containers":[{ "name":"kaniko", "stdin": true, "restartPolicy":"Never", "image":"748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0", "env":[{"name":"AWS_SDK_LOAD_CONFIG","value":"true"}], "resources":{"requests":{ "ephemeral-storage":"116Mi", "memory": "50Mi", "cpu": "50m" }}, "args":[ "--destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-57-DV-6525-kube", "--context","tar://stdin" ,"--label","commit=8e3a236f702c689891a50a60acf7e05658fa3939","--label","build_url=Sin-Jenkins"] }]}' It launches correctly kaniko, but after 65 seconds, kaniko is killed and the only error message is: error: timed out waiting for the condition $ kubectl get ev | grep kaniko 19m Normal Scheduled pod/kaniko-httpd-ex Successfully assigned sbt-david/kaniko-httpd-ex to ip-10-120-134-171.eu-west-1.compute.internal 19m Normal Pulled pod/kaniko-httpd-ex Container image "748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0" already present on machine 19m Normal Created pod/kaniko-httpd-ex Created container kaniko 19m Normal Started pod/kaniko-httpd-ex Started container kaniko 17m Normal Killing pod/kaniko-httpd-ex Stopping container kaniko How to specify limits on resources when using stdin? Related info: https://github.com/kubernetes/kubectl/issues/1101 Notes: I cannot use the --limits flag from kubectl run as it has been removed recently. ### Response: Your sample: ... "resources":{"requests":{ "ephemeral-storage":"116Mi"}, # <-- incorrect ... Try: ... "resources":{ "requests":{ "ephemeral-storage": "116Mi", "memory": "50Mi", "cpu": "50m" }, ... Here's a working example that runs a pod for 5m. Do kubectl describe pod busybox and check the Requests section: kubectl run busybox --image busybox --overrides ' { "apiVersion": "v1", "spec": { "restartPolicy": "Never", "containers": [ { "name": "busybox", "image": "busybox", "command": ["ash", "-c", "sleep 300"], "resources": { "requests":{ "ephemeral-storage": "116Mi", "memory": "50Mi", "cpu": "50m" }}}]}}'
I'm invoking Kaniko (I Docker image that can build Docker images) successfully in this way (EKS environment): cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --stdin --rm --restart=Never --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --env=AWS_SDK_LOAD_CONFIG=true -- --destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-54-DV-6525-kube --context tar://stdin --label commit=8e3a236f702c689891a50a60acf7e05658fa3939 --label build_url=Sin-Jenkins This works ok, except when there is no enough ephemeral storage available. Now I want to specify limits, like ephemereal-storage. As the --limits option has been removed in recent versions of Kubernetes, I have to use the --overrides, and I have to change many things. Here is how I do it: cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --restart=Never -i --rm --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --overrides='{"apiVersion":"v1", "spec": {"containers":[{ "name":"kaniko", "stdin": true, "restartPolicy":"Never", "image":"748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0", "env":[{"name":"AWS_SDK_LOAD_CONFIG","value":"true"}], "resources":{"requests":{ "ephemeral-storage":"116Mi", "memory": "50Mi", "cpu": "50m" }}, "args":[ "--destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-57-DV-6525-kube", "--context","tar://stdin" ,"--label","commit=8e3a236f702c689891a50a60acf7e05658fa3939","--label","build_url=Sin-Jenkins"] }]}' It launches correctly kaniko, but after 65 seconds, kaniko is killed and the only error message is: error: timed out waiting for the condition $ kubectl get ev | grep kaniko 19m Normal Scheduled pod/kaniko-httpd-ex Successfully assigned sbt-david/kaniko-httpd-ex to ip-10-120-134-171.eu-west-1.compute.internal 19m Normal Pulled pod/kaniko-httpd-ex Container image "748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0" already present on machine 19m Normal Created pod/kaniko-httpd-ex Created container kaniko 19m Normal Started pod/kaniko-httpd-ex Started container kaniko 17m Normal Killing pod/kaniko-httpd-ex Stopping container kaniko How to specify limits on resources when using stdin? Related info: https://github.com/kubernetes/kubectl/issues/1101 Notes: I cannot use the --limits flag from kubectl run as it has been removed recently.
apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prod-demo-ingress namespace: prod labels: env: prod app: demo-ms layer: frontend annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: demo.abc.in http: paths: - pathType: Prefix path: "/" backend: service: name: prod-demo-service port: number: 8080
###Instruction: Getting below error while trying to create an ingress endpoint on k8s v1.26.3. i have created a k8s cluster using kops. Error from server (BadRequest): error when creating "ingress.yaml": Ingress in version "v1" cannot be handled as a Ingress: strict decoding error: unknown field "spec.rules[0].http.paths[0].backend.serviceName", unknown field "spec.rules[0].http.paths[0].backend.servicePort" apiVersion: networking.k8s.io/v1 kind: Ingress metadata: labels: env: prod app: demo-ms layer: frontend name: prod-demo-ingress namespace: prod annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: demo.abc.in http: paths: - backend: serviceName: prod-demo-service servicePort: 8080 path: / ~~~ ### Response: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prod-demo-ingress namespace: prod labels: env: prod app: demo-ms layer: frontend annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: demo.abc.in http: paths: - pathType: Prefix path: "/" backend: service: name: prod-demo-service port: number: 8080
Getting below error while trying to create an ingress endpoint on k8s v1.26.3. i have created a k8s cluster using kops. Error from server (BadRequest): error when creating "ingress.yaml": Ingress in version "v1" cannot be handled as a Ingress: strict decoding error: unknown field "spec.rules[0].http.paths[0].backend.serviceName", unknown field "spec.rules[0].http.paths[0].backend.servicePort" apiVersion: networking.k8s.io/v1 kind: Ingress metadata: labels: env: prod app: demo-ms layer: frontend name: prod-demo-ingress namespace: prod annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: demo.abc.in http: paths: - backend: serviceName: prod-demo-service servicePort: 8080 path: / ~~~
First you create a Service for your application, using the default Service type (ClusterIP). Then you create an ingress-nginx service of type LoadBalancer. You also need to create some Ingress rules that map URL paths (or hostnames) to your application's Service, and set the annotation on the ingress-nginx controller for your zone (for example fsn1). If you are using the Cluster API provider Hetzner from Syself (which I recommend), the docs should guide you. Please create an issue if the docs are missing something. (I work for Syself)
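A rough sketch of those pieces follows. Hostnames, service names and the fsn1 location are placeholders, and the annotation shown is the hcloud cloud-controller-manager one, assuming you expose the ingress-nginx controller with a LoadBalancer Service:

# Service that exposes the ingress-nginx controller via a Hetzner load balancer
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    load-balancer.hetzner.cloud/location: fsn1
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
---
# Ingress rule routing one client hostname to its ClusterIP Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client1
spec:
  ingressClassName: nginx
  rules:
  - host: client1.cluster.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: client1-app
            port:
              number: 80

With a wildcard DNS record for *.cluster.example.com pointing at the load balancer's IP, each client hostname then reaches the same ingress controller.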
###Instruction: I'm fairly new to Kubernetes and having some problems understanding some things. I'm setting up a cluster on Hetzner Cloud for client applications. In order to make the ingresses of the applications work, I deployed Hetzner Cloud Controller Manager. My question is, how do I expose the applications so i can access them from the outside on client1.cluster.example.com, client2.cluster.example.com, etc... The Hetzner loadbalancer that the cloud control manager manages, forwards incoming requests from ports 443 (on the loadbalancer) to 30500 (on the ingress controller) and 80 to 30379. Is this possible? If so, do i need to set up a DNS record for *.cluster.example.com? How do i do this with ssl certificates, how does this work? any help will be appreciated Thank you :) I tried searching for these things on the internet, but since my lack of experience I couldn't find out whether the situations were applicable to my situation. ### Response: First you create a service for your application. You use the default service type (ClusterIP). Then you create an ingres-ngnix service of type load balancer. And you need to create some Ingress rules to make URL paths to the service of your application. And you need to set the annotation for the Ingres nginx controller to your zone (for example fsn1). If you are using cluster API provider hetzner from Syself (which I recommend), then the docs should guide you. Please create an issue if the docs are missing something. (I work for Syself)
I'm fairly new to Kubernetes and having some problems understanding some things. I'm setting up a cluster on Hetzner Cloud for client applications. In order to make the ingresses of the applications work, I deployed Hetzner Cloud Controller Manager. My question is, how do I expose the applications so i can access them from the outside on client1.cluster.example.com, client2.cluster.example.com, etc... The Hetzner loadbalancer that the cloud control manager manages, forwards incoming requests from ports 443 (on the loadbalancer) to 30500 (on the ingress controller) and 80 to 30379. Is this possible? If so, do i need to set up a DNS record for *.cluster.example.com? How do i do this with ssl certificates, how does this work? any help will be appreciated Thank you :) I tried searching for these things on the internet, but since my lack of experience I couldn't find out whether the situations were applicable to my situation.
There is no additional cost for running the Ingress controller itself on GCP; however, you have to pay for the HTTP Load Balancer that the Ingress controller creates. This is the same for all ingress controllers you might use: GCP, Nginx, Apache. Gateway is the same as well: its controller only provides more flexibility, but if you are leveraging Managed SSL Certificates and similar features with Gateway, you might have to pay for those services too. In case you would like to read more about Ingress vs Gateway, check out my article: https://medium.com/google-cloud/kubernetes-ingress-vs-gateway-api-647ee233693d
###Instruction: I was wondering if the ingress controller of GKE comes either any additional costs, like a Google Cloud LoadBalancer, and if so is it possible to expose services with an ingress controller without the Google Cloud LoadBalancer. The same question is for GKE Gateway API. I have tried to look at several pieces of documentation online and it is very contradicting. ### Response: There is no additional cost running the Ingress controller with GCP however you have to pay for the HTTP LoadBalancer that Ingress controller create. It will be the same for all ingress controllers you use GCP, Nginx, Apache. While Gateway also it's the same, it's controller only provides more flexibility however with Gateway if you are leveraging Managed SSL Certificates, etc then you might have to pay for those services. Incase if you would like to read more about Ingress vs Gateway checkout my article : https://medium.com/google-cloud/kubernetes-ingress-vs-gateway-api-647ee233693d
I was wondering if the ingress controller of GKE comes either any additional costs, like a Google Cloud LoadBalancer, and if so is it possible to expose services with an ingress controller without the Google Cloud LoadBalancer. The same question is for GKE Gateway API. I have tried to look at several pieces of documentation online and it is very contradicting.
Before deleting the pods, check why they are failing to create. If it is due to the node running low on resources (disk or memory), follow the steps below:
If you are using Docker, run docker system prune -a to clean up the space taken by Docker so the node gets some free space, then drain the node and restart Docker.
By default a container can write any amount of data to the node's filesystem; set a quota (limits.ephemeral-storage, requests.ephemeral-storage) to limit this. You may also need to increase storage, as Kubernetes itself needs free space.
For a Deployment: scale the deployment down so that new pods are not created while the failed ones are cleaned up (when you scale down, Kubernetes deletes the pods), then scale it back up so Kubernetes creates fresh replicas.
You can also delete all pods that are in the Failed phase without scaling the deployment by running the command below:
kubectl delete pod --field-selector=status.phase==Failed
You can find more methods for deleting pods in this blog by Oren Ninio. In general it is recommended to use an autoscaler to manage deployments.
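A minimal sketch of the scale-down/scale-up sequence and of an ephemeral-storage quota; deployment names and sizes are placeholders to adjust for your workloads:

# Stop new replicas from being created, then bring the deployment back
kubectl scale deployment <deployment-name> --replicas=0
kubectl scale deployment <deployment-name> --replicas=<desired-count>

# Example container resources block limiting ephemeral storage
resources:
  requests:
    ephemeral-storage: "500Mi"
  limits:
    ephemeral-storage: "1Gi"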
###Instruction: When low on resources kubernetes starts to re-create pods but newer pods also fail, so they keep growing in number. The cluster becomes unusable. This seems an illogical behaviour. Is it possible to prevent it ? Is it possible to recover without deleting everything ? light@o-node0:~/lh-orchestrator$ k get pod NAME READY STATUS RESTARTS AGE aa344-detect-5cd757f65d-8kz2n 0/1 ContainerStatusUnknown 536 (62m ago) 46h bb756-detect-855f6bcc78-jnfzd 0/1 ContainerStatusUnknown 8 (59m ago) 75m aa344-analyz-5cc6c59d6c-rchkm 0/1 ContainerStatusUnknown 1 46h lh-graphql-77fc996db5-8qcxl 0/1 ContainerStatusUnknown 1 (2d ago) 2d lh-pgadmin-5b598d4d4-shjbz 0/1 ContainerStatusUnknown 1 2d bb756-analyz-8cd7c48f7-k2xh9 0/1 ContainerStatusUnknown 1 75m lh-postgres-698bc448bd-9vkqp 0/1 ContainerStatusUnknown 1 2d lh-pgadmin-5b598d4d4-c4ts4 0/1 ContainerStatusUnknown 1 54m lh-graphql-77fc996db5-btvzx 0/1 ContainerStatusUnknown 1 54m lh-postgres-698bc448bd-99m55 0/1 ContainerStatusUnknown 1 54m aa344-detect-5cd757f65d-qmvcc 0/1 ContainerStatusUnknown 1 58m bb756-detect-855f6bcc78-7lc7g 0/1 ContainerStatusUnknown 1 56m lh-graphql-77fc996db5-7lbms 1/1 Running 0 34m lh-pgadmin-5b598d4d4-l6f7s 0/1 ContainerStatusUnknown 1 34m aa344-analyz-5cc6c59d6c-78ltt 0/1 ContainerStatusUnknown 1 (17m ago) 55m lh-postgres-698bc448bd-gjbf2 0/1 ContainerStatusUnknown 1 34m aa344-detect-5cd757f65d-cbspd 0/1 ContainerStatusUnknown 1 33m bb756-detect-855f6bcc78-qvqsf 0/1 ContainerStatusUnknown 1 32m lh-pgadmin-5b598d4d4-4znww 1/1 Running 0 17m lh-postgres-698bc448bd-xxm28 1/1 Running 0 16m aa344-analyz-5cc6c59d6c-h7vfc 1/1 Running 3 (9m41s ago) 16m bb756-analyz-8cd7c48f7-4tdcp 1/1 Running 7 (10m ago) 54m bb756-detect-855f6bcc78-fgpzx 0/1 Pending 0 2s bb756-detect-855f6bcc78-t4p4q 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-cd6gl 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-dwhf6 0/1 Pending 0 1s ### Response: Before deleting the pods check why the pods are failing to create, if it is due to memory follow below steps: If you are using docker run docker system prune -a to clean up some space taken by docker and then the node will get some space, drain the node and restart docker Any container will write any amount of storage to the filesystem. set a quota (limits.ephemeral-storage, requests.ephemeral-storage) to limit this. You may need to increase storage as kubernetes need more space. For Deployment: scale down the deployment so that if pods are deleted new pods will not try to create. If you scale down Kubernetes will delete the pods. Now scale up the deployment so kubernetes creates new replicas of the pod that the previous command. You can also delete all the pods which are in failed phase without scaling the deployment by running below command: kubectl delete pod --field-selector=status.phase==Failed You can find more methods regarding deleting the pod in this blog by Oren Ninio Generally it is recommended to use an autoscaler to manage deployments.
When low on resources kubernetes starts to re-create pods but newer pods also fail, so they keep growing in number. The cluster becomes unusable. This seems an illogical behaviour. Is it possible to prevent it ? Is it possible to recover without deleting everything ? light@o-node0:~/lh-orchestrator$ k get pod NAME READY STATUS RESTARTS AGE aa344-detect-5cd757f65d-8kz2n 0/1 ContainerStatusUnknown 536 (62m ago) 46h bb756-detect-855f6bcc78-jnfzd 0/1 ContainerStatusUnknown 8 (59m ago) 75m aa344-analyz-5cc6c59d6c-rchkm 0/1 ContainerStatusUnknown 1 46h lh-graphql-77fc996db5-8qcxl 0/1 ContainerStatusUnknown 1 (2d ago) 2d lh-pgadmin-5b598d4d4-shjbz 0/1 ContainerStatusUnknown 1 2d bb756-analyz-8cd7c48f7-k2xh9 0/1 ContainerStatusUnknown 1 75m lh-postgres-698bc448bd-9vkqp 0/1 ContainerStatusUnknown 1 2d lh-pgadmin-5b598d4d4-c4ts4 0/1 ContainerStatusUnknown 1 54m lh-graphql-77fc996db5-btvzx 0/1 ContainerStatusUnknown 1 54m lh-postgres-698bc448bd-99m55 0/1 ContainerStatusUnknown 1 54m aa344-detect-5cd757f65d-qmvcc 0/1 ContainerStatusUnknown 1 58m bb756-detect-855f6bcc78-7lc7g 0/1 ContainerStatusUnknown 1 56m lh-graphql-77fc996db5-7lbms 1/1 Running 0 34m lh-pgadmin-5b598d4d4-l6f7s 0/1 ContainerStatusUnknown 1 34m aa344-analyz-5cc6c59d6c-78ltt 0/1 ContainerStatusUnknown 1 (17m ago) 55m lh-postgres-698bc448bd-gjbf2 0/1 ContainerStatusUnknown 1 34m aa344-detect-5cd757f65d-cbspd 0/1 ContainerStatusUnknown 1 33m bb756-detect-855f6bcc78-qvqsf 0/1 ContainerStatusUnknown 1 32m lh-pgadmin-5b598d4d4-4znww 1/1 Running 0 17m lh-postgres-698bc448bd-xxm28 1/1 Running 0 16m aa344-analyz-5cc6c59d6c-h7vfc 1/1 Running 3 (9m41s ago) 16m bb756-analyz-8cd7c48f7-4tdcp 1/1 Running 7 (10m ago) 54m bb756-detect-855f6bcc78-fgpzx 0/1 Pending 0 2s bb756-detect-855f6bcc78-t4p4q 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-cd6gl 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-dwhf6 0/1 Pending 0 1s
Curl Magic and trial and error. Make sure that the user has the right permissions so you don't get the 403. curl -k \ --http1.1 \ -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \ -H "Sec-WebSocket-Version: 13" \ -i \ -N \ -L \ -k \ -H "Connection: Upgrade" \ -H "Upgrade: websocket" \ -H "Authorization: Bearer $TOKEN" \ "https://127.0.0.1:53329/api/v1/namespaces/default/pods/single-wrong-arch/exec?command=ls&command=-l&stdin=true&stdout=true&stderr=true" HTTP/1.1 101 Switching Protocols Upgrade: websocket Connection: Upgrade Sec-WebSocket-Accept: qGEgH3En71di5rrssAZTmtRTyFk= Sec-WebSocket-Protocol: total 56 ~drwxr-xr-x 2 root root 4096 Mar 29 14:45 bin drwxr-xr-x 5 root root 360 Apr 27 11:03 dev drwxr-xr-x 1 root root 4096 Apr 27 11:03 etc drwxr-xr-x 2 root root 4096 Mar 29 14:45 home drwxr-xr-x 7 root root 4096 Mar 29 14:45 lib drwxr-xr-x 5 root root 4096 Mar 29 14:45 media drwxr-xr-x 2 root root 4096 Mar 29 14:45 mnt drwxr-xr-x 2 root root 4096 Mar 29 14:45 opt dr-xr-xr-x 248 root root 0 Apr 27 11:03 proc drwx------ 2 root root 4096 Mar 29 14:45 root drwxr-xr-x 1 root root 4096 Apr 27 11:03 run drwxr-xr-x 2 root root 4096 Mar 29 14:45 sbin drwxr-xr-x 2 root root 4096 Mar 29 14:45 srv dr-xr-xr-x 13 root root 0 Apr 27 11:03 sys drwxrwxrwt 2 root root 4096 Mar 29 14:45 tmp drwxr-xr-x 7 root root 4096 Mar 29 14:45 usr drwxr-xr-x 12 root root 4096 Mar 29 14:45 var That's against a minikube cluster, which should explain the port choice and here's the rest of the environemt: $ curl --version curl 7.87.0 (x86_64-apple-darwin22.0) libcurl/7.87.0 (SecureTransport) LibreSSL/3.3.6 zlib/1.2.11 nghttp2/1.51.0 Release-Date: 2022-12-21 Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp smb smbs smtp smtps telnet tftp Features: alt-svc AsynchDNS GSS-API HSTS HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL threadsafe UnixSockets $ minikube version minikube version: v1.30.1 commit: 08896fd1dc362c097c925146c4a0d0dac715ace0 $ kubectl version WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.0", GitCommit:"1b4df30b3cdfeaba6024e81e559a6cd09a089d65", GitTreeState:"clean", BuildDate:"2023-04-11T17:04:23Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"darwin/arm64"} Kustomize Version: v5.0.1 Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:12Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/arm64"} $ uname -a Darwin 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103 arm64
###Instruction: While using Kubernetes API, I came across this issue where I am not able to pod exec into another pod using the API. I made sure I am running a pod with a serviceAccount that has correct RBAC permissions (and both pods are in the same namespace). I tried with curl (7.74.0) but I stumbled upon this SO answer that curl may not be the right utility to do this. I have already tried other possible options (this and this) mentioned in the comments in the same post, but no luck. TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace) curl -k -H "Upgrade: SPDY/3.1" -H "Authorization: Bearer $TOKEN" -H "Connection: Upgrade" -H "Sec-WebSocket-Version: 13" -H "Accept: application/json" "https://kuberne tes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls" { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Upgrade request required", "reason": "BadRequest", "code": 400 } Have already tried adding/removing few of those headers. The next thing I tried was wscat (5.2.0). Tried a couple of variations, but needless to say, "no luck" again. wscat -n -H "Authorization: Bearer $TOKEN" -c wss://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls error: Unexpected server response: 403 This command outputs nothing (at all): wscat -c wss://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&stdin=true&stdout=true&tty=true&token=$TOKEN&namespace=$NAMESPACE I looked at this SO post, but it doesn't work for me. wscat -n -H "Authorization: Bearer $TOKEN" -c 'https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&stdin=true&stdout=true&tty=true' ... SyntaxError: The URL's protocol must be one of "ws:", "wss:", or "ws+unix:" ... Not sure how the fella even managed to use http(s) in the first place. Can someone kindly advise if there's something wrong with what I doing or an alternative (can't use kubectl) to do the same? TIA ### Response: Curl Magic and trial and error. Make sure that the user has the right permissions so you don't get the 403. 
curl -k \ --http1.1 \ -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \ -H "Sec-WebSocket-Version: 13" \ -i \ -N \ -L \ -k \ -H "Connection: Upgrade" \ -H "Upgrade: websocket" \ -H "Authorization: Bearer $TOKEN" \ "https://127.0.0.1:53329/api/v1/namespaces/default/pods/single-wrong-arch/exec?command=ls&command=-l&stdin=true&stdout=true&stderr=true" HTTP/1.1 101 Switching Protocols Upgrade: websocket Connection: Upgrade Sec-WebSocket-Accept: qGEgH3En71di5rrssAZTmtRTyFk= Sec-WebSocket-Protocol: total 56 ~drwxr-xr-x 2 root root 4096 Mar 29 14:45 bin drwxr-xr-x 5 root root 360 Apr 27 11:03 dev drwxr-xr-x 1 root root 4096 Apr 27 11:03 etc drwxr-xr-x 2 root root 4096 Mar 29 14:45 home drwxr-xr-x 7 root root 4096 Mar 29 14:45 lib drwxr-xr-x 5 root root 4096 Mar 29 14:45 media drwxr-xr-x 2 root root 4096 Mar 29 14:45 mnt drwxr-xr-x 2 root root 4096 Mar 29 14:45 opt dr-xr-xr-x 248 root root 0 Apr 27 11:03 proc drwx------ 2 root root 4096 Mar 29 14:45 root drwxr-xr-x 1 root root 4096 Apr 27 11:03 run drwxr-xr-x 2 root root 4096 Mar 29 14:45 sbin drwxr-xr-x 2 root root 4096 Mar 29 14:45 srv dr-xr-xr-x 13 root root 0 Apr 27 11:03 sys drwxrwxrwt 2 root root 4096 Mar 29 14:45 tmp drwxr-xr-x 7 root root 4096 Mar 29 14:45 usr drwxr-xr-x 12 root root 4096 Mar 29 14:45 var That's against a minikube cluster, which should explain the port choice and here's the rest of the environemt: $ curl --version curl 7.87.0 (x86_64-apple-darwin22.0) libcurl/7.87.0 (SecureTransport) LibreSSL/3.3.6 zlib/1.2.11 nghttp2/1.51.0 Release-Date: 2022-12-21 Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp smb smbs smtp smtps telnet tftp Features: alt-svc AsynchDNS GSS-API HSTS HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL threadsafe UnixSockets $ minikube version minikube version: v1.30.1 commit: 08896fd1dc362c097c925146c4a0d0dac715ace0 $ kubectl version WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.0", GitCommit:"1b4df30b3cdfeaba6024e81e559a6cd09a089d65", GitTreeState:"clean", BuildDate:"2023-04-11T17:04:23Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"darwin/arm64"} Kustomize Version: v5.0.1 Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:12Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/arm64"} $ uname -a Darwin 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103 arm64
While using Kubernetes API, I came across this issue where I am not able to pod exec into another pod using the API. I made sure I am running a pod with a serviceAccount that has correct RBAC permissions (and both pods are in the same namespace). I tried with curl (7.74.0) but I stumbled upon this SO answer that curl may not be the right utility to do this. I have already tried other possible options (this and this) mentioned in the comments in the same post, but no luck. TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace) curl -k -H "Upgrade: SPDY/3.1" -H "Authorization: Bearer $TOKEN" -H "Connection: Upgrade" -H "Sec-WebSocket-Version: 13" -H "Accept: application/json" "https://kuberne tes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls" { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Upgrade request required", "reason": "BadRequest", "code": 400 } Have already tried adding/removing few of those headers. The next thing I tried was wscat (5.2.0). Tried a couple of variations, but needless to say, "no luck" again. wscat -n -H "Authorization: Bearer $TOKEN" -c wss://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls error: Unexpected server response: 403 This command outputs nothing (at all): wscat -c wss://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&stdin=true&stdout=true&tty=true&token=$TOKEN&namespace=$NAMESPACE I looked at this SO post, but it doesn't work for me. wscat -n -H "Authorization: Bearer $TOKEN" -c 'https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&stdin=true&stdout=true&tty=true' ... SyntaxError: The URL's protocol must be one of "ws:", "wss:", or "ws+unix:" ... Not sure how the fella even managed to use http(s) in the first place. Can someone kindly advise if there's something wrong with what I doing or an alternative (can't use kubectl) to do the same? TIA
gRPC uses DNS as the default name-resolution system. The following target format is supported for IPv4 addresses: ipv4:address[:port][,address[:port],...] -- IPv4 addresses. Here you can specify multiple comma-delimited addresses of the form address[:port], where address is the IPv4 address to use and port is the port to use (443 if not specified). A gRPC target must resolve to host:port pairs and cannot contain a URL path, which is why you can reach the grpc service via 192.168.70.1:30378 but name resolution fails for dns:192.168.70.1/st. You can refer to the linked documentation for more information.
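As a quick sanity check, a hedged example with grpcurl (assuming grpcurl is installed and the server has gRPC reflection enabled; the service/method name below is hypothetical, only the package name "test" comes from the question):

# list the services exposed on the NodePort (host:port is a valid gRPC target)
grpcurl -plaintext 192.168.70.1:30378 list

# call a method; the gRPC request path (/package.Service/Method) is derived from
# the proto definition, so it cannot be remapped to an arbitrary URL prefix like /st
grpcurl -plaintext 192.168.70.1:30378 test.SomeService/SomeMethod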
###Instruction: I have a grpc service, here is the yaml: apiVersion: apps/v1 kind: Deployment metadata: name: staging labels: app: staging spec: replicas: 4 selector: matchLabels: app: staging template: metadata: labels: app: staging spec: containers: - name: staging image: ... imagePullPolicy: Always ports: - containerPort: 5274 - containerPort: 5900 --- apiVersion: v1 kind: Service metadata: name: staging-service spec: type: NodePort selector: app: staging ports: - name: staging protocol: TCP port: 5274 targetPort: 5274 nodePort: 30277 - name : staging protocol: TCP port: 5900 targetPort: 5900 nodePort: 30278 As you can see, the grpc is on 5900 port, now I have a ingres yaml: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: k8s-rpc-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/backend-protocol: "GRPC" spec: ingressClassName: nginx rules: -http: paths: - path: /st(/|$)(.*) pathType: Prefix backend: service: name: staging-service port: number: 5900 now, the k8s master is 192.168.70.1, so if I access 192.168.70.1/st in my nestjs project like: url: ‘192.168.70.1/st’, package: ‘test’, I will get the error: details: ‘Name resolution failed for target dns:192.168.70.1/st’, if I access the grpc service via 192.168.70.1:30378 everything is fine. Am I missing something here? Thank you ### Response: gRPC supports DNS as the default name-system.The following format is supported which is related to IPV4 address. ipv4:address[:port][,address[:port],...] -- IPv4 addresses Here,you can specify multiple comma-delimited addresses of the form address[:port]: address is the IPv4 address to use. port is the port to use. If not specified, 443 is used. This is the reason you are able to access the grpc service via 192.168.70.1:30378 You can refer the link for more useful information.
I have a grpc service, here is the yaml: apiVersion: apps/v1 kind: Deployment metadata: name: staging labels: app: staging spec: replicas: 4 selector: matchLabels: app: staging template: metadata: labels: app: staging spec: containers: - name: staging image: ... imagePullPolicy: Always ports: - containerPort: 5274 - containerPort: 5900 --- apiVersion: v1 kind: Service metadata: name: staging-service spec: type: NodePort selector: app: staging ports: - name: staging protocol: TCP port: 5274 targetPort: 5274 nodePort: 30277 - name : staging protocol: TCP port: 5900 targetPort: 5900 nodePort: 30278 As you can see, the grpc is on 5900 port, now I have a ingres yaml: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: k8s-rpc-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/backend-protocol: "GRPC" spec: ingressClassName: nginx rules: -http: paths: - path: /st(/|$)(.*) pathType: Prefix backend: service: name: staging-service port: number: 5900 now, the k8s master is 192.168.70.1, so if I access 192.168.70.1/st in my nestjs project like: url: ‘192.168.70.1/st’, package: ‘test’, I will get the error: details: ‘Name resolution failed for target dns:192.168.70.1/st’, if I access the grpc service via 192.168.70.1:30378 everything is fine. Am I missing something here? Thank you
Disclaimer: At the time of writing I am not an expert on Kubernetes internals nor do I have any knowledge about Go. (However, I am familiar with the general functioning of K8s components and also with high/low level programming languages...) After about 10 min of research I've gained satisfactory answers: Question what units do we see here after the period? Are these nanoseconds? Short answer The short answer lies in time.go. Long answer The Kubernetes event regarding "Successfully pulled image <image-name> in <time>" is generated by the kubelet component. Kubelet is basically responsible for managing (the life cycle of) pods and their containers. It generates events for various stages of the container's execution, including the image pulling. Thus the event you mentioned is generated by the kubelet during the image pulling stage of container execution. (At least I knew that part already). You seem to have a kubelet <= release-1.24 in use. The corresponding event is generated here: ..., fmt.Sprintf("Successfully pulled image %q in %v", container.Image, time.Since(startTime)), ... The duration string comes from the time.Since(startTime) (Duration) statement. The fmt.Sprintf calls the func (d Duration) String() string method on that Duration value, which appears to produce a string output according to the following criteria: If the duration is less than one second, it is displayed with smaller units such as "12.345µs" or "678ns". If the duration is between one second and one minute, it is displayed in seconds, such as "5s" or "42.123456s". If the duration is between one minute and one hour, it is displayed in minutes and seconds, such as "3m45s" or "59m59s". If the duration is more than one hour, it is displayed in hours, minutes, and seconds, such as "1h42m" or "12h34m56s". So the digits after the period are fractional seconds (recorded with nanosecond precision), not a separate nanosecond field. Maybe you can develop suitable patterns in Splunk to transform this format into the desired time unit for visualization (using if-else conditions may also help, e.g. value includes "h", "m" and "s"? then use the regex (\d+)h(\d+)m(\d+\.?\d*)s to extract values from e.g. "2h10m10.100152244s"). Side note Newer versions of kubelet (> release-1.24) seem to use a slightly different log ..., fmt.Sprintf("Successfully pulled image %q in %v (%v including waiting)", container.Image, imagePullResult.pullDuration, time.Since(startTime)), ... e.g. Successfully pulled image "xyz" in 8.831719579s (8.831722421s including waiting)
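If it helps, here is a rough shell sketch (assumed to cover the duration formats shown above: ns, µs/us, ms, and the h/m/s combinations) that converts such a Go duration string into plain seconds before visualization:

to_seconds() {
  local d="$1" total=0
  case "$d" in
    *ns)      awk -v v="${d%ns}" 'BEGIN { printf "%.9f\n", v / 1e9 }'; return ;;
    *µs|*us)  d="${d%µs}"; d="${d%us}"
              awk -v v="$d" 'BEGIN { printf "%.9f\n", v / 1e6 }'; return ;;
    *ms)      awk -v v="${d%ms}" 'BEGIN { printf "%.9f\n", v / 1e3 }'; return ;;
  esac
  # remaining formats: [Nh][Nm]N.Ns
  if [[ "$d" == *h* ]]; then total=$(( ${d%%h*} * 3600 )); d="${d#*h}"; fi
  if [[ "$d" == *m* ]]; then total=$(( total + ${d%%m*} * 60 )); d="${d#*m}"; fi
  awk -v t="$total" -v v="${d%s}" 'BEGIN { printf "%.9f\n", t + v }'
}

to_seconds "1.100152244s"    # 1.100152244
to_seconds "4m4.004131504s"  # 244.004131504
to_seconds "64.10064ms"      # 0.064100640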
###Instruction: I'm looking at the Kubernetes logs we see in our log aggregation tool (Splunk, but not important), and I'm trying to visualise image pull speeds over the day. message that appears in the logs is something like: Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.10" in 1.100152244s The value 1.100152244s is a string so obviously doesn't work for visualisation. The string itself can appear in multiple formats: 1.100152244s 4m4.004131504s 64.10064ms Silly question but what units do we see here after the period? Are these nanoseconds? Many thanks in advance ### Response: Disclaimer: At the time of writing I am not an expert on Kubernetes internals nor do I have any knowledge about Go. (However, I am familiar with the general functioning of K8s components and also with high/low level programming languages...) After about 10 min of research I've gained satisfactory answers: Question what units do we see here after the period? Are these nanoseconds? Short answer The short answer lies in time.go. Long answer The Kubernetes event regarding "Successfully pulled image <image-name> in <time>" is generated by the kubelet component. Kubelet is basically responsible for managing (the life cycle of) pods and their containers. It generates events for various stages of the container's execution, including the image pulling. Thus the event you mentioned is generated by the kubelet during the image pulling stage of container execution. (At least I knew that part already). You seem to have a kubelet <= release-1.24 in use. The corresponding event is generated here: ..., fmt.Sprintf("Successfully pulled image %q in %v", container.Image, time.Since(startTime)), ... The duration string comes from the time.Since(startTime) (Duration) stament. The fmt.Sprintf calls the func (d Duration) String() string method on that Duration value, which appears to produce a string output according to the following criteria: If the duration is less than one second, it is displayed with smaller units such as "12.345µs" or "678ns". If the duration is between one second and one minute, it is displayed in seconds, such as "5s" or "42.123456s". If the duration is between one minute and one hour, it is displayed in minutes and seconds, such as "3m45s" or "59m59s". If the duration is more than one hour, it is displayed in hours, minutes, and seconds, such as "1h42m" or "12h34m56s". Maybe you can develop suitable patterns in Splunk to transform this format into the desired time unit for visualization (using if-else conditions may also help, e.g. value includes "h", "m" and "s"? then use the regex (\d+)h(\d+)m(\d+\.?\d*)s to extract values from e.g. "2h10m10.100152244s"). Side note Newer versions of kubelet (> release-1.24) seem to use a slightly different log ..., fmt.Sprintf("Successfully pulled image %q in %v (%v including waiting)", container.Image, imagePullResult.pullDuration, time.Since(startTime)), ... e.g. Successfully pulled image "xyz" in 8.831719579s (8.831722421s including waiting)
I'm looking at the Kubernetes logs we see in our log aggregation tool (Splunk, but not important), and I'm trying to visualise image pull speeds over the day. message that appears in the logs is something like: Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.10" in 1.100152244s The value 1.100152244s is a string so obviously doesn't work for visualisation. The string itself can appear in multiple formats: 1.100152244s 4m4.004131504s 64.10064ms Silly question but what units do we see here after the period? Are these nanoseconds? Many thanks in advance
You could use the yaml-update-action, for example by adding this step before the deploy: - name: Update values.yaml uses: fjogeleit/yaml-update-action@main with: valueFile: '<the-manifest-filename>' propertyPath: data['api-key'] value: ${{ secrets.API_KEY_DEV_QA }} commitChange: false Tested here: https://github.com/mbiagetti/github-action-poc/pull/3
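An alternative sketch that avoids templating the manifest at all: create the Secret directly from the GitHub secret in a run step. This assumes API_KEY_DEV_QA is exposed to the step as an environment variable (e.g. env: API_KEY_DEV_QA: ${{ secrets.API_KEY_DEV_QA }}) and that the job already has cluster credentials:

# render the Secret client-side and apply it (idempotent, never committed to the repo)
kubectl create secret generic datadog-api-key \
  --from-literal=api-key="$API_KEY_DEV_QA" \
  --dry-run=client -o yaml | kubectl apply -f -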
###Instruction: I want to use github actions to deploy a kubernetes secret to my cluster but I want to leverage the secrets in github actions to store the secret value and inject into my manifest file. I'm new to github actions so not sure how it can be done This is my github actions file on: push: branches: ["main"] workflow_dispatch: env: RESOURCE_GROUP: "rg" CLUSTER_NAME: "cluster" DEPLOYMENT_MANIFEST_PATH: "kustomize/overlay/uat" jobs: deploy: permissions: actions: read contents: read id-token: write runs-on: ubuntu-latest steps: # Checks out the repository this file is in - uses: actions/checkout@v3 # Logs in with your Azure credentials - name: Log in with Azure uses: azure/login@v1 with: creds: '${{ secrets.AZURE_CREDENTIALS_UAT }}' # Use kubelogin to configure your kubeconfig for Azure auth - name: Set up kubelogin for non-interactive login uses: azure/use-kubelogin@v1 with: kubelogin-version: 'v0.0.25' # Retrieves your Azure Kubernetes Service cluster's kubeconfig file - name: Get K8s context uses: azure/aks-set-context@v3 with: resource-group: ${{ env.RESOURCE_GROUP }} cluster-name: ${{ env.CLUSTER_NAME }} admin: 'false' use-kubelogin: 'true' # Deploys application based on given manifest file - name: Deploys application uses: Azure/k8s-deploy@v4 with: action: deploy manifests: ${{ env.DEPLOYMENT_MANIFEST_PATH }} This is my manifest apiVersion: v1 data: api-key: kind: Secret metadata: name: datadog-api-key type: Opaque I've created the secret in github as API_KEY_DEV_QA but unsure how to get github actions to inject that into the above yaml during the workflow. ### Response: If could use the yaml-update-action, as example, adding this step before the deploy: - name: Update values.yaml uses: fjogeleit/yaml-update-action@main with: valueFile: '<the-manifest-filename>' propertyPath: data['api-key'] value: ${{ secrets. API_KEY_DEV_QA }} commitChange: false `` Tested here https://github.com/mbiagetti/github-action-poc/pull/3
I want to use github actions to deploy a kubernetes secret to my cluster but I want to leverage the secrets in github actions to store the secret value and inject into my manifest file. I'm new to github actions so not sure how it can be done This is my github actions file on: push: branches: ["main"] workflow_dispatch: env: RESOURCE_GROUP: "rg" CLUSTER_NAME: "cluster" DEPLOYMENT_MANIFEST_PATH: "kustomize/overlay/uat" jobs: deploy: permissions: actions: read contents: read id-token: write runs-on: ubuntu-latest steps: # Checks out the repository this file is in - uses: actions/checkout@v3 # Logs in with your Azure credentials - name: Log in with Azure uses: azure/login@v1 with: creds: '${{ secrets.AZURE_CREDENTIALS_UAT }}' # Use kubelogin to configure your kubeconfig for Azure auth - name: Set up kubelogin for non-interactive login uses: azure/use-kubelogin@v1 with: kubelogin-version: 'v0.0.25' # Retrieves your Azure Kubernetes Service cluster's kubeconfig file - name: Get K8s context uses: azure/aks-set-context@v3 with: resource-group: ${{ env.RESOURCE_GROUP }} cluster-name: ${{ env.CLUSTER_NAME }} admin: 'false' use-kubelogin: 'true' # Deploys application based on given manifest file - name: Deploys application uses: Azure/k8s-deploy@v4 with: action: deploy manifests: ${{ env.DEPLOYMENT_MANIFEST_PATH }} This is my manifest apiVersion: v1 data: api-key: kind: Secret metadata: name: datadog-api-key type: Opaque I've created the secret in github as API_KEY_DEV_QA but unsure how to get github actions to inject that into the above yaml during the workflow.
Interesting. Even though the yaml you provided is incorrect (the .spec.containers map key must be unique), k8s seems to be fine with that. In your case the pod is started with only the second container (name: nginx) in it though. Is this login in a Pod not a Container?? Container. So, with kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash, you exec into the nginx container. After correcting the yaml, two containers would be started in the pod and you can log into the desired container via its name (e.g. name: django) using the -c / --container parameter. ... containers: - name: django image: django:local ports: - containerPort: 8011 - name: nginx image: nginx:local ports: - containerPort: 80 login: kubectl exec -it POD_NAME -c CONTAINER_NAME -- /bin/bash Note that if you do not specify the name of the container (by omitting -c CONTAINER_NAME), you will log into the first defined container by default (in your case django).
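A quick way to see which containers actually ended up in the pod before choosing one to exec into (pod name taken from the question):

# print the container names defined in the pod spec
kubectl get pod ol000dep-bc96bfc98-r9lrj -o jsonpath='{.spec.containers[*].name}'
# after fixing the manifest this should print: django nginx

# exec into a specific container by name
kubectl exec -it ol000dep-bc96bfc98-r9lrj -c django -- /bin/bash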
###Instruction: I have one deployment which has two containers. apiVersion: apps/v1 kind: Deployment metadata: name: ol000dep spec: selector: matchLabels: app: ol000kube replicas : 2 template: metadata: labels: app: ol000kube spec: containers: - name: django image: django:local ports: - containerPort: 8011 containers: - name: nginx image: nginx:local ports: - containerPort: 80 This makes two replicad pods, and on my understanding, each pod has two containers. kubectl get pods NAME READY STATUS RESTARTS AGE ol000dep-bc96bfc98-r9lrj 1/1 Running 0 21m ol000dep-bc96bfc98-t6flw 1/1 Running 0 21m Now I can login kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash Then, I wonder, Is this login in a Pod not a Container?? If so ,how can I select the container I want to login? ### Response: Interesting. Even though the yaml you provided is incorrect (.spec.containers map key must be unique), k8s seems to be fine with that. In your case the pod is started with only the second container (name: nginx) in it though. Is this login in a Pod not a Container?? Container. So, with kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash, you login/exec into the nginx container. After correcting the yaml, two containers would be started in the pod and you can log into the desired container via its name (e.g. name: django) using the -c / --container parameter. ... containers: - name: django image: django:local ports: - containerPort: 8011 - name: nginx image: nginx:local ports: - containerPort: 80 login: kubectl exec -it POD_NAME -c CONTAINER_NAME -- /bin/bash Note that if you do not specify the name of the container (by omitting -c CONTAINER_NAME), you will login into the first defined container by default (in your case django).
I have one deployment which has two containers. apiVersion: apps/v1 kind: Deployment metadata: name: ol000dep spec: selector: matchLabels: app: ol000kube replicas : 2 template: metadata: labels: app: ol000kube spec: containers: - name: django image: django:local ports: - containerPort: 8011 containers: - name: nginx image: nginx:local ports: - containerPort: 80 This makes two replicad pods, and on my understanding, each pod has two containers. kubectl get pods NAME READY STATUS RESTARTS AGE ol000dep-bc96bfc98-r9lrj 1/1 Running 0 21m ol000dep-bc96bfc98-t6flw 1/1 Running 0 21m Now I can login kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash Then, I wonder, Is this login in a Pod not a Container?? If so ,how can I select the container I want to login?
After the action kept failing, I decided to write the manifests directly in Azure through the Cloud Shell. The manifests worked as intended, which showed that the problem was related to the deploy action itself. Running the action with debug logging enabled finally exposed the problem: the image wasn't receiving the version from an earlier job, so the image reference ended up as "ghcr.io/*****/****:" with an empty tag. The trailing colon was then parsed as the start of a YAML mapping in the Deployment.yml, which produced the "mapping values are not allowed in this context" error.
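A small guard along these lines (a sketch; VERSION and IMAGE_NAME stand in for needs.release-on-push.outputs.version and env.image_name) makes this failure mode obvious by stopping the workflow before an image reference with an empty tag ever reaches the manifest:

# fail fast if the upstream job did not produce a version output
if [ -z "${VERSION}" ]; then
  echo "release-on-push did not expose a version output - check that job's 'outputs' block" >&2
  exit 1
fi
echo "Deploying ghcr.io/${IMAGE_NAME}:${VERSION}"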
###Instruction: I'm setting up a GitHub action for deploying to Azure kubernetes. The deploy job looks as follows: deploy: name: Deploy to kubernetes runs-on: ubuntu-latest needs: docker steps: - name: Checkout uses: actions/checkout@v3 - name: Setup kubectl uses: Azure/setup-kubectl@v3 - name: Setup cluster context uses: Azure/k8s-set-context@v3 with: method: kubeconfig kubeconfig: ${{ secrets.AZURE_K8_KUBECONFIG }} - name: Deploy to Azure uses: Azure/k8s-deploy@v4 with: manifests: manifests/test/manifest.yml images: ghcr.io/${{ env.image_name }}:${{ needs.release-on-push.outputs.version }} pull-images: false annotate-namespace: false When it gets to the deploy phase it runs through a manifest, which looks like this: apiVersion: v1 kind: Service metadata: name: localizer spec: selector: app: localizer ports: - name: std-api-ports port: 8080 targetPort: http-in --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: localizer labels: app: localizer annotations: nginx.ingress.kubernetes.io/backend-protocol: "HTTP" nginx.ingress.kubernetes.io/proxy-body-size: "300M" spec: ingressClassName: nginx-default rules: - host: localizer.UUID.LOCATION.aksapp.io http: paths: - backend: service: name: localizer port: name: std-api-ports path: / pathType: Prefix --- apiVersion: apps/v1 kind: Deployment metadata: name: localizer labels: app: localizer domain: frontend spec: replicas: 2 selector: matchLabels: app: localizer strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 template: metadata: labels: app: localizer spec: containers: - name: localizer image: ghcr.io/urltorepo resources: requests: cpu: 0.5 memory: 2G limits: memory: 2G ports: - name: http-in containerPort: 8080 env: - name: NODE_ENV value: "test" imagePullPolicy: Always livenessProbe: httpGet: path: /ping port: http-in failureThreshold: 3 periodSeconds: 30 startupProbe: httpGet: path: /ping port: http-in failureThreshold: 12 periodSeconds: 10 The workflow then fails due to an error which is as follows: Run Azure/k8s-deploy@v4 with: manifests: manifests/test/manifest.yml images: ghcr.io/***/localizer: pull-images: false annotate-namespace: false namespace: default strategy: basic route-method: service version-switch-buffer: 0 traffic-split-method: pod percentage: 0 action: deploy force: false token: *** private-cluster: false skip-tls-verify: false env: image_name: ***/localizer AZURE_WEBAPP_NAME: localizer-app KUBECONFIG: /home/runner/work/_temp/kubeconfig_1682630327009 Deploying manifests /opt/hostedtoolcache/kubectl/1.27.1/x64/kubectl apply -f /tmp/manifest.yml --namespace default service/localizer unchanged ingress.networking.k8s.io/localizer configured error: error parsing /tmp/manifest.yml: error converting YAML to JSON: yaml: line 25: mapping values are not allowed in this context Error: Error: undefined From the console it would seem that the Service is setup and the Ingress is setup, which leads me to believe that the Deployment is the one with the error. I've run the manifest through a YAML validator and it passes. I've tried commenting out values from the manifest, hence the console showing that the Service is unchanged and Ingress is configured instead of created. I seem to be blind as to what the error could be now. UPDATE: I've now tried to split the manifest file into three files(Deployment, Ingress and Service). This has allowed me to figure out that it definitely is the Deployment manifest that is causing the error. I've also found that the error is related to "image" element. 
I've tried to follow a course from Microsoft on Azure kubernetes and if I change out my image with the one in the example, then there isn't a problem. ### Response: After the action not working, then I decided to try to write the manifests directly in Azure through the Cloud Shell. The manifests worked as intended which resulted in the understanding that the problem is related to the Deploy action. Finally running the action with debugger activated, the problem finally exposed itself. The image wasn't receiving the version from an earlier job. This resulted in the image being displayed as "ghcr.io/*****/****:". This was then perceived as an object in the Deployment.yml
I'm setting up a GitHub action for deploying to Azure kubernetes. The deploy job looks as follows: deploy: name: Deploy to kubernetes runs-on: ubuntu-latest needs: docker steps: - name: Checkout uses: actions/checkout@v3 - name: Setup kubectl uses: Azure/setup-kubectl@v3 - name: Setup cluster context uses: Azure/k8s-set-context@v3 with: method: kubeconfig kubeconfig: ${{ secrets.AZURE_K8_KUBECONFIG }} - name: Deploy to Azure uses: Azure/k8s-deploy@v4 with: manifests: manifests/test/manifest.yml images: ghcr.io/${{ env.image_name }}:${{ needs.release-on-push.outputs.version }} pull-images: false annotate-namespace: false When it gets to the deploy phase it runs through a manifest, which looks like this: apiVersion: v1 kind: Service metadata: name: localizer spec: selector: app: localizer ports: - name: std-api-ports port: 8080 targetPort: http-in --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: localizer labels: app: localizer annotations: nginx.ingress.kubernetes.io/backend-protocol: "HTTP" nginx.ingress.kubernetes.io/proxy-body-size: "300M" spec: ingressClassName: nginx-default rules: - host: localizer.UUID.LOCATION.aksapp.io http: paths: - backend: service: name: localizer port: name: std-api-ports path: / pathType: Prefix --- apiVersion: apps/v1 kind: Deployment metadata: name: localizer labels: app: localizer domain: frontend spec: replicas: 2 selector: matchLabels: app: localizer strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 template: metadata: labels: app: localizer spec: containers: - name: localizer image: ghcr.io/urltorepo resources: requests: cpu: 0.5 memory: 2G limits: memory: 2G ports: - name: http-in containerPort: 8080 env: - name: NODE_ENV value: "test" imagePullPolicy: Always livenessProbe: httpGet: path: /ping port: http-in failureThreshold: 3 periodSeconds: 30 startupProbe: httpGet: path: /ping port: http-in failureThreshold: 12 periodSeconds: 10 The workflow then fails due to an error which is as follows: Run Azure/k8s-deploy@v4 with: manifests: manifests/test/manifest.yml images: ghcr.io/***/localizer: pull-images: false annotate-namespace: false namespace: default strategy: basic route-method: service version-switch-buffer: 0 traffic-split-method: pod percentage: 0 action: deploy force: false token: *** private-cluster: false skip-tls-verify: false env: image_name: ***/localizer AZURE_WEBAPP_NAME: localizer-app KUBECONFIG: /home/runner/work/_temp/kubeconfig_1682630327009 Deploying manifests /opt/hostedtoolcache/kubectl/1.27.1/x64/kubectl apply -f /tmp/manifest.yml --namespace default service/localizer unchanged ingress.networking.k8s.io/localizer configured error: error parsing /tmp/manifest.yml: error converting YAML to JSON: yaml: line 25: mapping values are not allowed in this context Error: Error: undefined From the console it would seem that the Service is setup and the Ingress is setup, which leads me to believe that the Deployment is the one with the error. I've run the manifest through a YAML validator and it passes. I've tried commenting out values from the manifest, hence the console showing that the Service is unchanged and Ingress is configured instead of created. I seem to be blind as to what the error could be now. UPDATE: I've now tried to split the manifest file into three files(Deployment, Ingress and Service). This has allowed me to figure out that it definitely is the Deployment manifest that is causing the error. I've also found that the error is related to "image" element. 
I've tried to follow a course from Microsoft on Azure kubernetes and if I change out my image with the one in the example, then there isn't a problem.
Unfortunately, the v1 Ingress API in Kubernetes does not have sufficient capabilities to express Knative's routing requirements. Knative does support several ingress implementations (including Istio, Contour, and the Gateway API), but no one has written a plugin for the Nginx Ingress annotations. Some of the capabilities that Knative needs but that are missing from the Kubernetes Ingress API include: Backend traffic splits / weights Setting request headers to the backend server Requesting HTTP/2 or websockets protocol support If you're willing to use beta software, the Gateway API plugin is mostly feature complete and should plug into a variety of ingress providers. Unfortunately, Nginx does not appear to be on that list.
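For reference, switching Knative to one of the supported ingress implementations is a configuration change rather than extra Ingress YAML. A hedged sketch (the class value shown is for the Kourier plugin; verify the exact string against the docs of whichever plugin you install):

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'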
###Instruction: Is it possible to have Knative automatically create K8s Ingress resources? Hello all, Based on the following lines from documentation, I was wondering if I can have Knative automatically create the Ingress resources for my service? I haven't found details on this in the documentation. After the service has been created, Knative performs the following tasks: - Creates a new immutable revision for this version of the app. - Performs network programming to create a route, ingress, service, and load balancer for your app. - Automatically scales your pods up and down based on traffic, including to zero active pods. Example: Taking the Service and Ingress definition below, would it be possible to abstract away the Ingress yaml and have knative take care of its creation automatically for services? apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello namespace: knative spec: template: metadata: labels: app: nonprofit annotations: queue.sidecar.serving.knative.dev/resourcePercentage: "10" autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev" autoscaling.knative.dev/target: "40" autoscaling.knative.dev/min-scale: "1" autoscaling.knative.dev/max-scale: "3" spec: containers: - image: gcr.io/knative-samples/helloworld-java resources: requests: cpu: 50m memory: 100M limits: cpu: 200m memory: 200M ports: - containerPort: 8080 env: - name: TARGET value: "Sunny Day" traffic: - tag: latest latestRevision: true percent: 100 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: knative-hello-ingress namespace: knative annotations: nginx.ingress.kubernetes.io/upstream-vhost: "hello.knative" spec: ingressClassName: "ingress-generic" rules: - host: "hello-knative.com" http: paths: - pathType: Prefix path: "/" backend: service: name: hello port: number: 80 Thank you, Haven't tried anything as I haven't found details in the documentation regarding this. ### Response: Unfortunately, the v1 Ingress API in Kubernetes does not have sufficient capabilities to express Knative's routing requirements. Knative does support several ingress implementations (including Istio, Contour, and the Gateway API), but no one has written a plugin for the Nginx Ingress annotations. Some of the capabilities that are missing from the Kubernetes Ingress API which are needed by Knative include: Backend traffic splits / weights Setting request headers to the backend server Requesting HTTP/2 or websockets protocol support If you're willing to use bets software, the Gateway API plugin is mostly feature complete and should plug into a variety of ingress providers. Unfortunately, Nginx does not appear to be on that list.
Is it possible to have Knative automatically create K8s Ingress resources? Hello all, Based on the following lines from documentation, I was wondering if I can have Knative automatically create the Ingress resources for my service? I haven't found details on this in the documentation. After the service has been created, Knative performs the following tasks: - Creates a new immutable revision for this version of the app. - Performs network programming to create a route, ingress, service, and load balancer for your app. - Automatically scales your pods up and down based on traffic, including to zero active pods. Example: Taking the Service and Ingress definition below, would it be possible to abstract away the Ingress yaml and have knative take care of its creation automatically for services? apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello namespace: knative spec: template: metadata: labels: app: nonprofit annotations: queue.sidecar.serving.knative.dev/resourcePercentage: "10" autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev" autoscaling.knative.dev/target: "40" autoscaling.knative.dev/min-scale: "1" autoscaling.knative.dev/max-scale: "3" spec: containers: - image: gcr.io/knative-samples/helloworld-java resources: requests: cpu: 50m memory: 100M limits: cpu: 200m memory: 200M ports: - containerPort: 8080 env: - name: TARGET value: "Sunny Day" traffic: - tag: latest latestRevision: true percent: 100 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: knative-hello-ingress namespace: knative annotations: nginx.ingress.kubernetes.io/upstream-vhost: "hello.knative" spec: ingressClassName: "ingress-generic" rules: - host: "hello-knative.com" http: paths: - pathType: Prefix path: "/" backend: service: name: hello port: number: 80 Thank you, Haven't tried anything as I haven't found details in the documentation regarding this.
NOTE: this assumes that you're using a docker runtime in K8s! If not, use kaniko. The docker daemon isn't mounted into the pod - this is the reason for your error. In order to build images, you will need one of these: get the docker daemon mounted into each container of the pod run the containers in the pod in privileged mode apiVersion: v1 kind: Pod metadata: name: dockercontainer namespace: default spec: containers: - image: docker:24.0.0-rc.1-dind name: dockercontainer securityContext: privileged: true # this should do the trick There are some issues w/ building/running containers w/in a K8s pod, from a security standpoint (more on this here and here). A much safer approach is to use sysbox, but that's perhaps more detail than we need here. I'm assuming it's not possible for you to build this image outside of K8s.
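A common variant of the second option is to run the daemon as a privileged sidecar and point a CLI container at it over TCP. This is only a sketch: the image tags are assumed to match the dind tag from the question, and TLS is disabled so the daemon listens on plain port 2375:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: docker-build
spec:
  containers:
  - name: cli
    image: docker:24.0.0-rc.1-cli      # assumed tag; the plain CLI image
    command: ["sleep", "infinity"]
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375      # talk to the sidecar daemon
  - name: dind
    image: docker:24.0.0-rc.1-dind
    securityContext:
      privileged: true
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""                        # disable TLS so dockerd listens on 2375
EOF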
###Instruction: I am using the docker dind image and creating a pod. When I try to run docker build inside the pod, I get an error. apiVersion: v1 kind: Pod metadata: name: dockercontainer namespace: default spec: containers: - image: docker:24.0.0-rc.1-dind name: dockercontainer securityContext: runAsUser: 0 The pod is getting created, but when I execute a docker build inside the pod, I get the following error: ERROR: Cannot connect to the docker daemon at unix://var/run/docker.sock ### Response: NOTE: this assumes that you're using a docker runtime in K8s! If not, use kaniko. The docker daemon isn't mounted into the pod - this is the reason for your error. In order to build images, you will need one of these: get the docker daemon mounted into each container of the pod run the containers in the pod in privileged mode apiVersion: v1 kind: Pod metadata: name: dockercontainer namespace: default spec: containers: - image: docker:24.0.0-rc.1-dind name: dockercontainer securityContext: privileged: true # this should do the trick There are some issues w/ building/running containers w/in a K8s pod, from a security standpoint (more on this here and here). Much safer approach is to use sysbox but that's perhaps more detailed than we want to get here. I'm assuming that it's not possible for you to be able to build this image outside of K8s.
I am using the docker dind image and creating a pod. When I try to run docker build inside the pod, I get an error. apiVersion: v1 kind: Pod metadata: name: dockercontainer namespace: default spec: containers: - image: docker:24.0.0-rc.1-dind name: dockercontainer securityContext: runAsUser: 0 The pod is getting created, but when I execute a docker build inside the pod, I get the following error: ERROR: Cannot connect to the docker daemon at unix://var/run/docker.sock
As per the grafana docker documentation, any config in the grafana.ini file can be set at runtime using environment variables of the format GF_<SectionName>_<KeyName>__FILE, which represent paths to files containing the actual value. So we can use the environment variables GF_SECURITY_ADMIN_USER__FILE, GF_SECURITY_ADMIN_PASSWORD__FILE and GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE to set the admin username, password and the dashboard that is opened at launch. According to the instructions in this answer, we can use a dashboard provider of type file to import dashboards from files. Using both these techniques together, I think I was able to get what you wanted. Steps performed: Created a secret that contains the admin password. At runtime, the password gets mounted into the grafana container at /etc/secrets/admin_password and the environment variable GF_SECURITY_ADMIN_PASSWORD__FILE gets set to /etc/secrets/admin_password Created a config map that contains the dashboard provider config, which gets mounted as /etc/grafana/provisioning/dashboards/main.yaml Created another config map that contains the admin user and default dashboard path. These values are exposed to the container as files /etc/config/admin_user and /etc/config/default_home_dashboard_path, with those file paths set to the GF_SECURITY_ADMIN_USER__FILE and GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE environment variables Created another config map which contains the JSON for the default dashboard. The JSON file gets mounted as /var/lib/grafana/dashboards/test-dashboard.json, with the text /var/lib/grafana/dashboards/test-dashboard.json exposed as /etc/config/default_home_dashboard_path (set in the previous step) Config map for the dashboard JSON: --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-dashboards data: test-dashboard.json: |- { "annotations": { "list": [ ...truncated...
} Other config maps and deployment spec: --- apiVersion: v1 kind: Secret metadata: namespace: default labels: app: grafana name: grafana-secrets type: Opaque stringData: admin_password: testpwd --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-dashboard-provider data: dashboard.yml: |- --- apiVersion: 1 providers: - name: "Dashboard provider" orgId: 1 type: file disableDeletion: false updateIntervalSeconds: 10 allowUiUpdates: false options: path: /var/lib/grafana/dashboards foldersFromFilesStructure: true --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-config data: admin_user: testuser default_home_dashboard_path: /var/lib/grafana/dashboards/test-dashboard.json --- apiVersion: apps/v1 kind: Deployment metadata: namespace: default labels: app: grafana name: grafana spec: selector: matchLabels: app: grafana template: metadata: labels: app: grafana spec: volumes: - name: secret-volume secret: secretName: grafana-secrets items: - key: admin_password path: admin_password - name: dashboard-provider-volume configMap: name: grafana-dashboard-provider items: - key: dashboard.yml path: main.yaml - name: dashboards-volume configMap: name: grafana-dashboards items: - key: test-dashboard.json path: test-dashboard.json - name: config-volume configMap: name: grafana-config items: - key: admin_user path: admin_user - key: default_home_dashboard_path path: default_home_dashboard_path containers: - name: grafana image: grafana/grafana-oss:9.5.1 imagePullPolicy: IfNotPresent ports: - containerPort: 3000 volumeMounts: - name: secret-volume mountPath: /etc/secrets - name: dashboard-provider-volume mountPath: /etc/grafana/provisioning/dashboards - name: dashboards-volume mountPath: /var/lib/grafana/dashboards - name: config-volume mountPath: /etc/config env: - name: GF_SECURITY_ADMIN_PASSWORD__FILE value: /etc/secrets/admin_password - name: GF_SECURITY_ADMIN_USER__FILE value: /etc/config/admin_user - name: GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE value: /etc/config/default_home_dashboard_path The complete config files can be found on github
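Once rolled out, a quick sanity check (assuming the Deployment above) that the __FILE variables and the mounted files line up:

kubectl exec deploy/grafana -- sh -c 'env | grep "__FILE"; cat /etc/config/admin_user; echo; cat /etc/config/default_home_dashboard_path'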
###Instruction: I am trying to get default Grafana dashboard and admin username/password on deploy. I tried creating values.yaml but doesn't seem like its working --- apiVersion: v1 kind: ConfigMap metadata: name: grafana namespace: monitoring labels: grafana_dashboard: "1" data: dashboard-k8.json: | (3) ${indent(4, data)} Is there any better working way to do this? and I used same method for default username and password for grafana. I am working on a maintenance project so its not new installation of Grafana. But, the requirement is to have default dashboard set with new admin username and password when the service is deployed. ### Response: As per the grafana docker documentation, any config in the grafana.ini file can be set at runtime using environment variables of the format GF_<SectionName>_<KeyName>__FILE, which represent paths to files containing the actual value. So we can use the environment variables GF_SECURITY_ADMIN_USER__FILE, GF_SECURITY_ADMIN_PASSWORD__FILE and GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE, to set the admin username, password and the dashboard that is opened at launch. According to the instructions in this answer, we can use a dashboard provider of type file to import dashboards from files. Using both these techniques together, I think I was able to get what you wanted. Steps performed: Created a secret that contains the admin password. At runtime, the password gets mounted into the grafana container at /etc/secrets/admin_password and environment variable GF_SECURITY_ADMIN_PASSWORD__FILE gets set to /etc/secrets/admin_password Created a config map that contains the dashboard provider config which gets mounted as /etc/grafana/provisioning/dashboards/main.yaml Created another config map that contains the admin user and default dashboard path. These values are exposed to the container as files /etc/config/admin_user and /etc/config/default_home_dashboard_path, with those file paths set to the GF_SECURITY_ADMIN_PASSWORD__FILE and GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE environment variables Created another config map which contains the JSON for the default dashboard. The JSON file gets mounted as /var/lib/grafana/dashboards/test-dashboard.json, with the text /var/lib/grafana/dashboards/test-dashboard.json exposed as /etc/config/default_home_dashboard_path(set in the previous step) Config map for the dashboard JSON: --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-dashboards data: test-dashboard.json: |- { "annotations": { "list": [ ...truncated... 
} Other config maps and deployment spec: --- apiVersion: v1 kind: Secret metadata: namespace: default labels: app: grafana name: grafana-secrets type: Opaque stringData: admin_password: testpwd --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-dashboard-provider data: dashboard.yml: |- --- apiVersion: 1 providers: - name: "Dashboard provider" orgId: 1 type: file disableDeletion: false updateIntervalSeconds: 10 allowUiUpdates: false options: path: /var/lib/grafana/dashboards foldersFromFilesStructure: true --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-config data: admin_user: testuser default_home_dashboard_path: /var/lib/grafana/dashboards/test-dashboard.json --- apiVersion: apps/v1 kind: Deployment metadata: namespace: default labels: app: grafana name: grafana spec: selector: matchLabels: app: grafana template: metadata: labels: app: grafana spec: volumes: - name: secret-volume secret: secretName: grafana-secrets items: - key: admin_password path: admin_password - name: dashboard-provider-volume configMap: name: grafana-dashboard-provider items: - key: dashboard.yml path: main.yaml - name: dashboards-volume configMap: name: grafana-dashboards items: - key: test-dashboard.json path: test-dashboard.json - name: config-volume configMap: name: grafana-config items: - key: admin_user path: admin_user - key: default_home_dashboard_path path: default_home_dashboard_path containers: - name: grafana image: grafana/grafana-oss:9.5.1 imagePullPolicy: IfNotPresent ports: - containerPort: 3000 volumeMounts: - name: secret-volume mountPath: /etc/secrets - name: dashboard-provider-volume mountPath: /etc/grafana/provisioning/dashboards - name: dashboards-volume mountPath: /var/lib/grafana/dashboards - name: config-volume mountPath: /etc/config env: - name: GF_SECURITY_ADMIN_PASSWORD__FILE value: /etc/secrets/admin_password - name: GF_SECURITY_ADMIN_USER__FILE value: /etc/config/admin_user - name: GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE value: /etc/config/default_home_dashboard_path The complete config files can be found on github
I am trying to get default Grafana dashboard and admin username/password on deploy. I tried creating values.yaml but doesn't seem like its working --- apiVersion: v1 kind: ConfigMap metadata: name: grafana namespace: monitoring labels: grafana_dashboard: "1" data: dashboard-k8.json: | (3) ${indent(4, data)} Is there any better working way to do this? and I used same method for default username and password for grafana. I am working on a maintenance project so its not new installation of Grafana. But, the requirement is to have default dashboard set with new admin username and password when the service is deployed.
Two things that caught my eye: The Ingress controller you are using seems to require an ingressClassName defined in the Ingress resource, so that the ingress controller takes this resource (and its configs) into account. More about Ingress class: Ingress class Default IngressClass You should change the path: /* to path: /. path: /* makes the Ingress controller create a location block à la location /* { ... } /* is interpreted as a normal prefix in this case and has nothing to do with regex (probably not what you assumed). In your case the 404 comes from the ingress itself (since the request URI / is not found - it would have to be '/*' instead). In order for a request to be proxied to the httpd server at all, the request would have to be "http://demo.localdev.me/*", to which httpd would again respond with 404 (since the resource '/*' doesn't exist on httpd by default either). Whereas path: / does the following: location / { ... } The location / block is a special case that matches any URI that starts with a slash (/), which includes all URIs. (This is also the default location block that Nginx uses if no other location block matches the request's URI.) More about nginx location: nginx doc Guide on how to use regex in Nginx location block section? Final result: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: ingressClassName: nginx # add rules: - host: demo.localdev.me http: paths: - pathType: Prefix path: / # change backend: service: name: apa000ser port: number: 8099
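After applying that, a quick check from the host side (assuming demo.localdev.me resolves to the ingress controller, e.g. via a port-forward of the controller to localhost):

# the CLASS column should now show "nginx" instead of <none>
kubectl get ingress example-ingress

# request through the ingress; expect httpd's "It works!" page
curl -H "Host: demo.localdev.me" http://localhost/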
###Instruction: I installed pods and services with deployment.yml,service.yml and ingress.yml like this below. deployment.yml apiVersion: apps/v1 kind: Deployment metadata: name: apa000dep spec: selector: matchLabels: app: apa000kube replicas : 3 template: metadata: labels: app: apa000kube spec: containers: - name: apa000ex91 image: httpd ports: - containerPort: 80 service.yml apiVersion: v1 kind: Service metadata: name: apa000ser spec: type: NodePort ports: - port: 8099 targetPort: 80 protocol: TCP nodePort: 30080 selector: app: apa000kube Both works well and, I can access pod directly localhost:30080 then i installed ingress. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: rules: - host: demo.localdev.me http: paths: - pathType: Prefix path: /* backend: service: name: apa000ser port: number: 8099 The hostname is set correctly $kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress <none> demo.localdev.me 80 95m However when I access http://demo.localdev.me it returns 404 error. http://localhost:30080/ returns <span>it works!</span> So, I guess some ingress setting is wrong. where should I check? Even I stop example-ingress, result is same. So it means. nginx-ingress -> ingress -> service -> pods. Every request is fetchd in first nginx-ingress, and not sent to ingress? ### Response: Two things that caught my eye: The Ingress controller you are using seems to might require an ingressClassName defined in the Ingress resource, so that the ingress controller takes this resource (and its configs) into account. More about Ingress class: Ingress class Default IngressClass You should change the path: /* to path: /. path: /* makes the Ingress controller create a location block à la location /* { ... } /* is interpreted as a normal prefix in this case and has nothing to do with regex (probably not what you assumed). In your case: the 404 comes rather from the ingress itself (since request URI / not found - must be '/*' instead). In order for a request to be proxied to the httpd server at all, the request must be as follows: "http://demo.localdev.me/*", to which the httpd would again respond with 404 (since the resource '/*' also doesn't exist on httpd by default). Whereas path: / does the following: location / { ... } The location / block is a special case that matches any URI that starts with a slash (/), which includes all URIs. (This is also the default location block that Nginx uses if no other location block matches the request's URI.) More about nginx location: nginx doc Guide on how to use regex in Nginx location block section? Final result: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: ingressClassName: nginx # add rules: - host: demo.localdev.me http: paths: - pathType: Prefix path: / # change backend: service: name: apa000ser port: number: 8099
I installed pods and services with deployment.yml,service.yml and ingress.yml like this below. deployment.yml apiVersion: apps/v1 kind: Deployment metadata: name: apa000dep spec: selector: matchLabels: app: apa000kube replicas : 3 template: metadata: labels: app: apa000kube spec: containers: - name: apa000ex91 image: httpd ports: - containerPort: 80 service.yml apiVersion: v1 kind: Service metadata: name: apa000ser spec: type: NodePort ports: - port: 8099 targetPort: 80 protocol: TCP nodePort: 30080 selector: app: apa000kube Both works well and, I can access pod directly localhost:30080 then i installed ingress. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: rules: - host: demo.localdev.me http: paths: - pathType: Prefix path: /* backend: service: name: apa000ser port: number: 8099 The hostname is set correctly $kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress <none> demo.localdev.me 80 95m However when I access http://demo.localdev.me it returns 404 error. http://localhost:30080/ returns <span>it works!</span> So, I guess some ingress setting is wrong. where should I check? Even I stop example-ingress, result is same. So it means. nginx-ingress -> ingress -> service -> pods. Every request is fetchd in first nginx-ingress, and not sent to ingress?
The issue is with the configuration. One possible workaround is reinstalling Filebeat and sending the logs to Elasticsearch again. Follow the blog post by Semih Sezer, which walks through the process of sending Airflow logs to Elasticsearch using Filebeat.
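Before reinstalling, it may also be worth trying Filebeat's decode_json_fields processor, which parses the JSON held in the message field into top-level fields. This is a hedged fragment to place under the container input config in the ConfigMap from the question (indentation adjusted to wherever it lands in filebeat.yml):

processors:
  - decode_json_fields:
      fields: ["message"]   # the container input puts the raw JSON line here
      target: ""            # decode into the root of the event
      overwrite_keys: true
      add_error_key: true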
###Instruction: i want to ingested containers json log data using filebeat deployed on kubernetes, i am able to ingest the logs to but i am unable to format the json logs in to fields following is the logs visible in kibana I want to take out the fields from messages above e.g. field for log.level, message, service.name and so on Following are the filebeat configuration we are using --- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-system labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.inputs: - type: container paths: - /var/log/containers/*.log - /var/log/containers/*.json processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: "/var/log/containers/" # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this: filebeat.autodiscover: providers: - type: kubernetes node: ${NODE_NAME} templates: - condition: contains: kubernetes.container.name: "no-json-logging" config: - type: container paths: - "/var/log/containers/*-${data.kubernetes.container.id}.log" - condition: contains: kubernetes.container.name: "json-logging" config: - type: container paths: - "/var/log/containers/*-${data.kubernetes.container.id}.log" json.keys_under_root: true json.add_error_key: true json.message_key: message processors: - add_cloud_metadata: - add_host_metadata: cloud.id: ${ELASTIC_CLOUD_ID} cloud.auth: ${ELASTIC_CLOUD_AUTH} output.elasticsearch: hosts: ['${ELASTICSEARCH_HOST:XX.XX.XX.XX}:${ELASTICSEARCH_PORT:9201}'] username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: filebeat image: docker.elastic.co/beats/filebeat:8.5.3 args: [ "-c", "/etc/filebeat.yml", "-e", ] env: - name: ELASTICSEARCH_HOST value: XX.XX.XX.XX - name: ELASTICSEARCH_PORT value: "9201" - name: ELASTICSEARCH_USERNAME value: elastic - name: ELASTICSEARCH_PASSWORD value: elastic - name: ELASTIC_CLOUD_ID value: - name: ELASTIC_CLOUD_AUTH value: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true - name: varlog mountPath: /var/log readOnly: true volumes: - name: config configMap: defaultMode: 0640 name: filebeat-config - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: data hostPath: # When filebeat runs as non-root user, this directory needs to be writable by group (g+w). 
path: /var/lib/filebeat-data type: DirectoryOrCreate --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: filebeat subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat-kubeadm-config namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat-kubeadm-config apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: filebeat labels: k8s-app: filebeat rules: - apiGroups: [""] # "" indicates the core API group resources: - namespaces - pods - nodes verbs: - get - watch - list - apiGroups: ["apps"] resources: - replicasets verbs: ["get", "list", "watch"] - apiGroups: ["batch"] resources: - jobs verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat # should be the namespace where filebeat is running namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: - coordination.k8s.io resources: - leases verbs: ["get", "create", "update"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat-kubeadm-config namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: [""] resources: - configmaps resourceNames: - kubeadm-config verbs: ["get"] --- apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat --- How can i take out the fields from json message? ### Response: The issue is from configuration. One possible work around is reinstalling the filebeat and sending the logs to elastic search. Follow the content in the blog by Semih Sezer which has the process of sending Airflow logs to elastic search using filebeat.
i want to ingested containers json log data using filebeat deployed on kubernetes, i am able to ingest the logs to but i am unable to format the json logs in to fields following is the logs visible in kibana I want to take out the fields from messages above e.g. field for log.level, message, service.name and so on Following are the filebeat configuration we are using --- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-system labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.inputs: - type: container paths: - /var/log/containers/*.log - /var/log/containers/*.json processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: "/var/log/containers/" # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this: filebeat.autodiscover: providers: - type: kubernetes node: ${NODE_NAME} templates: - condition: contains: kubernetes.container.name: "no-json-logging" config: - type: container paths: - "/var/log/containers/*-${data.kubernetes.container.id}.log" - condition: contains: kubernetes.container.name: "json-logging" config: - type: container paths: - "/var/log/containers/*-${data.kubernetes.container.id}.log" json.keys_under_root: true json.add_error_key: true json.message_key: message processors: - add_cloud_metadata: - add_host_metadata: cloud.id: ${ELASTIC_CLOUD_ID} cloud.auth: ${ELASTIC_CLOUD_AUTH} output.elasticsearch: hosts: ['${ELASTICSEARCH_HOST:XX.XX.XX.XX}:${ELASTICSEARCH_PORT:9201}'] username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: filebeat image: docker.elastic.co/beats/filebeat:8.5.3 args: [ "-c", "/etc/filebeat.yml", "-e", ] env: - name: ELASTICSEARCH_HOST value: XX.XX.XX.XX - name: ELASTICSEARCH_PORT value: "9201" - name: ELASTICSEARCH_USERNAME value: elastic - name: ELASTICSEARCH_PASSWORD value: elastic - name: ELASTIC_CLOUD_ID value: - name: ELASTIC_CLOUD_AUTH value: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true - name: varlog mountPath: /var/log readOnly: true volumes: - name: config configMap: defaultMode: 0640 name: filebeat-config - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: data hostPath: # When filebeat runs as non-root user, this directory needs to be writable by group (g+w). 
path: /var/lib/filebeat-data type: DirectoryOrCreate --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: filebeat subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat-kubeadm-config namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat-kubeadm-config apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: filebeat labels: k8s-app: filebeat rules: - apiGroups: [""] # "" indicates the core API group resources: - namespaces - pods - nodes verbs: - get - watch - list - apiGroups: ["apps"] resources: - replicasets verbs: ["get", "list", "watch"] - apiGroups: ["batch"] resources: - jobs verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat # should be the namespace where filebeat is running namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: - coordination.k8s.io resources: - leases verbs: ["get", "create", "update"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat-kubeadm-config namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: [""] resources: - configmaps resourceNames: - kubeadm-config verbs: ["get"] --- apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat --- How can i take out the fields from json message?
It is possible for CPU throttling to indirectly cause memory issues: it makes garbage collection in the JVM less efficient and can escalate already inefficient memory usage. Slower response times mean more requests get queued up, so the application holds more objects in memory while waiting for requests to be processed. The JVM flags that you have set are a good starting point. To further investigate any memory leaks, you might want to use the following flags to dump the heap on an OOM and analyze the dump with a tool like Java VisualVM to find the root cause. -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/heapdump.bin Throttling is an expected behavior for peaks to slow down. Yes, but I would consider CPU throttling more of a handbrake here and not the only solution. Instead I would implement an appropriate mechanism (like rate limiting, request queuing, circuit breakers, or backpressure throttling) either in the application or at the load balancer/reverse proxy level to prevent queues from forming.
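As a minimal sketch of wiring those flags into the pod (assuming the image picks up the JAVA_TOOL_OPTIONS environment variable; the container name and the /dumps path are placeholders), the deployment spec could look like this. Note the dump is only written on a JVM OutOfMemoryError, not when the kernel OOM-kills the container at its memory limit:

    spec:
      containers:
        - name: app                      # your Spring Boot container
          env:
            - name: JAVA_TOOL_OPTIONS    # picked up by the JVM in addition to the flags you already pass
              value: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps/heapdump.hprof"
          volumeMounts:
            - name: heap-dumps
              mountPath: /dumps
      volumes:
        - name: heap-dumps
          emptyDir: {}                   # survives container restarts in the same pod, not rescheduling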
###Instruction: My pod is running in k8s and restarts every 20-30 hours, caused by an OOMKilled error, here are the configured limits: limits: cpu: 600m memory: 1536Mi requests: cpu: 150m memory: 1536Mi In the container is a JVM (Spring Boot) running with the following Options: -Xms256m -Xmx1G -Xdebug -XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=26 Analysis of the JVM Metrics From my point of view, I can see, that there is a load peak, but not high enough to reach the memory limits, however the CPU Usage increases rapidly: Analysis of the Pod Metrics Let me show you the metrics of the pod provided by k8s: Memory increases much more than in the JVM shows. CPU Throttling is active. But I do not find the root cause. It seems that the container reaches the Limits and not the JVM, but why? Can CPU throttling causes Memory issues? Throttling is an expected behavior for peaks to slow down. I do not expect any memory issues in that case. ### Response: It is possible for CPU throttling to indirectly cause memory issues by making garbage collection in the JVM less efficient or escalate an already inefficient memory usage, as such it may result in slower response times for your application, leading to more requests being queued up and more memory being used making the application store more objects in memory while waiting for requests to be processed. The JVM flags that you have set are a good starting point. To further investigate any memory leaks you might want to use the followings to dump the heap on an OOM and analyze the dump with a tool like Java VisualVM to find the root cause. -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/heapdump.bin Throttling is an expected behavior for peaks to slow down. Yes, but I would consider CPU throttling more of a handbrake here and not the only solution. Instead I would implement an appropriate mechanism (like rate limiting, request queuing, circuit breakers, or backpressure throttling) either in the application or at the load balancer/reverse proxy level to prevent queues from forming.
My pod is running in k8s and restarts every 20-30 hours, caused by an OOMKilled error, here are the configured limits: limits: cpu: 600m memory: 1536Mi requests: cpu: 150m memory: 1536Mi In the container is a JVM (Spring Boot) running with the following Options: -Xms256m -Xmx1G -Xdebug -XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=26 Analysis of the JVM Metrics From my point of view, I can see, that there is a load peak, but not high enough to reach the memory limits, however the CPU Usage increases rapidly: Analysis of the Pod Metrics Let me show you the metrics of the pod provided by k8s: Memory increases much more than in the JVM shows. CPU Throttling is active. But I do not find the root cause. It seems that the container reaches the Limits and not the JVM, but why? Can CPU throttling causes Memory issues? Throttling is an expected behavior for peaks to slow down. I do not expect any memory issues in that case.
You can install the ingress controller and expose the cluster by opening a TCP port on the controller, or you can change the service type to LoadBalancer instead of ClusterIP, which will expose the service directly. In the latter case take care about the type of load balancer: MySQL speaks plain TCP, so an HTTP load balancer will not work. Doc for reference: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ It shows how to expose a TCP or UDP service through the NGINX ingress controller, which amounts to opening a port on the controller and forwarding it to the service.
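A minimal sketch of the TCP route, assuming the MySQL router Service created by the operator is named mycluster in the default namespace (both are placeholders; use the names from your install): ingress-nginx reads a tcp-services ConfigMap that maps a listener port to namespace/service:port, the controller has to be started with --tcp-services-configmap pointing at it, and the same ports have to be opened on its LoadBalancer Service.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "6446": "default/mycluster:6446"   # read/write port of the MySQL router
  "6447": "default/mycluster:6447"   # read-only port of the MySQL router

The linked documentation walks through both the ConfigMap and the controller Service changes.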
###Instruction: I have been trying to expose MySQL InnoDB Cluster setup on OCI OKE to Internet. This cluster was installed using MySQL Operator and the pods are listening on to port 6446 and 6447. I have ClusterIP to connect to pods by exec'ing into the pod. However, I have failed to find any best way to expose this cluster to internet using OCI Loadbalancer. As I am pretty new to OCI, trying to get some understanding on how to get this to work. Cluster was installed using this repo ### Response: you can install the ingress controller and expose it with a TCP port opening with the controller or you can change the service type to LB instead clusterIP which will expose the service directly however you might have to take care about the type of LB coz MySQL will be using TCP maybe so HTTP LB may not work. Doc for ref : https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ With doc, you will get the idea to expose the TCP or UDP service with Nginx ingress controller which is more or like port opening and exposing service.
I have been trying to expose MySQL InnoDB Cluster setup on OCI OKE to Internet. This cluster was installed using MySQL Operator and the pods are listening on to port 6446 and 6447. I have ClusterIP to connect to pods by exec'ing into the pod. However, I have failed to find any best way to expose this cluster to internet using OCI Loadbalancer. As I am pretty new to OCI, trying to get some understanding on how to get this to work. Cluster was installed using this repo
For that, you can write a TelemetryProcessor: Telemetry processors allow you to completely replace or discard a telemetry item. It could look like this: public class ServiceBusTelemetryReducer : ITelemetryProcessor { private readonly ITelemetryProcessor _next; public ServiceBusTelemetryReducer(ITelemetryProcessor next) { _next = next; } public void Process(ITelemetry item) { var isServiceBusReceiveTelemetry = item is DependencyTelemetry telemetry && telemetry.Type == "Azure Service Bus" && telemetry.Name == "ServiceBusReceiver.Receive"; // Only process telemetry that is relevant if (!isServiceBusReceiveTelemetry) _next.Process(item); } } Do not forget to register the processor: services.AddApplicationInsightsTelemetryProcessor<ServiceBusTelemetryReducer>();
###Instruction: I have a Kubernetes app that is constantly logging ServiceBusReceiver.Receive Dependency calls. It is creating 2000 logs per hour, per instance. In the TelemtryClient there are only custom methods for TrackEvent and TrackException so these look like they are coming from somewhere else and I haven't been able to trace it to disable or find out why its logging so much. The TrackDependency method is part of the built in Microsoft.ApplicationInsights.TelemetryClient package. I have changed versions of packages to match another messaging app I have with no luck, and also updated packages to latest versions also with no luck. There isn't much other info in the logs to trace it. SERVICEBUS ServiceBusReceiver.Receive Dependency Properties Type: servicebus Call status: true Duration: 1.0 mins Name: ServiceBusReceiver.Receive Telemetry type: dependency Application version: 4.19.0.0 SDK version dotnetc:2.21.0-429 Sample rate: 1 Performance: 1min-2min Base name: ServiceBusReceiver.Receive Other info about packages and versions installed: Sdk="Microsoft.NET.Sdk" net6.0 AzureFunctionsVersion v4 "AutoMapper.Extensions.Microsoft.DependencyInjection" Version="4.0.1" "Azure.Messaging.ServiceBus" Version="7.10.0" "Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="5.4.0" "Microsoft.Azure.WebJobs.Logging.ApplicationInsights" Version="3.0.33" "Microsoft.NET.Sdk.Functions" Version="4.0.1" "Microsoft.Azure.Functions.Extensions" Version="1.1.0" "Microsoft.Extensions.Azure" Version="1.2.0" "Microsoft.Extensions.Configuration.AzureAppConfiguration" Version="5.1.0" "Microsoft.Extensions.Caching.Memory" Version="6.0.1" "Polly" Version="7.1.0" "Scrutor" Version="4.1.0" ### Response: For that, you can write a TelemetryProcessor: Telemetry processors allow you to completely replace or discard a telemetry item. It could look like this: public class ServiceBusTelemetryReducer : ITelemetryProcessor { private readonly ITelemetryProcessor _next; public ServiceBusTelemetryReducer(ITelemetryProcessor next) { _next = next; } public void Process(ITelemetry item) { var isServiceBusReceiveTelemetry = item is DependencyTelemetry telemetry && telemetry.Type == "Azure Service Bus" && telemetry.Name == "ServiceBusReceiver.Receive"; // Only process telemetry that is relevant if (!isServiceBusReceiveTelemetry) _next.Process(item); } } Do not forget to register the processor: services.AddApplicationInsightsTelemetryProcessor<ServiceBusTelemetryReducer>();
I have a Kubernetes app that is constantly logging ServiceBusReceiver.Receive Dependency calls. It is creating 2000 logs per hour, per instance. In the TelemtryClient there are only custom methods for TrackEvent and TrackException so these look like they are coming from somewhere else and I haven't been able to trace it to disable or find out why its logging so much. The TrackDependency method is part of the built in Microsoft.ApplicationInsights.TelemetryClient package. I have changed versions of packages to match another messaging app I have with no luck, and also updated packages to latest versions also with no luck. There isn't much other info in the logs to trace it. SERVICEBUS ServiceBusReceiver.Receive Dependency Properties Type: servicebus Call status: true Duration: 1.0 mins Name: ServiceBusReceiver.Receive Telemetry type: dependency Application version: 4.19.0.0 SDK version dotnetc:2.21.0-429 Sample rate: 1 Performance: 1min-2min Base name: ServiceBusReceiver.Receive Other info about packages and versions installed: Sdk="Microsoft.NET.Sdk" net6.0 AzureFunctionsVersion v4 "AutoMapper.Extensions.Microsoft.DependencyInjection" Version="4.0.1" "Azure.Messaging.ServiceBus" Version="7.10.0" "Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="5.4.0" "Microsoft.Azure.WebJobs.Logging.ApplicationInsights" Version="3.0.33" "Microsoft.NET.Sdk.Functions" Version="4.0.1" "Microsoft.Azure.Functions.Extensions" Version="1.1.0" "Microsoft.Extensions.Azure" Version="1.2.0" "Microsoft.Extensions.Configuration.AzureAppConfiguration" Version="5.1.0" "Microsoft.Extensions.Caching.Memory" Version="6.0.1" "Polly" Version="7.1.0" "Scrutor" Version="4.1.0"
It depends on your use case and what you are planning for. If you expect the service to forward a request to any of the available pods of the StatefulSet, you can create a normal ClusterIP service. If you need a stable DNS record per pod or the IP of each individual pod, a headless service (with a selector) will do that for you. There is also the headless service without a selector, which can be used together with an EndpointSlice to forward traffic to other or external third-party services. Ref: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
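As a sketch of the two-service pattern from the question (all names, labels and the port are placeholders): the headless Service backs the StatefulSet's serviceName and gives each pod a stable DNS record, while a second ordinary Service handles outside traffic; on OpenShift you would typically put a Route or a LoadBalancer in front of it.

apiVersion: v1
kind: Service
metadata:
  name: my-app-headless          # referenced by the StatefulSet's spec.serviceName
spec:
  clusterIP: None                # headless: one DNS record per pod
  selector:
    app: my-app
  ports:
    - port: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: NodePort                 # or LoadBalancer, or keep ClusterIP behind a Route
  selector:
    app: my-app
  ports:
    - port: 5432
      targetPort: 5432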
###Instruction: This is first time I am using statefulsets in openshift and would like to understand how do you expose the services to outside world? Do you create two services headless and another normal one? How to get this working? ### Response: It depends on your use case and what you are planning for. If you are expecting your service to forward a request to any of the available POD of statefulset you can create a normal clusterIP service. You have the requirement of getting specific DNS records or You need the IP of each available POD headless (with selector) service will do it for you. There is also headless service without selector could be used with Endpointslice to forward traffic to other or External thrid party services. Ref : https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
This is first time I am using statefulsets in openshift and would like to understand how do you expose the services to outside world? Do you create two services headless and another normal one? How to get this working?
Telepresence version 1 is no longer supported but you can install the new version here. Feel free also to join the Telepresence open source slack channel if you have further questions or want to join our weekly help session for further assistance.
###Instruction: I am trying to install telepresence version 1 in ubuntu 22.04 but I don't have the download links to it. ### Response: Telepresence version 1 is no longer supported but you can install the new version here. Feel free also to join the Telepresence open source slack channel if you have further questions or want to join our weekly help session for further assistance.
I am trying to install telepresence version 1 in ubuntu 22.04 but I don't have the download links to it.
According to your https://pastebin.com/7x3NjD8s, you got 401 Unauthorized from ECR. Since you have an on-premises Kubernetes cluster, you need a process that keeps your AWS credentials available as imagePullSecrets for your pods. I would use this one: https://github.com/upmc-enterprises/registry-creds; it will pick up your AWS credentials and automatically refresh the imagePullSecrets with a token that can log in to AWS ECR.
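Once the controller runs, the workload only has to reference the pull secret it maintains. A sketch against the deployment from your chart, assuming the controller publishes its ECR secret under the name awsecr-cred (check the secret it actually creates in your namespace):

    spec:
      imagePullSecrets:
        - name: awsecr-cred      # secret kept up to date by registry-creds; confirm the name in your cluster
      containers:
        - name: microservice-registry
          image: 123456789.dkr.ecr.us-east-1.amazonaws.com/microservice-registry:0.0.1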
###Instruction: I have a helm chart which is using aws ecr to pull images and deploy them into native Kubernetes: Chart.yml: apiVersion: v2 description: helm chart for microservice-registry name: microservice-registry type: application version: 0.0.1 values.yaml namespace: replica: 1 labels: app: microservice-registry imagePullSecrets: microservice-registry-secret image: repository: 123456789.dkr.ecr.us-east-1.amazonaws.com/microservice-registry tag: 0.0.1 imagePullPolicy: Always strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate service: containerPort: 8761 port: 80 type: NodePort nodePort: 30061 secrets: data: .dockerconfigjson: 123456789 type: kubernetes.io/dockerconfigjson awscreds: data: SECRET_KEY: 123456789 ACCESS_KEY: 123456789 name: awscreds-registry region: us-east-1 cronjob: name: microservice-registry-job image: gtsopour/awscli-kubectl:latest When I deploy the helm chart I get: root@node1:~# kubectl get pods NAME READY STATUS RESTARTS AGE microservice-registry-7c4654467c-q292p 0/1 ImagePullBackOff 0 17m cicd 1/1 Running 1 (22d ago) 44d grafana-f6d989bbf-hrr6c 1/1 Running 1 (22d ago) 45d ingress-nginx-controller-6b8bfd7f69-cwd2b 1/1 Running 1 (22d ago) 35d nfs-subdir-external-provisioner-5f7dff96d9-rj75j 1/1 Running 39 (2d3h ago) 38d zookeeper-0 1/1 Running 1 (22d ago) 38d root@node1:~# log: root@node1:~# kubectl logs microservice-registry-7c4654467c-q292p Error from server (BadRequest): container "microservice-registry" in pod "microservice-registry-7c4654467c-q292p" is waiting to start: trying and failing to pull image root@node1:~# I have a similar service which is using the same aws ecr credentials. It's deployed successfully. Do you know how I can fix this issue? Policy JSON: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ecr:BatchCheckLayerAvailability", "ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer", "ecr:GetAuthorizationToken" ], "Resource": "*" } ] } ### Response: According to your https://pastebin.com/7x3NjD8s, you got 401 Unauthorized from ECR. Since you have an On-Premises Kubernetes instance you need to use a process to set your AWS credentials on ImagePullSecrets on your PODs. I would use this one: https://github.com/upmc-enterprises/registry-creds, it will collect your AWS credentials and update the ImagePullSecrets automatically with token to login on AWS ECR.
I have a helm chart which is using aws ecr to pull images and deploy them into native Kubernetes: Chart.yml: apiVersion: v2 description: helm chart for microservice-registry name: microservice-registry type: application version: 0.0.1 values.yaml namespace: replica: 1 labels: app: microservice-registry imagePullSecrets: microservice-registry-secret image: repository: 123456789.dkr.ecr.us-east-1.amazonaws.com/microservice-registry tag: 0.0.1 imagePullPolicy: Always strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate service: containerPort: 8761 port: 80 type: NodePort nodePort: 30061 secrets: data: .dockerconfigjson: 123456789 type: kubernetes.io/dockerconfigjson awscreds: data: SECRET_KEY: 123456789 ACCESS_KEY: 123456789 name: awscreds-registry region: us-east-1 cronjob: name: microservice-registry-job image: gtsopour/awscli-kubectl:latest When I deploy the helm chart I get: root@node1:~# kubectl get pods NAME READY STATUS RESTARTS AGE microservice-registry-7c4654467c-q292p 0/1 ImagePullBackOff 0 17m cicd 1/1 Running 1 (22d ago) 44d grafana-f6d989bbf-hrr6c 1/1 Running 1 (22d ago) 45d ingress-nginx-controller-6b8bfd7f69-cwd2b 1/1 Running 1 (22d ago) 35d nfs-subdir-external-provisioner-5f7dff96d9-rj75j 1/1 Running 39 (2d3h ago) 38d zookeeper-0 1/1 Running 1 (22d ago) 38d root@node1:~# log: root@node1:~# kubectl logs microservice-registry-7c4654467c-q292p Error from server (BadRequest): container "microservice-registry" in pod "microservice-registry-7c4654467c-q292p" is waiting to start: trying and failing to pull image root@node1:~# I have a similar service which is using the same aws ecr credentials. It's deployed successfully. Do you know how I can fix this issue? Policy JSON: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ecr:BatchCheckLayerAvailability", "ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer", "ecr:GetAuthorizationToken" ], "Resource": "*" } ] }
Looks like there are 2 versions of the NGINX ingress controller. The one the Kubernetes community maintains: https://github.com/kubernetes/ingress-nginx The one NGINX Inc. maintains (open source & paid): https://github.com/nginxinc/kubernetes-ingress The second one seems to do strict round robin (still testing) after adding nginx.org/lb-method: "round_robin", while the first one does 50:50 aggregate load balancing between replicas. In my opinion it's an important difference, but there is a lot of confusion between the names; the difference between them can be read here. I composed this answer with help of comments from @hiiamelliott...
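For reference, with the NGINX Inc. controller the annotation sits on the Ingress resource; host, service and class names below are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.org/lb-method: "round_robin"   # understood by the nginxinc controller only, not by ingress-nginx
spec:
  ingressClassName: nginx                # adjust to the class your controller registers
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80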
###Instruction: I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service. My testing setup is a single Kubernetes node, with a deployment of two nginx pods. The deployment is then exposed with a NodePort service. I've then deployed an ingress contoller (I've tried both Kubernetes Nginx Ingress Controller and Nginx Kubernetes Ingress Controller, separately) and created an ingress rule for the NodePort service. I edited index.html on each of the nginx pods, so that one shows "SERVER A" and the other "SERVER B", and ran a script that then curls the NodePort service 100 times. It greps "SERVER x" each time, appends it to an output file, and then tallies the number of each at the end. As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods. Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well. However, looking at the results, I can see that I've curled the same server up to 4 times in a row, but I need to enforce a strict alternation A-B-A-B. I've spent quite a researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please? I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need. ### Response: Looks like there are 2 versions of ingress controller. Which K8S community has been maintaining which is https://github.com/kubernetes/ingress-nginx Which Nginx is maintaining(Opensource & Paid): https://github.com/nginxinc/kubernetes-ingress The second one seems to be doing strict round robin(still testing) after adding nginx.org/lb-method: “round_robin” while the first one does 50:50 aggregate load balancing between replica's In my opinion its an important difference but with lot of confusion with names, difference between them can be read here I composed this answer with help of comments from @hiiamelliott...
I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service. My testing setup is a single Kubernetes node, with a deployment of two nginx pods. The deployment is then exposed with a NodePort service. I've then deployed an ingress contoller (I've tried both Kubernetes Nginx Ingress Controller and Nginx Kubernetes Ingress Controller, separately) and created an ingress rule for the NodePort service. I edited index.html on each of the nginx pods, so that one shows "SERVER A" and the other "SERVER B", and ran a script that then curls the NodePort service 100 times. It greps "SERVER x" each time, appends it to an output file, and then tallies the number of each at the end. As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods. Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well. However, looking at the results, I can see that I've curled the same server up to 4 times in a row, but I need to enforce a strict alternation A-B-A-B. I've spent quite a researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please? I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need.
The kubernetes provider configuration contains an output from the module declared as aks. Prior to version 2.4.0 of the Kubernetes provider it was possible to simultaneously manage a Kubernetes cluster's backing infrastructure and the cluster itself at initial provisioning (or a subsequent delete/create) by configuring the provider with either resource attributes or data attributes. At version 2.4.0 of the Kubernetes provider, the new experimental Kubernetes provider with the latest Terraform SDK and Kubernetes Go SDK bindings (and consequently Kubernetes API) stabilized and replaced the former Kubernetes provider (ergo why some resources are marked v1 and v2, as the provider still contained some of the legacy code for backwards support, and only the minor version of the provider was iterated and not the major version according to semantic versioning rules). With this new provider and its use of the modern Kubernetes API it is no longer possible to do this simultaneous management at initial provisioning. Therefore the error in the question is observed: Terraform's Kubernetes provider is attempting to configure itself against a non-existent cluster instead of charting a dependency hierarchy that would imply the provider depends upon the aks module. With all this in mind it becomes clear there are two workarounds. The first is to downgrade the Kubernetes provider to the last version of the old provider (note the pinned constraint; a >= constraint would still allow 2.4.0 and newer): terraform { required_providers { kubernetes = { source = "hashicorp/kubernetes" version = "~> 2.3.2" } } } The other solution would be to -target the aks module first to manually enforce the dependency and populate the outputs, and then subsequently manage the entire Terraform config: terraform plan -target=module.aks
###Instruction: In my terraform project, I have my own module which creates an AKS cluster with RBAC enabled. This module has an output aks_public_fqdn which holds the FQDN of the cluster. module "aks" { source = "./aks" environment = local.environment region = local.region instance_id = local.workload_id application = local.workload_name resource_group = local.rg_name kubernetes_version = local.kubernetes_version } Once the cluster is up and running, I would like to create some K8S resources through Terraform with the Kubernetes provider. To authenticate I am using the following code snippet: provider "kubernetes" { host = "https://${module.aks.aks_public_fqdn}" insecure = true exec { api_version = "client.authentication.k8s.io/v1beta1" command = "kubelogin" args = [ "get-token", "--environment", "AzurePublicCloud", "--server-id", "3333333-3333333-3333333", "--login", "azurecli" ] } } When I try to run terraform apply I get: Error: Provider configuration: cannot load Kubernetes client config invalid configuration: default cluster has no server defined The only entry I have in my kubeconfig file is the context for my local kind cluster. What is missing here? Am I hit by the following github issue? Provider Issue ### Response: The kubernetes provider configuration contains an output from the module declared as aks. Prior to version 2.4.0 of the Kubernetes provider it was possible to simultaneously manage a Kubernetes cluster backing infrastructure and the cluster itself at initial provisioning (or subsequent Delete/Create) by configuring the provider with either resource attributes, or with data attributes. At version 2.4.0 of the Kubernetes provider, the new experimental Kubernetes provider with the latest Terraform SDK and Kubernetes Go SDK bindings (and consequently Kubernetes API) stabilized and replaced the former Kubernetes provider (ergo why some resources are marked v1 and v2 as the provider still contained some of the legacy code for backwards support, and only the minor version of the provider was iterated and not the major version according to semantic versioning rules). With this new provider and its use of the modern Kubernetes API it became no longer possible for this simultaneous management at initial provisioning. Therefore the error in the question is observed as Terraform's Kubernetes provider is attempting to configure with a non-existent cluster instead of charting a dependency hierarchy that would imply the provider is dependent upon the aks module. With all this in mind it becomes clear there are two workarounds. The first is to downgrade the Kubernetes provider to the last version of the old provider: terraform { required_providers { kubernetes = { source = "hashicorp/kubernetes" version = ">= 2.3.2" } } } The other solution would be to -target the aks module first to manually enforce the dependency and populate the outputs, and then subsequently manage the entire Terraform config: terraform plan -target=module.aks
In my terraform project, I have my own module which creates an AKS cluster with RBAC enabled. This module has an output aks_public_fqdn which holds the FQDN of the cluster. module "aks" { source = "./aks" environment = local.environment region = local.region instance_id = local.workload_id application = local.workload_name resource_group = local.rg_name kubernetes_version = local.kubernetes_version } Once the cluster is up and running, I would like to create some K8S resources through Terraform with the Kubernetes provider. To authenticate I am using the following code snippet: provider "kubernetes" { host = "https://${module.aks.aks_public_fqdn}" insecure = true exec { api_version = "client.authentication.k8s.io/v1beta1" command = "kubelogin" args = [ "get-token", "--environment", "AzurePublicCloud", "--server-id", "3333333-3333333-3333333", "--login", "azurecli" ] } } When I try to run terraform apply I get: Error: Provider configuration: cannot load Kubernetes client config invalid configuration: default cluster has no server defined The only entry I have in my kubeconfig file is the context for my local kind cluster. What is missing here? Am I hit by the following github issue? Provider Issue
Not sure how you are setting up the cluster, but if you are using a Helm chart or similar it will usually create StatefulSets. Each StatefulSet pod gets its own separate PVC unless you set up or plan to use shared block storage. With 3 master and 3 replica pods you end up with 6 PVCs in total, and Redis replication itself takes care of copying the data from a master's volume to its replica's volume; the volumes are never swapped between pods. If instead you use NFS or some shared-volume option that is backed by 2-3 block volumes in the background, that is a whole different case. With a normal Redis cluster set up through Helm or an operator, the scenario will be as described above.
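The per-pod volumes come from the volumeClaimTemplates block of the StatefulSet. A stripped-down sketch with placeholder names, sized for your 3 master + 3 replica layout; each pod gets its own PVC stamped out of the template (data-redis-0 ... data-redis-5):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 6                      # 3 masters + 3 replicas
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi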
###Instruction: We currently have a 3 node redis cluster running on OKE which has 3 Master nodes only( no replica nodes, each running on their own pod ) with no persistent storage . As an enhancement we are adding 3 replica nodes and Block volumes for persistent storage. We will have 6 pods ( one for each redis cluster node with 3 master and 3 replica nodes) each pod will have its own block volume. Master1 , Master2 ,Master3 ,Replica1, Replica2, Replica3 BV1 , BV2 ,BV3 ,BV4 ,BV5 ,BV6 I wanted to know if Master pod goes down and Replica pod becomes the new Master then how does the data from Block volume of master will get replicated to Replica's block volume ? or do we only need to have 3 block volumes ( only for master nodes) so that when master node goes down and replica becomes the new master then it can start using Master pods block volume. ### Response: Not sure how you are setting up the cluster but if you using Helm chart or something it might be creating the statefulsets. Statefulsets will be having separate PVC unless you setup or plan to use shared Block storage (PVC). Each stateful set suppose 3 Node and 3 Salve total have 6 PVC and between those Redis bus will take care of replication. Unless you with using some NFS or shared volume option which in the background uses 2-3 Block volume that's a whole different case. With a normal cluster setup of Redis with Helm or Operator scenario will be something like the above.
We currently have a 3 node redis cluster running on OKE which has 3 Master nodes only( no replica nodes, each running on their own pod ) with no persistent storage . As an enhancement we are adding 3 replica nodes and Block volumes for persistent storage. We will have 6 pods ( one for each redis cluster node with 3 master and 3 replica nodes) each pod will have its own block volume. Master1 , Master2 ,Master3 ,Replica1, Replica2, Replica3 BV1 , BV2 ,BV3 ,BV4 ,BV5 ,BV6 I wanted to know if Master pod goes down and Replica pod becomes the new Master then how does the data from Block volume of master will get replicated to Replica's block volume ? or do we only need to have 3 block volumes ( only for master nodes) so that when master node goes down and replica becomes the new master then it can start using Master pods block volume.
I was able to fix the issue by adding the annotation below: "alb.ingress.kubernetes.io/target-type": ip Reference - 1695
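On the Ingress manifest the same annotation looks like this (only the annotation matters here; the rest of the Ingress stays as you have it):

metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip   # register pod IPs in the target group instead of node instances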
###Instruction: While this question might seem duplicate at first, I would ask people to go through it once. I have checked SO, for all similar questions before posting this. I have an ALB ingress controller which has a registered Target Group for an application that I am trying to access via the ALB. However the target group binding is not getting created for the application due to which the "registered targets" under the target group always comes as 0. Also the LoadBalancerAssociated also comes as None. This can be seen from the image below. I have the checked the ALB pod logs and there is no error w.r.t creating the targetgroupbinding. Based on some documentation here : https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/targetgroupbinding/targetgroupbinding/ I see that the ALB is supposed to create the targetgroupbinding itself: The AWS LoadBalancer controller internally used TargetGroupBinding to support the functionality for Ingress and Service resource as well. It automatically creates TargetGroupBinding in the same namespace of the Service used. Since there is no error in the pod logs, I am wondering how can I debug this issue? Any help would be appreciated. Update 1 (Current scenario) : The ALB is supposed to load balance a number of applications. So the ingress has many services under it. The targetgroupbindings have been created for all the service except the one mentioned above. ### Response: I was able to fix the issue by adding below annotation "alb.ingress.kubernetes.io/target-type": ip Referece - 1695
While this question might seem duplicate at first, I would ask people to go through it once. I have checked SO, for all similar questions before posting this. I have an ALB ingress controller which has a registered Target Group for an application that I am trying to access via the ALB. However the target group binding is not getting created for the application due to which the "registered targets" under the target group always comes as 0. Also the LoadBalancerAssociated also comes as None. This can be seen from the image below. I have the checked the ALB pod logs and there is no error w.r.t creating the targetgroupbinding. Based on some documentation here : https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/targetgroupbinding/targetgroupbinding/ I see that the ALB is supposed to create the targetgroupbinding itself: The AWS LoadBalancer controller internally used TargetGroupBinding to support the functionality for Ingress and Service resource as well. It automatically creates TargetGroupBinding in the same namespace of the Service used. Since there is no error in the pod logs, I am wondering how can I debug this issue? Any help would be appreciated. Update 1 (Current scenario) : The ALB is supposed to load balance a number of applications. So the ingress has many services under it. The targetgroupbindings have been created for all the service except the one mentioned above.
With the help of a colleague, we were able to solve this with vars: # kustomization.yaml resources: - deployment.yaml - custom-resource.yaml namePrefix: my-prefix- secretGenerator: - name: my-secret files: - password.txt configurations: - configurations/var-reference.yaml vars: - name: MY-VARIABLE objref: kind: CustomResource name: my-custom-resource apiVersion: some.crd.io/v1 fieldref: fieldpath: metadata.name # configurations/var-reference.yaml varReference: - kind: Deployment path: spec/template/spec/containers/envFrom/secretRef/name # deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-image image: my-image:latest envFrom: - secretRef: name: my-secret - secretRef: name: prefix-$(MY-VARIABLE) I'm aware that vars are deprecated and might try to find a solution with replacements, but for now I'm good with this solution.
###Instruction: I'm using kustomize to manage a rather standard deployment. I have a namePrefix to modify the resource names. I need to add a custom resource to my configuration which itself autogenerates a secret after creation. The secret name consists of a fixed prefix and the name of the custom resource. I want to reference this secret in my deployment. # kustomization.yaml resources: - deployment.yaml - custom-resource.yaml namePrefix: my-prefix- secretGenerator: - name: my-secret files: - password.txt # deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-image image: my-image:latest envFrom: - secretRef: name: my-secret - secretRef: name: prefix-my-custom-resource <-- does not work # custom-resource.yaml apiVersion: some.crd.io/v1 kind: CustomResource metadata: name: my-custom-resource The custom resource will autogenerate: (not result of kubectl kustomize .) apiVersion: v1 kind: Secret metadata: name: prefix-my-custom-resource Due to the use of the PrefixTransformer, the name of the custom resource is changed to my-prefix-my-custom-resource. Therefore, the secretRef in the deployment yaml needs to be updated to prefix-my-prefix-my-custom-resource. I tried to solve this with a nameReference configuration, but I don't think the fieldSpec allows for a substring. Is there any solution to this? ### Response: With the help of a colleague, we were able to solve this with vars: # kustomization.yaml resources: - deployment.yaml - custom-resource.yaml namePrefix: my-prefix- secretGenerator: - name: my-secret files: - password.txt configurations: - configurations/var-reference.yaml vars: - name: MY-VARIABLE objref: kind: CustomResource name: my-custom-resource apiVersion: some.crd.io/v1 fieldref: fieldpath: metadata.name # configurations/var-reference.yaml varReference: - kind: Deployment path: spec/template/spec/containers/envFrom/secretRef/name # deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-image image: my-image:latest envFrom: - secretRef: name: my-secret - secretRef: name: prefix-$(MY-VARIABLE) I'm aware that vars are deprecated and might try to find a solution with replacements, but for now I'm good with this solution.
I'm using kustomize to manage a rather standard deployment. I have a namePrefix to modify the resource names. I need to add a custom resource to my configuration which itself autogenerates a secret after creation. The secret name consists of a fixed prefix and the name of the custom resource. I want to reference this secret in my deployment. # kustomization.yaml resources: - deployment.yaml - custom-resource.yaml namePrefix: my-prefix- secretGenerator: - name: my-secret files: - password.txt # deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-image image: my-image:latest envFrom: - secretRef: name: my-secret - secretRef: name: prefix-my-custom-resource <-- does not work # custom-resource.yaml apiVersion: some.crd.io/v1 kind: CustomResource metadata: name: my-custom-resource The custom resource will autogenerate: (not result of kubectl kustomize .) apiVersion: v1 kind: Secret metadata: name: prefix-my-custom-resource Due to the use of the PrefixTransformer, the name of the custom resource is changed to my-prefix-my-custom-resource. Therefore, the secretRef in the deployment yaml needs to be updated to prefix-my-prefix-my-custom-resource. I tried to solve this with a nameReference configuration, but I don't think the fieldSpec allows for a substring. Is there any solution to this?
The easiest way is to set allowVolumeExpansion in the StorageClass; I did this for an Elasticsearch database. Check out my article on that: https://medium.com/@harsh.manvar111/resizing-pvc-disk-in-gke-c5b882c90f7b storageclass: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: … provisioner: kubernetes.io/gce-pd allowVolumeExpansion: true After this, update the PVC with the desired size you want to keep and restart the pod.
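Translated to the aws-gp2 class from the question, a sketch could look like the following; it assumes the class is backed by the in-tree EBS provisioner, and with the EBS CSI driver the provisioner would be ebs.csi.aws.com instead. If the class already exists, it is enough to patch allowVolumeExpansion onto it, since most of the other fields are immutable:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-gp2
provisioner: kubernetes.io/aws-ebs   # or ebs.csi.aws.com when the EBS CSI driver manages the class
parameters:
  type: gp2
allowVolumeExpansion: true           # required before the PVC resize is accepted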
###Instruction: I want to resize Kubernetes Persistent Volume. So I tried by editing kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pvc labels: app: postgres spec: storageClassName: aws-gp2 accessModes: - ReadWriteOnce resources: requests: storage: 50Gi But I got error following error error: persistentvolumeclaims "postgres-pvc" could not be patched: persistentvolumeclaims "postgres-pvc" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Then I went to AWS console and resized the PVC to 50GB. Now I see volume size to 50GB root@postgres-8654848bb4-6897j:/# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 259:0 0 100G 0 disk ├─nvme0n1p1 259:1 0 100G 0 part /etc/hosts └─nvme0n1p128 259:2 0 1M 0 part nvme1n1 259:3 0 5G 0 disk nvme2n1 259:4 0 5G 0 disk nvme3n1 259:5 0 5G 0 disk nvme4n1 259:6 0 1G 0 disk nvme5n1 259:7 0 5G 0 disk nvme6n1 259:8 0 50G 0 disk /var/lib/postgresql/data nvme7n1 259:9 0 1G 0 disk nvme8n1 259:10 0 1G 0 disk nvme9n1 259:11 0 200G 0 disk But its not reflecting in df -h root@postgres-8654848bb4-6897j:/# df -h Filesystem Size Used Avail Use% Mounted on overlay 100G 30G 71G 30% / tmpfs 64M 0 64M 0% /dev tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/nvme0n1p1 100G 30G 71G 30% /etc/hosts shm 64M 236K 64M 1% /dev/shm /dev/nvme6n1 9.7G 8.8G 930M 91% /var/lib/postgresql/data tmpfs 7.0G 12K 7.0G 1% /run/secrets/kubernetes.io/serviceaccount tmpfs 3.9G 0 3.9G 0% /proc/acpi tmpfs 3.9G 0 3.9G 0% /sys/firmware Then I tried resize2fs command but got following error root@postgres-8654848bb4-6897j:/# resize2fs /dev/nvme6n1 resize2fs 1.46.2 (28-Feb-2021) open: No such file or directory while opening /dev/nvme6n1 root@postgres-8654848bb4-6897j:/# resize2fs /var/lib/postgresql/data resize2fs 1.46.2 (28-Feb-2021) open: Is a directory while opening /var/lib/postgresql/data Any help with this here ? ### Response: Easiest way you can use the allowVolumeExpansion in StorageClass , i did for Elasticsearch Database Checkout my article on that : https://medium.com/@harsh.manvar111/resizing-pvc-disk-in-gke-c5b882c90f7b strogeclasss apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: … provisioner: kubernetes.io/gce-pd allowVolumeExpansion: true Update PVC after this with the desired size you are looking forward to keep and restart the POD.
I want to resize Kubernetes Persistent Volume. So I tried by editing kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pvc labels: app: postgres spec: storageClassName: aws-gp2 accessModes: - ReadWriteOnce resources: requests: storage: 50Gi But I got error following error error: persistentvolumeclaims "postgres-pvc" could not be patched: persistentvolumeclaims "postgres-pvc" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize Then I went to AWS console and resized the PVC to 50GB. Now I see volume size to 50GB root@postgres-8654848bb4-6897j:/# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 259:0 0 100G 0 disk ├─nvme0n1p1 259:1 0 100G 0 part /etc/hosts └─nvme0n1p128 259:2 0 1M 0 part nvme1n1 259:3 0 5G 0 disk nvme2n1 259:4 0 5G 0 disk nvme3n1 259:5 0 5G 0 disk nvme4n1 259:6 0 1G 0 disk nvme5n1 259:7 0 5G 0 disk nvme6n1 259:8 0 50G 0 disk /var/lib/postgresql/data nvme7n1 259:9 0 1G 0 disk nvme8n1 259:10 0 1G 0 disk nvme9n1 259:11 0 200G 0 disk But its not reflecting in df -h root@postgres-8654848bb4-6897j:/# df -h Filesystem Size Used Avail Use% Mounted on overlay 100G 30G 71G 30% / tmpfs 64M 0 64M 0% /dev tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/nvme0n1p1 100G 30G 71G 30% /etc/hosts shm 64M 236K 64M 1% /dev/shm /dev/nvme6n1 9.7G 8.8G 930M 91% /var/lib/postgresql/data tmpfs 7.0G 12K 7.0G 1% /run/secrets/kubernetes.io/serviceaccount tmpfs 3.9G 0 3.9G 0% /proc/acpi tmpfs 3.9G 0 3.9G 0% /sys/firmware Then I tried resize2fs command but got following error root@postgres-8654848bb4-6897j:/# resize2fs /dev/nvme6n1 resize2fs 1.46.2 (28-Feb-2021) open: No such file or directory while opening /dev/nvme6n1 root@postgres-8654848bb4-6897j:/# resize2fs /var/lib/postgresql/data resize2fs 1.46.2 (28-Feb-2021) open: Is a directory while opening /var/lib/postgresql/data Any help with this here ?
The error “Unable to connect to the server: ... i/o timeout” usually comes down to a few common causes, and you can troubleshoot based on the steps below: 1) Your Kubernetes cluster is not running. Verify that your cluster has been started, e.g. by pinging the IP address. 2) There are networking issues that prevent you from accessing the cluster. Verify that you can ping the IP and try to track down whether there is a firewall in place which is preventing the access. 3) You have configured a cluster that does not exist any more. The error might also result from a difference between the IP address in your kubeconfig and the one the API server is actually reachable on. 4) Refer to this official doc about how to install Argo CD on a local machine; as mentioned in the doc, the commands need to run against the namespace where Argo CD is installed. Try setting that namespace as the default for the current context: kubectl config set-context --current --namespace=argocd To see your current context: kubectl config current-context To see the contexts you have: kubectl config view To switch context: kubectl config use-context <context-name> Make sure you are using the correct kubectl context. You can also refer to this doc authored by Aruna Lakmal for more information about this error.
###Instruction: Hi I was just trying to install argo CD in my local machine. I have installed and running minikube. After creating argocd namespace, I just try these commands kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml This error persists: Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout Could I get some help? Byw I'm new to argo... ### Response: The error “Unable to connect to the Server TCP I/O timeout” happens usually due to some common causes and you can try to troubleshoot based on below steps : 1)Your Kubernetes cluster is not running. Verify that your cluster has been started, e.g. by pinging the IP address. 2)There are networking issues that prevent you from accessing the cluster. Verify that you can ping the IP and try to track down whether there is a firewall in place which is preventing the access. 3)You have configured a cluster that does not exist any more. Also error might have resulted due to the IP address difference in the kubelet configuration. 4)Refer to this official doc about how to install ArgoCD in a local machine, as mentioned in the doc you need to run in the same namespace where Argo CD is installed. Try setting the current context as the default namespace by using below steps : kubectl config set-context --current --namespace=argocd To see your current context: kubectl config current-context To see the contexts you have: kubectl config view To switch context: kubectl config use-context context-cluster-name` Make sure you are using the correct kubectl context. Also you can refer to this doc authored by Aruna Lakmal for more information about this error.
Hi I was just trying to install argo CD in my local machine. I have installed and running minikube. After creating argocd namespace, I just try these commands kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml This error persists: Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout Could I get some help? Byw I'm new to argo...
Simply replace the following line (wrong): .dockerconfigjson: '{% include "template-1.yaml.j2" | b64decode %}' with (right): .dockerconfigjson: '{{ lookup("template", "template-1.yaml.j2") | b64encode }}' The template lookup will fetch the result of the given template as a string which can then be passed to the b64encode filter
###Instruction: I have 2 templates in my Ansible playbook: template-1.yaml.j2: { "auths": { "https://index.docker.io/v1/": { "auth": {{ (user.name + ':' + user.pass) | b64encode }} } } template-2.yaml.j2: --- apiVersion: v1 data: .dockerconfigjson: '{% include "template-1.yaml.j2" %}' kind: Secret metadata: name: pull-secret namespace: default type: kubernetes.io/dockerconfigjson The nested template works great, but I want template-2.yaml.j2 to encode template-1.yaml.j2 in base64, I tried adding a filter like so: {% include "template-1.yaml.j2" | b64encode %} And it fails with the following error: An unhandled exception occured while running the lookup plugin 'template'. Error was a <class 'jinja2.exceptions.TemplateNotFound'>, original message: dGVtcGxhdGUtMi55YW1sLmoy What actually happens is that the literal string template-2.yaml.j2 is encoded and fails. Is there any solution beside separating the templates to different tasks in my Ansible playbook? ### Response: Simply replace the following line (wrong): .dockerconfigjson: '{% include "template-1.yaml.j2" | b64decode %}' with (right): .dockerconfigjson: '{{ lookup("template", "template-1.yaml.j2") | b64encode }}' The template lookup will fetch the result of the given template as a string which can then be passed to the b64encode filter
I have 2 templates in my Ansible playbook: template-1.yaml.j2: { "auths": { "https://index.docker.io/v1/": { "auth": {{ (user.name + ':' + user.pass) | b64encode }} } } template-2.yaml.j2: --- apiVersion: v1 data: .dockerconfigjson: '{% include "template-1.yaml.j2" %}' kind: Secret metadata: name: pull-secret namespace: default type: kubernetes.io/dockerconfigjson The nested template works great, but I want template-2.yaml.j2 to encode template-1.yaml.j2 in base64, I tried adding a filter like so: {% include "template-1.yaml.j2" | b64encode %} And it fails with the following error: An unhandled exception occured while running the lookup plugin 'template'. Error was a <class 'jinja2.exceptions.TemplateNotFound'>, original message: dGVtcGxhdGUtMi55YW1sLmoy What actually happens is that the literal string template-2.yaml.j2 is encoded and fails. Is there any solution beside separating the templates to different tasks in my Ansible playbook?
There isn't anything wrong with how you've done it; you could potentially restructure it a little to reduce the if nesting. err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp) // Object exists, patch it if err == nil { patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } return ctrl.Result{}, nil } else if apierrors.IsNotFound(err) { // Object does not exist, create a new one if err := r.Create(ctx, &eCmp); err != nil { return ctrl.Result{}, err } return ctrl.Result{}, nil } return ctrl.Result{}, err That CreateOrUpdate function looks like it should also do what you want. It doesn't do any comparison with a timestamp. The example there happens to do that, but if you look at the underlying function controllerutil.go#L195 you will see that there is no timestamp comparison in it. What you need to do is either define your mutate function (a function with the patch logic) and pass that as a parameter, or create an anonymous function and pass that in. For example: op, err := controllerutil.CreateOrUpdate(context.TODO(), client, object, func() error { // Your mutate logic goes in here return nil }) If you look at the CreateOrUpdate function, it's essentially doing the same thing you are doing in your code, except for the mutate (patching) of the resource.
###Instruction: Im using the following code inside kubebuilder controller to read before update for k8s custom resource, im checking if the object exist if yes check if need to update, if not create it , as I need to use it in several places I want to ask: if there is some helper that can help me to reduce this boilarplate code ? something like createOrUpdate func am I doing it right ? if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp); err != nil { if apierrors.IsNotFound(err) { // If the object does not exist, create a new one if err := r.Create(ctx, &eCmp); err != nil { return ctrl.Result{}, err } } else { // If there was an error other than 'not found', return the error return ctrl.Result{}, err } } else { // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } } if enverything is as recomended please let me know. I need also to do the stratgic merge but the code doesnt support it I found the following https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate but I dont want it to be related to timestamp, just if something was change -> update it or something doesnt exist -> create it when trying to update the CR with the following code it doesnt works , any idea? // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } ### Response: There isn't anything wrong with how you've done it, you could potentially restructure it a little to reduce the if nesting. err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp) // Object exists, patch it if err == nil { patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmpp, patch); err != nil { return ctrl.Result{}, err } return patch, nil // Adjust appropriately } else if apierrors.IsNotFound(err) { // Object does not exist, create a new one if err := r.Create(ctx, &eCmp); err != nil { return ctrl.Result{}, err } } return ctrl.Result{}, err That CreateOrUpdate function looks like it should also do what you want. It doesn't do any comparison with a timestamp. The example on there is doing that specifically but if you look at the underlying function controllerutil.go#L195 you will see that there is no timestamp comparison there. What you will need to do is either define your mutate function (a function with the patch logic) and pass that as a parameter or create an anonymous function and pass that in. For example: op, err := controllerutil.CreateOrUpdate(context.TODO(), client, object, func() error { // Your mutate logic goes in here return nil }) If you look at the CreateOrUpdate function, it's essentially doing the same thing you are in your code except for the mutate (patching) of the resource.
Im using the following code inside kubebuilder controller to read before update for k8s custom resource, im checking if the object exist if yes check if need to update, if not create it , as I need to use it in several places I want to ask: if there is some helper that can help me to reduce this boilarplate code ? something like createOrUpdate func am I doing it right ? if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp); err != nil { if apierrors.IsNotFound(err) { // If the object does not exist, create a new one if err := r.Create(ctx, &eCmp); err != nil { return ctrl.Result{}, err } } else { // If there was an error other than 'not found', return the error return ctrl.Result{}, err } } else { // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } } if enverything is as recomended please let me know. I need also to do the stratgic merge but the code doesnt support it I found the following https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate but I dont want it to be related to timestamp, just if something was change -> update it or something doesnt exist -> create it when trying to update the CR with the following code it doesnt works , any idea? // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err }
You can use the controllerutil.CreateOrUpdate() function from the sigs.k8s.io/controller-runtime/pkg/controller/controllerutil package to reduce the boilerplate. Note that it returns two values (an OperationResult and an error), and that the mutate callback should only modify the in-memory object; CreateOrUpdate then decides itself whether to Create or Update: op, err := controllerutil.CreateOrUpdate(ctx, r.Client, &eCmp, func() error { // set the fields of eCmp that this controller owns return nil }) if err != nil { return ctrl.Result{}, err } // op reports whether the object was created, updated or left unchanged For a strategic merge, build a strategic merge patch from the JSON of the old and desired objects and apply it with Patch instead: // Create a strategic merge patch (oldJSON and newJSON are the JSON encodings of the old and desired objects) strategicMergePatch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, newJSON, &eCmp) if err != nil { return ctrl.Result{}, err } // Patch the object strategically if err := r.Patch(ctx, &eCmp, client.RawPatch(types.StrategicMergePatchType, strategicMergePatch)); err != nil { return ctrl.Result{}, err }
###Instruction: Im using the following code inside kubebuilder controller to read before update for k8s custom resource, im checking if the object exist if yes check if need to update, if not create it , as I need to use it in several places I want to ask: if there is some helper that can help me to reduce this boilarplate code ? something like createOrUpdate func am I doing it right ? if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp); err != nil { if apierrors.IsNotFound(err) { // If the object does not exist, create a new one if err := r.Create(ctx, &eCmp); err != nil { return ctrl.Result{}, err } } else { // If there was an error other than 'not found', return the error return ctrl.Result{}, err } } else { // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } } if enverything is as recomended please let me know. I need also to do the stratgic merge but the code doesnt support it I found the following https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate but I dont want it to be related to timestamp, just if something was change -> update it or something doesnt exist -> create it when trying to update the CR with the following code it doesnt works , any idea? // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } ### Response: you can use the controllerutil.CreateOrUpdate() function from the sigs.k8s.io/controller-runtime/pkg/controller/controllerutil package to reduce boilerplate code. use controllerutil.CreateOrUpdate() function: if err := controllerutil.CreateOrUpdate(ctx, r.Client, &eCmp, func() error { return r.Patch(ctx, &eCmp, client.MergeFrom(eCmp.DeepCopy())) }); err != nil { return ctrl.Result{}, err } strategic merge, you can add the strategic merge patch to the callback function to patch the object strategically if err := controllerutil.CreateOrUpdate(ctx, r.Client, &eCmp, func() error { // Create a strategic merge patch strategicMergePatch, err := strategicpatch.CreateTwoWayMergePatch(eCmp, &newECmp, eCmp) if err != nil { return err } // Patch the object strategically return r.Patch(ctx, &eCmp, client.ConstantPatch(types.StrategicMergePatchType, strategicMergePatch)) }); err != nil { return ctrl.Result{}, err }
Im using the following code inside kubebuilder controller to read before update for k8s custom resource, im checking if the object exist if yes check if need to update, if not create it , as I need to use it in several places I want to ask: if there is some helper that can help me to reduce this boilarplate code ? something like createOrUpdate func am I doing it right ? if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &eCmp); err != nil { if apierrors.IsNotFound(err) { // If the object does not exist, create a new one if err := r.Create(ctx, &eCmp); err != nil { return ctrl.Result{}, err } } else { // If there was an error other than 'not found', return the error return ctrl.Result{}, err } } else { // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err } } if enverything is as recomended please let me know. I need also to do the stratgic merge but the code doesnt support it I found the following https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate but I dont want it to be related to timestamp, just if something was change -> update it or something doesnt exist -> create it when trying to update the CR with the following code it doesnt works , any idea? // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &eCmp, patch); err != nil { return ctrl.Result{}, err }
You can use the service name, which will stay the same each time; Kubernetes DNS will take care of it. - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper-service Kubernetes works that way: internal communication happens over the service name. You can use the IP if you really have to, but a ClusterIP may by nature change with a new deployment of the service or other changes. DNS resolution is taken care of by Kubernetes (CoreDNS or kube-dns), so you just reference the service name. Here is an example for reference: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/ where WordPress connects to the MySQL database using the MySQL service name wordpress-mysql in the WORDPRESS_DB_HOST variable.
###Instruction: I have followed the directives of this web site How To Deploy Apache Kafka With Kubernetes (DZone) to deploy a Kafka broker with its Zookeeper. (I've only changed the namespaces names). apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: ecoemploi spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: ecoemploi spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper:latest imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 But to make it working, the writers of that solution do wait for the cluster to start a first time, then copy the IP address of zookeeper-service kubectl get services -n kafka NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE zookeeper-service  NodePort  10.100.69.243  <none>      2181:30181/TCP 3m4s and paste it into the kafka-broker deployment file: [...] spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: 10.100.69.243:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 [...] Then re-applying kafka-broker makes it working. I'm using this solution, but in a Vagrant box it isn't convenient: each time I'm recreating my vagrant, that IP changes... Is there a solution to get the Cluster IP of that zookeeper-service and prepare the correct KAFKA_ZOOKEEPER_CONNECT environment variable for kafka-broker automatically? ### Response: You can the service name, which will be the same each time K8s DNS will take care of it. - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper-service K8s work that way, internal communication happens over the service name use the IP if required else ClusterIP in nature may changes IP with new deployment of service or any changes. DNS resolution will be taken care by Kubernetes(coredns, kubedns), so we just mention the service-name. Here is example for ref : https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/ Wordpess connecting to MySQL database with MySQL service name wordpress-mysql variable WORDPRESS_DB_HOST
I have followed the directives of this web site How To Deploy Apache Kafka With Kubernetes (DZone) to deploy a Kafka broker with its Zookeeper. (I've only changed the namespaces names). apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: ecoemploi spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: ecoemploi spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper:latest imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 But to make it working, the writers of that solution do wait for the cluster to start a first time, then copy the IP address of zookeeper-service kubectl get services -n kafka NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE zookeeper-service  NodePort  10.100.69.243  <none>      2181:30181/TCP 3m4s and paste it into the kafka-broker deployment file: [...] spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: 10.100.69.243:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 [...] Then re-applying kafka-broker makes it working. I'm using this solution, but in a Vagrant box it isn't convenient: each time I'm recreating my vagrant, that IP changes... Is there a solution to get the Cluster IP of that zookeeper-service and prepare the correct KAFKA_ZOOKEEPER_CONNECT environment variable for kafka-broker automatically?
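For illustration, here is roughly what the env block of the kafka-broker container from the question could look like once the Service name is used; this is a sketch, the broker image name is an assumption, and it assumes the broker Deployment sits in the same ecoemploi namespace as zookeeper-service (from another namespace you would use zookeeper-service.ecoemploi.svc.cluster.local instead).

containers:
  - name: kafka-broker
    image: wurstmeister/kafka          # placeholder; use whatever image the guide gave you
    ports:
      - containerPort: 9092
    env:
      - name: KAFKA_BROKER_ID
        value: "1"
      # Service name instead of the ClusterIP: CoreDNS/kube-dns resolves it and it
      # survives recreating the Vagrant box / cluster.
      - name: KAFKA_ZOOKEEPER_CONNECT
        value: zookeeper-service:2181
      - name: KAFKA_LISTENERS
        value: PLAINTEXT://:9092
      - name: KAFKA_ADVERTISED_LISTENERS
        value: PLAINTEXT://kafka-broker:9092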
We had the same problem. How we resolved it: 1. Copy the output from "kubectl get pvc" to get the link between the PVC and the PV. 2. Locate the disk in the Azure portal (in the MC_ resource group of the AKS cluster) and create a snapshot of the disk. 3. Edit the deployment in Kubernetes and set the replica count to 0. Save and see that the pod is stopped and removed. 4. Delete the PVC for this pod. 5. Edit the deployment and set the replica count to 1. Save and see that a new PVC and a new PV are created. 6. Edit the deployment again and set the replica count to 0. 7. Locate the new disk in the Azure portal (use "kubectl get pvc" to find it). 8. Delete the new disk in the Azure portal. 9. Locate the snapshot created in step 2 and create a new disk based on it; the new disk should have the same name as the disk deleted in step 8. 10. Edit the deployment in Kubernetes and set the replica count to 1. It should now start using the old disk (restored from the snapshot) with the new PVC and PV. Take a backup of whatever references and disks you can before starting.
###Instruction: In a Azure AKS kubernetes cluster, after a cluster version upgrade the nodepool nodes, I have a PV that has this node affinity: Node Affinity: Required Terms: Term 0: failure-domain.beta.kubernetes.io/region in [westeurope] The nodes don't have the label so the Deployment creates a Pod that cannot be scheduled for the corresponding PVC for this PV. The Pod is never started: Warning FailedScheduling 15m default-scheduler 0/3 nodes are available: 3 node(s) had volume node affinity conflict. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. How can I add the label to the node or remove the label from the PV? I have tried to add the label to the node but I get: Error from server: admission webhook "aks-node-validating-webhook.azmk8s.io" denied the request: (UID: 931bf139-1579-4e96-b164-e4e6e2fdae65) User is trying to add or update a non-changeable system label (failure-domain.beta.kubernetes.io/region:westeurope). This action is denied.. Is the only solution to backup and restore the PV into a new one that does not have that deprecated label? What would the best process to do it (or any alternative solution) ### Response: We had the same problem. How we resolved it: Copy output from "kubectl get pvc" to get the link between the pvc and the pv. Locate the disk in azure portal and create a snapshot of the disk. (In the MC_ resource group to the aks Edit deployment in kubernetes and set replicacout to 0. Save and see that the pod are stopping ang removed. Delete the PVC for this pod. Edit deployment in kubernetes and set replicacout to 1. Save and see that there is a new PVC and a new PV created. Edit deployment again and set replicacount to 0. Locate the new disk in azure portal. Use "kubectl get pvc" to locate. Delete new disk in azure portal Locate snapshot created in pt 2. Create a new disk based on the snapshot. New disk should have the same name as the disk deleted in pt 6. Edit deployment in kubernetes and set replicacount to 1. It should now start using the old disk with the new pvc and pv. Take backup of what ever referances and disk you can before starting
In a Azure AKS kubernetes cluster, after a cluster version upgrade the nodepool nodes, I have a PV that has this node affinity: Node Affinity: Required Terms: Term 0: failure-domain.beta.kubernetes.io/region in [westeurope] The nodes don't have the label so the Deployment creates a Pod that cannot be scheduled for the corresponding PVC for this PV. The Pod is never started: Warning FailedScheduling 15m default-scheduler 0/3 nodes are available: 3 node(s) had volume node affinity conflict. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. How can I add the label to the node or remove the label from the PV? I have tried to add the label to the node but I get: Error from server: admission webhook "aks-node-validating-webhook.azmk8s.io" denied the request: (UID: 931bf139-1579-4e96-b164-e4e6e2fdae65) User is trying to add or update a non-changeable system label (failure-domain.beta.kubernetes.io/region:westeurope). This action is denied.. Is the only solution to backup and restore the PV into a new one that does not have that deprecated label? What would the best process to do it (or any alternative solution)
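For illustration only (not part of the original answer): the portal steps above can also be scripted with the Azure CLI; the resource group, disk and snapshot names below are placeholders to replace with your own.

# Find which PV, and therefore which Azure disk, backs the PVC
kubectl get pvc -n <namespace>
kubectl get pv <pv-name> -o yaml        # look for the diskURI / diskName

# Snapshot the existing disk (it lives in the MC_* resource group of the AKS cluster)
az snapshot create \
  --resource-group MC_myrg_mycluster_westeurope \
  --name my-pv-snapshot \
  --source <disk-resource-id>

# Later, recreate a disk from that snapshot, reusing the name of the disk you deleted
az disk create \
  --resource-group MC_myrg_mycluster_westeurope \
  --name <name-of-deleted-disk> \
  --source my-pv-snapshot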
I was able to make it work. Some more context: we are leveraging the Spark on k8s operator, so we pass in the zip file via pyFiles https://github.com/GoogleCloudPlatform/spark-on-k8s-operator This works the same as the following if we set it up in the Spark script: spark_session.sparkContext.addPyFile('/opt/spark/pyfiles/python.zip') Initially, we passed it in as pyFiles: - local:///opt/spark/pyfiles/spinner-python.zip But local:/// points to the working directory; we had to change it to the following, with an extra slash (/), so that it points to an absolute path: pyFiles: - local:////opt/spark/pyfiles/spinner-python.zip When PySpark serializes a UDF, it sends a copy of the UDF code to all the worker nodes, so we have to point pyFiles at an absolute path instead of a relative path.
###Instruction: Running into the following error when use custom UDF Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 603, in main func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type) File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 449, in read_udfs udfs.append(read_single_udf(pickleSer, infile, eval_type, runner_conf, udf_index=i)) File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 251, in read_single_udf f, return_type = read_command(pickleSer, infile) File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 71, in read_command command = serializer._read_with_length(file) File "/usr/local/lib/python3.8/dist-packages/pyspark/serializers.py", line 160, in _read_with_length return self.loads(obj) File "/usr/local/lib/python3.8/dist-packages/pyspark/serializers.py", line 430, in loads return pickle.loads(obj, encoding=encoding) ModuleNotFoundError: No module named 'jobs' The import spark scripts looks something like this from jobs.lib_a import a from jobs.udf import udf_function #This is a UDF The scripts itself is located in jobs/scripts/test_script.py, the entire jobs folder is zipped and then added to spark using pyFiles. The weird thing is that the other import from jobs module works, only fail for udf. I have tried approach in this post, creating a separate zip file called udf.zip, putting udf at top level and then add it to spark via pyFiles, but still run into ModuleNotFoundError when I try to import udf. I have also tried sys.path.append(<the udf path>) The only approach works is when I copy the udf_function into the spark script test_script.py. This wouldn't work in reality as the udf_function can be shared by other spark script. The underlying system is: Python 3.8 Spark 3.2 Spark is running in kubernetes ### Response: I was able to make it work. Some more context is that we are leveraging spark on k8s operator, so we pass in the zip file via pyFiles https://github.com/GoogleCloudPlatform/spark-on-k8s-operator This work the same as the following spark_session.sparkContext.addPyFile('/opt/spark/pyfiles/python.zip') if we set it up in spark script Initially, we pass in as pyFiles: - local:///opt/spark/pyfiles/spinner-python.zip But local:/// point to the working directory, we have to change it to the following with extra slash(/) to point to absolute. pyFiles: - local:////opt/spark/pyfiles/spinner-python.zip When Pyspark serializes a UDF, it sends a copy of the UDF code to all the worker nodes, we have to point PyFiles to absolute path instead of relative path
Running into the following error when use custom UDF Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 603, in main func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type) File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 449, in read_udfs udfs.append(read_single_udf(pickleSer, infile, eval_type, runner_conf, udf_index=i)) File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 251, in read_single_udf f, return_type = read_command(pickleSer, infile) File "/usr/local/lib/python3.8/dist-packages/pyspark/worker.py", line 71, in read_command command = serializer._read_with_length(file) File "/usr/local/lib/python3.8/dist-packages/pyspark/serializers.py", line 160, in _read_with_length return self.loads(obj) File "/usr/local/lib/python3.8/dist-packages/pyspark/serializers.py", line 430, in loads return pickle.loads(obj, encoding=encoding) ModuleNotFoundError: No module named 'jobs' The import spark scripts looks something like this from jobs.lib_a import a from jobs.udf import udf_function #This is a UDF The scripts itself is located in jobs/scripts/test_script.py, the entire jobs folder is zipped and then added to spark using pyFiles. The weird thing is that the other import from jobs module works, only fail for udf. I have tried approach in this post, creating a separate zip file called udf.zip, putting udf at top level and then add it to spark via pyFiles, but still run into ModuleNotFoundError when I try to import udf. I have also tried sys.path.append(<the udf path>) The only approach works is when I copy the udf_function into the spark script test_script.py. This wouldn't work in reality as the udf_function can be shared by other spark script. The underlying system is: Python 3.8 Spark 3.2 Spark is running in kubernetes
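A minimal driver-side sketch of the pattern described above, assuming the zip is baked into the image at the absolute path /opt/spark/pyfiles/python.zip; the module layout jobs.udf.udf_function comes from the question, and the StringType return type is just a placeholder.

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-example").getOrCreate()

# Ship the zipped package to the driver's sys.path and to every executor.
# Note the absolute path: a relative path resolves against the working directory.
spark.sparkContext.addPyFile("/opt/spark/pyfiles/python.zip")

# Import only after addPyFile so the module is resolvable when the UDF is deserialized.
from jobs.udf import udf_function

my_udf = udf(udf_function, StringType())   # return type is an assumption
df = spark.createDataFrame([("hello",)], ["value"])
df.select(my_udf("value").alias("out")).show()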
The syntax that worked for me was: spec: ... template: ... spec: ... containers: env: - name: PORT valueFrom: configMapKeyRef: name: env-configmap key: PORT - name: ELASTIC_PASSWORD valueFrom: secretKeyRef: name: env-secret key: ELASTIC_PASSWORD The other syntax may also work and be acceptable; I'm just offering this answer as another alternative syntax for mapping Kubernetes configuration and secret values into environment variables. When I checked the URL referenced in the answer from @Valentin BOUSSON, the page (as of 2023-05-16) said: To use a Secret in an environment variable in a Pod: For each container in your Pod specification, add an environment variable for each Secret key that you want to use to the env[].valueFrom.secretKeyRef field. Modify your image and/or command line so that the program looks for values in the specified environment variables.
###Instruction: I'm currently working on a Kubernetes deployment file and I need to set the environment variables that the container will have. Is there a way to set Kubernetes secrets as environment variables in the deployment file? Here is a dummy file to help explain: ### Response: The syntax that worked for me was: spec: ... template: ... spec ... containers: env: - name: PORT valueFrom: configMapKeyRef: name: env-configmap key: PORT - name: ELASTIC_PASSWORD valueFrom: secretKeyRef: name: env-secret key: ELASTIC_PASSWORD The other syntax may also work and be acceptable, I'm just offering this answer as another alternative syntax for mapping Kubernetes configuration and secret values into environment variables. When I checked the URL referenced in the answer from @Valentin BOUSSON. The page now (2023-05-16) says: To use a Secret in an environment variable in a Pod: For each container in your Pod specification, add an environment variable for each Secret key that you want to use to the env[].valueFrom.secretKeyRef field. Modify your image and/or command line so that the program looks for values in the specified environment variables.
I'm currently working on a Kubernetes deployment file and I need to set the environment variables that the container will have. Is there a way to set Kubernetes secrets as environment variables in the deployment file? Here is a dummy file to help explain:
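For completeness, a hedged example of creating the env-configmap and env-secret objects that the snippet above references; the names, keys and values are placeholders.

kubectl create configmap env-configmap --from-literal=PORT=9200
kubectl create secret generic env-secret --from-literal=ELASTIC_PASSWORD='changeme'

# Verify what actually lands in the container environment
kubectl exec deploy/<your-deployment> -- printenv PORT ELASTIC_PASSWORD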
As per the official doc: after disabling Kubernetes, select Apply & Restart to save the settings. This stops and removes the Kubernetes containers, and also removes the /usr/local/bin/kubectl command. The troubleshooting steps below can help: If you don't have kubectl installed in /usr/local/bin, Docker will install it for you on startup, so check where kubectl is located and then remove that kubectl binary. Check whether Kubernetes is still in the starting state. Check for an existing Homebrew installation using brew list kubectl or brew search kubectl. Try clean / purge data or reset to factory defaults in the troubleshooting menu. Also, as @David Maze suggested, you can install minikube using brew install minikube without removing kubectl.
###Instruction: I can still access kubectl after disabling Kubernetes from Docker Desktop v4.18.0 On running the kubectl version command, following output is displayed: Environment details: - OS: MacOS Ventura 13.3.1 (M1) - Docker Desktop: v4.18.0 I want to entirely remove kubectl and related services, so that I can do a fresh install of minikube for local development. Please suggest required steps for the same. ### Response: As per official doc: After disabling Kubernetes Select Apply & Restart to save the settings.This stops and removes Kubernetes containers, and also removes the /usr/local/bin/kubectl command. Below troubleshooting steps can help you: If you don't have kubectl installed in /usr/local/bin, Docker will install it for you on startup, so check where the kubectl is located then remove the kubectl binary. Check if kubernetes is still in the starting state. Check if any existing repositories are there using brew search kubectl. Try clean / purge data or reset to factory defaults in the troubleshooting menu. Also as @David Maze suggested you can install minikube using brew install minikube without removing kubectl.
I can still access kubectl after disabling Kubernetes from Docker Desktop v4.18.0 On running the kubectl version command, following output is displayed: Environment details: - OS: MacOS Ventura 13.3.1 (M1) - Docker Desktop: v4.18.0 I want to entirely remove kubectl and related services, so that I can do a fresh install of minikube for local development. Please suggest required steps for the same.
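A short, hedged sequence for the cleanup described above; paths can differ per machine, so check with which first.

# Where does the kubectl on PATH come from?
which kubectl
ls -l /usr/local/bin/kubectl      # often a link into the Docker Desktop bundle

# If it is the Docker Desktop copy and Kubernetes is already disabled, remove it
sudo rm /usr/local/bin/kubectl

# Fresh local setup with Homebrew
brew install minikube
brew install kubectl              # only if you still want a standalone kubectl
minikube start
kubectl version --client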
Play with argocd in the killercoda playground. This is probably the easiest way to get started learning and messing around with it. https://killercoda.com/mabusaa/course/argocd-endusers-scenarios/00-argocd-playground Or follow a guide like this: https://dev.to/abhinavd26/all-about-argocd-a-beginners-guide-33c9 Make sure your endpoints and services are up and you can curl the endpoint. Make sure to read the sections about ingresses and where to curl or connect to. You may also need to ensure your network is set up properly to allow you to access the endpoint. Here is a quick example from running the killercoda example... $ kubectl create namespace argocd namespace/argocd created $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created ... networkpolicy.networking.k8s.io/argocd-server-network-policy created Look for the argocd-server pod to be running (bottom one) $ kubectl get pods --namespace argocd NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 55s argocd-applicationset-controller-84c8d6fc9b-nstm5 1/1 Running 0 55s argocd-dex-server-59dc6fc75b-d8pnz 1/1 Running 0 55s argocd-notifications-controller-5d7dd8c5b7-j2hb4 1/1 Running 0 55s argocd-redis-74cb89f466-52d22 1/1 Running 0 55s argocd-repo-server-c456f6997-4qcww 1/1 Running 0 55s argocd-server-7555f5c778-fx2s5 1/1 Running 0 55s Look at the endpoints - note the argocd-server endpoint on 192.168.1.21:8080 $ kubectl --namespace argocd get endpoints NAME ENDPOINTS AGE argocd-applicationset-controller 192.168.1.18:8080,192.168.1.18:7000 85s argocd-dex-server 192.168.1.20:5558,192.168.1.20:5557,192.168.1.20:5556 85s argocd-metrics 192.168.1.23:8082 85s argocd-notifications-controller-metrics 192.168.1.19:9001 85s argocd-redis 192.168.1.22:6379 85s argocd-repo-server 192.168.1.17:8084,192.168.1.17:8081 85s argocd-server 192.168.1.21:8080,192.168.1.21:8080 85s argocd-server-metrics 192.168.1.21:8083 85s Look at the services - note the argocd-server service with ClusterIP 10.105.38.254 port(s) 80/TCP,443/TCP $ kubectl --namespace argocd get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argocd-applicationset-controller ClusterIP 10.107.62.45 <none> 7000/TCP,8080/TCP 113s argocd-dex-server ClusterIP 10.102.41.206 <none> 5556/TCP,5557/TCP,5558/TCP 113s argocd-metrics ClusterIP 10.111.77.173 <none> 8082/TCP 113s argocd-notifications-controller-metrics ClusterIP 10.100.24.197 <none> 9001/TCP 113s argocd-redis ClusterIP 10.100.133.38 <none> 6379/TCP 113s argocd-repo-server ClusterIP 10.106.117.208 <none> 8081/TCP,8084/TCP 113s argocd-server ClusterIP 10.105.38.254 <none> 80/TCP,443/TCP 113s argocd-server-metrics ClusterIP 10.98.252.11 <none> 8083/TCP 113s If you curl the argocd server cluster ip and port (10.105.38.254:443) or endpoint (192.168.1.21:8080), you should get something like this: $ curl -k https://192.168.1.21:8080/swagger-ui <!DOCTYPE html> <html> <head> <title>API documentation</title> <meta charset="utf-8"/> <meta name="viewport" content="width=device-width, initial-scale=1"> <link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700" rel="stylesheet"> <style> body { margin: 0; padding: 0; } </style> </head> <body> <redoc spec-url='/swagger.json'></redoc> <script src="/assets/scripts/redoc.standalone.js"> </script> </body> </html> If the curl responds, you can try to put the url in your browser and see if it connects. 
Otherwise look at the options in step 3 of this url for exposing the service (https://argo-cd.readthedocs.io/en/stable/getting_started/) $ kubectl port-forward svc/argocd-server -n argocd 8080:443
###Instruction: Hi I was just trying to install argo CD in my local machine. I have installed and running minikube. After creating argocd namespace, I just try these commands kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml This error persists: Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout Could I get some help? Byw I'm new to argo... ### Response: Play with argocd in the killercoda playground. This is probably the easiest way to get started learning and messing around with it. https://killercoda.com/mabusaa/course/argocd-endusers-scenarios/00-argocd-playground Or follow a guide like this: https://dev.to/abhinavd26/all-about-argocd-a-beginners-guide-33c9 Make sure your endpoints and services are up and you can curl the endpoint. Make sure to read the sections about ingresses and where to curl or connect to. You may also need to ensure your network is set up properly to allow you to access the endpoint. Here is a quick example from running the killercoda example... $ kubectl create namespace argocd namespace/argocd created $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created ... networkpolicy.networking.k8s.io/argocd-server-network-policy created Look for the argocd-server pod to be running (bottom one) $ kubectl get pods --namespace argocd NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 55s argocd-applicationset-controller-84c8d6fc9b-nstm5 1/1 Running 0 55s argocd-dex-server-59dc6fc75b-d8pnz 1/1 Running 0 55s argocd-notifications-controller-5d7dd8c5b7-j2hb4 1/1 Running 0 55s argocd-redis-74cb89f466-52d22 1/1 Running 0 55s argocd-repo-server-c456f6997-4qcww 1/1 Running 0 55s argocd-server-7555f5c778-fx2s5 1/1 Running 0 55s Look at the endpoints - note the argocd-server endpoint on 192.168.1.21:8080 $ kubectl --namespace argocd get endpoints NAME ENDPOINTS AGE argocd-applicationset-controller 192.168.1.18:8080,192.168.1.18:7000 85s argocd-dex-server 192.168.1.20:5558,192.168.1.20:5557,192.168.1.20:5556 85s argocd-metrics 192.168.1.23:8082 85s argocd-notifications-controller-metrics 192.168.1.19:9001 85s argocd-redis 192.168.1.22:6379 85s argocd-repo-server 192.168.1.17:8084,192.168.1.17:8081 85s argocd-server 192.168.1.21:8080,192.168.1.21:8080 85s argocd-server-metrics 192.168.1.21:8083 85s Look at the services - note the argocd-server service with ClusterIP 10.105.38.254 port(s) 80/TCP,443/TCP $ kubectl --namespace argocd get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argocd-applicationset-controller ClusterIP 10.107.62.45 <none> 7000/TCP,8080/TCP 113s argocd-dex-server ClusterIP 10.102.41.206 <none> 5556/TCP,5557/TCP,5558/TCP 113s argocd-metrics ClusterIP 10.111.77.173 <none> 8082/TCP 113s argocd-notifications-controller-metrics ClusterIP 10.100.24.197 <none> 9001/TCP 113s argocd-redis ClusterIP 10.100.133.38 <none> 6379/TCP 113s argocd-repo-server ClusterIP 10.106.117.208 <none> 8081/TCP,8084/TCP 113s argocd-server ClusterIP 10.105.38.254 <none> 80/TCP,443/TCP 113s argocd-server-metrics ClusterIP 10.98.252.11 <none> 8083/TCP 113s If you curl the argocd server cluster ip and port (10.105.38.254:443) or endpoint (192.168.1.21:8080), you should get something like this: $ curl -k https://192.168.1.21:8080/swagger-ui <!DOCTYPE html> <html> <head> <title>API documentation</title> <meta charset="utf-8"/> <meta 
name="viewport" content="width=device-width, initial-scale=1"> <link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700" rel="stylesheet"> <style> body { margin: 0; padding: 0; } </style> </head> <body> <redoc spec-url='/swagger.json'></redoc> <script src="/assets/scripts/redoc.standalone.js"> </script> </body> </html> If the curl responds, you can try to put the url in your browser and see if it connects. Otherwise look at the options in step 3 of this url for exposing the service (https://argo-cd.readthedocs.io/en/stable/getting_started/) $ kubectl port-forward svc/argocd-server -n argocd 8080:443
Hi I was just trying to install argo CD in my local machine. I have installed and running minikube. After creating argocd namespace, I just try these commands kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml This error persists: Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout Could I get some help? Byw I'm new to argo...
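One extra step worth noting once the install succeeds (from the same getting-started flow; the secret name below is the Argo CD default): after starting the port-forward you still need the initial admin password to log in.

# Forward the API/UI locally, then browse to https://localhost:8080
kubectl port-forward svc/argocd-server -n argocd 8080:443

# In another terminal: fetch the auto-generated password for the "admin" user
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo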
I've also been confused by this lately. Here are some findings after a while of digging in the source code and the docs. 'Eviction' here actually means two slightly different concepts, which are both documented in the official docs: Node-Pressure Eviction and API-Initiated Eviction. They can easily be mixed up when we just talk about 'eviction' because they both do the same thing: evict pods from nodes. In fact, the doc on 'Node-Pressure Eviction' states: Node-pressure eviction is not the same as API-initiated eviction. The difference between the two is that 'API-Initiated Eviction' is, as the doc says, performing a policy-controlled DELETE operation on the Pod. So it will eventually delete the object stored in the API server if the pod is evicted. 'Node-Pressure Eviction', on the other hand, is issued directly by the kubelet, and what it does is set the PodPhase in the pod's status to 'Failed' and the Reason to 'Evicted': During a node-pressure eviction, the kubelet sets the PodPhase for the selected pods to Failed. This terminates the pods. This is what produces the Evicted pods shown when running kubectl get pod. So the direct answer to your question is: If the pod is evicted using the Eviction API, the pod object will be deleted. If the pod is evicted by the kubelet due to node pressure, the pod object will remain and will be in Failed status.
###Instruction: I'm having trouble understanding what pod Eviction means mechanically in terms of K8s's actions -- what exactly does K8s do with the pod during eviction? Specifically, my main question is this: Under what conditions is an Evicted pod actually deleted from ETCD? Under what conditions is an Evicted pod just killed without being deleted from the API server? If I Evict a pod directly using the Eviction API, the pod object is actually deleted. On the other hand, I've definitely seen pods hang in "Evicted" in the status column after I run "kubectl get pod". Edit: Removed follow-up questions about Preemption and OOM-Killing to conform to the guideline of one question per post. Might post a separate question about OOM management later. ### Response: I'm also confused by this lately. Here are some findings after a while of digging in the source code and the docs. 'Eviction' here actually means two slightly different concepts, which are both documented in the official docs: Node-Pressure Eviction and API-Initiated Eviction. They can really be mixed up when we just talk about 'Eviction' because they both do the same thing: evict pods from nodes. Actually the doc of 'Node-Pressure Eviction' states: Node-pressure eviction is not the same as API-initiated eviction. The difference between these two is that 'API-Initiated Eviction' is, as the doc said: performing a policy-controlled DELETE operation on the Pod. So it will eventually delete the object stored in API server if the pod is evicted. But 'Node-Pressure Eviction' is issued directly by the kubelet and what it does is set the PodPhase in pod's status to 'Failed' and the Reason to 'Evicted' During a node-pressure eviction, the kubelet sets the PodPhase for the selected pods to Failed. This terminates the pods. This will result in the Evicted pods shown when running kubectl get pod. So the direct answer to your question is: If the pod is evicted using the Eviction API, the pod object will be deleted. If the pod is evicted by kubelet due to node pressure, the pod object will remain and will be in Failed status.
I'm having trouble understanding what pod Eviction means mechanically in terms of K8s's actions -- what exactly does K8s do with the pod during eviction? Specifically, my main question is this: Under what conditions is an Evicted pod actually deleted from ETCD? Under what conditions is an Evicted pod just killed without being deleted from the API server? If I Evict a pod directly using the Eviction API, the pod object is actually deleted. On the other hand, I've definitely seen pods hang in "Evicted" in the status column after I run "kubectl get pod". Edit: Removed follow-up questions about Preemption and OOM-Killing to conform to the guideline of one question per post. Might post a separate question about OOM management later.
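A small illustration of the difference in practice (pod, namespace and node names are placeholders):

# Node-pressure eviction: the Pod object survives, with phase=Failed and reason=Evicted
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.phase} {.status.reason}{"\n"}'

# List all such leftover pods in a namespace
kubectl get pods -n <namespace> --field-selector=status.phase=Failed

# API-initiated eviction (kubectl drain uses the Eviction API): the Pod objects are deleted
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data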
STDOUT is usually a command's normal output, and STDERR is typically used for error messages. By default, docker/container logs show the command's STDOUT and STDERR. What information gets logged, and the format of the log, depends almost entirely on the container's endpoint command: the stream field simply records which of the two descriptors the process wrote to, and many Kubernetes components (kube-proxy included) write all of their log output, info messages included, to stderr by default. Whichever runtime writes the lines (Docker in JSON format, containerd in CRI format), they need to be parsed before being shipped to the logging agent. Refer to this Git link by stevenAcoffman for more information; there they used Fluent Bit as the log parser. You can also use Sematext as a log agent, which supports the CRI-O log format as well, and install its plugin to parse the logs. By default, Docker captures the standard output (and standard error) of all your containers and writes them to files using the JSON format. The JSON format annotates each line with its origin (stdout or stderr) and its timestamp. Each log file contains information about only one container. {"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"} Along with the above, you need to use the JSON logging file driver.
###Instruction: when i use docker as CRI: {"log":"I0421 14:23:18.944348 1 node.go:172] Successfully retrieved node IP: 192.168.49.2\n","stream":"stderr","time":"2023-04-21T14:23:18.944635198Z"} {"log":"I0421 14:23:18.944724 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation\n","stream":"stderr","time":"2023-04-21T14:23:18.944838628Z"} {"log":"W0421 14:23:19.008388 1 server_others.go:578] Unknown proxy mode \"\", assuming iptables proxy\n","stream":"stderr","time":"2023-04-21T14:23:19.008544314Z"} {"log":"I0421 14:23:19.008581 1 server_others.go:185] Using iptables Proxier.\n","stream":"stderr","time":"2023-04-21T14:23:19.008653777Z"} {"log":"I0421 14:23:19.008904 1 server.go:650] Version: v1.20.0\n","stream":"stderr","time":"2023-04-21T14:23:19.008963124Z"} {"log":"I0421 14:23:19.009762 1 config.go:315] Starting service config controller\n","stream":"stderr","time":"2023-04-21T14:23:19.009986673Z"} {"log":"I0421 14:23:19.009867 1 shared_informer.go:240] Waiting for caches to sync for service config\n","stream":"stderr","time":"2023-04-21T14:23:19.009999075Z"} {"log":"I0421 14:23:19.009973 1 config.go:224] Starting endpoint slice config controller\n","stream":"stderr","time":"2023-04-21T14:23:19.010041688Z"} {"log":"I0421 14:23:19.009979 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\n","stream":"stderr","time":"2023-04-21T14:23:19.01004961Z"} {"log":"I0421 14:23:19.110110 1 shared_informer.go:247] Caches are synced for endpoint slice config \n","stream":"stderr","time":"2023-04-21T14:23:19.110273306Z"} {"log":"I0421 14:23:19.110120 1 shared_informer.go:247] Caches are synced for service config \n","stream":"stderr","time":"2023-04-21T14:23:19.110525274Z"} And when i use containerd as CRI: 2022-07-19T11:13:32.228206987+08:00 stderr F I0719 11:13:32.228176 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288 2022-07-19T11:13:32.228229469+08:00 stderr F I0719 11:13:32.228203 1 conntrack.go:52] Setting nf_conntrack_max to 524288 2022-07-19T11:13:32.228460095+08:00 stderr F I0719 11:13:32.228431 1 conntrack.go:83] Setting conntrack hashsize to 131072 2022-07-19T11:13:32.237601129+08:00 stderr F I0719 11:13:32.237568 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 2022-07-19T11:13:32.237638594+08:00 stderr F I0719 11:13:32.237613 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 300 2022-07-19T11:13:32.237787689+08:00 stderr F I0719 11:13:32.237756 1 config.go:315] Starting service config controller Any body know why the stream of log file is always stderr? i expect that info messages is writing to stdout and err messages is writing to stderr. ### Response: STDOUT is usually a command’s normal output, and STDERR is typically used to output error messages. By default, docker/container logs show the command’s STDOUT and STDERR. This information that is logged and the format of the log depends almost entirely on the container’s endpoint command. When you use docker/ container as CRI to write the logs in Json it shows the default as STDERR and this needs to be parsed before moving to the logging agent. Refer to this GIT LINK by stevenAcoffman for more information and here they used fluentbit as log parser. You can also use sematext as a Log agent which supports CRI-O log format as well and install its Plugin to parse the logs. 
By default, Docker captures the standard output (and standard error) of all your containers, and writes them in files using the JSON format. The JSON format annotates each line with its origin (stdout or stderr) and its timestamp. Each log file contains information about only one container. {"log":"Log line is here\n","stream":"stdout","time":"2019-01-01T11:11:11.111111111Z"} Along with the above, you need to install JSON Logging file driver
when i use docker as CRI: {"log":"I0421 14:23:18.944348 1 node.go:172] Successfully retrieved node IP: 192.168.49.2\n","stream":"stderr","time":"2023-04-21T14:23:18.944635198Z"} {"log":"I0421 14:23:18.944724 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation\n","stream":"stderr","time":"2023-04-21T14:23:18.944838628Z"} {"log":"W0421 14:23:19.008388 1 server_others.go:578] Unknown proxy mode \"\", assuming iptables proxy\n","stream":"stderr","time":"2023-04-21T14:23:19.008544314Z"} {"log":"I0421 14:23:19.008581 1 server_others.go:185] Using iptables Proxier.\n","stream":"stderr","time":"2023-04-21T14:23:19.008653777Z"} {"log":"I0421 14:23:19.008904 1 server.go:650] Version: v1.20.0\n","stream":"stderr","time":"2023-04-21T14:23:19.008963124Z"} {"log":"I0421 14:23:19.009762 1 config.go:315] Starting service config controller\n","stream":"stderr","time":"2023-04-21T14:23:19.009986673Z"} {"log":"I0421 14:23:19.009867 1 shared_informer.go:240] Waiting for caches to sync for service config\n","stream":"stderr","time":"2023-04-21T14:23:19.009999075Z"} {"log":"I0421 14:23:19.009973 1 config.go:224] Starting endpoint slice config controller\n","stream":"stderr","time":"2023-04-21T14:23:19.010041688Z"} {"log":"I0421 14:23:19.009979 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\n","stream":"stderr","time":"2023-04-21T14:23:19.01004961Z"} {"log":"I0421 14:23:19.110110 1 shared_informer.go:247] Caches are synced for endpoint slice config \n","stream":"stderr","time":"2023-04-21T14:23:19.110273306Z"} {"log":"I0421 14:23:19.110120 1 shared_informer.go:247] Caches are synced for service config \n","stream":"stderr","time":"2023-04-21T14:23:19.110525274Z"} And when i use containerd as CRI: 2022-07-19T11:13:32.228206987+08:00 stderr F I0719 11:13:32.228176 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288 2022-07-19T11:13:32.228229469+08:00 stderr F I0719 11:13:32.228203 1 conntrack.go:52] Setting nf_conntrack_max to 524288 2022-07-19T11:13:32.228460095+08:00 stderr F I0719 11:13:32.228431 1 conntrack.go:83] Setting conntrack hashsize to 131072 2022-07-19T11:13:32.237601129+08:00 stderr F I0719 11:13:32.237568 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 2022-07-19T11:13:32.237638594+08:00 stderr F I0719 11:13:32.237613 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 300 2022-07-19T11:13:32.237787689+08:00 stderr F I0719 11:13:32.237756 1 config.go:315] Starting service config controller Any body know why the stream of log file is always stderr? i expect that info messages is writing to stdout and err messages is writing to stderr.
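As a concrete illustration of the "parse before shipping" point above, this is roughly what a Fluent Bit setup for the containerd/CRI log format looks like; it is a sketch based on the commonly published cri parser, so adjust names and paths to your own pipeline.

[PARSER]
    Name        cri
    Format      regex
    Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z

[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Parser  cri
    Tag     kube.*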
The error suggests that the container is already using more memory than the limit you're trying to impose. If the limit were imposed in this case, any further attempt to allocate memory would fail. That would usually cause processes to abort, which is bad. Also, the container would already be violating the limit you just set. The limit isn't allowed to be imposed in this case because data loss would be highly likely, and the choice of which data to lose should not be left to chance. If you want to lower the limit, I suggest you first lower the current usage to below that limit, so that you retain control over which memory is freed.
###Instruction: I am using 'crictl' tool to work with containerd runtime containers (under kubernetes) in a managed cluster. I'm trying to set the memory limit (in bytes) to 16MB with the command: crictl -r unix:///run/containerd/containerd.sock update --memory 16777216 c60df9ef3381e And get the following error: E1219 11:10:11.616194 1241 remote_runtime.go:640] "UpdateContainerResources from runtime service failed" err=< rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248) : unknown > containerID="c60df9ef3381e" FATA[0000] updating container resources for "c60df9ef3381e": rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248) : unknown At first I thought that maybe I cannot set a memory limit directly to a running container lower than the limit that appears in the kubernetes yaml. Here Are the limits from K8s: Requests:{"cpu":"100m","memory":"64Mi"} Limits:{"cpu":"200m","memory":"128Mi"} But not, even setting a memory limit above the K8S request (e.g. 65MB) gives this same error! This works on Docker runtime - I'm able to limit the memory of the container. Yes, it might crash, but the operation works.. Then, I tried to give a memory limit higher than the current usage, and it succeeded... Can anyone help understanding this error and what might be causing it on containerd runtime?? Is this indeed a limitation that I cannot limit to a lower memory currently used by the container? Is there a way to overcome that? Thanks a lot for your time!!! ### Response: The error suggests that the container is already using more memory than the limit you're trying to impose. If the limit were to be imposed in this case, then any further attempts to allocate memory would fail. That would usually cause processes to abort, which is bad. Also, the container would already be violating the limit you just set. The limit isn't allowed to be imposed in this case because data loss is highly likely to occur, and the choice of which data to lose should not be left to chance. I suggest if you want to lower the limit, you need to lower the current usage to below that limit before you do so, to retain control of what memory is freed.
I am using 'crictl' tool to work with containerd runtime containers (under kubernetes) in a managed cluster. I'm trying to set the memory limit (in bytes) to 16MB with the command: crictl -r unix:///run/containerd/containerd.sock update --memory 16777216 c60df9ef3381e And get the following error: E1219 11:10:11.616194 1241 remote_runtime.go:640] "UpdateContainerResources from runtime service failed" err=< rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248) : unknown > containerID="c60df9ef3381e" FATA[0000] updating container resources for "c60df9ef3381e": rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248) : unknown At first I thought that maybe I cannot set a memory limit directly to a running container lower than the limit that appears in the kubernetes yaml. Here Are the limits from K8s: Requests:{"cpu":"100m","memory":"64Mi"} Limits:{"cpu":"200m","memory":"128Mi"} But not, even setting a memory limit above the K8S request (e.g. 65MB) gives this same error! This works on Docker runtime - I'm able to limit the memory of the container. Yes, it might crash, but the operation works.. Then, I tried to give a memory limit higher than the current usage, and it succeeded... Can anyone help understanding this error and what might be causing it on containerd runtime?? Is this indeed a limitation that I cannot limit to a lower memory currently used by the container? Is there a way to overcome that? Thanks a lot for your time!!!
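A hedged way to sanity-check this before retrying, plus the more usual route of changing the limit through the Pod spec rather than crictl; the container ID and deployment name are placeholders.

# Current memory usage of the container; any new limit has to sit above this value
crictl -r unix:///run/containerd/containerd.sock stats c60df9ef3381e

# In a Kubernetes-managed cluster the supported way is to change the Pod's resources,
# which recreates the container with the new cgroup limits applied from the start
kubectl set resources deployment <name> --requests=memory=64Mi --limits=memory=128Mi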
I solved my issue by following this guide and then exported the resulting stack into my CloudFormation script. Then, to deploy my application, I updated my Kubernetes manifest to: --- apiVersion: v1 kind: Namespace metadata: name: example --- apiVersion: apps/v1 kind: Deployment metadata: namespace: example name: deployment-example-be-app spec: selector: matchLabels: app.kubernetes.io/name: example-be-app replicas: 2 template: metadata: labels: app.kubernetes.io/name: example-be-app spec: containers: - name: example-be-app image: public.ecr.aws/fake_url/example:latest imagePullPolicy: Always ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: namespace: example name: service-example-be-app annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing spec: type: LoadBalancer ports: - port: 80 targetPort: 8080 protocol: TCP selector: app.kubernetes.io/name: example-be-app Now I can access my example application from the browser.
###Instruction: I am trying to create EKS Fargate cluster and deploy example Spring Boot application with 1 endpoint, I successfully create stack with following CloudFormation script: --- AWSTemplateFormatVersion: '2010-09-09' Description: 'AWS CloudFormation template for EKS Fargate managed Kubernetes cluster with exposed endpoints' Resources: VPC: Type: AWS::EC2::VPC Properties: CidrBlock: 10.0.0.0/16 EnableDnsSupport: true EnableDnsHostnames: true InternetGateway: Type: AWS::EC2::InternetGateway VPCGatewayAttachment: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicSubnet: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.2.0/24 MapPublicIpOnLaunch: true AvailabilityZone: !Select [ 0, !GetAZs '' ] PrivateSubnetA: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.0.0/24 AvailabilityZone: !Select [ 0, !GetAZs '' ] PrivateSubnetB: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.1.0/24 AvailabilityZone: !Select [ 1, !GetAZs '' ] PublicRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC PublicRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway SubnetRouteTableAssociationA: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable EIP: Type: AWS::EC2::EIP NatGateway: Type: AWS::EC2::NatGateway Properties: SubnetId: !Ref PublicSubnet AllocationId: !GetAtt EIP.AllocationId PrivateRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC PrivateRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: !Ref NatGateway PrivateSubnetRouteTableAssociationA: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PrivateSubnetA RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociationB: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PrivateSubnetB RouteTableId: !Ref PrivateRouteTable EKSCluster: Type: AWS::EKS::Cluster Properties: Name: EKSFargateCluster Version: '1.26' ResourcesVpcConfig: SubnetIds: - !Ref PrivateSubnetA - !Ref PrivateSubnetB RoleArn: !GetAtt EKSClusterRole.Arn FargateProfile: Type: AWS::EKS::FargateProfile Properties: ClusterName: !Ref EKSCluster FargateProfileName: FargateProfile PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn Selectors: - Namespace: default Subnets: - !Ref PrivateSubnetA - !Ref PrivateSubnetB FargateProfileCoredns: Type: AWS::EKS::FargateProfile Properties: ClusterName: !Ref EKSCluster FargateProfileName: CorednsProfile PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn Selectors: - Namespace: kube-system Labels: - Key: k8s-app Value: kube-dns Subnets: - !Ref PrivateSubnetA - !Ref PrivateSubnetB FargatePodExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: - eks-fargate-pods.amazonaws.com Action: - sts:AssumeRole ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy EKSClusterRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: - eks.amazonaws.com Action: - sts:AssumeRole ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy - arn:aws:iam::aws:policy/AmazonEKSVPCResourceController I run following command to path the CoreDNS for 
Fargate: kubectl patch deployment coredns \ -n kube-system \ --type json \ -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]' Then I deploy my example application image from public ECR with following kubernetes manifest: --- apiVersion: apps/v1 kind: Deployment metadata: name: example-app spec: replicas: 2 selector: matchLabels: app: example-app template: metadata: labels: app: example-app spec: containers: - name: ventu image: public.ecr.aws/not_real_url/public_ecr_name:latest ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: example-service spec: type: LoadBalancer selector: app: example-app ports: - protocol: TCP port: 80 targetPort: 8080 Then when I run: kubectl get svc I see result: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-service LoadBalancer 172.20.228.77 aa0116829ac2647a7bf39a97bffb0183-1208408433.eu-central-1.elb.amazonaws.com 80:31915/TCP 16m kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 29m However when I try to reach the EXTERNAL-IP on my LoadBalancer example-service, I get empty response, I can't reach my application on only path defined in my Spring Boot application: /api/v1/info server.port=8080 server.servlet.context-path=/api/v1 What am I missing? Couple of information: my pods spin up successfully, I can see Spring Boot logging when I run kubectl logs pod-name my coredns pods spin up correctly as well I use busybox to test my cluster's dns, and everything seems to be working too ### Response: I solved my issue, by following this guide I then exported resulting stack into my CloudFormation script. Then to deploy my application I updated my kubernetes manifest to: --- apiVersion: v1 kind: Namespace metadata: name: example --- apiVersion: apps/v1 kind: Deployment metadata: namespace: example name: deployment-example-be-app spec: selector: matchLabels: app.kubernetes.io/name: example-be-app replicas: 2 template: metadata: labels: app.kubernetes.io/name: example-be-app spec: containers: - name: example-be-app image: public.ecr.aws/fake_url/example:latest imagePullPolicy: Always ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: namespace: example name: service-example-be-app annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing spec: type: LoadBalancer ports: - port: 80 targetPort: 8080 protocol: TCP selector: app.kubernetes.io/name: example-be-app Now I access my example application form browser.
I am trying to create EKS Fargate cluster and deploy example Spring Boot application with 1 endpoint, I successfully create stack with following CloudFormation script: --- AWSTemplateFormatVersion: '2010-09-09' Description: 'AWS CloudFormation template for EKS Fargate managed Kubernetes cluster with exposed endpoints' Resources: VPC: Type: AWS::EC2::VPC Properties: CidrBlock: 10.0.0.0/16 EnableDnsSupport: true EnableDnsHostnames: true InternetGateway: Type: AWS::EC2::InternetGateway VPCGatewayAttachment: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicSubnet: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.2.0/24 MapPublicIpOnLaunch: true AvailabilityZone: !Select [ 0, !GetAZs '' ] PrivateSubnetA: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.0.0/24 AvailabilityZone: !Select [ 0, !GetAZs '' ] PrivateSubnetB: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.1.0/24 AvailabilityZone: !Select [ 1, !GetAZs '' ] PublicRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC PublicRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway SubnetRouteTableAssociationA: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable EIP: Type: AWS::EC2::EIP NatGateway: Type: AWS::EC2::NatGateway Properties: SubnetId: !Ref PublicSubnet AllocationId: !GetAtt EIP.AllocationId PrivateRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC PrivateRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: !Ref NatGateway PrivateSubnetRouteTableAssociationA: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PrivateSubnetA RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociationB: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PrivateSubnetB RouteTableId: !Ref PrivateRouteTable EKSCluster: Type: AWS::EKS::Cluster Properties: Name: EKSFargateCluster Version: '1.26' ResourcesVpcConfig: SubnetIds: - !Ref PrivateSubnetA - !Ref PrivateSubnetB RoleArn: !GetAtt EKSClusterRole.Arn FargateProfile: Type: AWS::EKS::FargateProfile Properties: ClusterName: !Ref EKSCluster FargateProfileName: FargateProfile PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn Selectors: - Namespace: default Subnets: - !Ref PrivateSubnetA - !Ref PrivateSubnetB FargateProfileCoredns: Type: AWS::EKS::FargateProfile Properties: ClusterName: !Ref EKSCluster FargateProfileName: CorednsProfile PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn Selectors: - Namespace: kube-system Labels: - Key: k8s-app Value: kube-dns Subnets: - !Ref PrivateSubnetA - !Ref PrivateSubnetB FargatePodExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: - eks-fargate-pods.amazonaws.com Action: - sts:AssumeRole ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy EKSClusterRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: - eks.amazonaws.com Action: - sts:AssumeRole ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy - arn:aws:iam::aws:policy/AmazonEKSVPCResourceController I run following command to path the CoreDNS for Fargate: kubectl patch 
deployment coredns \ -n kube-system \ --type json \ -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]' Then I deploy my example application image from public ECR with following kubernetes manifest: --- apiVersion: apps/v1 kind: Deployment metadata: name: example-app spec: replicas: 2 selector: matchLabels: app: example-app template: metadata: labels: app: example-app spec: containers: - name: ventu image: public.ecr.aws/not_real_url/public_ecr_name:latest ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: example-service spec: type: LoadBalancer selector: app: example-app ports: - protocol: TCP port: 80 targetPort: 8080 Then when I run: kubectl get svc I see result: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-service LoadBalancer 172.20.228.77 aa0116829ac2647a7bf39a97bffb0183-1208408433.eu-central-1.elb.amazonaws.com 80:31915/TCP 16m kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 29m However when I try to reach the EXTERNAL-IP on my LoadBalancer example-service, I get empty response, I can't reach my application on only path defined in my Spring Boot application: /api/v1/info server.port=8080 server.servlet.context-path=/api/v1 What am I missing? Couple of information: my pods spin up successfully, I can see Spring Boot logging when I run kubectl logs pod-name my coredns pods spin up correctly as well I use busybox to test my cluster's dns, and everything seems to be working too
Currently, it is not possible to have multiple tenants in the same namespace because the MinIO operator always creates services with the names minio, minio-console, and minio-hl in the installation namespace. However, a workaround is to use a different namespace for each tenant and use the DNS in Kubernetes to access them: minio.<other-namespace>.svc For more information, refer to https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/.
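To make the workaround concrete, below is a minimal sketch of how an application in its own namespace could reach a tenant's MinIO service in another namespace through cluster DNS. The namespace names, the image, and the MinIO API port 9000 are assumptions for illustration, not values taken from the question.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer-app
  namespace: app-namespace          # namespace of the solution consuming the MinIO API (assumed name)
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer-app
  template:
    metadata:
      labels:
        app: consumer-app
    spec:
      containers:
        - name: consumer
          image: example.com/consumer:latest    # placeholder image
          env:
            # The operator creates a Service named "minio" in the tenant namespace,
            # so other namespaces can reach it via its cluster DNS name.
            - name: MINIO_ENDPOINT
              value: "minio.tenant-a.svc.cluster.local:9000"   # tenant namespace and API port are assumptions
This keeps each tenant in its dedicated namespace, as the operator requires, while the consuming solution lives wherever it needs to.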
###Instruction: I am trying out the minio-operator. This is a two part question regarding namespaces mentioned in the docs. I would like have a namespace in which I deploy different kinds of operators. Eventhough the docs require a dedicated namespace for the operator, I seem to experience no problems so far, putting all my operators in the same namespace. The MinIO Operator exists in its own namespace. When it comes to tenants the docs also require a dedicated namespace for each tenant. When you use the Operator to create a tenant, the tenant must have its own namespace. Does this mean nothing else but the tenant can be in this namespace, or just, that no more than one tenant can exist per namespace. What I would like to do is to put a tenant into the same namespace as the solution, that is consuming the minio API. Is my attempt wrong, and if so, why? What is the reason for those constraints? I could not find any explanation in the article I cited above. ### Response: Currently, it is not possible to have multiple tenants in the same namespace because the MinIO operator always creates services with the names minio, minio-console, and minio-hl in the installation namespace. However, a workaround is to use a different namespace for each tenant and use the DNS in Kubernetes to access them: minio.<other-namespace>.svc For more information, refer to https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/.
I am trying out the minio-operator. This is a two part question regarding namespaces mentioned in the docs. I would like have a namespace in which I deploy different kinds of operators. Eventhough the docs require a dedicated namespace for the operator, I seem to experience no problems so far, putting all my operators in the same namespace. The MinIO Operator exists in its own namespace. When it comes to tenants the docs also require a dedicated namespace for each tenant. When you use the Operator to create a tenant, the tenant must have its own namespace. Does this mean nothing else but the tenant can be in this namespace, or just, that no more than one tenant can exist per namespace. What I would like to do is to put a tenant into the same namespace as the solution, that is consuming the minio API. Is my attempt wrong, and if so, why? What is the reason for those constraints? I could not find any explanation in the article I cited above.
Wow, great spot... The enable flag isn't there. You might want to raise this on the Azure PowerShell GitHub. As a workaround, the Azure CLI does allow enabling of local accounts. az aks update --name cluster --resource-group rg --enable-local-accounts https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update
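Since the question runs everything from an Azure DevOps YAML pipeline, here is a rough sketch of how that CLI workaround could be wired into a pipeline step. It assumes the same service connection, resource group and cluster names used in the question; adjust as needed.
- task: AzureCLI@2
  displayName: 'enable-local-account'
  name: enablelocalaccount
  inputs:
    azureSubscription: 'myazsubscription'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aks update \
        --name mycluster \
        --resource-group myresourcegrp \
        --enable-local-accounts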
###Instruction: I am trying to enable the local account on an AKS cluster (version 1.24.10) by running commands from an AzureDevOps yaml pipeline. The “old” az aks command : az aks update -g <myresourcegroup> -n <myclustername> --enable-local used to serve me well to enable a local account on an AKS cluster. In the yaml pipeline, however, this does not seem to work and I resorted to running the Set-AzAksCluster command from within a AzurePowerShell@5 task - task: AzurePowerShell@5 displayName: 'disable-local-account' name: disablelocalaccount inputs: azureSubscription: 'myazsubscription' ScriptType: InlineScript Inline: | Set-AzAksCluster -ResourceGroupName myresourcegrp -Name mycluster -DisableLocalAccount azurePowerShellVersion: LatestVersion By passing the -DisableLocalAccount switch to the command we can disable the local account on the cluster. The enabling of the local account on the cluster just seems to elude me somehow…sigh. Does anybody know if it is possible to enable the local account using the Set-AzAksCluster command? And if so, what is the correct way to go about this? Many thanks! Kind regards, Morné ### Response: Wow, great spot... The enable flag isn't there. You might want to raise this on the Azure PowerShell GitHub. As a workaround, the Azure CLI does allow enabling of local accounts. az aks update --name cluster --resource-group rg --enable-local-accounts https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update
I am trying to enable the local account on an AKS cluster (version 1.24.10) by running commands from an AzureDevOps yaml pipeline. The “old” az aks command : az aks update -g <myresourcegroup> -n <myclustername> --enable-local used to serve me well to enable a local account on an AKS cluster. In the yaml pipeline, however, this does not seem to work and I resorted to running the Set-AzAksCluster command from within a AzurePowerShell@5 task - task: AzurePowerShell@5 displayName: 'disable-local-account' name: disablelocalaccount inputs: azureSubscription: 'myazsubscription' ScriptType: InlineScript Inline: | Set-AzAksCluster -ResourceGroupName myresourcegrp -Name mycluster -DisableLocalAccount azurePowerShellVersion: LatestVersion By passing the -DisableLocalAccount switch to the command we can disable the local account on the cluster. The enabling of the local account on the cluster just seems to elude me somehow…sigh. Does anybody know if it is possible to enable the local account using the Set-AzAksCluster command? And if so, what is the correct way to go about this? Many thanks! Kind regards, Morné
You can check the prerequisites for running Spark on Kubernetes to determine whether your environment is set up correctly; missing any of them can explain poor or stalled execution. A running Kubernetes cluster at version >= 1.22 with access configured to it using kubectl. If you do not already have a working Kubernetes cluster, you may set up a test cluster on your local machine using minikube. We recommend using the latest release of minikube with the DNS addon enabled. Be aware that the default minikube configuration is not enough for running Spark applications. We recommend 3 CPUs and 4g of memory to be able to start a simple Spark application with a single executor. Check the kubernetes-client library’s version of your Spark environment, and its compatibility with your Kubernetes cluster’s version. You must have appropriate permissions to list, create, edit and delete pods in your cluster. You can verify that you can list these resources by running kubectl auth can-i <list|create|edit|delete> pods. The service account credentials used by the driver pods must be allowed to create pods, services and configmaps. You must have Kubernetes DNS configured in your cluster.
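Regarding the last two points, a minimal RBAC sketch for the driver's service account could look like the following. The names and the default namespace are assumptions; the Bitnami chart may already create a suitable service account, so treat this only as an illustration of the required permissions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark-driver
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-driver-role
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-driver-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: spark-driver
    namespace: default
roleRef:
  kind: Role
  name: spark-driver-role
  apiGroup: rbac.authorization.k8s.io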
###Instruction: I have been trying to deploy spark and jupyter note on minikube. I used helm charts for deploying both Jupyter notebook - https://artifacthub.io/packages/helm/pyspark-notebook-helm/pyspark-notebook Spark - https://bitnami.com/stack/spark/helm While able to establish to the master using spark = SparkSession.builder.master("spark://my-release-spark-master-0.my-release-spark-headless.default.svc.cluster.local:7077").getOrCreate() When running the following snippet nums= sc.parallelize([1,2,3,4]) squared = nums.map(lambda x: x*x).collect() for num in squared: print('%i ' % (num)) The execution takes a long time and never completes when it runs the collect() method ### Response: You can check the prerequisites when running Spark in Kubernetes to determine if it will increase its performance A running Kubernetes cluster at version >= 1.22 with access configured to it using kubectl. If you do not already have a working Kubernetes cluster, you may set up a test cluster on your local machine using minikube. We recommend using the latest release of minikube with the DNS addon enabled. Be aware that the default minikube configuration is not enough for running Spark applications. We recommend 3 CPUs and 4g of memory to be able to start a simple Spark application with a single executor. Check kubernetes-client library’s version of your Spark environment, and its compatibility with your Kubernetes cluster’s version. You must have appropriate permissions to list, create, edit and delete pods in your cluster. You can verify that you can list these resources by running      kubectl auth can-i <list|create|edit|delete> pods The service account credentials used by the driver pods must be allowed to create pods, services and configmaps. You must have Kubernetes DNS configured in your cluster.
I have been trying to deploy spark and jupyter note on minikube. I used helm charts for deploying both Jupyter notebook - https://artifacthub.io/packages/helm/pyspark-notebook-helm/pyspark-notebook Spark - https://bitnami.com/stack/spark/helm While able to establish to the master using spark = SparkSession.builder.master("spark://my-release-spark-master-0.my-release-spark-headless.default.svc.cluster.local:7077").getOrCreate() When running the following snippet nums= sc.parallelize([1,2,3,4]) squared = nums.map(lambda x: x*x).collect() for num in squared: print('%i ' % (num)) The execution takes a long time and never completes when it runs the collect() method
I recall seeing this before and finding an answer before. https://cloud.google.com/container-registry/docs/pulling-cached-images Talks about it a little, but I'll explain it so it's easy to follow. If I spin up a private GKE cluster and I create 3 deployments: 1st uses image: nginx:latest 2nd uses image: nginx:stable 3rd uses image: docker.io/busybox:1.36.0-glibc nginx:latest (common tag) will almost always work nginx:stable (popular tag) will work sometimes The super specific tag (rarely used tag) will almost always fail with ImagePullBackOff So why is this the case? 1. The ImagePullBackOff happens when the pods/nodes have no NAT Gateway/no Internet Access kubectl exec -it working-nginx-latest-pod -- curl yahoo.com ^-- You can prove no internet with this, note curl google.com is a bad test on GKE, because it's still reachable via googles internal network / you'll get a response, because google's network can reach google.com without having to go through the internet, that's why I recommend testing with a non google URL like yahoo.com (Google's networking also occasionally does some counterintuitive / non-standard things, like route public IP Addresses over their internal network, so sometimes you can reach public IP addresses w/o internet access, it's usually google services with public IPs that are sometimes reachable w/o internet access.) 2. So the next question is, but wait... how are nginx:latest and nginx:stable able to pull image that exists on the internet/on docker hub, when there's no internet access? Basically why is it working for some images and not others? Answer boils down to popularity of the image:tag pair. Is it popular enough to get cached in mirror.gcr.io? The initial link I shared at the top mentions "Container Registry caches frequently-accessed public Docker Hub images on mirror.gcr.io", so basically if you reference a common tag of a popular image, you can sometimes get lucky enough to pull it even without internet, because the cache is reachable via private IP space / without internet access. When a pod running on GKE private cluster gives you ImagePullBackOff, and you're like, what's going on? I know this image exists! docker pull docker.io/busybox:1.36.0-glibc pulls fine from my local machine, what's happening is that rarely used tag doesn't exist in their cache, that mirrors common tags of popular images. Best way to fix it is to either pull all images from pkg.dev (GCP's Artifact Registry, which GKE should be able to access w/o internet access) or set up NAT gateway/ensure the private cluster has internet access. And you can use kubectl exec -it working-nginx-latest-pod -- curl yahoo.com as a feedback loop to check if the cluster has internet access as you tinker with VPC settings to add NAT GW. https://cloud.google.com/kubernetes-engine/docs/best-practices/networking#use-cloudnat mentions By default, (GKE) "private clusters don't have internet access. In order to allow Pods to reach the internet, enable Cloud NAT for each region. At a minimum, enable Cloud NAT for the primary and secondary ranges in the GKE subnet."
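If you want a reproducible way to run that egress check, a throwaway pod like the sketch below can serve as the "working pod" to exec into. It deliberately uses nginx:latest because, as described above, that tag usually pulls even without internet thanks to the mirror.gcr.io cache; the pod name is arbitrary.
apiVersion: v1
kind: Pod
metadata:
  name: egress-test        # arbitrary name
spec:
  containers:
    - name: nginx
      # Common tag, so it is likely served from mirror.gcr.io even without internet egress
      image: nginx:latest
Once it is Running, kubectl exec -it egress-test -- curl -sSI --max-time 10 https://yahoo.com; a timeout means the cluster has no internet egress and needs Cloud NAT (or all images must come from Artifact Registry).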
###Instruction: This is a very wried thing. I created a private GKE cluster with a node pool of 3 nodes. Then I have a replica set with 3 pods. some of these pods will be scheduled to one node. So one of these pods always get ImagePullBackOff, I check the error Failed to pull image "bitnami/mongodb:3.6": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) And the pods scheduled to the remaining two nodes work well. I ssh to that node, run docker pull and everything is fine. I cannot find another way to troubleshoot this error. I tried to drain or delete that node and let the cluster to recreate the node. but it is still not working. Help me, please. Update: From GCP documentation, it will fail to pull images from the docker hub. BUT the weirdest thing is ONLY ONE node is unable to pull the images. ### Response: I recall seeing this before and finding an answer before. https://cloud.google.com/container-registry/docs/pulling-cached-images Talks about it a little, but I'll explain it so it's easy to follow. If I spin up a private GKE cluster and I create 3 deployments: 1st uses image: nginx:latest 2nd uses image: nginx:stable 3rd uses image: docker.io/busybox:1.36.0-glibc nginx:latest (common tag) will almost always work nginx:stable (popular tag) will work sometimes The super specific tag (rarely used tag) will almost always fail with ImagePullBackOff So why is this the case? 1. The ImagePullBackOff happens when the pods/nodes have no NAT Gateway/no Internet Access kubectl exec -it working-nginx-latest-pod -- curl yahoo.com ^-- You can prove no internet with this, note curl google.com is a bad test on GKE, because it's still reachable via googles internal network / you'll get a response, because google's network can reach google.com without having to go through the internet, that's why I recommend testing with a non google URL like yahoo.com (Google's networking also occasionally does some counterintuitive / non-standard things, like route public IP Addresses over their internal network, so sometimes you can reach public IP addresses w/o internet access, it's usually google services with public IPs that are sometimes reachable w/o internet access.) 2. So the next question is, but wait... how are nginx:latest and nginx:stable able to pull image that exists on the internet/on docker hub, when there's no internet access? Basically why is it working for some images and not others? Answer boils down to popularity of the image:tag pair. Is it popular enough to get cached in mirror.gcr.io? The initial link I shared at the top mentions "Container Registry caches frequently-accessed public Docker Hub images on mirror.gcr.io", so basically if you reference a common tag of a popular image, you can sometimes get lucky enough to pull it even without internet, because the cache is reachable via private IP space / without internet access. When a pod running on GKE private cluster gives you ImagePullBackOff, and you're like, what's going on? I know this image exists! docker pull docker.io/busybox:1.36.0-glibc pulls fine from my local machine, what's happening is that rarely used tag doesn't exist in their cache, that mirrors common tags of popular images. 
Best way to fix it is to either pull all images from pkg.dev (GCP's Artifact Registry, which GKE should be able to access w/o internet access) or set up NAT gateway/ensure the private cluster has internet access. And you can use kubectl exec -it working-nginx-latest-pod -- curl yahoo.com as a feedback loop to check if the cluster has internet access as you tinker with VPC settings to add NAT GW. https://cloud.google.com/kubernetes-engine/docs/best-practices/networking#use-cloudnat mentions By default, (GKE) "private clusters don't have internet access. In order to allow Pods to reach the internet, enable Cloud NAT for each region. At a minimum, enable Cloud NAT for the primary and secondary ranges in the GKE subnet."
This is a very wried thing. I created a private GKE cluster with a node pool of 3 nodes. Then I have a replica set with 3 pods. some of these pods will be scheduled to one node. So one of these pods always get ImagePullBackOff, I check the error Failed to pull image "bitnami/mongodb:3.6": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) And the pods scheduled to the remaining two nodes work well. I ssh to that node, run docker pull and everything is fine. I cannot find another way to troubleshoot this error. I tried to drain or delete that node and let the cluster to recreate the node. but it is still not working. Help me, please. Update: From GCP documentation, it will fail to pull images from the docker hub. BUT the weirdest thing is ONLY ONE node is unable to pull the images.
This check - input.request.namespace == ["alpha", "beta"] - will evaluate to true only if the input.request.namespace field is exactly identical to the array it is being compared to, i.e. only if input.request.namespace is an array with exactly two values, the first one being "alpha" and the second one "beta". To check if an array input field holds one of two values, use incremental rules and array lookups: namespace_alpha_or_beta { "alpha" = input.request.namespace[_] } namespace_alpha_or_beta { "beta" = input.request.namespace[_] }
###Instruction: Since 2 days I try to create a simple OPA Gatekeeper Policy which blocks the creation of pods with "privileged:true" for some namespaces. Some more details: Im using opa-gatekeeper in version 3.13 installed by following these instructions. To enable the policy, first I created a ConstraintTemplate: apiVersion: templates.gatekeeper.sh/v1beta1 kind: ConstraintTemplate metadata: name: disallowprivilegedpods annotations: description: "Disallow creation of privileged pods in alpha and beta namespaces" spec: crd: spec: names: kind: DisallowPrivilegedPods targets: - target: admission.k8s.gatekeeper.sh rego: | package disallow_privileged_pods violation[{"msg": msg}] { input.request.kind.kind == "Pod" input.request.operation == "CREATE" input.request.namespace == ["alpha", "beta"] input.request.object.spec.securityContext.privileged == true msg := "Privileged pods are not allowed in the Alpha and Beta namespaces." } Next I created the constraint: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: DisallowPrivilegedPods metadata: name: disallow-privileged-pods-alpha-beta spec: match: kinds: - apiGroups: [""] kinds: ["Pod"] namespaces: - alpha - beta To test if the policy is working correctly, I tried to deploy this pod in one of these namespaces: apiVersion: v1 kind: Pod metadata: name: privileged-pod namespace: alpha spec: containers: - name: my-container image: nginx securityContext: privileged: true restartPolicy: Never Unfortunately the policy doesnt seem to work and the pod can be created. Can anyone give me some hints, whats wrong with the policy? Cheers, Christian ### Response: This check - input.request.namespace == ["alpha", "beta"] will evaluate to true only if the input.request.namespace field is exactly identical to the array it is being compared to. i.e. - Only if input.request.namespace is an array with exactly two values, the first one being "alpha" and the second one "beta" To check if an array input field holds one of two values, use incremental rules and array lookups: namespace_alpha_or_beta { "alpha" = input.request.namespace[_] } namespace_alpha_or_beta { "beta" = input.request.namespace[_] }
Since 2 days I try to create a simple OPA Gatekeeper Policy which blocks the creation of pods with "privileged:true" for some namespaces. Some more details: Im using opa-gatekeeper in version 3.13 installed by following these instructions. To enable the policy, first I created a ConstraintTemplate: apiVersion: templates.gatekeeper.sh/v1beta1 kind: ConstraintTemplate metadata: name: disallowprivilegedpods annotations: description: "Disallow creation of privileged pods in alpha and beta namespaces" spec: crd: spec: names: kind: DisallowPrivilegedPods targets: - target: admission.k8s.gatekeeper.sh rego: | package disallow_privileged_pods violation[{"msg": msg}] { input.request.kind.kind == "Pod" input.request.operation == "CREATE" input.request.namespace == ["alpha", "beta"] input.request.object.spec.securityContext.privileged == true msg := "Privileged pods are not allowed in the Alpha and Beta namespaces." } Next I created the constraint: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: DisallowPrivilegedPods metadata: name: disallow-privileged-pods-alpha-beta spec: match: kinds: - apiGroups: [""] kinds: ["Pod"] namespaces: - alpha - beta To test if the policy is working correctly, I tried to deploy this pod in one of these namespaces: apiVersion: v1 kind: Pod metadata: name: privileged-pod namespace: alpha spec: containers: - name: my-container image: nginx securityContext: privileged: true restartPolicy: Never Unfortunately the policy doesnt seem to work and the pod can be created. Can anyone give me some hints, whats wrong with the policy? Cheers, Christian
I think (!?) you can't use negation with kubectl's implementation of JSONPath. Once you use the --output flag with kubectl, you lose the ability to render the command's output using the default table format, but a shell tool such as column can provide similar output. Consider using a generic JSON processing tool such as jq. An advantage of using a standalone tool is that, once you're familiar with it, you can use it with any command that emits JSON (not just with kubectl). FILTER=' ["NAME","STATUS"], (.items[]| [ select(.metadata.annotations["components.gke.io/layer"]!="addon") |.metadata.name,.status.phase ] ) |@tsv' kubectl get namespaces --output=json \ | jq -r "${FILTER}" \ | column --table --separator $'\t' Explanation of FILTER: Returns a JSON array that is piped into @tsv (tab-formatted output). The first array entry is an array (!) of column titles. .items is piped through a select that keeps only Namespaces whose annotation is not "addon". The name and status of each remaining Namespace are emitted as JSON array items. The tab-formatted output (no longer JSON) emitted by jq is then piped into column, which formats the results as a table.
###Instruction: I want to filter namespaces based on the annotation. I do not want namespace which has annotation as components\.gke\.io/layer=="addon". I tried kubectl get ns -o=jsonpath='{.items[?(@.metadata.annotations.components\.gke\.io/layer=="addon")].metadata.name}' command and at least it gives me the result but it is not coming in normal table format, the way it comes when you execute kubectl get ns. However, this is not what I want, I want the inverse of this, which means filtering out these GKE rows. I tried kubectl get ns -o=jsonpath='{.items[?(@.metadata.annotations.components\.gke\.io/layer=="")].metadata.name}' but it doesn't work, it give empty result. I tried to format but unable to create the desired command. Anyone has tried this before - (1.) filter out namespaces which has certain annotation (2.) format that output in table format. ### Response: I think (!?) you can't use negation with kubectl's implementation of JSONPath. Once you use the --output flag with kubectl, you lose the ability to render the command's output using the default table format. With e.g. Bash column can provide similar out. Consider using a generic JSON processing tool such as jq. An advantage of using a standalone tool is that, once you're familiar with the tool, you can use it with any command that emits JSON (not just with kubectl. FILTER=' ["NAME","STATUS"], (.items[]| [ select(.metadata.annotations["components.gke.io/layer"]!="addon") |.metadata.name,.status.phase ] ) |@tsv' kubectl get namespaces --output=json \ | jq -r "${FILTER}" \ | column --table --separator $'\t' Explanation of FILTER: Returns a JSON array that is piped into @tsv (tab-formatted output) The first array entry is a an array (!) of column titles .items is piped through a select that filters Namespaces that don't contain the annotation Emit not filtered Namespace name and status as JSON array items The tab-formatted (no longer JSON) that's emitted by jq is then piped into column which formats the results as a table.
I want to filter namespaces based on the annotation. I do not want namespace which has annotation as components\.gke\.io/layer=="addon". I tried kubectl get ns -o=jsonpath='{.items[?(@.metadata.annotations.components\.gke\.io/layer=="addon")].metadata.name}' command and at least it gives me the result but it is not coming in normal table format, the way it comes when you execute kubectl get ns. However, this is not what I want, I want the inverse of this, which means filtering out these GKE rows. I tried kubectl get ns -o=jsonpath='{.items[?(@.metadata.annotations.components\.gke\.io/layer=="")].metadata.name}' but it doesn't work, it give empty result. I tried to format but unable to create the desired command. Anyone has tried this before - (1.) filter out namespaces which has certain annotation (2.) format that output in table format.
Try using Predis session handler instead. Setup your services # services.yaml Redis: class: \Predis\Client arguments: - 'redis://%env(REDIS_HOST)%:%env(REDIS_PORT)%' predis_session_handler: class: 'Predis\Session\Handler' arguments: - '@Redis' Configure Symfony # framework.yaml framework: session: handler_id: predis_session_handler
###Instruction: Having an app running perfectly on my local with docker. When I deploy it, I can't get why a new session is started on each request. It seems it can't come from the code since it's exactly the same. The session is stored on a redis DB (I have the same bug if I use the filesystem session). It's there I can see all the new session created. (see last code block). From the log I can clearly identify that the $request->getSession()->getId() changes on each request but not the PHPSESSID cookie. For example: First request [2020-11-02 15:03:59] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"https://foo.bar.dev/login","method":"POST"} [] [2020-11-02 15:03:59] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} [] [2020-11-02 15:03:59] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} [] [2020-11-02 15:03:59] app.DEBUG: [LoginFormAuthenticator::supports] $request session id => 6491ddf4e8f3e2eaa22b44b3a98c094a [] [] [2020-11-02 15:03:59] app.DEBUG: [LoginFormAuthenticator::supports] $_COOKIE => {"PHPSESSID":"87cf6185b652f8d713c45031ebe6d8a4"} [] Second one [2020-11-02 15:04:33] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"https://foo.bar.dev/login","method":"POST"} [] [2020-11-02 15:04:33] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} [] [2020-11-02 15:04:33] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} [] [2020-11-02 15:04:33] app.DEBUG: [LoginFormAuthenticator::supports] $request session id => 41b08dac8a803337a48dca7d5b33b840 [] [] [2020-11-02 15:04:33] app.DEBUG: [LoginFormAuthenticator::supports] $_COOKIE => {"PHPSESSID":"87cf6185b652f8d713c45031ebe6d8a4"} [] KUBERNETES ingress-nginx.yaml apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: secured-front namespace: foo-apis-dev annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/use-regex: "true" nginx.ingress.kubernetes.io/affinity: "cookie" nginx.ingress.kubernetes.io/affinity-mode: "persistent" nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID" nginx.ingress.kubernetes.io/session-cookie-path: "/" nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax" nginx.ingress.kubernetes.io/session-cookie-expires: "172800000" nginx.ingress.kubernetes.io/session-cookie-max-age: "172800000" spec: tls: - hosts: - bar.foo.dev secretName: tls-secret rules: - host: bar.foo.dev http: paths: - backend: serviceName: bar-nginx servicePort: 80 path: /(.*) Symfony APP security.yaml [...] firewalls: dev: pattern: ^/(_(profiler|wdt)|css|images|js)/ security: false main: anonymous: lazy pattern: ^/.* logout: path: app_logout target: login guard: authenticators: - App\Security\LoginFormAuthenticator [...] services.yaml [...] Redis: class: Redis calls: - connect: - '%env(REDIS_HOST)%' - '%env(int:REDIS_PORT)%' Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler: arguments: - '@Redis' - { prefix: 'admin_phpsess_' } [...] 
packages.framework.yaml [...] session: handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler cookie_secure: auto cookie_samesite: lax [...] REDIS 127.0.0.1:6379> KEYS *admin* 1) "admin_phpsess_245e4a79fe35e2320943770061884c24" 2) "admin_phpsess_0ff29464322b3c2cfc5d8f5fd323ef75" 3) "admin_phpsess_26812c17f93a5d28a71853b77ac85386" 4) "admin_phpsess_7fbae6f0b1fdbe9576e41c9eee2cd60f" VERSIONS: PHP 7.4.12 Symfony 4.4 Kubernetes 1.17.9 redis (pecl) 5.3.2 IMPORTANT NOTE The issue was about the redis configuration. I took the step of using PdoSessionHandler and it works. The problem comes from Redis and / or Kubernetes, I have been trying for 2 hours to point out the configuration which generates this bug but impossible for now. ### Response: Try using Predis session handler instead. Setup your services # services.yaml Redis: class: \Predis\Client arguments: - 'redis://%env(REDIS_HOST)%:%env(REDIS_PORT)%' predis_session_handler: class: 'Predis\Session\Handler' arguments: - '@Redis' Configure Symfony # framework.yaml framework: session: handler_id: predis_session_handler
Having an app running perfectly on my local with docker. When I deploy it, I can't get why a new session is started on each request. It seems it can't come from the code since it's exactly the same. The session is stored on a redis DB (I have the same bug if I use the filesystem session). It's there I can see all the new session created. (see last code block). From the log I can clearly identify that the $request->getSession()->getId() changes on each request but not the PHPSESSID cookie. For example: First request [2020-11-02 15:03:59] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"https://foo.bar.dev/login","method":"POST"} [] [2020-11-02 15:03:59] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} [] [2020-11-02 15:03:59] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} [] [2020-11-02 15:03:59] app.DEBUG: [LoginFormAuthenticator::supports] $request session id => 6491ddf4e8f3e2eaa22b44b3a98c094a [] [] [2020-11-02 15:03:59] app.DEBUG: [LoginFormAuthenticator::supports] $_COOKIE => {"PHPSESSID":"87cf6185b652f8d713c45031ebe6d8a4"} [] Second one [2020-11-02 15:04:33] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"https://foo.bar.dev/login","method":"POST"} [] [2020-11-02 15:04:33] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} [] [2020-11-02 15:04:33] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} [] [2020-11-02 15:04:33] app.DEBUG: [LoginFormAuthenticator::supports] $request session id => 41b08dac8a803337a48dca7d5b33b840 [] [] [2020-11-02 15:04:33] app.DEBUG: [LoginFormAuthenticator::supports] $_COOKIE => {"PHPSESSID":"87cf6185b652f8d713c45031ebe6d8a4"} [] KUBERNETES ingress-nginx.yaml apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: secured-front namespace: foo-apis-dev annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/use-regex: "true" nginx.ingress.kubernetes.io/affinity: "cookie" nginx.ingress.kubernetes.io/affinity-mode: "persistent" nginx.ingress.kubernetes.io/session-cookie-name: "PHPSESSID" nginx.ingress.kubernetes.io/session-cookie-path: "/" nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax" nginx.ingress.kubernetes.io/session-cookie-expires: "172800000" nginx.ingress.kubernetes.io/session-cookie-max-age: "172800000" spec: tls: - hosts: - bar.foo.dev secretName: tls-secret rules: - host: bar.foo.dev http: paths: - backend: serviceName: bar-nginx servicePort: 80 path: /(.*) Symfony APP security.yaml [...] firewalls: dev: pattern: ^/(_(profiler|wdt)|css|images|js)/ security: false main: anonymous: lazy pattern: ^/.* logout: path: app_logout target: login guard: authenticators: - App\Security\LoginFormAuthenticator [...] services.yaml [...] Redis: class: Redis calls: - connect: - '%env(REDIS_HOST)%' - '%env(int:REDIS_PORT)%' Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler: arguments: - '@Redis' - { prefix: 'admin_phpsess_' } [...] packages.framework.yaml [...] 
session: handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler cookie_secure: auto cookie_samesite: lax [...] REDIS 127.0.0.1:6379> KEYS *admin* 1) "admin_phpsess_245e4a79fe35e2320943770061884c24" 2) "admin_phpsess_0ff29464322b3c2cfc5d8f5fd323ef75" 3) "admin_phpsess_26812c17f93a5d28a71853b77ac85386" 4) "admin_phpsess_7fbae6f0b1fdbe9576e41c9eee2cd60f" VERSIONS: PHP 7.4.12 Symfony 4.4 Kubernetes 1.17.9 redis (pecl) 5.3.2 IMPORTANT NOTE The issue was about the redis configuration. I took the step of using PdoSessionHandler and it works. The problem comes from Redis and / or Kubernetes, I have been trying for 2 hours to point out the configuration which generates this bug but impossible for now.
The issue was that the networking solution was not routing to the service because the health checks had not been updated with the correct routes. readinessProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 startupProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 livenessProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 Changing these to /auth/health/ready fixed the ready checks.
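For reference, here is roughly what the adjusted probes can look like once --http-relative-path=/auth is in effect, reusing the port and timings from the snippet above (the startup probe follows the same pattern). Keycloak also exposes /health/live, which is a common choice for the liveness probe, but the key point is that every probe path must include the /auth prefix.
readinessProbe:
  httpGet:
    path: /auth/health/ready
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 20
livenessProbe:
  httpGet:
    path: /auth/health/ready       # /auth/health/live would also work here (assumption)
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 20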
###Instruction: I am trying to modify a kubernetes keycloak deployment to respond on /auth in place of /. I see in the documentation that this should be possible by setting --http-relative-path to /auth. I have attempted this and it causes all the services to receive Connection refused from the keycloak service. Is this possibly an issue with keycloak or could there be another config required? ### Response: The issue was that the networking solution was not routing to the service because the health checks had not been updated with the correct routes. readinessProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 startupProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 livenessProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 Changing these to /auth/health/ready fixed the ready checks.
I am trying to modify a kubernetes keycloak deployment to respond on /auth in place of /. I see in the documentation that this should be possible by setting --http-relative-path to /auth. I have attempted this and it causes all the services to receive Connection refused from the keycloak service. Is this possibly an issue with keycloak or could there be another config required?
In your ENTRYPOINT you tell Java to run ./ws.jar, assuming it will resolve from within the work directory /app. At runtime, however, the error message says that /ws.jar is not accessible, which looks like an absolute path at the filesystem root. Run the container with /bin/bash, check where your jar file actually exists and what permissions it has, and then decide which one is wrong, your Dockerfile or your error message, and fix the broken one.
###Instruction: I'm using this Dockerfile configuration FROM openjdk:17-alpine ARG APP_HOME=/app WORKDIR $APP_HOME COPY ./target/ws-exec.jar ws.jar ENV JAVA_OPTS="-Dspring.profiles.active=prod -Dspring.application.name=words" ENTRYPOINT java $JAVA_OPTS -jar ./ws.jar $JAVA_ARGS After deploying it to minikube, I see the only log: Error: Unable to access jarfile /ws.jar. I've tried to run docker run -it <image> with my image's name, and it successfuly started with docker. Running docker exec -it <container> shew me that the jar is present in the right folder. I tried to make the jar executable adding a CMD or RUN layer into my Dockerfile, but nothing helped. Where is my mistake, or what I don't understand? UPD here is my deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: app: revise-words-ws name: revise-words-ws namespace: default spec: replicas: 1 minReadySeconds: 45 selector: matchLabels: app: revise-words-ws strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: app: revise-words-ws spec: containers: - image: maxrybalkin91/revise-words-ws:1.0 imagePullPolicy: IfNotPresent name: revise-words-ws env: - name: VAULT_TOKEN valueFrom: secretKeyRef: name: words key: vault_token - name: VAULT_HOST valueFrom: secretKeyRef: name: words key: vault_host - name: VAULT_PORT valueFrom: secretKeyRef: name: words key: vault_port ports: - name: liveness-port containerPort: 8089 resources: requests: cpu: 100m memory: 256Mi limits: cpu: 300m memory: 512Mi readinessProbe: httpGet: path: / port: liveness-port failureThreshold: 5 periodSeconds: 10 initialDelaySeconds: 60 livenessProbe: httpGet: path: / port: liveness-port failureThreshold: 5 periodSeconds: 10 initialDelaySeconds: 60 terminationGracePeriodSeconds: 30 restartPolicy: Always ### Response: As ENTRYPOINT you specify Java to run ./ws.jar, assuming it would resolve from within work directory /app. At runtime you get the error message that /ws.jar is not accessible, which looks like an absolute path. When running your container with /bin/bash, please check out where your jar file exists and which mode it has. Then decide who is right and who is broken: Your docker file or your error message. Fix the broken one.
I'm using this Dockerfile configuration FROM openjdk:17-alpine ARG APP_HOME=/app WORKDIR $APP_HOME COPY ./target/ws-exec.jar ws.jar ENV JAVA_OPTS="-Dspring.profiles.active=prod -Dspring.application.name=words" ENTRYPOINT java $JAVA_OPTS -jar ./ws.jar $JAVA_ARGS After deploying it to minikube, I see the only log: Error: Unable to access jarfile /ws.jar. I've tried to run docker run -it <image> with my image's name, and it successfuly started with docker. Running docker exec -it <container> shew me that the jar is present in the right folder. I tried to make the jar executable adding a CMD or RUN layer into my Dockerfile, but nothing helped. Where is my mistake, or what I don't understand? UPD here is my deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: app: revise-words-ws name: revise-words-ws namespace: default spec: replicas: 1 minReadySeconds: 45 selector: matchLabels: app: revise-words-ws strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: app: revise-words-ws spec: containers: - image: maxrybalkin91/revise-words-ws:1.0 imagePullPolicy: IfNotPresent name: revise-words-ws env: - name: VAULT_TOKEN valueFrom: secretKeyRef: name: words key: vault_token - name: VAULT_HOST valueFrom: secretKeyRef: name: words key: vault_host - name: VAULT_PORT valueFrom: secretKeyRef: name: words key: vault_port ports: - name: liveness-port containerPort: 8089 resources: requests: cpu: 100m memory: 256Mi limits: cpu: 300m memory: 512Mi readinessProbe: httpGet: path: / port: liveness-port failureThreshold: 5 periodSeconds: 10 initialDelaySeconds: 60 livenessProbe: httpGet: path: / port: liveness-port failureThreshold: 5 periodSeconds: 10 initialDelaySeconds: 60 terminationGracePeriodSeconds: 30 restartPolicy: Always
I had the same issue. Fixed it with: 1. removed the whitelist annotation from the namespace, 2. deployed the pod, 3. ran kubectl get pod <pod name> -o yaml (in my case, a couple of extra tolerations had been injected without my realizing it), 4. reverted step 1 (put the annotations back on the namespace).
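In other words, every toleration that ends up on the pod, including the injected ones you find in the pod YAML, has to appear in the namespace whitelist. The sketch below only illustrates the pattern: the extra RabbitMQ/Allow entries are assumptions based on the taints mentioned in the question, so replace them with whatever tolerations you actually see on your pods.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace      # placeholder name
  annotations:
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "value": "spot", "key": "kubernetes.azure.com/scalesetpriority"}]'
    # The whitelist must cover every toleration present on the pod, including injected ones.
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Equal", "value": "spot", "key": "kubernetes.azure.com/scalesetpriority"}, {"operator": "Exists", "effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure"}, {"operator": "Exists", "tolerationSeconds": 300, "effect": "NoExecute", "key": "node.kubernetes.io/unreachable"}, {"operator": "Exists", "tolerationSeconds": 300, "effect": "NoExecute", "key": "node.kubernetes.io/not-ready"}, {"operator": "Equal", "value": "true", "effect": "NoSchedule", "key": "RabbitMQ"}, {"operator": "Equal", "value": "true", "effect": "NoExecute", "key": "Allow"}]'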
###Instruction: I have been trying to make use of Azure Spot instances on Azure Kubernetes Service (AKS) - version 1.19.11 and to enable scheduling of pods onto those nodes, I am trying to use the PodTolerationRestriction admission controller. I can confirm that the PodTolerationRestriction controller is enabled as I am having no issues deploying a replicaset to the default namespace. This is another namespace but we are not specifically adding any tolerations while creating it. I gathered from elsewhere that, along with whitelisting against a specific taint (in my case spot), it is also necessary to whitelist certain default tolerations. As a result, I have added certain annotations to my namespace. I do not have any additional tolerations pre-defined for this statefulset. The node has taints - the first two are taken care of through the helm chart values RabbitMQ=true:NoSchedule Allow=true:NoExecute kubernetes.azure.com/scalesetpriority=spot:NoSchedule I am wondering what additional tolerations need to be whitelisted. The annotations I added - scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "value": "spot", "key": "kubernetes.azure.com/scalesetpriority"}]' scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Equal", "value": "spot", "key": "kubernetes.azure.com/scalesetpriority"}, {"operator": "Exists", "effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure"}, {"operator": "Exists", "tolerationSeconds": 300, "effect": "NoExecute", "key": "node.kubernetes.io/unreachable"}, {"operator": "Exists", "tolerationSeconds": 300, "effect": "NoExecute", "key": "node.kubernetes.io/not-ready"}]' Statefulset describe - Name: <release name> Namespace: <namespace> CreationTimestamp: Tue, 18 Jan 2022 19:37:38 +0530 Selector: app.kubernetes.io/instance=<name>,app.kubernetes.io/name=rabbitmq Labels: app.kubernetes.io/instance=rabbit app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=rabbitmq helm.sh/chart=rabbitmq-8.6.1 Annotations: meta.helm.sh/release-name: <release name> meta.helm.sh/release-namespace: <namespace> Replicas: 3 desired | 0 total Update Strategy: RollingUpdate Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/instance=rabbit app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=rabbitmq helm.sh/chart=rabbitmq-8.6.1 Annotations: checksum/config: 1a138ded5a3ade049cbee9f4f8e2d0fd7253c126d49b790495a492601fd9f280 checksum/secret: 05af38634eb4b46c2f8db5770013e1368e78b0d5af057aed5fa4fe7eec4c92de prometheus.io/port: 9419 prometheus.io/scrape: true Service Account: sa-rabbitmq Containers: rabbitmq: Image: docker.io/bitnami/rabbitmq:3.8.9-debian-10-r64 Ports: 5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP, 9419/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP Liveness: exec [/bin/bash -ec rabbitmq-diagnostics -q ping] delay=120s timeout=200s period=30s #success=1 #failure=6 Readiness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms] delay=10s timeout=200s period=30s #success=1 #failure=3 Environment: <multiple environment variables> Mounts: /bitnami/rabbitmq/conf from configuration (rw) /bitnami/rabbitmq/mnesia from data (rw) Volumes: configuration: Type: ConfigMap (a volume populated by a ConfigMap) Name: rabbit-rabbitmq-config Optional: false data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> Volume Claims: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning 
FailedCreate 31s (x14 over 72s) statefulset-controller create Pod <pod-name> in StatefulSet <release name> failed error: pod tolerations (possibly merged with namespace default tolerations) conflict with its namespace whitelist ### Response: I had the same issue. Fixed it with: removed whitelist annotation from ns. deployed the pod. kubectl get pod <pod name> -o yaml in my case, I had a couple of extra toleration injected without me realizing. revert 1 (annotations on ns).
I have been trying to make use of Azure Spot instances on Azure Kubernetes Service (AKS) - version 1.19.11 and to enable scheduling of pods onto those nodes, I am trying to use the PodTolerationRestriction admission controller. I can confirm that the PodTolerationRestriction controller is enabled as I am having no issues deploying a replicaset to the default namespace. This is another namespace but we are not specifically adding any tolerations while creating it. I gathered from elsewhere that, along with whitelisting against a specific taint (in my case spot), it is also necessary to whitelist certain default tolerations. As a result, I have added certain annotations to my namespace. I do not have any additional tolerations pre-defined for this statefulset. The node has taints - the first two are taken care of through the helm chart values RabbitMQ=true:NoSchedule Allow=true:NoExecute kubernetes.azure.com/scalesetpriority=spot:NoSchedule I am wondering what additional tolerations need to be whitelisted. The annotations I added - scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "value": "spot", "key": "kubernetes.azure.com/scalesetpriority"}]' scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Equal", "value": "spot", "key": "kubernetes.azure.com/scalesetpriority"}, {"operator": "Exists", "effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure"}, {"operator": "Exists", "tolerationSeconds": 300, "effect": "NoExecute", "key": "node.kubernetes.io/unreachable"}, {"operator": "Exists", "tolerationSeconds": 300, "effect": "NoExecute", "key": "node.kubernetes.io/not-ready"}]' Statefulset describe - Name: <release name> Namespace: <namespace> CreationTimestamp: Tue, 18 Jan 2022 19:37:38 +0530 Selector: app.kubernetes.io/instance=<name>,app.kubernetes.io/name=rabbitmq Labels: app.kubernetes.io/instance=rabbit app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=rabbitmq helm.sh/chart=rabbitmq-8.6.1 Annotations: meta.helm.sh/release-name: <release name> meta.helm.sh/release-namespace: <namespace> Replicas: 3 desired | 0 total Update Strategy: RollingUpdate Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/instance=rabbit app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=rabbitmq helm.sh/chart=rabbitmq-8.6.1 Annotations: checksum/config: 1a138ded5a3ade049cbee9f4f8e2d0fd7253c126d49b790495a492601fd9f280 checksum/secret: 05af38634eb4b46c2f8db5770013e1368e78b0d5af057aed5fa4fe7eec4c92de prometheus.io/port: 9419 prometheus.io/scrape: true Service Account: sa-rabbitmq Containers: rabbitmq: Image: docker.io/bitnami/rabbitmq:3.8.9-debian-10-r64 Ports: 5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP, 9419/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP Liveness: exec [/bin/bash -ec rabbitmq-diagnostics -q ping] delay=120s timeout=200s period=30s #success=1 #failure=6 Readiness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms] delay=10s timeout=200s period=30s #success=1 #failure=3 Environment: <multiple environment variables> Mounts: /bitnami/rabbitmq/conf from configuration (rw) /bitnami/rabbitmq/mnesia from data (rw) Volumes: configuration: Type: ConfigMap (a volume populated by a ConfigMap) Name: rabbit-rabbitmq-config Optional: false data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> Volume Claims: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreate 31s 
(x14 over 72s) statefulset-controller create Pod <pod-name> in StatefulSet <release name> failed error: pod tolerations (possibly merged with namespace default tolerations) conflict with its namespace whitelist
The token is probably simply too big to be handled as an auth header. Request headers have a limited maximum size, typically only a few kilobytes (around 4-8 KB, depending on the browser and/or proxy implementation). See the related feature request in the Kubernetes dashboard GitHub repo, "Allow logging in when a very long token is provided (i.e. Azure with groups, etc.)", where this is discussed.
###Instruction: I deployed kubernetes dashboard in Azure Kubernetes cluster with the help of following command kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml Applied clusterrole and clusterrolebinding to same cluster on my azure AD object. ClusterRole.yml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: admin-user-testing rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] ClusterRoleBinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user-testing subjects: - kind: User name: <azure user object id> apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: admin-user-testing apiGroup: rbac.authorization.k8s.io For logging into dashboard I am generating the token with the following command kubelogin get-token --login azurecli --server-id xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx In output it gives token property {"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{"interactive":false},"status":{"expirationTimestamp":"2023-05-25T15:00:16Z","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ii1LSTNROW5OUjdiUm9meG1lWmxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcWJIWkdldyJ9.eyJhdWQiOiI2ZGFlNDJmOC00MzY4LTQ2NzgtOTRmZi0zOTYwZTI4ZTM2MzAiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9jODk1Y2E0ZS02NTg0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxLCJuYmYiOjE2ODUwMjI2ODYsImV4cCI6MTY4NTAyNjgxNiwiYWNyIjoiMSIsImFpbyI6IkFaUUFhLzhUQUFBQW1OL09tTjU5dzhWOU9RczMwOU94dTJBQzNsaGw3aGR0OVE0MjRrYXh4TVJxaGhKajFKcWwvNWpScVI3VUw1TmJFYVUyOCswZjc5N1hUazRwNG5EbGVtWStuZGZsTmNzSkhSMW5jNGJuUmlQZGJiYWlIVWtSRkE3QWxwT0ZJS0w1NTltd1l4b3J1QlFCZ0l2R0czOEI4dnlrQWZic2ZXZS9yNGZBdzIyRTBnRCtIc20vVmtTYXh4am0yK1JIc0V5YiIsImFsdHNlY2lkIjoiNTo6MTAwMzIwMDFFQzhBMjBDNiIsImFtciI6WyJwd2QiXSwiYXBwaWQiOiIwNGIwNzc5NS04ZGRiLTQ2MWEtYmJlZS0wMmY5ZTFiZjdiNDYiLCJhcHBpZGFjciI6IjAiLCJlbWFpbCI6ImJhdGNodS5zYWlha2hpbEBjaXRyaXguY29tIiwiZmFtaWx5X25hbWUiOiJCYXRjaHUiLCJnaXZlbl9uYW1lIjoiU2FpIEFraGlsIiwiZ3JvdXBzIjpbIjZjMmRkMTA4LTdlYzUtNGU3NC05M2M1LWM1MTJmMmYzOWFhYiIsImUyMTBiZTM0LTI2ZGUtNDg5ZS04OThmLTFlYTc4Y2M0YTgwMiIsIjBjY2NmMjU2LTRjYWEtNDdjYS04NDBmLWVmNTliMjc0ZDU3NiIsImY0NzU2MDc3LWI1NWMtNDhhMy05YWFkLWQyZGI0YWJjNGMwZiIsIjZkMzVmMjk3LTRmMzUtNDJlNi04NWE4LTg1NDgzNTdmZWYxNiIsIjJhYzk1Y2EyLTEyZWYtNDg3Ny1hNzYxLTliNzhhYmQ2YWQ5OSIsIjk0OTBjNmE4LWRjYTItNGEwNy1hOTA0LTBiYjQ0MGQyMDEyNSIsImI2ZDdjYmI1LTVkZWEtNDdlOC1hYjJiLWE0Zjg0NTAzMDA3OSIsIjAzMTY2MGMxLWZjNTQtNDk3Yi05MzU3LThkMmJhMThjNmQ5YSIsIjI3ODFlM2NmLTMzMTUtNDlmOC1iYjdjLWEyNWU5NGZmZGYzOSIsImI0MzJhNWVlLTUzNzQtNGZlMy05NjYwLWFmOWZjOWE3NDk3NyIsIjE4NTUwOGYzLTIzYTMtNDAzOC05MzNmLThhNzMyYWYxYWQyYiJdLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC8zMzU4MzZkZS00MmVmLTQzYTItYjE0NS0zNDhjMmVlOWNhNWIvIiwiaXBhZGRyIjoiMTE2Ljc1Ljk3LjE5MyIsIm5hbWUiOiJCYXRjaHUgU2FpIEFraGlsIiwib2lkIjoiM2QxNTJlN2YtNDQ1Mi00MTNiLWI2YTMtMzE5MjRjNzc3YWNmIiwicHVpZCI6IjEwMDMyMDAxRUZCQkU4QzUiLCJyaCI6IjAuQVxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxM2hHbFA4NVlPS09OakFZQURnLiIsInNjcCI6InVzZXJfaW1wZXJzb25hdGlvbiIsInN1YiI6Ilo1YVh0dm1ndnB6ZTIwdDI0Y1Z4bVprTi1CaDlWc3ZkY04wS2MxdmdqVzgiLCJ0aWQiOiJjODk1Y2E0ZS02NTg0LTQ5ZGUtOTVkNS1lMGVjNGRkNDVlZTEiLCJ1bmlxdWVfbmFtZSI6ImJhdGNodS5zYWlha2hpbEBjaXRyaXguY29tIiwidXRpIjoiYnY2ZEVqVk5ERWlJdUg2ckxiRndBQSIsInZlciI6IjEuMCIsIndpZHMiOlsiZmU5MzBiZTctNWU2Mi00N2RiLTkxYWYtOThjM2E0OWEzOGIxIiwiMTNiZDFjNzItNmY0YS00ZGNmLTk4NWYtMThkM2I4MGYyMDhhIl19.YHv6LgS0CEBUIVEXMSYUMTgPp4Y9BnYWO0oazmxeY4shakf_VCvq9HA-ZNir1JcpgZfKaZPLpaM1bkAvsmgWqS9Fa-EXxRgEhgOavC6_BowdWlaot25xTYOpOD4sUe9mZ2GBvUOC8jNW3Juqple
927C9TIUD03bbjkEkuHsi0Wxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}} After executing kubectl proxy and login to token Sign In button is not responding In networking tab it shows the following error { "status": 401, "plugins": [], "errors": [ { "ErrStatus": { "metadata": {}, "status": "Failure", "message": "MSG_LOGIN_UNAUTHORIZED_ERROR", "reason": "Unauthorized", "code": 401 } } ] } Screenshot I have tried multiple solutions but none of them have worked. Please help me resolve this error? Thank you in advance. ### Response: The token is probably simply too big to be handled as an auth header. Request headers have a max size of 4096 KB (depends on the browser and/or proxy implementation) - link Allow logging in when a very long token is provided (i.e. Azure with groups, etc.) - link to GitHub Repo of Kubernetes dashboard with Feature request where you can see this
I deployed kubernetes dashboard in Azure Kubernetes cluster with the help of following command kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml Applied clusterrole and clusterrolebinding to same cluster on my azure AD object. ClusterRole.yml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: admin-user-testing rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] ClusterRoleBinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user-testing subjects: - kind: User name: <azure user object id> apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: admin-user-testing apiGroup: rbac.authorization.k8s.io For logging into dashboard I am generating the token with the following command kubelogin get-token --login azurecli --server-id xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx In output it gives token property {"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{"interactive":false},"status":{"expirationTimestamp":"2023-05-25T15:00:16Z","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ii1LSTNROW5OUjdiUm9meG1lWmxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcWJIWkdldyJ9.eyJhdWQiOiI2ZGFlNDJmOC00MzY4LTQ2NzgtOTRmZi0zOTYwZTI4ZTM2MzAiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9jODk1Y2E0ZS02NTg0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxLCJuYmYiOjE2ODUwMjI2ODYsImV4cCI6MTY4NTAyNjgxNiwiYWNyIjoiMSIsImFpbyI6IkFaUUFhLzhUQUFBQW1OL09tTjU5dzhWOU9RczMwOU94dTJBQzNsaGw3aGR0OVE0MjRrYXh4TVJxaGhKajFKcWwvNWpScVI3VUw1TmJFYVUyOCswZjc5N1hUazRwNG5EbGVtWStuZGZsTmNzSkhSMW5jNGJuUmlQZGJiYWlIVWtSRkE3QWxwT0ZJS0w1NTltd1l4b3J1QlFCZ0l2R0czOEI4dnlrQWZic2ZXZS9yNGZBdzIyRTBnRCtIc20vVmtTYXh4am0yK1JIc0V5YiIsImFsdHNlY2lkIjoiNTo6MTAwMzIwMDFFQzhBMjBDNiIsImFtciI6WyJwd2QiXSwiYXBwaWQiOiIwNGIwNzc5NS04ZGRiLTQ2MWEtYmJlZS0wMmY5ZTFiZjdiNDYiLCJhcHBpZGFjciI6IjAiLCJlbWFpbCI6ImJhdGNodS5zYWlha2hpbEBjaXRyaXguY29tIiwiZmFtaWx5X25hbWUiOiJCYXRjaHUiLCJnaXZlbl9uYW1lIjoiU2FpIEFraGlsIiwiZ3JvdXBzIjpbIjZjMmRkMTA4LTdlYzUtNGU3NC05M2M1LWM1MTJmMmYzOWFhYiIsImUyMTBiZTM0LTI2ZGUtNDg5ZS04OThmLTFlYTc4Y2M0YTgwMiIsIjBjY2NmMjU2LTRjYWEtNDdjYS04NDBmLWVmNTliMjc0ZDU3NiIsImY0NzU2MDc3LWI1NWMtNDhhMy05YWFkLWQyZGI0YWJjNGMwZiIsIjZkMzVmMjk3LTRmMzUtNDJlNi04NWE4LTg1NDgzNTdmZWYxNiIsIjJhYzk1Y2EyLTEyZWYtNDg3Ny1hNzYxLTliNzhhYmQ2YWQ5OSIsIjk0OTBjNmE4LWRjYTItNGEwNy1hOTA0LTBiYjQ0MGQyMDEyNSIsImI2ZDdjYmI1LTVkZWEtNDdlOC1hYjJiLWE0Zjg0NTAzMDA3OSIsIjAzMTY2MGMxLWZjNTQtNDk3Yi05MzU3LThkMmJhMThjNmQ5YSIsIjI3ODFlM2NmLTMzMTUtNDlmOC1iYjdjLWEyNWU5NGZmZGYzOSIsImI0MzJhNWVlLTUzNzQtNGZlMy05NjYwLWFmOWZjOWE3NDk3NyIsIjE4NTUwOGYzLTIzYTMtNDAzOC05MzNmLThhNzMyYWYxYWQyYiJdLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC8zMzU4MzZkZS00MmVmLTQzYTItYjE0NS0zNDhjMmVlOWNhNWIvIiwiaXBhZGRyIjoiMTE2Ljc1Ljk3LjE5MyIsIm5hbWUiOiJCYXRjaHUgU2FpIEFraGlsIiwib2lkIjoiM2QxNTJlN2YtNDQ1Mi00MTNiLWI2YTMtMzE5MjRjNzc3YWNmIiwicHVpZCI6IjEwMDMyMDAxRUZCQkU4QzUiLCJyaCI6IjAuQVxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxM2hHbFA4NVlPS09OakFZQURnLiIsInNjcCI6InVzZXJfaW1wZXJzb25hdGlvbiIsInN1YiI6Ilo1YVh0dm1ndnB6ZTIwdDI0Y1Z4bVprTi1CaDlWc3ZkY04wS2MxdmdqVzgiLCJ0aWQiOiJjODk1Y2E0ZS02NTg0LTQ5ZGUtOTVkNS1lMGVjNGRkNDVlZTEiLCJ1bmlxdWVfbmFtZSI6ImJhdGNodS5zYWlha2hpbEBjaXRyaXguY29tIiwidXRpIjoiYnY2ZEVqVk5ERWlJdUg2ckxiRndBQSIsInZlciI6IjEuMCIsIndpZHMiOlsiZmU5MzBiZTctNWU2Mi00N2RiLTkxYWYtOThjM2E0OWEzOGIxIiwiMTNiZDFjNzItNmY0YS00ZGNmLTk4NWYtMThkM2I4MGYyMDhhIl19.YHv6LgS0CEBUIVEXMSYUMTgPp4Y9BnYWO0oazmxeY4shakf_VCvq9HA-ZNir1JcpgZfKaZPLpaM1bkAvsmgWqS9Fa-EXxRgEhgOavC6_BowdWlaot25xTYOpOD4sUe9mZ2GBvUOC8jNW3Juqple927C9TIUD03bbjkE
kuHsi0Wxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}} After executing kubectl proxy and logging in with this token, the Sign In button is not responding. The networking tab shows the following error: { "status": 401, "plugins": [], "errors": [ { "ErrStatus": { "metadata": {}, "status": "Failure", "message": "MSG_LOGIN_UNAUTHORIZED_ERROR", "reason": "Unauthorized", "code": 401 } } ] } I have tried multiple solutions but none of them have worked. Please help me resolve this error. Thank you in advance.
It looks like you are missing a - in the ports list ([]). Try this: apiVersion: v1 kind: Service metadata: name: srv1 spec: selector: name: srv1 ports: - protocol: TCP port: 80 targetPort: 9736
###Instruction: Below is my Service object apiVersion: v1 kind: Service metadata: name: srv1 spec: selector: name: srv1 ports: protocol: TCP port: 80 targetPort: 9736 When I create this object I get the exception below; does anyone know what is wrong with it? Error from server (BadRequest): error when creating "listing62.yaml": Service in version "v1" cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort I have tried making changes to the Service object, but it is still not working. ### Response: It looks like you are missing a - in the ports list ([]). Try this: apiVersion: v1 kind: Service metadata: name: srv1 spec: selector: name: srv1 ports: - protocol: TCP port: 80 targetPort: 9736
Below is my Service object apiVersion: v1 kind: Service metadata: name: srv1 spec: selector: name: srv1 ports: protocol: TCP port: 80 targetPort: 9736 When I create this object I get the exception below; does anyone know what is wrong with it? Error from server (BadRequest): error when creating "listing62.yaml": Service in version "v1" cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort I have tried making changes to the Service object, but it is still not working.
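As a side note, a quick way to catch this kind of structural mistake before a real create is a dry run; this is only a sketch and assumes the manifest is saved as listing62.yaml, the filename from the error message:

# Server-side dry run: the manifest goes through the same decoding and validation
# as a normal create, but nothing is persisted, so the missing hyphen produces the
# same "cannot unmarshal ... []v1.ServicePort" error without touching the cluster.
kubectl apply --dry-run=server -f listing62.yaml

# After adding the hyphen, the same command should report:
#   service/srv1 created (server dry run)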
You need to point to the path of the log4j2.xml on the system where you run spark-submit, using a URI that starts with file:///, something like this: spark.driver.extraJavaOptions=-Dlog4j.configurationFile=file:///opt/spark/log/log4j2.xml
###Instruction: I have a Spark cluster running on Kubernetes using this Bitnami Helm chart. Following the Spark documentation, I have created a log4j2 config file at the below location: /opt/bitnami/spark/conf/log4j2.properties The configuration within this file works as expected. Now, I have a Spring Boot based application which I execute on this cluster using the spark-submit command. This application has a log4j2 config file at the below location: src/main/resources/log4j2.xml When I execute this jar using spark-submit, the configuration from this file doesn't seem to work. I have also supplied the following arguments from other Stack Overflow threads: --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j2.xml" --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j2.xml" I still have the same issue. How can I override the cluster-level logging configuration with an application-level logging file? ### Response: You need to point to the path of the log4j2.xml on the system where you run spark-submit, using a URI that starts with file:///, something like this: spark.driver.extraJavaOptions=-Dlog4j.configurationFile=file:///opt/spark/log/log4j2.xml
I have a Spark cluster running on Kubernetes using this Bitnami Helm chart. Following the Spark documentation, I have created a log4j2 config file at the below location: /opt/bitnami/spark/conf/log4j2.properties The configuration within this file works as expected. Now, I have a Spring Boot based application which I execute on this cluster using the spark-submit command. This application has a log4j2 config file at the below location: src/main/resources/log4j2.xml When I execute this jar using spark-submit, the configuration from this file doesn't seem to work. I have also supplied the following arguments from other Stack Overflow threads: --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j2.xml" --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j2.xml" I still have the same issue. How can I override the cluster-level logging configuration with an application-level logging file?
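For context, here is a rough sketch of what the full spark-submit invocation might look like with that setting; the master URL, jar name and paths below are placeholders, and note that -Dlog4j.configurationFile is the Log4j 2 property, whereas -Dlog4j.configuration (used in the question) is the old Log4j 1.x one:

# Sketch only: adjust the master URL, paths and application jar for your setup.
# The log4j2.xml must exist at the given path inside the driver and executor
# containers (for example baked into the image or mounted via a ConfigMap).
spark-submit \
  --master k8s://https://<kubernetes-api-server>:443 \
  --deploy-mode cluster \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configurationFile=file:///opt/spark/log/log4j2.xml" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configurationFile=file:///opt/spark/log/log4j2.xml" \
  local:///opt/app/my-spring-boot-app.jar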
Generally there are two ways to do it: 1) Incorporate an additional check for this job into your alert rules. For this you'll need to add something like unless on() my_job_status{name="kured_reboot"} == 1 (you might need something similar, or even something completely different, depending on your situation and on the style of the job and the metrics related to it). 2) Create an alert based on the fact that this job is running and add an inhibition rule that will prevent other alerts from firing while the job is running.
###Instruction: When nodes reboot based on the job "kured_reboots", alertManager still alerts on other node alerts. Is there a way to stop alertManager alerts for nodes going through a reboot based on the job "kured_reboot'? Something like this: route: receiver: default group_by: - cluster - namespace - severity - alertname - job continue: false routes: - receiver: receiver1 matchers: - job =~ "kured_reboot" active_time_intervals: - business-hours inhibit_rules: - source_matchers: - job="kured_reboot" target_matchers: - severity=~"critical|warning|info" equal: - namespace - alertname So far, I am still researching this and have not tried anything yet. I wanted to ask if anyone has done something like this before. ### Response: Generally there are two ways to do it: 1) Incorporate an additional check for this job into your alert rules. For this you'll need to add something like unless on() my_job_status{name="kured_reboot"} == 1 (you might need something similar, or even something completely different, depending on your situation and on the style of the job and the metrics related to it). 2) Create an alert based on the fact that this job is running and add an inhibition rule that will prevent other alerts from firing while the job is running.
When nodes reboot based on the job "kured_reboots", alertManager still alerts on other node alerts. Is there a way to stop alertManager alerts for nodes going through a reboot based on the job "kured_reboot'? Something like this: route: receiver: default group_by: - cluster - namespace - severity - alertname - job continue: false routes: - receiver: receiver1 matchers: - job =~ "kured_reboot" active_time_intervals: - business-hours inhibit_rules: - source_matchers: - job="kured_reboot" target_matchers: - severity=~"critical|warning|info" equal: - namespace - alertname So far, I am still researching this and have not tried anything yet. I wanted to ask if anyone has done something like this before.
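To make the first option a bit more concrete, below is a hedged sketch of a Prometheus rule that suppresses a node alert while the reboot job is running; kube_node_status_condition comes from kube-state-metrics, while my_job_status{name="kured_reboot"} is the hypothetical metric from the answer and is assumed to carry a node label so the two series can be matched:

groups:
- name: node-alerts
  rules:
  - alert: NodeNotReady
    # Fire only when the node is not Ready AND no kured reboot job is reported
    # as running for that node; "unless on(node)" drops any series on the left
    # that has a matching series on the right.
    expr: |
      kube_node_status_condition{condition="Ready",status="true"} == 0
        unless on(node) my_job_status{name="kured_reboot"} == 1
    for: 10m
    labels:
      severity: warning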
This line is the problem: protocol: TCP You are missing a hyphen. You need to put: - protocol: TCP Basically, when a field takes a list of items (like containers or ports), each item commonly starts with a hyphen.
###Instruction: Below is my Service object apiVersion: v1 kind: Service metadata: name: srv1 spec: selector: name: srv1 ports: protocol: TCP port: 80 targetPort: 9736 When I create this object I get the exception below; does anyone know what is wrong with it? Error from server (BadRequest): error when creating "listing62.yaml": Service in version "v1" cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort I have tried making changes to the Service object, but it is still not working. ### Response: This line is the problem: protocol: TCP You are missing a hyphen. You need to put: - protocol: TCP Basically, when a field takes a list of items (like containers or ports), each item commonly starts with a hyphen.
Below is my Service object apiVersion: v1 kind: Service metadata: name: srv1 spec: selector: name: srv1 ports: protocol: TCP port: 80 targetPort: 9736 When I create this object I get the exception below; does anyone know what is wrong with it? Error from server (BadRequest): error when creating "listing62.yaml": Service in version "v1" cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort I have tried making changes to the Service object, but it is still not working.
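To see why the hyphen matters here, below is a small illustrative fragment (not a full manifest) showing how the two spellings of ports decode; the []v1.ServicePort type in the error message means the API server expects a JSON array, which only the hyphenated form produces:

# Without a hyphen, ports decodes to a single JSON object:
#   "ports": {"protocol": "TCP", "port": 80, "targetPort": 9736}
ports:
  protocol: TCP
  port: 80
  targetPort: 9736
---
# With the hyphen, ports decodes to a JSON array with one entry, matching
# the expected []v1.ServicePort type:
#   "ports": [{"protocol": "TCP", "port": 80, "targetPort": 9736}]
ports:
- protocol: TCP
  port: 80
  targetPort: 9736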