###Instruction: I am configuring an ingress-nginx load balancer on Azure Kubernetes Service. I have installed the load balancer using Helm and set up ingress. Here is the behavior I'm encountering: When I include a host in the pathing rules in my ingress config, I cannot access the service at that host URL; the request times out. When I don't include a host in the pathing rules, I can access the service at that host URL with no issues. Regardless of whether or not the host is included in the pathing rules, I can successfully access the service at the host URL when I curl it from any pod in the cluster. Nslookup successfully resolves the host on my machine. I'm trying to figure out why I'm unable to reach my service when the host is included in my ingress configuration. Any ideas? Technical details are below. Note that the configuration is only pointing to one service currently, but filtering by host will eventually be necessary - I'm planning to have multiple services with different domains running through this load balancer. Ingress controller configuration: helm install --replace ingress-nginx ingress-nginx/ingress-nginx \ --create-namespace \ --namespace $NAMESPACE \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=127.0.0.1 \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \ --set controller.service.loadBalancerIP=$IP \ The ingress configuration: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: hello-world-ingress annotations: cert-manager.io/cluster-issuer: letsencrypt spec: ingressClassName: nginx tls: - hosts: - my.host.com secretName: tls-secret rules: - host: my.host.com # Removing this item makes the service reachable http: paths: - path: / pathType: Prefix backend: service: name: xrcfrontend port: number: 80 This is the curl command I'm running. It returns the correct results when run inside the pod, and times out when run outside. curl https://my.host.com --insecure ### Response: If you are using AKS v1.24 or later, set the health-probe annotation to the path /healthz instead of 127.0.0.1 during the nginx ingress controller installation (or on the ingress controller's Service), and keep using host-based routing in your nginx ingress routes: service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path=/healthz Why was it not working with the host set earlier? Because the load balancer's backend pool goes unhealthy when the ingress controller has the wrong health-probe path. With a host rule, the Ingress only accepts traffic for that particular host name, so the health probe of the ingress controller Service (the Azure LB) fails: an HTTP probe against / or 127.0.0.1 returns 404. GitHub discussion on the change - https://github.com/Azure/AKS/issues/2903#issuecomment-1115720970 More details on installation - https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration
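A quick way to see why the probe path matters is to hit the ingress controller's public endpoint directly. The sketch below is illustrative only: the load balancer IP and host name are placeholders, it assumes the requests package is installed, and it assumes the default ingress-nginx behaviour of answering /healthz with 200 from its default server. A request for / with no matching Host header is answered by the default backend with 404 (which is what the Azure LB probe effectively saw), while a request that carries the expected Host header reaches the backend service.

import requests

LB_IP = "20.0.0.10"     # placeholder: the ingress controller's public IP
HOST = "my.host.com"    # placeholder: the host in the Ingress rule

# What the Azure LB probe effectively saw before the fix: 404 from the default backend.
print(requests.get(f"http://{LB_IP}/", timeout=5).status_code)

# The controller's health endpoint: expected to return 200, so the probe passes.
print(requests.get(f"http://{LB_IP}/healthz", timeout=5).status_code)

# A normal request matching the host rule is routed to the backend service.
print(requests.get(f"http://{LB_IP}/", headers={"Host": HOST}, timeout=5).status_code)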
###Instruction: I'm using Traefik 2.7.0 on an AKS Kubernetes Cluster 1.22.6. Currently, everything routes to the same service: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: api namespace: namespace1 spec: entryPoints: - websecure routes: - match: Host(`api.my-domain.com`) kind: Rule services: - name: api namespace: namespace1 port: 80 tls: secretName: api-my-domain-com-cert I'm currently in the process of externalizing an API resource from this service to a dedicated new service ("/users") because there will be other services in the future that will need the same functionality. What I'm trying (and failing) to do, is to route calls to "/users" to the new service: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: api namespace: namespace1 spec: entryPoints: - websecure routes: - match: Host(`api.my-domain.com`) && Path(`/users`) kind: Rule services: - name: users-api namespace: namespace2 port: 80 - match: Host(`api.my-domain.com`) kind: Rule services: - name: api namespace: namespace1 port: 80 tls: secretName: api-baywa-lusy-com-cert I tried Path(..) and PathPrefix(..). No success. Everything is still routed to the old service. The new service has slightly different output. So I can tell with certainty that it's still routed to the old service. Adding the priority manually didn't help either: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: api namespace: namespace1 spec: entryPoints: - websecure routes: - match: Host(`api.my-domain.com`) && Path(`/users`) kind: Rule priority: 2000 services: - name: users-api namespace: namespace2 port: 80 - match: Host(`api.my-domain.com`) kind: Rule priority: 1000 services: - name: api namespace: namespace1 port: 80 tls: secretName: api-baywa-lusy-com-cert Am I Missing something here? Any help is appreciated! Thanks, best regards, Pascal ### Response: You can only expose services in the same namespace as your IngressRoute resource. If you watch the logs of your Traefik pod when you deploy your IngressRoute, you should see something like: time="2023-01-26T13:57:17Z" level=error msg="service namespace2/users-api not in the parent resource namespace namespace1" providerName=kubernetescrd ingress=namespace1 namespace=namespace1 To do what you want, you need to create two separate IngressRoute resources, one in namespace1 and one in namespace2. In namespace1: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: labels: app: old-api name: old-api namespace: namespace1 spec: entryPoints: - web routes: - kind: Rule priority: 1000 match: Host(`api.my-domain.com`) services: - name: old-api port: 80 In namespace2: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: labels: app: new-api name: new-api namespace: namespace2 spec: entryPoints: - web routes: - kind: Rule priority: 2000 match: Host(`api.my-domain.com`) && PathPrefix(`/users`) services: - name: new-api port: 80 You can find all the files I used to test this configuration here. I don't know if the explicit priorities are necessary or not; it worked for me without them but maybe I was just lucky so I left them there. I would generally assume that a "more specific route" takes precedence over a "less specific route", but I don't know if that's actually true.
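If you want to confirm that both IngressRoute objects really exist in their own namespaces with the priorities you expect, the Kubernetes Python client can list the Traefik CRDs directly. This is only a sketch: it assumes the traefik.containo.us/v1alpha1 API group shown above and a working kubeconfig.

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

for namespace in ("namespace1", "namespace2"):
    routes = api.list_namespaced_custom_object(
        group="traefik.containo.us",
        version="v1alpha1",
        namespace=namespace,
        plural="ingressroutes",
    )
    for item in routes.get("items", []):
        name = item["metadata"]["name"]
        for route in item["spec"].get("routes", []):
            print(namespace, name, route.get("priority"), route.get("match"))

Depending on your Traefik version there is also a provider option to allow cross-namespace service references (allowCrossNamespace on the Kubernetes CRD provider), but the two-IngressRoute approach above avoids relying on it.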
###Instruction: I need to use Azure Python SDK and Kubernetes Python Client to list the Pods CPU limits for a cluster running in AKS. Although its straight forward using CLI/PowerShell but I need to use Python exclusively. Must not use subprocess calls. Here is snippet that gets KubeConfig object after authentication with Azure: from azure.identity import DefaultAzureCredential from azure.mgmt.containerservice import ContainerServiceClient credential = DefaultAzureCredential(exclude_cli_credential=True) subscription_id = "XXX" resource_group_name= 'MY-SUB' cluster_name = "my-aks-clustername" container_service_client = ContainerServiceClient(credential, subscription_id) kubeconfig = container_service_client.managed_clusters. \ list_cluster_user_credentials(resource_group_name, cluster_name). \ kubeconfigs[0] But I am unsure how to put this to be used by K8s Python client: from kubernetes import client, config config.load_kube_config() ## How to pass? v1 = client.CoreV1Api() print("Listing pods with their IPs:") ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) ### Response: You can use the config.load_kube_config method and pass in the kubeconfig object you obtained earlier as a parameter. The method accepts a config_file parameter, which can be a file object, a file-like object, or a string file path. Since kubeconfig is a string, you can pass it as a string file path, like so: from kubernetes import client, config # Pass the kubeconfig string as a file path config.load_kube_config(config_file=kubeconfig) v1 = client.CoreV1Api() print("Listing pods with their IPs:") ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
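If you would rather not touch the filesystem at all, or if passing the raw kubeconfig text as config_file does not behave as expected in your client version, newer releases of the Kubernetes Python client also expose config.load_kube_config_from_dict. A possible sketch, assuming kubeconfigs[0] is a CredentialResult whose value holds the kubeconfig YAML as bytes (the decoding step shown in the repro answer below) and that PyYAML is available:

import yaml
from kubernetes import client, config

# kubeconfigs[0] is a CredentialResult; .value holds the kubeconfig YAML as bytes.
kubeconfig_yaml = kubeconfig.value.decode("utf-8")

# Load the kubeconfig entirely in memory instead of pointing at a file on disk.
config.load_kube_config_from_dict(yaml.safe_load(kubeconfig_yaml))

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.status.pod_ip, pod.metadata.namespace, pod.metadata.name)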
###Instruction: I need to use Azure Python SDK and Kubernetes Python Client to list the Pods CPU limits for a cluster running in AKS. Although its straight forward using CLI/PowerShell but I need to use Python exclusively. Must not use subprocess calls. Here is snippet that gets KubeConfig object after authentication with Azure: from azure.identity import DefaultAzureCredential from azure.mgmt.containerservice import ContainerServiceClient credential = DefaultAzureCredential(exclude_cli_credential=True) subscription_id = "XXX" resource_group_name= 'MY-SUB' cluster_name = "my-aks-clustername" container_service_client = ContainerServiceClient(credential, subscription_id) kubeconfig = container_service_client.managed_clusters. \ list_cluster_user_credentials(resource_group_name, cluster_name). \ kubeconfigs[0] But I am unsure how to put this to be used by K8s Python client: from kubernetes import client, config config.load_kube_config() ## How to pass? v1 = client.CoreV1Api() print("Listing pods with their IPs:") ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) ### Response: I tried a repro and was able to achieve as below - Getting kubeconfig in correct decoded format from kubeconfigs[0] which is CredentialResults. Writing the generated kubeconfig in a file. Loading the file in config module. from azure.identity import DefaultAzureCredential from azure.mgmt.containerservice import ContainerServiceClient import os from kubernetes import client, config credential = DefaultAzureCredential(exclude_cli_credential=True) subscription_id = "XXX" resource_group_name= 'MY-SUB' cluster_name = "my-aks-clustername" container_service_client = ContainerServiceClient(credential, subscription_id) # getting kubeconfig in a decoded format from CredentialResult kubeconfig = container_service_client.managed_clusters. \ list_cluster_user_credentials(resource_group_name, cluster_name). \ kubeconfigs[0].value.decode(encoding='UTF-8') # writing generated kubeconfig in a file f=open("kubeconfig","w") f.write(kubeconfig) f.close() # loading the config file config.load_kube_config('kubeconfig') # deleting the kubeconfig file os.remove('kubeconfig') v1 = client.CoreV1Api() print("Listing containers with their CPU limits:") ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: for container in i.spec.containers: if container.resources.limits: if 'cpu' in container.resources.limits: print( container.name, container.resources.limits['cpu'])
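If you keep the file-based approach from this answer, a small hardening step is to write the kubeconfig to a private temporary file and remove it even when loading fails. This is only a sketch of that idea, reusing the decoded kubeconfig string from the code above; the helper name is made up for illustration.

import os
import tempfile
from kubernetes import client, config

def core_v1_from_kubeconfig(kubeconfig_text):
    # NamedTemporaryFile is created with owner-only permissions on POSIX systems.
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".kubeconfig", delete=False)
    try:
        tmp.write(kubeconfig_text)
        tmp.close()
        config.load_kube_config(config_file=tmp.name)
    finally:
        os.remove(tmp.name)  # never leave cluster credentials behind
    return client.CoreV1Api()

v1 = core_v1_from_kubeconfig(kubeconfig)  # 'kubeconfig' is the decoded string from the answer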
###Instruction: It's now quite a few days that I'm trying to configure the cluster on AKS but I keep jumping between parts of the docs, various questions here on SO, articles on Medium.. all to keep failing at it. The goal is get a static ip with a dns that I can use to connect my apps to the server deployed on AKS. I have created via terraform the infrastructure which consists of a resource group in which I created a Public IP and the AKS cluster, so far so good. After trying to use the ingress controller that gets installed when you use the option http_application_routing_enabled = true on cluster creation which the docs are discouraging for production https://learn.microsoft.com/en-us/azure/aks/http-application-routing, I'm trying the recommended way and install the ingress-nginx controller via Helm https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli. In terraform I'm installing it all like this resource group and cluster resource "azurerm_resource_group" "resource_group" { name = var.resource_group_name location = var.location tags = { Environment = "Test" Team = "DevOps" } } resource "azurerm_kubernetes_cluster" "server_cluster" { name = "server_cluster" location = azurerm_resource_group.resource_group.location resource_group_name = azurerm_resource_group.resource_group.name dns_prefix = "fixit" kubernetes_version = var.kubernetes_version # sku_tier = "Paid" default_node_pool { name = "default" node_count = 1 min_count = 1 max_count = 3 # vm_size = "standard_b2s_v5" # vm_size = "standard_e2bs_v5" vm_size = "standard_b4ms" type = "VirtualMachineScaleSets" enable_auto_scaling = true enable_host_encryption = false # os_disk_size_gb = 30 # enable_node_public_ip = true } service_principal { client_id = var.sp_client_id client_secret = var.sp_client_secret } tags = { Environment = "Production" } linux_profile { admin_username = "azureuser" ssh_key { key_data = var.ssh_key } } network_profile { network_plugin = "kubenet" load_balancer_sku = "standard" # load_balancer_sku = "basic" } # http_application_routing_enabled = true http_application_routing_enabled = false } public ip resource "azurerm_public_ip" "public-ip" { name = "fixit-public-ip" location = var.location resource_group_name = var.resource_group_name allocation_method = "Static" domain_name_label = "fixit" sku = "Standard" } load balancer resource "kubernetes_service" "cluster-ingress" { metadata { name = "cluster-ingress-svc" annotations = { "service.beta.kubernetes.io/azure-load-balancer-resource-group" = "fixit-resource-group" # Warning SyncLoadBalancerFailed 2m38s (x8 over 12m) service-controller Error syncing load balancer: # failed to ensure load balancer: findMatchedPIPByLoadBalancerIP: cannot find public IP with IP address 52.157.90.236 # in resource group MC_fixit-resource-group_server_cluster_westeurope # "service.beta.kubernetes.io/azure-load-balancer-resource-group" = "MC_fixit-resource-group_server_cluster_westeurope" # kubernetes.io/ingress.class: addon-http-application-routing } } spec { # type = "Ingress" type = "LoadBalancer" load_balancer_ip = var.public_ip_address selector = { name = "cluster-ingress-svc" } port { name = "cluster-port" protocol = "TCP" port = 3000 target_port = "80" } } } ingress controller resource "helm_release" "nginx" { name = "ingress-nginx" repository = "https://kubernetes.github.io/ingress-nginx" chart = "ingress-nginx" namespace = "default" set { name = "rbac.create" value = "false" } set { name = "controller.service.externalTrafficPolicy" value = "Local" } set { 
name = "controller.service.loadBalancerIP" value = var.public_ip_address } set { name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal" value = "true" } # --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz set { name = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-health-probe-request-path" value = "/healthz" } } but the installation fails with this message from terraform Warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again. │ │ with module.ingress_controller.helm_release.nginx, │ on modules/ingress_controller/controller.tf line 2, in resource "helm_release" "nginx": │ 2: resource "helm_release" "nginx" { │ ╵ ╷ │ Error: timed out waiting for the condition │ │ with module.ingress_controller.helm_release.nginx, │ on modules/ingress_controller/controller.tf line 2, in resource "helm_release" "nginx": │ 2: resource "helm_release" "nginx" { the controller print out vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc ingress-nginx-controller Name: ingress-nginx-controller Namespace: default Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/part-of=ingress-nginx app.kubernetes.io/version=1.5.1 helm.sh/chart=ingress-nginx-4.4.2 Annotations: meta.helm.sh/release-name: ingress-nginx meta.helm.sh/release-namespace: default service: map[beta:map[kubernetes:map[io/azure-load-balancer-internal:true]]] service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.173.243 IPs: 10.0.173.243 IP: 52.157.90.236 Port: http 80/TCP TargetPort: http/TCP NodePort: http 31709/TCP Endpoints: Port: https 443/TCP TargetPort: https/TCP NodePort: https 30045/TCP Endpoints: Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 32500 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 32s (x5 over 108s) service-controller Ensuring load balancer Warning SyncLoadBalancerFailed 31s (x5 over 107s) service-controller Error syncing load balancer: failed to ensure load balancer: findMatchedPIPByLoadBalancerIP: cannot find public IP with IP address 52.157.90.236 in resource group mc_fixit-resource-group_server_cluster_westeurope vincenzocalia@vincenzos-MacBook-Air helm_charts % az aks show --resource-group fixit-resource-group --name server_cluster --query nodeResourceGroup -o tsv MC_fixit-resource-group_server_cluster_westeurope Why is it looking in the MC_fixit-resource-group_server_cluster_westeurope resource group and not in the fixit-resource-group I created for the Cluster, Public IP and Load Balancer? 
If I change the controller load balancer ip to the public ip in MC_fixit-resource-group_server_cluster_westeurope then terraform still outputs the same error, but the controller prints out to be correctly assigned to the ip and load balancer set { name = "controller.service.loadBalancerIP" value = "20.73.192.77" #var.public_ip_address } vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cluster-ingress-svc LoadBalancer 10.0.110.114 52.157.90.236 3000:31863/TCP 104m ingress-nginx-controller LoadBalancer 10.0.106.201 20.73.192.77 80:30714/TCP,443:32737/TCP 41m ingress-nginx-controller-admission ClusterIP 10.0.23.188 <none> 443/TCP 41m kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 122m vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc ingress-nginx-controller Name: ingress-nginx-controller Namespace: default Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/part-of=ingress-nginx app.kubernetes.io/version=1.5.1 helm.sh/chart=ingress-nginx-4.4.2 Annotations: meta.helm.sh/release-name: ingress-nginx meta.helm.sh/release-namespace: default service: map[beta:map[kubernetes:map[io/azure-load-balancer-internal:true]]] service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.106.201 IPs: 10.0.106.201 IP: 20.73.192.77 LoadBalancer Ingress: 20.73.192.77 Port: http 80/TCP TargetPort: http/TCP NodePort: http 30714/TCP Endpoints: Port: https 443/TCP TargetPort: https/TCP NodePort: https 32737/TCP Endpoints: Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 32538 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 39m (x2 over 41m) service-controller Ensuring load balancer Normal EnsuredLoadBalancer 39m (x2 over 41m) service-controller Ensured load balancer vincenzocalia@vincenzos-MacBook-Air helm_charts % Reading here https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks To enable this architecture, each AKS deployment spans two resource groups: You create the first resource group. This group contains only the Kubernetes service resource. The AKS resource provider automatically creates the second resource group during deployment. An example of the second resource group is MC_myResourceGroup_myAKSCluster_eastus. For information on how to specify the name of this second resource group, see the next section. The second resource group, known as the node resource group, contains all of the infrastructure resources associated with the cluster. These resources include the Kubernetes node VMs, virtual networking, and storage. By default, the node resource group has a name like MC_myResourceGroup_myAKSCluster_eastus. AKS automatically deletes the node resource group whenever the cluster is deleted, so it should only be used for resources that share the cluster's lifecycle. Should I pass the first or the second group depending of what kind of resource I'm creating? E.g. kubernetes_service needs 1st rg, while azurerm_public_ip needs the 2nd rg? What is it that I'm missing out here? Please explain it like I was 5 years old because I'm feeling like right now.. 
Many thanks ### Response: Finally found what the problem was. Indeed the Public IP needs to be created in the node resource group because the ingress controller, with the loadBalancerIP assigned to the Public IP address, is going to look for it in the node resource group so if you create it in the resource group fails with the error I was getting. The node resource group name is assigned at cluster creation eg. MC_myResourceGroup_myAKSCluster_eastus, but you can name it as you wish using the parameter node_resource_group = var.node_resource_group_name. Also, the Public IP sku "Standard" (to be specified) or "Basic" ( default), and the cluster load_balancer_sku "standard" or "basic"(no default value her, it needs to be specified) have to match. I also put the Public IP in the cluster module so it can depend on it, to avoid being created before it and failing as the node resource group has not been created yet, couldn't set that dependency correctly in main.tf file. So the working configuration is now: main terraform { required_version = ">=1.1.0" required_providers { azurerm = { source = "hashicorp/azurerm" version = "~> 3.0.2" } } } provider "azurerm" { features { resource_group { prevent_deletion_if_contains_resources = false } } subscription_id = var.azure_subscription_id tenant_id = var.azure_subscription_tenant_id client_id = var.service_principal_appid client_secret = var.service_principal_password } provider "kubernetes" { host = "${module.cluster.host}" client_certificate = "${base64decode(module.cluster.client_certificate)}" client_key = "${base64decode(module.cluster.client_key)}" cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}" } provider "helm" { kubernetes { host = "${module.cluster.host}" client_certificate = "${base64decode(module.cluster.client_certificate)}" client_key = "${base64decode(module.cluster.client_key)}" cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}" } } module "cluster" { source = "./modules/cluster" location = var.location vm_size = var.vm_size resource_group_name = var.resource_group_name node_resource_group_name = var.node_resource_group_name kubernetes_version = var.kubernetes_version ssh_key = var.ssh_key sp_client_id = var.service_principal_appid sp_client_secret = var.service_principal_password } module "ingress-controller" { source = "./modules/ingress-controller" public_ip_address = module.cluster.public_ip_address depends_on = [ module.cluster.public_ip_address ] } cluster resource "azurerm_resource_group" "resource_group" { name = var.resource_group_name location = var.location tags = { Environment = "test" Team = "DevOps" } } resource "azurerm_kubernetes_cluster" "server_cluster" { name = "server_cluster" ### choose the resource goup to use for the cluster location = azurerm_resource_group.resource_group.location resource_group_name = azurerm_resource_group.resource_group.name ### decide the name of the cluster "node" resource group, if unset will be named automatically node_resource_group = var.node_resource_group_name dns_prefix = "fixit" kubernetes_version = var.kubernetes_version # sku_tier = "Paid" default_node_pool { name = "default" node_count = 1 min_count = 1 max_count = 3 vm_size = var.vm_size type = "VirtualMachineScaleSets" enable_auto_scaling = true enable_host_encryption = false # os_disk_size_gb = 30 } service_principal { client_id = var.sp_client_id client_secret = var.sp_client_secret } tags = { Environment = "Production" } linux_profile { admin_username = 
"azureuser" ssh_key { key_data = var.ssh_key } } network_profile { network_plugin = "kubenet" load_balancer_sku = "basic" } http_application_routing_enabled = false depends_on = [ azurerm_resource_group.resource_group ] } resource "azurerm_public_ip" "public-ip" { name = "fixit-public-ip" location = var.location # resource_group_name = var.resource_group_name resource_group_name = var.node_resource_group_name allocation_method = "Static" domain_name_label = "fixit" # sku = "Standard" depends_on = [ azurerm_kubernetes_cluster.server_cluster ] } ingress controller resource "helm_release" "nginx" { name = "ingress-nginx" repository = "ingress-nginx" chart = "ingress-nginx/ingress-nginx" namespace = "default" set { name = "controller.service.externalTrafficPolicy" value = "Local" } set { name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal" value = "true" } set { name = "controller.service.loadBalancerIP" value = var.public_ip_address } set { name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path" value = "/healthz" } } ingress service apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service # namespace: default annotations: nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/use-regex: "true" nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4 spec: ingressClassName: nginx rules: # - host: fixit.westeurope.cloudapp.azure.com #dns from Azure PublicIP ### Node.js server - http: paths: - path: /(/|$)(.*) pathType: Prefix backend: service: name: server-clusterip-service port: number: 80 - http: paths: - path: /server(/|$)(.*) pathType: Prefix backend: service: name: server-clusterip-service port: number: 80 ... other services omitted Hope this can help others having difficulties in getting the setup right. Cheers.
###Instruction: The certs in kubernetes are base64 encoded and on the disk are plain certificate. We might have certs embedded in a yaml file. I was wondering if we can quickly verify the certs using either md5sum or viewing the cert contents as follows: The first function tries to use pipes in variables and combines into a final form for the commandline, the second processes the intermediate results and uses the intermediate results in the final command. The first function doesn't work because of extra quotes. The second function doesn't work because the '\n' in the content is lost somehow. But the variable does not have the newline so the openssl doesnt read it well from the stdin. Any suggestions on how to fix these functions? function md5cert() { md5cert_usage() { echo "md5cert: [-f <file> |-c <cmd>] -p <pat>" 1>&2; exit; } local OPTIND o a pflag=0 while getopts ":a:" o; do case "${o}" in f) f="${OPTARG}" cmd="cat $f" decode=" base64 -d | " ;; c) cmd="${OPTARG}" decode=" base64 -d | " ;; o) o="${OPTARG}" cmd="cat ${f}" decode=" " ;; p) p="${OPTARG}" pflag=1 ;; *) md5cert_usage ;; esac done shift $((OPTIND-1)) if [ $pflag -eq 1 ]; then greppat="grep $p | awk '{print $2}' | " fi $cmd | $greppat $decode md5sum } function readcert() { readcert_usage() { echo "readcert: [-f <file> |-c <cmd>] -p <pat>" 1>&2; exit; } local OPTIND o a pflag=0 while getopts ":f:c:o:p:" o; do case "${o}" in f) f="${OPTARG}" cmd="cat $f" decode=" base64 -d | " ;; c) cmd="${OPTARG}" decode=" base64 -d | " ;; o) o="${OPTARG}" cmd="cat ${f}" decode=" " ;; p) p="${OPTARG}" pflag=1 ;; *) readcert_usage ;; esac done shift $((OPTIND-1)) if [ $pflag -eq 1 ]; then greppat="grep --color=auto $p | awk '{print \$2}' | " content=$($cmd | grep $p | awk '{print $2}') if [ -n "$decode" ]; then content="$(echo $content | base64 -d)" fi fi echo -e $content | openssl x509 -in /dev/stdin -noout -text } The commands as such works, but having problems in generalizing the approach using functions ### Response: I'd try hard to avoid building up a command line out of unquoted strings as you show. It will in fact be prone to quoting problems, and if the input contains any sort of character that has special shell syntax the process is prone to behaving unexpectedly (and this can have security implications). The first change I'd make is to remove the -c and -f options. Don't have these functions try to run the commands themselves; instead, just have them filter whatever input they get. # prone to quoting problems readcert -c 'kubectl get secret something -o jsonpath="..."' # one fewer level of quoting kubectl get secret something -o jsonpath="..." | readcert If you can construct a fixed pipeline then many of the problems go away. Say you can put "maybe" as a verb in the pipeline; then after parsing the command line, in concept you could get approximately md5cert() { maybe grep something | \ maybe print out some of the line | \ maybe base64 decode it | md5sum } To pick the "maybe base64 decode" as an example: # Maybe base64 decode the input. Takes one argument. If it is "yes", # base64 decodes stdin to stdout; otherwise passes stdin to stdout # unmodified. maybe_base64_decode() { if [ "$1" = yes ]; then base64 -d else cat fi } md5cert() { decode=yes while getopt ... o; do case "$o" in o) decode=no ;; esac done ... | maybe_base64_decode "$decode" | ... } In similar tricks, grep '' (with an empty regexp) will print out a file unmodified (every line matches), as will awk '{ print $0 }'. 
You can also include the (possibly empty) regexp in the awk call. I might simplify the first function to: maybe_base64_decode() { ... } # as above md5cert() { pattern='' # a regexp words='$0' # an awk expression decode=yes while getopts ":op:" o; do case "${o}" in o) decode=no ;; p) pattern="$OPTARG" words="$2" ;; *) echo 'Usage: md5cert [-o] [-p PATTERN]' >&2 return 1 ;; esac done awk "/$pattern/ { print $words }" | maybe_base64_decode "$decode" } I've also skipped the bash-specific function and local keywords; this setup should work with any POSIX shell. You don't specifically describe how you're using grep and awk here, but if you're reading a Kubernetes manifest, the yq or jq tools could be more robust ways of processing YAML or JSON output. I hint at kubectl get -o jsonpath in an example and that also could be a way to extract a value from an in-cluster object without trying to do text processing on YAML.
The certs in kubernetes are base64 encoded and on the disk are plain certificate. We might have certs embedded in a yaml file. I was wondering if we can quickly verify the certs using either md5sum or viewing the cert contents as follows: The first function tries to use pipes in variables and combines into a final form for the commandline, the second processes the intermediate results and uses the intermediate results in the final command. The first function doesn't work because of extra quotes. The second function doesn't work because the '\n' in the content is lost somehow. But the variable does not have the newline so the openssl doesnt read it well from the stdin. Any suggestions on how to fix these functions? function md5cert() { md5cert_usage() { echo "md5cert: [-f <file> |-c <cmd>] -p <pat>" 1>&2; exit; } local OPTIND o a pflag=0 while getopts ":a:" o; do case "${o}" in f) f="${OPTARG}" cmd="cat $f" decode=" base64 -d | " ;; c) cmd="${OPTARG}" decode=" base64 -d | " ;; o) o="${OPTARG}" cmd="cat ${f}" decode=" " ;; p) p="${OPTARG}" pflag=1 ;; *) md5cert_usage ;; esac done shift $((OPTIND-1)) if [ $pflag -eq 1 ]; then greppat="grep $p | awk '{print $2}' | " fi $cmd | $greppat $decode md5sum } function readcert() { readcert_usage() { echo "readcert: [-f <file> |-c <cmd>] -p <pat>" 1>&2; exit; } local OPTIND o a pflag=0 while getopts ":f:c:o:p:" o; do case "${o}" in f) f="${OPTARG}" cmd="cat $f" decode=" base64 -d | " ;; c) cmd="${OPTARG}" decode=" base64 -d | " ;; o) o="${OPTARG}" cmd="cat ${f}" decode=" " ;; p) p="${OPTARG}" pflag=1 ;; *) readcert_usage ;; esac done shift $((OPTIND-1)) if [ $pflag -eq 1 ]; then greppat="grep --color=auto $p | awk '{print \$2}' | " content=$($cmd | grep $p | awk '{print $2}') if [ -n "$decode" ]; then content="$(echo $content | base64 -d)" fi fi echo -e $content | openssl x509 -in /dev/stdin -noout -text } The commands as such works, but having problems in generalizing the approach using functions
If you're looking to remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so. If I start with this Deployment: apiVersion: apps/v1 kind: Deployment metadata: name: example spec: template: spec: containers: - name: example image: docker.io/traefik/whoami:latest env: - name: ENV_VAR_1 valueFrom: secretKeyRef: name: someSecret key: someKeyName - name: ENV_VAR_2 value: example-value If I write in my kustomization.yaml: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - deployment.yaml patches: - patch: | apiVersion: apps/v1 kind: Deployment metadata: name: example spec: template: spec: containers: - name: example env: - name: ENV_VAR_1 $patch: delete Then the output of kustomize build is: apiVersion: apps/v1 kind: Deployment metadata: name: example spec: template: spec: containers: - env: - name: ENV_VAR_2 value: example-value image: docker.io/traefik/whoami:latest name: example Using a strategic merge patch like this has an advantage over a JSONPatch style patch like Nijat's answer because it doesn't depend on the order in which the environment variables are defined.
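For contrast, a minimal sketch of the JSONPatch-style alternative referred to above; the /env/0 index is an assumption for illustration, and having to hard-code that index is exactly why this variant breaks when the env list is reordered:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: example
    # JSON6902-style patch: removes whatever is currently the first env entry
    patch: |-
      - op: remove
        path: /spec/template/spec/containers/0/env/0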
###Instruction: I want to remove a few environment variables in a container with kustomize? Is that possible? When I patch, it just adds as you may know. If it's not possible, can we replace environment variable name, and secret key name/key pair all together? containers: - name: container1 env: - name: NAMESPACE valueFrom: secretKeyRef: name: x key: y Any help on this will be appreciated! Thanks! ### Response: If you're looking remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so. If I start with this Deployment: apiVersion: apps/v1 kind: Deployment metadata: name: example spec: template: spec: containers: - name: example image: docker.io/traefik/whoami:latest env: - name: ENV_VAR_1 valueFrom: secretKeyRef: name: someSecret key: someKeyName - name: ENV_VAR_2 value: example-value If I write in my kustomization.yaml: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - deployment.yaml patches: - patch: | apiVersion: apps/v1 kind: Deployment metadata: name: example spec: template: spec: containers: - name: example env: - name: ENV_VAR_1 $patch: delete Then the output of kustomize build is: apiVersion: apps/v1 kind: Deployment metadata: name: example spec: template: spec: containers: - env: - name: ENV_VAR_2 value: example-value image: docker.io/traefik/whoami:latest name: example Using a strategic merge patch like this has an advantage over a JSONPatch style patch like Nijat's answer because it doesn't depend on the order in which the environment variables are defined.
I want to remove a few environment variables in a container with kustomize? Is that possible? When I patch, it just adds as you may know. If it's not possible, can we replace environment variable name, and secret key name/key pair all together? containers: - name: container1 env: - name: NAMESPACE valueFrom: secretKeyRef: name: x key: y Any help on this will be appreciated! Thanks!
OK, I've found the problem. Instead of: { "auths": { "https://gitlab.private:31443": { "username": "gitlab+deploy-token-22", "password": "EGDLqGKJwBtfYYf9cDFg", "email": "example@example.com", "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw==" } } } It should have been: { "auths": { "gitlab.private:31443": { "username": "gitlab+deploy-token-22", "password": "EGDLqGKJwBtfYYf9cDFg", "email": "example@example.com", "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw==" } } } Apparently the documentation at https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ is a bit misleading since the example there does contain https:// in the URL. { "auths": { "https://index.docker.io/v1/": { "auth": "c3R...zE2" } } } And yes, my private repo does work over HTTPS connections.
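An alternative that sidesteps the format entirely is to let kubectl build the .dockerconfigjson itself (it writes the registry key without a scheme). All values below are taken from the question; delete the old secret first if it already exists:

kubectl create secret docker-registry gitlab-dc-cdndemo-2 \
  --namespace test-cdndemo \
  --docker-server=gitlab.private:31443 \
  --docker-username=gitlab+deploy-token-22 \
  --docker-password=EGDLqGKJwBtfYYf9cDFg \
  --docker-email=example@example.com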
###Instruction: I'm having a hard time with pulling image from private repository. Here's the drill down: The pod: apiVersion: v1 kind: Pod metadata: [...] spec: containers: - image: gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main imagePullPolicy: Always [...] imagePullSecrets: - name: gitlab-dc-cdndemo-2 And the pull secret: $ base64 -d <(kubectl -n test-cdndemo get secret gitlab-dc-cdndemo-2 -o json | jq -r '.data.".dockerconfigjson"') | jq { "auths": { "https://gitlab.private:31443": { "username": "gitlab+deploy-token-22", "password": "EGDLqGKJwBtfYYf9cDFg", "email": "example@example.com", "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw==" } } } It's a playbook example of how it should be done. But when I deploy this I get: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 14m default-scheduler Successfully assigned test-cdndemo/appcdnmanagerui-68c8f8c6dd-qcxr5 to node-waw107 Normal Pulling 13m (x4 over 14m) kubelet Pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main" Warning Failed 13m (x4 over 14m) kubelet Failed to pull image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main": rpc error: code = Unknown desc = failed to pull and unpack image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to resolve referen ce "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Warning Failed 13m (x4 over 14m) kubelet Error: ErrImagePull Warning Failed 12m (x6 over 14m) kubelet Error: ImagePullBackOff Normal BackOff 4m41s (x43 over 14m) kubelet Back-off pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main" gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main Notice the error, it's 403 Forbidden, not 401 Unauthorized so the credentials do work. Despite this, the image cannot be pulled from my private repo. But when I do this manually on a worker node everything goes smoothly: $ crictl --debug pull --creds gitlab+deploy-token-22:EGDLqGKJwBtfYYf9cDFg gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest DEBU[0000] get image connection DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest,Annotations:map[string]string{},},Auth:&AuthConfig{Username:gitlab+deploy-token-22,Password:EGDLqGKJwBtfYYf9cDFg,Auth:,ServerAddress:,IdentityToken:,RegistryToken:,},SandboxConfig:nil,} DEBU[0006] PullImageResponse: &PullImageResponse{ImageRef:sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260,} Image is up to date for sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260 $ crictl image IMAGE TAG IMAGE ID SIZE [...] gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main latest 0c3b5d355c164 105MB``` It alwo works when I use auth data instead (as expected): crictl --debug pull --auth Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw== gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest Now I'm stuck. The only thing that comes to my mind is some kind of weird bug that's in the k3s release I'm using: # k3s -v k3s version v1.25.4+k3s1 (0dc63334) go version go1.19.3 # crictl -v crictl version v1.25.0-k3s1 The bug hypothesis is based on previous experience with older k8s deployment, where this approach is used and it works. But on a fresh cluster all I get is 403 error from k3s despite crictl pulling the image with no problems. 
Anyone had this kind of problem before and solved it? ### Response: OK, I've found the problem. Instead of: { "auths": { "https://gitlab.private:31443": { "username": "gitlab+deploy-token-22", "password": "EGDLqGKJwBtfYYf9cDFg", "email": "example@example.com", "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw==" } } } It should have been: { "auths": { "gitlab.private:31443": { "username": "gitlab+deploy-token-22", "password": "EGDLqGKJwBtfYYf9cDFg", "email": "example@example.com", "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw==" } } } Apparently the documentation at https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ is a bit misleading since the example there does contain https:// in the URL. { "auths": { "https://index.docker.io/v1/": { "auth": "c3R...zE2" } } } And yes, my private repo does work over HTTPS connections.
I'm having a hard time with pulling image from private repository. Here's the drill down: The pod: apiVersion: v1 kind: Pod metadata: [...] spec: containers: - image: gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main imagePullPolicy: Always [...] imagePullSecrets: - name: gitlab-dc-cdndemo-2 And the pull secret: $ base64 -d <(kubectl -n test-cdndemo get secret gitlab-dc-cdndemo-2 -o json | jq -r '.data.".dockerconfigjson"') | jq { "auths": { "https://gitlab.private:31443": { "username": "gitlab+deploy-token-22", "password": "EGDLqGKJwBtfYYf9cDFg", "email": "example@example.com", "auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw==" } } } It's a playbook example of how it should be done. But when I deploy this I get: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 14m default-scheduler Successfully assigned test-cdndemo/appcdnmanagerui-68c8f8c6dd-qcxr5 to node-waw107 Normal Pulling 13m (x4 over 14m) kubelet Pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main" Warning Failed 13m (x4 over 14m) kubelet Failed to pull image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main": rpc error: code = Unknown desc = failed to pull and unpack image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to resolve referen ce "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden Warning Failed 13m (x4 over 14m) kubelet Error: ErrImagePull Warning Failed 12m (x6 over 14m) kubelet Error: ImagePullBackOff Normal BackOff 4m41s (x43 over 14m) kubelet Back-off pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main" gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main Notice the error, it's 403 Forbidden, not 401 Unauthorized so the credentials do work. Despite this, the image cannot be pulled from my private repo. But when I do this manually on a worker node everything goes smoothly: $ crictl --debug pull --creds gitlab+deploy-token-22:EGDLqGKJwBtfYYf9cDFg gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest DEBU[0000] get image connection DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest,Annotations:map[string]string{},},Auth:&AuthConfig{Username:gitlab+deploy-token-22,Password:EGDLqGKJwBtfYYf9cDFg,Auth:,ServerAddress:,IdentityToken:,RegistryToken:,},SandboxConfig:nil,} DEBU[0006] PullImageResponse: &PullImageResponse{ImageRef:sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260,} Image is up to date for sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260 $ crictl image IMAGE TAG IMAGE ID SIZE [...] gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main latest 0c3b5d355c164 105MB``` It alwo works when I use auth data instead (as expected): crictl --debug pull --auth Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw== gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest Now I'm stuck. The only thing that comes to my mind is some kind of weird bug that's in the k3s release I'm using: # k3s -v k3s version v1.25.4+k3s1 (0dc63334) go version go1.19.3 # crictl -v crictl version v1.25.0-k3s1 The bug hypothesis is based on previous experience with older k8s deployment, where this approach is used and it works. But on a fresh cluster all I get is 403 error from k3s despite crictl pulling the image with no problems. Anyone had this kind of problem before and solved it?
Your only option is to vendor an older version of the library. This package has been marked for internal use for a long time and was also marked to allow for breakages. See https://github.com/googleapis/google-api-go-client/issues/416 for options.
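A hedged go.mod sketch of the pin-an-older-version option; the module name is made up, and v0.10.0 is simply the last version the question reports as still exporting the package (a later cut-off may exist):

module example.com/myservice        // assumed module name

go 1.19

// Pin the old release line that still ships google.golang.org/api/gensupport
// as a public package, then run `go mod tidy` (and `go mod vendor` if you
// keep vendored sources in the repo).
require google.golang.org/api v0.10.0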
###Instruction: previously I was using "google.golang.org/api"=>v0.10.0 in my go.mod file and I was able to import gensupport "google.golang.org/api/gensupport" But with new "google.golang.org/api"=>v0.103.0 I was unable to import gensupport and as I and my colleague have investigated, gensupport was shifted to "google.golang.org/api/internal/gensupport" but while using this import there is one error saying Use of the internal package is not allowed So now how I will be able to use gensupport with new "google.golang.org/api"=>v0.103.0 version. ### Response: Your only option is to vendor an older version of the library. This package has been marked for internal use for a long time and was also marked to allow for breakages. See https://github.com/googleapis/google-api-go-client/issues/416 for options.
previously I was using "google.golang.org/api"=>v0.10.0 in my go.mod file and I was able to import gensupport "google.golang.org/api/gensupport" But with new "google.golang.org/api"=>v0.103.0 I was unable to import gensupport and as I and my colleague have investigated, gensupport was shifted to "google.golang.org/api/internal/gensupport" but while using this import there is one error saying Use of the internal package is not allowed So now how I will be able to use gensupport with new "google.golang.org/api"=>v0.103.0 version.
You will need to add a finalizer to A, so that when A is deleted, your finalizer logic runs first and removes the OwnerReference from B. That way deleting A never orphans B from the Kubernetes garbage collector's point of view: by the time A is actually removed (after the finalizer logic has run and the finalizer has been cleared), it no longer owns anything.
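A minimal controller-runtime sketch of that flow, assuming a made-up finalizer name, the clearOwnerReference helper from the question, and the newer Reconcile signature that takes a context — treat it as an outline, not the author's code. Assumed imports: context, ctrl "sigs.k8s.io/controller-runtime", "sigs.k8s.io/controller-runtime/pkg/client", "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil", "k8s.io/apimachinery/pkg/types".

const aFinalizer = "crd.api.version/release-owned-b" // assumed name

func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    a := &A{}
    if err := r.Get(ctx, req.NamespacedName, a); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    if a.DeletionTimestamp.IsZero() {
        // A is alive: make sure the finalizer is present before doing anything else.
        if !controllerutil.ContainsFinalizer(a, aFinalizer) {
            controllerutil.AddFinalizer(a, aFinalizer)
            if err := r.Update(ctx, a); err != nil {
                return ctrl.Result{}, err
            }
        }
        // ... normal reconciliation, including setting the ownerReference on B ...
        return ctrl.Result{}, nil
    }

    // A is being deleted: detach B first, then release the finalizer so the
    // deletion can complete without B ever looking like an orphan.
    b := &B{}
    if err := r.Get(ctx, types.NamespacedName{Name: a.Spec.BName, Namespace: a.Namespace}, b); err == nil {
        clearOwnerReference(b) // helper from the question
        if err := r.Update(ctx, b); err != nil {
            return ctrl.Result{}, err
        }
    }
    controllerutil.RemoveFinalizer(a, aFinalizer)
    return ctrl.Result{}, r.Update(ctx, a)
}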
###Instruction: I've implemented a controller for Custom Resource (CR) A by controller-runtime lib. CR A is dependent on resource B as apiVersion: CRD.API.Version kind: A metadata: name: a spec: bName: b otherSpecs: ... And I add ownerReference on B instances in my controller when A is reconciled, which enables my controller Watches B and would trigger a reconciliation for A if its dependent B is updated. Besides, when A is deleted, related ownerReference on B would be removed: func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&A{}). Watches(&source.Kind{Type: &B{}}, &handler.EnqueueRequestForOwner{OwnerType: &A{}, IsController: false}). Complete(r) } func (r *MyReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) { a := &A{} reconciler.Get(ctx, request.NamespacedName, a) b := &B{} reconciler.Get(ctx, NamespacedName{Name: a.Spec.bName, Namespace: a.Namespace}, b) if !a.DeletionTimestamp.IsZero() { clearOwnerReference(b) reconciler.Update(ctx, b) return ... } setOwnerReference(b.OwnerReference, a) reconciler.Update(ctx, b) // Other reconciliation logic } But Kubernetes API server acts more quickly: it detects B is orphan and deletes B, i.e., GC. I also tried to add finalizer on B. But when my controller removes both ownerReference and finalizers from B, it's gone, since it's already been marked as deleted. Any other config on B can be utilized to prevent such cascading deletion? Or any best practice to handle such dependency? ### Response: You will need to add a finalizer to A, so that when A is deleted, in the finalizer logic you remove the OwnerReference from B. This way A's deletion is not orphaning B in Kubernetes GC's eyes, as when A is deleted after the finalizer logic, it does not own anything.
I've implemented a controller for Custom Resource (CR) A by controller-runtime lib. CR A is dependent on resource B as apiVersion: CRD.API.Version kind: A metadata: name: a spec: bName: b otherSpecs: ... And I add ownerReference on B instances in my controller when A is reconciled, which enables my controller Watches B and would trigger a reconciliation for A if its dependent B is updated. Besides, when A is deleted, related ownerReference on B would be removed: func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&A{}). Watches(&source.Kind{Type: &B{}}, &handler.EnqueueRequestForOwner{OwnerType: &A{}, IsController: false}). Complete(r) } func (r *MyReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) { a := &A{} reconciler.Get(ctx, request.NamespacedName, a) b := &B{} reconciler.Get(ctx, NamespacedName{Name: a.Spec.bName, Namespace: a.Namespace}, b) if !a.DeletionTimestamp.IsZero() { clearOwnerReference(b) reconciler.Update(ctx, b) return ... } setOwnerReference(b.OwnerReference, a) reconciler.Update(ctx, b) // Other reconciliation logic } But Kubernetes API server acts more quickly: it detects B is orphan and deletes B, i.e., GC. I also tried to add finalizer on B. But when my controller removes both ownerReference and finalizers from B, it's gone, since it's already been marked as deleted. Any other config on B can be utilized to prevent such cascading deletion? Or any best practice to handle such dependency?
This is possible with a controller called the Kubernetes Volume Autoscaler. It requires that you have defined your StorageClasses with the allowVolumeExpansion flag enabled and that you have Prometheus installed. It also requires that your disk controller supports hot-expansion and that your cloud provider supports it. See the README in that repo for more information. I can confirm this tool works perfectly on both AWS and Google Cloud's Kubernetes clusters. Disclaimer: I authored this, but it is open source and now has a handful of contributors.
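As a sketch of the StorageClass prerequisite mentioned above — the provisioner shown is the GKE PD CSI driver purely as an example, substitute whatever your cluster uses:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd
provisioner: pd.csi.storage.gke.io      # example provisioner, not a requirement
parameters:
  type: pd-ssd
allowVolumeExpansion: true               # the flag the autoscaler depends on
volumeBindingMode: WaitForFirstConsumer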
###Instruction: Is there a way to define a Persistent Volume in kubernetes which can grow automatically if it runs out of space? If not, are there any extensions/workarounds, other than over-provisioning storage to protect our applications? ### Response: This is possible with a controller called the Kubernetes Volume Autoscaler. It requires that you have defined your StorageClasses with the allowVolumeExpansion flag enabled and that you have Prometheus installed. It also requires that your disk controller supports hot-expansion and that your cloud provider supports it. See the README in that repo for more information. I can confirm this tool works perfectly on both AWS and Google Cloud's Kubernetes clusters. Disclaimer: I authored this, but it is open source and now has a handful of contributors.
Is there a way to define a Persistent Volume in kubernetes which can grow automatically if it runs out of space? If not, are there any extensions/workarounds, other than over-provisioning storage to protect our applications?
First of all, execute the following command: kubectl get apiservices And check the availability (status) of the kube-system/metrics-server service. In case the availability is True: Add hostNetwork: true to the spec of your metrics-server deployment by executing the following command: kubectl edit deployment -n kube-system metrics-server It should look like the following: ... spec: hostNetwork: true ... Setting hostNetwork to true means the Pod uses the network of the host it's running on. In case the availability is False (MissingEndpoints): Download metrics-server: wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml Remove the (legacy) metrics server: kubectl delete -f components.yaml Edit the downloaded file and add - --kubelet-insecure-tls to the args list: ... labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls # add this line ... Create the service once again: kubectl apply -f components.yaml
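Whichever branch applies, a couple of standard read-only checks confirm the fix took effect once the pod has restarted:

# The APIService should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io

# The metrics API should answer instead of returning ServiceUnavailable
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl top nodes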
###Instruction: I am new to kubernetes and was trying to apply horizontal pod autoscaling to my existing application. and after following other stackoverflow details - got to know that I need to install metric-server - and I was able to - but some how it's not working and unable to handle request. Further I followed few more things but unable to resolve the issue - I will really appreciate any help here. Please let me know for any further details you need for helping me :) Thanks in advance. Steps followed: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml serviceaccount/metrics-server created clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created clusterrole.rbac.authorization.k8s.io/system:metrics-server created rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created service/metrics-server created deployment.apps/metrics-server created apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created kubectl get deploy,svc -n kube-system | egrep metrics-server deployment.apps/metrics-server 1/1 1 1 2m6s service/metrics-server ClusterIP 10.32.0.32 <none> 443/TCP 2m6s kubectl get pods -n kube-system | grep metrics-server metrics-server-64cf6869bd-6gx88 1/1 Running 0 2m39s vi ana_hpa.yaml apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: ana-hpa spec: scaleTargetRef: apiVersion: apps/v1 kind: StatefulSet name: common-services-auth minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 80 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 160 k apply -f ana_hpa.yaml horizontalpodautoscaler.autoscaling/ana-hpa created k get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE ana-hpa StatefulSet/common-services-auth <unknown>/160%, <unknown>/80% 1 10 0 4s k describe hpa ana-hpa Name: ana-hpa Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Tue, 12 Apr 2022 17:01:25 +0530 Reference: StatefulSet/common-services-auth Metrics: ( current / target ) resource memory on pods (as a percentage of request): <unknown> / 160% resource cpu on pods (as a percentage of request): <unknown> / 80% Min replicas: 1 Max replicas: 10 StatefulSet pods: 3 current / 0 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetResourceMetric 38s (x8 over 2m23s) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Warning FailedComputeMetricsReplicas 38s (x8 over 2m23s) horizontal-pod-autoscaler invalid metrics (2 invalid out of 2), first error is: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics 
from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Warning FailedGetResourceMetric 23s (x9 over 2m23s) horizontal-pod-autoscaler failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) kubectl get --raw /apis/metrics.k8s.io/v1beta1 Error from server (ServiceUnavailable): the server is currently unable to handle the request kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" Error from server (ServiceUnavailable): the server is currently unable to handle the request kubectl edit deployments.apps -n kube-system metrics-server Add hostNetwork: true deployment.apps/metrics-server edited kubectl get pods -n kube-system | grep metrics-server metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m k describe pod metrics-server-5dc6dbdb8-42hw9 -n kube-system Name: metrics-server-5dc6dbdb8-42hw9 Namespace: kube-system Priority: 2000000000 Priority Class Name: system-cluster-critical Node: pusntyn196.apac.avaya.com/10.133.85.196 Start Time: Tue, 12 Apr 2022 17:08:25 +0530 Labels: k8s-app=metrics-server pod-template-hash=5dc6dbdb8 Annotations: <none> Status: Running IP: 10.133.85.196 IPs: IP: 10.133.85.196 Controlled By: ReplicaSet/metrics-server-5dc6dbdb8 Containers: metrics-server: Container ID: containerd://024afb1998dce4c0bd5f4e58f996068ea37982bd501b54fda2ef8d5c1098b4f4 Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1 Image ID: k8s.gcr.io/metrics-server/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00 Port: 4443/TCP Host Port: 4443/TCP Args: --cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s State: Running Started: Tue, 12 Apr 2022 17:08:26 +0530 Ready: True Restart Count: 0 Requests: cpu: 100m memory: 200Mi Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3 Environment: <none> Mounts: /tmp from tmp-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6p4g (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kube-api-access-g6p4g: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 2s node.kubernetes.io/unreachable:NoExecute op=Exists for 2s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m31s default-scheduler Successfully assigned kube-system/metrics-server-5dc6dbdb8-42hw9 to pusntyn196.apac.avaya.com Normal Pulled 2m32s kubelet Container image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1" already present on machine Normal Created 2m31s kubelet Created container metrics-server Normal Started 2m31s kubelet Started container metrics-server kubectl get --raw /apis/metrics.k8s.io/v1beta1 Error from server (ServiceUnavailable): the server is currently unable to handle the request kubectl get pods -n kube-system | grep metrics-server 
metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m kubectl logs -f metrics-server-5dc6dbdb8-42hw9 -n kube-system E0412 11:43:54.684784 1 configmap_cafile_content.go:242] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" E0412 11:44:27.001010 1 configmap_cafile_content.go:242] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" k logs -f metrics-server-5dc6dbdb8-42hw9 -n kube-system I0412 11:38:26.447305 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key) I0412 11:38:26.899459 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0412 11:38:26.899477 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController I0412 11:38:26.899518 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I0412 11:38:26.899545 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0412 11:38:26.899546 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0412 11:38:26.899567 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0412 11:38:26.900480 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key" I0412 11:38:26.900811 1 secure_serving.go:266] Serving securely on [::]:4443 I0412 11:38:26.900854 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0412 11:38:26.900965 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed I0412 11:38:26.999960 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0412 11:38:26.999989 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController I0412 11:38:26.999970 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file E0412 11:38:27.000087 1 configmap_cafile_content.go:242] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" E0412 11:38:27.000118 1 configmap_cafile_content.go:242] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" kubectl top nodes Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io) kubectl top pods Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io) Edit metrics server deployment yaml Add - --kubelet-insecure-tls k apply -f metric-server-deployment.yaml serviceaccount/metrics-server unchanged clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged 
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged service/metrics-server unchanged deployment.apps/metrics-server configured apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged kubectl get pods -n kube-system | grep metrics-server metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m kubectl top pods Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io) Also tried by adding below to metrics server deployment command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP ### Response: First of all, execute the following command: kubectl get apiservices And checkout the availablity (status) of kube-system/metrics-server service. In case the availability is True: Add hostNetwork: true to the spec of your metrics-server deployment by executing the following command: kubectl edit deployment -n kube-system metrics-server It should look like the following: ... spec: hostNetwork: true ... Setting hostNetwork to true means that Pod will have access to the host where it's running. In case the availability is False (MissingEndpoints): Download metrics-server: wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml Remove (legacy) metrics server: kubectl delete -f components.yaml Edit downloaded file and add - --kubelet-insecure-tls to args list: ... labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls # add this line ... Create service once again: kubectl apply -f components.yaml
I am new to kubernetes and was trying to apply horizontal pod autoscaling to my existing application. and after following other stackoverflow details - got to know that I need to install metric-server - and I was able to - but some how it's not working and unable to handle request. Further I followed few more things but unable to resolve the issue - I will really appreciate any help here. Please let me know for any further details you need for helping me :) Thanks in advance. Steps followed: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml serviceaccount/metrics-server created clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created clusterrole.rbac.authorization.k8s.io/system:metrics-server created rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created service/metrics-server created deployment.apps/metrics-server created apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created kubectl get deploy,svc -n kube-system | egrep metrics-server deployment.apps/metrics-server 1/1 1 1 2m6s service/metrics-server ClusterIP 10.32.0.32 <none> 443/TCP 2m6s kubectl get pods -n kube-system | grep metrics-server metrics-server-64cf6869bd-6gx88 1/1 Running 0 2m39s vi ana_hpa.yaml apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: ana-hpa spec: scaleTargetRef: apiVersion: apps/v1 kind: StatefulSet name: common-services-auth minReplicas: 1 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 80 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 160 k apply -f ana_hpa.yaml horizontalpodautoscaler.autoscaling/ana-hpa created k get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE ana-hpa StatefulSet/common-services-auth <unknown>/160%, <unknown>/80% 1 10 0 4s k describe hpa ana-hpa Name: ana-hpa Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Tue, 12 Apr 2022 17:01:25 +0530 Reference: StatefulSet/common-services-auth Metrics: ( current / target ) resource memory on pods (as a percentage of request): <unknown> / 160% resource cpu on pods (as a percentage of request): <unknown> / 80% Min replicas: 1 Max replicas: 10 StatefulSet pods: 3 current / 0 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetResourceMetric 38s (x8 over 2m23s) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Warning FailedComputeMetricsReplicas 38s (x8 over 2m23s) horizontal-pod-autoscaler invalid metrics (2 invalid out of 2), first error is: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource 
metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) Warning FailedGetResourceMetric 23s (x9 over 2m23s) horizontal-pod-autoscaler failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) kubectl get --raw /apis/metrics.k8s.io/v1beta1 Error from server (ServiceUnavailable): the server is currently unable to handle the request kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" Error from server (ServiceUnavailable): the server is currently unable to handle the request kubectl edit deployments.apps -n kube-system metrics-server Add hostNetwork: true deployment.apps/metrics-server edited kubectl get pods -n kube-system | grep metrics-server metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m k describe pod metrics-server-5dc6dbdb8-42hw9 -n kube-system Name: metrics-server-5dc6dbdb8-42hw9 Namespace: kube-system Priority: 2000000000 Priority Class Name: system-cluster-critical Node: pusntyn196.apac.avaya.com/10.133.85.196 Start Time: Tue, 12 Apr 2022 17:08:25 +0530 Labels: k8s-app=metrics-server pod-template-hash=5dc6dbdb8 Annotations: <none> Status: Running IP: 10.133.85.196 IPs: IP: 10.133.85.196 Controlled By: ReplicaSet/metrics-server-5dc6dbdb8 Containers: metrics-server: Container ID: containerd://024afb1998dce4c0bd5f4e58f996068ea37982bd501b54fda2ef8d5c1098b4f4 Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1 Image ID: k8s.gcr.io/metrics-server/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00 Port: 4443/TCP Host Port: 4443/TCP Args: --cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s State: Running Started: Tue, 12 Apr 2022 17:08:26 +0530 Ready: True Restart Count: 0 Requests: cpu: 100m memory: 200Mi Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3 Environment: <none> Mounts: /tmp from tmp-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6p4g (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kube-api-access-g6p4g: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 2s node.kubernetes.io/unreachable:NoExecute op=Exists for 2s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m31s default-scheduler Successfully assigned kube-system/metrics-server-5dc6dbdb8-42hw9 to pusntyn196.apac.avaya.com Normal Pulled 2m32s kubelet Container image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1" already present on machine Normal Created 2m31s kubelet Created container metrics-server Normal Started 2m31s kubelet Started container metrics-server kubectl get --raw /apis/metrics.k8s.io/v1beta1 Error from server (ServiceUnavailable): the server is currently unable to handle the request kubectl get pods -n kube-system | grep metrics-server metrics-server-5dc6dbdb8-42hw9 1/1 
Running 0 10m kubectl logs -f metrics-server-5dc6dbdb8-42hw9 -n kube-system E0412 11:43:54.684784 1 configmap_cafile_content.go:242] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" E0412 11:44:27.001010 1 configmap_cafile_content.go:242] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" k logs -f metrics-server-5dc6dbdb8-42hw9 -n kube-system I0412 11:38:26.447305 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key) I0412 11:38:26.899459 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0412 11:38:26.899477 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController I0412 11:38:26.899518 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I0412 11:38:26.899545 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0412 11:38:26.899546 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0412 11:38:26.899567 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0412 11:38:26.900480 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key" I0412 11:38:26.900811 1 secure_serving.go:266] Serving securely on [::]:4443 I0412 11:38:26.900854 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0412 11:38:26.900965 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed I0412 11:38:26.999960 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0412 11:38:26.999989 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController I0412 11:38:26.999970 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file E0412 11:38:27.000087 1 configmap_cafile_content.go:242] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" E0412 11:38:27.000118 1 configmap_cafile_content.go:242] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" kubectl top nodes Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io) kubectl top pods Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io) Edit metrics server deployment yaml Add - --kubelet-insecure-tls k apply -f metric-server-deployment.yaml serviceaccount/metrics-server unchanged clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged 
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged service/metrics-server unchanged deployment.apps/metrics-server configured apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged kubectl get pods -n kube-system | grep metrics-server metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m kubectl top pods Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io) Also tried by adding below to metrics server deployment command: - /metrics-server - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP
Ah, came across my own question :D So, Falco can be used as a HIDS (host-based intrusion detection system), which will alert on any unusual activity based on rules defined in the Falco configuration. It can be deployed as a (privileged) DaemonSet or installed directly on the nodes.
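To make that concrete for the scenario in the question (someone kubectl exec-ing into a container and running commands), here is an illustrative Falco rule — field names follow Falco's syscall field syntax, but treat it as a sketch rather than a drop-in rule, since recent Falco already ships a similar built-in "Terminal shell in container" rule:

- rule: Interactive shell started in a container
  desc: A shell was spawned inside a running container (e.g. via kubectl exec)
  condition: >
    evt.type = execve and container.id != host and
    proc.name in (bash, sh, zsh) and proc.tty != 0
  output: >
    Shell in container (user=%user.name container=%container.name
    image=%container.image.repository command=%proc.cmdline)
  priority: NOTICE
  tags: [container, shell]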
###Instruction: Hope you can help. I am trying to enable audit logging in a docker container so if anybody kubectl exec to a docker container and runs any commands, then those commands get logged and we can view them in kubectl logs and capture with fluentd for example. An option with adding the following line to /etc/profile of a container works for root but not for a non-root user as /proc/1/fd/1 is owned and writable by only root user and changing ownership or permissions, is not an option, unfortunately. trap 'echo "$USER":"$BASH_COMMAND" >> /proc/1/fd/1' DEBUG So far have tried the following: A working option would be to run the container as a non-root, but unfortunately this is not an option Option with just changing permissions/ownership doesn't change permissions/ownership Also adding mesg y to /etc/profile to allow access to root's didn't work either as when doing su - non-root the permission gets denied (mesg: cannot open /dev/pts/2: Permission denied) - cannot change the permission An option with adding a special file and trying to redirect the logs from there didn't work either, still the permission gets denied. For example: mkfifo -m 666 /tmp/logpipe #create the special file trap 'echo "$USER":"$BASH_COMMAND" <> /tmp/logpipe > /proc/1/fd/1' DEBUG # in /etc/profile Changing to trap 'echo "$USER":"$BASH_COMMAND"' DEBUG won't work either as the logs need to go to /proc/1/fd/1 in case of docker How would you enable command line audit logging in docker container or workaround the /proc/1/fd/1 permission issue for non-root in a container run as root user? Any ideas highly appreciated. ### Response: Ah, came across my own question :D So, Falco can be used as a HIDS (Host-based intrusion detection system), which will alert on any unusual activity based on rules defined in Flaco configuration. This can be deployed as a Deamonset (privileged) or directly on nodes.
Hope you can help. I am trying to enable audit logging in a docker container so if anybody kubectl exec to a docker container and runs any commands, then those commands get logged and we can view them in kubectl logs and capture with fluentd for example. An option with adding the following line to /etc/profile of a container works for root but not for a non-root user as /proc/1/fd/1 is owned and writable by only root user and changing ownership or permissions, is not an option, unfortunately. trap 'echo "$USER":"$BASH_COMMAND" >> /proc/1/fd/1' DEBUG So far have tried the following: A working option would be to run the container as a non-root, but unfortunately this is not an option Option with just changing permissions/ownership doesn't change permissions/ownership Also adding mesg y to /etc/profile to allow access to root's didn't work either as when doing su - non-root the permission gets denied (mesg: cannot open /dev/pts/2: Permission denied) - cannot change the permission An option with adding a special file and trying to redirect the logs from there didn't work either, still the permission gets denied. For example: mkfifo -m 666 /tmp/logpipe #create the special file trap 'echo "$USER":"$BASH_COMMAND" <> /tmp/logpipe > /proc/1/fd/1' DEBUG # in /etc/profile Changing to trap 'echo "$USER":"$BASH_COMMAND"' DEBUG won't work either as the logs need to go to /proc/1/fd/1 in case of docker How would you enable command line audit logging in docker container or workaround the /proc/1/fd/1 permission issue for non-root in a container run as root user? Any ideas highly appreciated.
Per my comment, how you are building your go executable will have an effect on how it behaves within a k8s environment. Are you using a scratch image or a CGO_ENABLED=1 image? From the net package docs there's a caveat on DNS behavior: By default the pure Go resolver is used, because a blocked DNS request consumes only a goroutine, while a blocked C call consumes an operating system thread. When cgo is available, the cgo-based resolver is used instead under a variety of conditions: ... when /etc/resolv.conf or /etc/nsswitch.conf specify the use of features that the Go resolver does not implement, and when the name being looked up ends in .local or is an mDNS name. So I would suggest - to maximize your success rate for both external & internal DNS requests - building your go executable for k8s like so: CGO_ENABLED=1 go build -tags netgo
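If rebuilding is inconvenient while you are still diagnosing, the resolver choice can also be flipped at runtime with the standard GODEBUG knob (./myservice is a placeholder for your binary):

# Force the pure-Go resolver
GODEBUG=netdns=go ./myservice

# Force the cgo resolver (only effective in a CGO_ENABLED=1 build)
GODEBUG=netdns=cgo ./myservice

# Append +1 to log which resolver each lookup actually used
GODEBUG=netdns=go+1 ./myservice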
###Instruction: I'm trying to use internal service DNS for service-to-service HTTP communication. If I try to curl a pod from another deployment pod it is working but unable to use it in golang net/http service 2023/01/27 15:48:37 oauth2.go:90: oauth2 url http://hydra-admin.microservices.svc.cluster.local:4445/oauth2/auth/requests/login/accept 2023/01/27 15:48:37 oauth2.go:101: Unable to make http request Put "http://localhost:4445/admin/oauth2/auth/requests/login/accept?login_challenge=b569006c8b834a298cf1cd72e2424953": dial tcp [::1]:4445: connect: connection refused hydra-admin.microservices.svc.cluster.local is resolved to localhost when the API call is made but curl works as you see below / # curl -X PUT http://hydra-admin:4445/admin/oauth2/auth/requests/login/accept?login_challenge=6f51146e49c54b739de8a37b25a72349 {"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed. Unable to decode body because: EOF"} What am I missing here? ### Response: Per my comment, depending on how you are building your go executable will have an effect on how it behaves within a k8s environment. Are you using a scratch image or a CGO_ENABLED=1 image? From the dns package docs there's a caveat on DNS behavior By default the pure Go resolver is used, because a blocked DNS request consumes only a goroutine, while a blocked C call consumes an operating system thread. When cgo is available, the cgo-based resolver is used instead under a variety of conditions: ... when /etc/resolv.conf or /etc/nsswitch.conf specify the use of features that the Go resolver does not implement, and when the name being looked up ends in .local or is an mDNS name. So I would suggest - to maximized your success rate for both external & internal DNS requests - building your go executable for k8s like so: CGO_ENABLED=1 go build -tags netgo
I'm trying to use internal service DNS for service-to-service HTTP communication. If I try to curl a pod from another deployment pod it is working but unable to use it in golang net/http service 2023/01/27 15:48:37 oauth2.go:90: oauth2 url http://hydra-admin.microservices.svc.cluster.local:4445/oauth2/auth/requests/login/accept 2023/01/27 15:48:37 oauth2.go:101: Unable to make http request Put "http://localhost:4445/admin/oauth2/auth/requests/login/accept?login_challenge=b569006c8b834a298cf1cd72e2424953": dial tcp [::1]:4445: connect: connection refused hydra-admin.microservices.svc.cluster.local is resolved to localhost when the API call is made but curl works as you see below / # curl -X PUT http://hydra-admin:4445/admin/oauth2/auth/requests/login/accept?login_challenge=6f51146e49c54b739de8a37b25a72349 {"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed. Unable to decode body because: EOF"} What am I missing here?
Nextflow processes are run independently and isolated from each other from inside the working directory. For your script to be able to find the required input files, these must be localized inside the process working directory. This should be done by defining an input block and declaring the files using the path qualifier, for example: params.function_rds = './function.rds' params.input_rds = './input.rds' process my_script { input: path my_function_rds path my_input_rds output: path "output.rds" """ #!/usr/bin/env Rscript FUN <- readRDS("${my_function_rds}"); input = readRDS("${my_input_rds}"); output = FUN( singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]] ); saveRDS(output, "output.rds") """ } workflow { function_rds = file( params.function_rds ) input_rds = file( params.input_rds ) my_script( function_rds, input_rds ) my_script.out.view() } In the same way, the script itself would need to be localized inside the process working directory. To avoid specifying an absolute path to your R script (which would not make your workflow portable at all), it's possible to simply embed your code, making sure to specify the Rscript shebang. This works because process scripts are not limited to Bash1. Another way, would be to make your Rscript executable and move it into a directory called bin in the the root directory of your project repository (i.e. the same directory as your 'main.nf' Nextflow script). Nextflow automatically adds this folder to the $PATH environment variable and your script would become automatically accessible to each of your pipeline processes. For this to work, you'd need some way to pass in the input files as command line arguments. For example: params.function_rds = './function.rds' params.input_rds = './input.rds' process my_script { input: path my_function_rds path my_input_rds output: path "output.rds" """ script.R "${my_function_rds}" "${my_input_rds}" output.rds """ } workflow { function_rds = file( params.function_rds ) input_rds = file( params.input_rds ) my_script( function_rds, input_rds ) my_script.out.view() } And your R script might look like: #!/usr/bin/env Rscript args <- commandArgs(trailingOnly = TRUE) FUN <- readRDS(args[1]); input = readRDS(args[2]); output = FUN( singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]] ); saveRDS(output, args[3])
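A usage sketch for the second variant (executable script in bin/) — file names are the same placeholders used above:

# Make the helper script executable and visible to Nextflow
chmod +x bin/script.R

# Run the pipeline, overriding the default params if needed
nextflow run main.nf \
    --function_rds ./function.rds \
    --input_rds ./input.rds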
###Instruction: I am trying to run a nextflow pipeline but the output file is not created. The main.nf file looks like this: #!/usr/bin/env nextflow nextflow.enable.dsl=2 process my_script { """ Rscript script.R """ } workflow { my_script } In my nextflow.config I have: process { executor = 'k8s' container = 'rocker/r-ver:4.1.3' } The script.R looks like this: FUN <- readRDS("function.rds"); input = readRDS("input.rds"); output = FUN( singleCell_data_input = input[[1]], savePath = input[[2]], tmpDirGC = input[[3]] ); saveRDS(output, "output.rds") After running nextflow run main.nf the output.rds is not created ### Response: Nextflow processes are run independently and isolated from each other from inside the working directory. For your script to be able to find the required input files, these must be localized inside the process working directory. This should be done by defining an input block and declaring the files using the path qualifier, for example: params.function_rds = './function.rds' params.input_rds = './input.rds' process my_script { input: path my_function_rds path my_input_rds output: path "output.rds" """ #!/usr/bin/env Rscript FUN <- readRDS("${my_function_rds}"); input = readRDS("${my_input_rds}"); output = FUN( singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]] ); saveRDS(output, "output.rds") """ } workflow { function_rds = file( params.function_rds ) input_rds = file( params.input_rds ) my_script( function_rds, input_rds ) my_script.out.view() } In the same way, the script itself would need to be localized inside the process working directory. To avoid specifying an absolute path to your R script (which would not make your workflow portable at all), it's possible to simply embed your code, making sure to specify the Rscript shebang. This works because process scripts are not limited to Bash1. Another way, would be to make your Rscript executable and move it into a directory called bin in the the root directory of your project repository (i.e. the same directory as your 'main.nf' Nextflow script). Nextflow automatically adds this folder to the $PATH environment variable and your script would become automatically accessible to each of your pipeline processes. For this to work, you'd need some way to pass in the input files as command line arguments. For example: params.function_rds = './function.rds' params.input_rds = './input.rds' process my_script { input: path my_function_rds path my_input_rds output: path "output.rds" """ script.R "${my_function_rds}" "${my_input_rds}" output.rds """ } workflow { function_rds = file( params.function_rds ) input_rds = file( params.input_rds ) my_script( function_rds, input_rds ) my_script.out.view() } And your R script might look like: #!/usr/bin/env Rscript args <- commandArgs(trailingOnly = TRUE) FUN <- readRDS(args[1]); input = readRDS(args[2]); output = FUN( singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]] ); saveRDS(output, args[3])
I am trying to run a nextflow pipeline but the output file is not created. The main.nf file looks like this: #!/usr/bin/env nextflow nextflow.enable.dsl=2 process my_script { """ Rscript script.R """ } workflow { my_script } In my nextflow.config I have: process { executor = 'k8s' container = 'rocker/r-ver:4.1.3' } The script.R looks like this: FUN <- readRDS("function.rds"); input = readRDS("input.rds"); output = FUN( singleCell_data_input = input[[1]], savePath = input[[2]], tmpDirGC = input[[3]] ); saveRDS(output, "output.rds") After running nextflow run main.nf the output.rds is not created
Hi Chai, I have gone through the link and tried installing the pinniped CLI, and it threw me the same error. After troubleshooting I found that the pinniped CLI's executable does not get added to the PATH; you can still run pinniped commands by executing the exe file directly, but then you have to move into the directory where your pinniped-cli.exe file is present every time. In order to resolve this, add the folder containing the pinniped CLI's exe to your PATH and the command will be recognized from anywhere; follow this document for more information.
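For example, assuming the binary was downloaded to a folder such as C:\pinniped (that folder name is only an assumption, adjust it to wherever your exe actually lives), renaming it and appending that folder to the user PATH from a Command Prompt might look like this:

ren C:\pinniped\pinniped-cli-windows-amd64.exe pinniped.exe
setx PATH "%PATH%;C:\pinniped"

Open a new terminal afterwards so the updated PATH is picked up. Keep in mind that setx can truncate very long PATH values, so editing PATH through the Environment Variables dialog is the safer option if yours is already long.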
###Instruction: The pinniped CLI is not working on Windows. pinniped-cli-windows-amd64.exe is downloaded, but when I type pinniped it's not recognized. C:\Users\hello>pinniped pinniped is not recognized as an internal or external command, operable program or batch file. It seems Windows is not recognizing this .exe file as published by a valid publisher. pinniped should show the pinniped CLI options and be recognized as a command. I created a folder called pinniped, copied the .exe file into it and tried again ...that did work. ### Response: Hi Chai, I have gone through the link and tried installing the pinniped CLI, and it threw me the same error. After troubleshooting I found that the pinniped CLI's executable does not get added to the PATH; you can still run pinniped commands by executing the exe file directly, but then you have to move into the directory where your pinniped-cli.exe file is present every time. In order to resolve this, add the folder containing the pinniped CLI's exe to your PATH and the command will be recognized from anywhere; follow this document for more information.
The pinniped CLI is not working on Windows. pinniped-cli-windows-amd64.exe is downloaded, but when I type pinniped it's not recognized. C:\Users\hello>pinniped pinniped is not recognized as an internal or external command, operable program or batch file. It seems Windows is not recognizing this .exe file as published by a valid publisher. pinniped should show the pinniped CLI options and be recognized as a command. I created a folder called pinniped, copied the .exe file into it and tried again ...that did work.
Alas, I must admit I just discovered that the error does not originate from the Kubernetes / ingress-nginx part of the setup but from the testing system, where services collide at deploy time because of bad separation in the CI/CD job. Sorry for your time! So in fact the ingress-nginx log line that stunned me: IP - - [time] "GET API_ROUTE HTTP/1.1" 503 592 "master.frontend.url" UA 449 0.000 [development-branch-api] [] - - - - ID shows that a service I deploy is overwritten by another environment's deployment with different variables, which makes it start sending requests to another namespace. The ingress routing is correct.
###Instruction: I have this kind of routing in each namespace : apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: janitor/expires: ${EXPIRY_DATE} nginx.ingress.kubernetes.io/ssl-redirect: "false" # Set to true once SSL is set up. spec: ingressClassName: nginx rules: - host: api.${KUBE_DEPLOY_HOST} http: paths: - pathType: Prefix path: / backend: service: name: api-js port: number: 111 Served by ingress-nginx (!= nginx-ingress) 1.2.1 (same issue with 1.5.1) with Kube 1.22 (or 1.23), one deployment in the ingress-nginx namespace, two replicas in the deployment. When I check my logs I see that sometimes, I think especially when I deploy new ingress rules in new namespaces (during and after the ingress-nginx reload event) I get 502 / 503 / 404 HTTP error responses from the ingress-nginx controller. When I look into the detailed log, I see : IP - - [time] "GET API_ROUTE HTTP/1.1" 503 592 "master.frontend.url" UA 449 0.000 [development-branch-api] [] - - - - ID Which makes me think the request goes wrong because the master frontend is being served a development API response by the ingress-nginx controller, sometimes when the new api service is not even ready. When I check the ingress from GKE's view it looks like it is serving 3 pods, corresponding to 3 namespaces that should not overlap / mix requests, instead of the one api pod in the namespace corresponding to the ingress : So the error is seen here, all the ingresses for each 3 namespsace serve 3 pods instead of one pod, which means it is all mixed up, right. I am sure there is one pod per deployment in my namespaces : So if I understand correctly, it seems that the situation is ingress A, ingress B and ingress C, all three of them, serve api A AND api B AND api C instead of serving just the one api pod from their namespace (A, B, C). But what I don't know is how is it possible that the ingress matches pods from other namespaces, when I am not using externalname, it is the opposite of what an ingress does by default. I believe the issue is at the ingress level and not at the service level, as when I look into each service, I see that it just serve the one pod corresponding to its namespace and not 3. The controller is the default ingress-nginx installation edited to use 2 replicas instead of one. Example service and deployment (issue happens for all of them) : apiVersion: v1 kind: Service metadata: name: api-js labels: component: api-js role: api-js annotations: janitor/expires: ${EXPIRY_DATE} spec: type: ClusterIP selector: role: perfmaker-api-js ports: - name: httpapi port: 111 targetPort: 111 --- apiVersion: apps/v1 kind: Deployment metadata: name: api-js annotations: janitor/expires: ${EXPIRY_DATE} spec: replicas: 1 strategy: type: Recreate selector: matchLabels: app: api-js template: metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" labels: app: api-js role: api-js spec: containers: - name: api-js image: registry/api When I change the api name / selectors on one branch, it "untangles" the situation and each branch / namespace's ingress only serves the pod it should serve. But the errors happen during and after 'reload' event on the ingress-controller, not all the time, an event which is fired when ingress resources are added / removed / updated. In my case it is when there is a new branch in the CI/CD which makes a new namespace and deployment + ingress, or when a finished pipeline triggers a namespace deletion. 
### Response: Alas I must admit I just discovered the error does not originate from the kubernetes / ingress-nginx part of the setup but from the testing system, which includes a collision between services at deploy time, because of bad separation in the CI / CD job. Sorry for your time ! So in fact the logs from ingress nginx that stunned me : IP - - [time] "GET API_ROUTE HTTP/1.1" 503 592 "master.frontend.url" UA 449 0.000 [development-branch-api] [] - - - - ID Shows that a service I deploy is overwritten by another environment deployment with different variables, which makes it start to make request to another namespace. The ingress routing is correct.
I have this kind of routing in each namespace : apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: janitor/expires: ${EXPIRY_DATE} nginx.ingress.kubernetes.io/ssl-redirect: "false" # Set to true once SSL is set up. spec: ingressClassName: nginx rules: - host: api.${KUBE_DEPLOY_HOST} http: paths: - pathType: Prefix path: / backend: service: name: api-js port: number: 111 Served by ingress-nginx (!= nginx-ingress) 1.2.1 (same issue with 1.5.1) with Kube 1.22 (or 1.23), one deployment in the ingress-nginx namespace, two replicas in the deployment. When I check my logs I see that sometimes, I think especially when I deploy new ingress rules in new namespaces (during and after the ingress-nginx reload event) I get 502 / 503 / 404 HTTP error responses from the ingress-nginx controller. When I look into the detailed log, I see : IP - - [time] "GET API_ROUTE HTTP/1.1" 503 592 "master.frontend.url" UA 449 0.000 [development-branch-api] [] - - - - ID Which makes me think the request goes wrong because the master frontend is being served a development API response by the ingress-nginx controller, sometimes when the new api service is not even ready. When I check the ingress from GKE's view it looks like it is serving 3 pods, corresponding to 3 namespaces that should not overlap / mix requests, instead of the one api pod in the namespace corresponding to the ingress : So the error is seen here, all the ingresses for each 3 namespsace serve 3 pods instead of one pod, which means it is all mixed up, right. I am sure there is one pod per deployment in my namespaces : So if I understand correctly, it seems that the situation is ingress A, ingress B and ingress C, all three of them, serve api A AND api B AND api C instead of serving just the one api pod from their namespace (A, B, C). But what I don't know is how is it possible that the ingress matches pods from other namespaces, when I am not using externalname, it is the opposite of what an ingress does by default. I believe the issue is at the ingress level and not at the service level, as when I look into each service, I see that it just serve the one pod corresponding to its namespace and not 3. The controller is the default ingress-nginx installation edited to use 2 replicas instead of one. Example service and deployment (issue happens for all of them) : apiVersion: v1 kind: Service metadata: name: api-js labels: component: api-js role: api-js annotations: janitor/expires: ${EXPIRY_DATE} spec: type: ClusterIP selector: role: perfmaker-api-js ports: - name: httpapi port: 111 targetPort: 111 --- apiVersion: apps/v1 kind: Deployment metadata: name: api-js annotations: janitor/expires: ${EXPIRY_DATE} spec: replicas: 1 strategy: type: Recreate selector: matchLabels: app: api-js template: metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" labels: app: api-js role: api-js spec: containers: - name: api-js image: registry/api When I change the api name / selectors on one branch, it "untangles" the situation and each branch / namespace's ingress only serves the pod it should serve. But the errors happen during and after 'reload' event on the ingress-controller, not all the time, an event which is fired when ingress resources are added / removed / updated. In my case it is when there is a new branch in the CI/CD which makes a new namespace and deployment + ingress, or when a finished pipeline triggers a namespace deletion.
I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is here. Reading that implies several changes: Include the inflator directly in your kustomization.yaml in the helmCharts section. Use name instead of chartName. Set chartHome in the helmGlobals section rather than per-chart. That gets us something like this in our kustomization.yaml: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization helmGlobals: chartHome: ../../../helm-k8s/ helmCharts: - name: helm-k8s releaseName: project-monitoring-chart values: ../../values.yaml releaseNamespace: project-monitoring-ns I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that contains your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from outside of the chart templates directory. You may need to add --load-restrictor LoadRestrictionsNone when calling kustomize build for this to work; by default, the chartHome location must be contained by the same directory that contains your kustomization.yaml. Update To make sure things are clear, this is what I'm recommending: Remove the kustomize bits from your helm chart, so that it looks like this. Publish your helm charts somewhere. I've set up github pages for that repository and published the charts at http://oddbit.com/open-electrons-deployments/. Use kustomize to deploy the chart with transformations. Here we add a -prod suffix to all the resources: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization helmCharts: - name: open-electrons-monitoring repo: http://oddbit.com/open-electrons-deployments/ nameSuffix: -prod
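For reference, if you keep the chart on disk outside the kustomization directory (the first variant above), a build invocation might look like the following; the prod/ overlay path is hypothetical, and recent kustomize releases also need --enable-helm for the helmCharts field to be honoured:

kustomize build --enable-helm --load-restrictor LoadRestrictionsNone prod/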
###Instruction: I have the following chartInflator.yml file: apiVersion: builtin kind: ChartInflator metadata: name: project-helm-inflator chartName: helm-k8s chartHome: ../../../helm-k8s/ releaseName: project-monitoring-chart values: ../../values.yaml releaseNamespace: project-monitoring-ns When I ran it using this, I got the error message below: $ kustomize build . Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty Here is my project structure: project - helm-k8s - values.yml - Chart.yml - templates - base - project-namespace.yml - grafana - grafana-service.yml - grafana-deployment.yml - grafana-datasource-config.yml - prometheus - prometheus-service.yml - prometheus-deployment.yml - prometheus-config.yml - prometheus-roles.yml - kustomization.yml - prod - kustomization.yml - test - kustomization.yml ### Response: I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is here. Reading that implies several changes: Include the inflator directly in your kustomization.yaml in the helmCharts section. Use name instead of chartName. Set chartHome in the helmGlobals section rather than per-chart. That gets us something like this in our kustomization.yaml: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization helmGlobals: chartHome: ../../../helm-k8s/ helmCharts: - name: helm-k8s releaseName: project-monitoring-chart values: ../../values.yaml releaseNamespace: project-monitoring-ns I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that contains your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from outside of the chart templates directory. You may need to add --load-restrictor LoadRestrictionsNone when calling kustomize build for this to work; by default, the chartHome location must be contained by the same directory that contains your kustomization.yaml. Update To make sure things are clear, this is what I'm recommending: Remove the kustomize bits from your helm chart, so that it looks like this. Publish your helm charts somewhere. I've set up github pages for that repository and published the charts at http://oddbit.com/open-electrons-deployments/. Use kustomize to deploy the chart with transformations. Here we add a -prod suffix to all the resources: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization helmCharts: - name: open-electrons-monitoring repo: http://oddbit.com/open-electrons-deployments/ nameSuffix: -prod
I have the following chartInflator.yml file: apiVersion: builtin kind: ChartInflator metadata: name: project-helm-inflator chartName: helm-k8s chartHome: ../../../helm-k8s/ releaseName: project-monitoring-chart values: ../../values.yaml releaseNamespace: project-monitoring-ns When I ran it using this, I got the error message below: $ kustomize build . Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty Here is my project structure: project - helm-k8s - values.yml - Chart.yml - templates - base - project-namespace.yml - grafana - grafana-service.yml - grafana-deployment.yml - grafana-datasource-config.yml - prometheus - prometheus-service.yml - prometheus-deployment.yml - prometheus-config.yml - prometheus-roles.yml - kustomization.yml - prod - kustomization.yml - test - kustomization.yml
Should I create a load balancer for each of these? No, you can, but it's not a good idea. What about security? I only want a few engineers to have access to these systems. You can create accounts in Kibana and manage access there, or else you can use IAP (Identity-Aware Proxy) to restrict access. Ref doc You have multiple options. You could use a LoadBalancer service for each, as you did, but again that's not ideal. A good way to expose different applications is using an ingress. So if you are running Prometheus, Jaeger, and Kibana in your GKE cluster, you can create a different host for each, e.g. prom.example.com, tracing.example.com, kibana.example.com; there will then be a single ingress controller service of type LoadBalancer and you can map its IP to DNS. Ref doc
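For example, a single Ingress can fan the hosts out to the individual dashboard services. This is only a sketch: the hostnames, service names and ports below are placeholders, not values from your cluster.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monitoring-ingress
spec:
  rules:
  - host: prom.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus        # placeholder service name
            port:
              number: 9090
  - host: kibana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana            # placeholder service name
            port:
              number: 5601

A third rule for tracing.example.com pointing at the Jaeger query service would follow the same shape.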
###Instruction: I have web services running in the GKE Kubernetes Engine. I also have monitoring services running in the cloud that are monitoring these services. Everything is working fine....except that I don't know how to access the Prometheus, and Kibana dashboards. I know I can use port-forward to temporarily forward a local port and access that way but that cannot scale with more and more engineers using the system. I was thinking of a way to provide access to these dashboards to engineers but not sure what would be the best way. Should I create a load balancer for each of these? What about security? I only want a few engineers to have access to these systems. There are other considerations as well, would love to get your thoughts. ### Response: Should I create a load balancer for each of these? No, you can create but not a good idea. What about security? I only want a few engineers to have access to these systems. You can create an account in Kibana and manage access or else you can use the IAP (Identity-Aware Proxy) to restrict access. Ref doc You have multiple options. You can use the LoadBalancer as you used but not a good idea though. A good way to expose different applications is using the ingress. So i you are running the Prometheus, Jaeger, and Kibana in your GKE. You can create the different hosts with domain prom.example.com, tracing.example.com, kibana.example.com so there will be single ingress controller service with type LoadBalancer and you can map IP to DNS. Ref doc
I have web services running in the GKE Kubernetes Engine. I also have monitoring services running in the cloud that are monitoring these services. Everything is working fine....except that I don't know how to access the Prometheus, and Kibana dashboards. I know I can use port-forward to temporarily forward a local port and access that way but that cannot scale with more and more engineers using the system. I was thinking of a way to provide access to these dashboards to engineers but not sure what would be the best way. Should I create a load balancer for each of these? What about security? I only want a few engineers to have access to these systems. There are other considerations as well, would love to get your thoughts.
For legacy reasons I must stand up nexus on GKE. While this question doesn't directly state it is on Google Cloud, the gs:// and ingress.class: gce suggest it was; despite the older answer from Xuan Huy being about AWS. I had a heck of a time getting Nexus TLS to work on GKE, but I finally managed. Google Ingress resources are not the most stable. If you're iterating, they can wedge up and you might find finalizers unable to complete due to getting stuck on L4 ILB cleanup. Things got so screwed up in GCP with just innocent deploy and delete cycles that I had to trash projects and start new ones to test and finally get to a working combination. My Helm values.yaml has the following. Note I am using Terraform also, so my ${variables} are replaced by Terraform with my particular environment settings before running Helm. service: type: ClusterIP annotations: cloud.google.com/neg: '{"ingress": true}' cloud.google.com/backend-config: '{"ports": {"8081":"sonatype-backendcfg"}}' ingress: ingressClassName: null # on GCP, null this, and use annotations instead enabled: true hostPath: / # don't use /* that is suggested multiple places hostRepo: ${sonatype_dns_name} # public facing FQDN annotations: ingress.gcp.kubernetes.io/pre-shared-cert: "${gce_ssl_cert_name}" kubernetes.io/ingress.class: "gce-internal" kubernetes.io/ingress.allow-http: "false" # unrelated hint - I use external-dns for DNS registration external-dns.alpha.kubernetes.io/hostname: "${sonatype_dns_name}." tls: - secretName: "${tls_secret_name}" hosts: - "${sonatype_cluster_dns_name}" # the svc.cluster.local FQDN Before running Helm, my installer places the TLS certs in the GCE cert store for the ILB to use. Also before Helm, ${tls_secret_name} kubesecret is prepared with the cert in key names tls.crt and tls.key (many other apps use this pattern). I also have a backendconfig resource: apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: sonatype-backendcfg namespace: sonatype spec: healthCheck: checkIntervalSec: 30 healthyThreshold: 1 port: 8081 requestPath: /service/rest/v1/status timeoutSec: 15 type: HTTP unhealthyThreshold: 10 The folks at Nexus are not supporting this scenario much longer, so we're working on moving to Harbor so we can cancel our Nexus license.
###Instruction: I ran the following command: kubectl create secret tls nexus-tls --cert cert.crt --key privateKey.pem where cert.crt contains my certificate and privateKey.pem contains my private key (provisioned using CloudFlare). I then installed the stable/sonatype-nexus Helm chart with the following configuration: nexusProxy: env: nexusDockerHost: containers.<<NEXUS_HOST>> nexusHttpHost: nexus.<<NEXUS_HOST>> nexusBackup: enabled: true nexusAdminPassword: <<PASSWORD>> env: targetBucket: gs://<<BACKUP_BUCKET_NAME>> persistence: storageClass: standard ingress: enabled: true path: /* annotations: kubernetes.io/ingress.allow-http: true kubernetes.io/tls-acme: true kubernetes.io/ingress.class: gce kubernetes.io/ingress.global-static-ip-name: <<STATIC_IP_ADDRESS_NAME>> tls: enabled: true secretName: nexus-tls persistence: storageClass: standard storageSize: 1024Gi resources: requests: cpu: 250m memory: 4800Mi by running the command: helm install -f values.yaml stable/sonatype-nexus The possible configuration values for this chart are documented here. When I visit http://nexus.<<NEXUS_HOST>>, I am able to access the Nexus Repository. However, when I access https://nexus.<<NEXUS_HOST>>, I receive mixed content warnings, because HTTP resources are being served. If I set the nexusProxy.env.enforceHttps environment variable to true, when I visit https://nexus.<<NEXUS_HOST>>, I get a response back which looks like: HTTP access is disabled. Click here to browse Nexus securely: https://nexus.<<NEXUS_HOST>>. How can I ensure that Nexus is served securely? Have I made a configuration error, or does the issue lie elsewhere? ### Response: For legacy reasons I must stand up nexus on GKE. While this question doesn't directly state it is on Google Cloud, the gs:// and ingress.class: gce suggest it was; despite the older answer from Xuan Huy being about AWS. I had a heck of a time getting Nexus TLS to work on GKE, but I finally managed. Google Ingress resources are not the most stable. If you're iterating, they can wedge up and you might find finalizers unable to complete due to getting stuck on L4 ILB cleanup. Things got so screwed up in GCP with just innocent deploy and delete cycles that I had to trash projects and start new ones to test and finally get to a working combination. My Helm values.yaml has the following. Note I am using Terraform also, so my ${variables} are replaced by Terraform with my particular environment settings before running Helm. service: type: ClusterIP annotations: cloud.google.com/neg: '{"ingress": true}' cloud.google.com/backend-config: '{"ports": {"8081":"sonatype-backendcfg"}}' ingress: ingressClassName: null # on GCP, null this, and use annotations instead enabled: true hostPath: / # don't use /* that is suggested multiple places hostRepo: ${sonatype_dns_name} # public facing FQDN annotations: ingress.gcp.kubernetes.io/pre-shared-cert: "${gce_ssl_cert_name}" kubernetes.io/ingress.class: "gce-internal" kubernetes.io/ingress.allow-http: "false" # unrelated hint - I use external-dns for DNS registration external-dns.alpha.kubernetes.io/hostname: "${sonatype_dns_name}." tls: - secretName: "${tls_secret_name}" hosts: - "${sonatype_cluster_dns_name}" # the svc.cluster.local FQDN Before running Helm, my installer places the TLS certs in the GCE cert store for the ILB to use. Also before Helm, ${tls_secret_name} kubesecret is prepared with the cert in key names tls.crt and tls.key (many other apps use this pattern). 
I also have a backendconfig resource: apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: sonatype-backendcfg namespace: sonatype spec: healthCheck: checkIntervalSec: 30 healthyThreshold: 1 port: 8081 requestPath: /service/rest/v1/status timeoutSec: 15 type: HTTP unhealthyThreshold: 10 The folks at Nexus are not supporting this scenario much longer, so we're working on moving to Harbor so we can cancel our Nexus license.
I ran the following command: kubectl create secret tls nexus-tls --cert cert.crt --key privateKey.pem where cert.crt contains my certificate and privateKey.pem contains my private key (provisioned using CloudFlare). I then installed the stable/sonatype-nexus Helm chart with the following configuration: nexusProxy: env: nexusDockerHost: containers.<<NEXUS_HOST>> nexusHttpHost: nexus.<<NEXUS_HOST>> nexusBackup: enabled: true nexusAdminPassword: <<PASSWORD>> env: targetBucket: gs://<<BACKUP_BUCKET_NAME>> persistence: storageClass: standard ingress: enabled: true path: /* annotations: kubernetes.io/ingress.allow-http: true kubernetes.io/tls-acme: true kubernetes.io/ingress.class: gce kubernetes.io/ingress.global-static-ip-name: <<STATIC_IP_ADDRESS_NAME>> tls: enabled: true secretName: nexus-tls persistence: storageClass: standard storageSize: 1024Gi resources: requests: cpu: 250m memory: 4800Mi by running the command: helm install -f values.yaml stable/sonatype-nexus The possible configuration values for this chart are documented here. When I visit http://nexus.<<NEXUS_HOST>>, I am able to access the Nexus Repository. However, when I access https://nexus.<<NEXUS_HOST>>, I receive mixed content warnings, because HTTP resources are being served. If I set the nexusProxy.env.enforceHttps environment variable to true, when I visit https://nexus.<<NEXUS_HOST>>, I get a response back which looks like: HTTP access is disabled. Click here to browse Nexus securely: https://nexus.<<NEXUS_HOST>>. How can I ensure that Nexus is served securely? Have I made a configuration error, or does the issue lie elsewhere?
This isn't going to be possible with just Kustomize other than by using multiple patches. Because you're looking to change the value in objects of different types, this can't be done with a single patch. So you could do this: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - manifests.yaml patches: - target: kind: Gateway name: foo patch: | - op: replace path: /spec/servers/0/hosts/0 value: foo.acme.com - target: kind: VirtualService name: bazz patch: | - op: replace path: /spec/hosts/0 value: foo.acme.com If you find you need to do this frequently (maybe you have a bunch of similar services), you could move the manifests into a helm chart and then inflate that with kustomize.
###Instruction: When using Istio with Kubernetes, a number of different manifests require the same environment-specific values. For example, the host address is required by both the Gateway (under spec/servers/hosts) and VirtualService (under spec/hosts). The typical approach to changing the address for different environments is to apply Kustomize patches. Is it possible to use a single patch to transform/insert the value into each manifest that needs it, or somehow maintain just one copy of the address that gets inserted where needed? The alternative is having to maintain multiple patch files with the same host address, and I would like to avoid duplication. --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: foo spec: selector: istio: bar servers: - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE credentialName: secret hosts: - test.acme.com --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bazz spec: hosts: - test.acme.com gateways: - foo http: - match: - uri: prefix: /api/v1 route: - destination: host: blarg port: number: 80 ### Response: This isn't going to be possible with just Kustomize other than by using multiple patches. Because you're looking to change the value in objects of different types, this can't be done with a single patch. So you could do this: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - manifests.yaml patches: - target: kind: Gateway name: foo patch: | - op: replace path: /spec/servers/0/hosts/0 value: foo.acme.com - target: kind: VirtualService name: bazz patch: | - op: replace path: /spec/hosts/0 value: foo.acme.com If you find you need to do this frequently (maybe you have a bunch of similar services), you could move the manifests into a helm chart and then inflate that with kustomize.
When using Istio with Kubernetes, a number of different manifests require the same environment-specific values. For example, the host address is required by both the Gateway (under spec/servers/hosts) and VirtualService (under spec/hosts). The typical approach to changing the address for different environments is to apply Kustomize patches. Is it possible to use a single patch to transform/insert the value into each manifest that needs it, or somehow maintain just one copy of the address that gets inserted where needed? The alternative is having to maintain multiple patch files with the same host address, and I would like to avoid duplication. --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: foo spec: selector: istio: bar servers: - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE credentialName: secret hosts: - test.acme.com --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bazz spec: hosts: - test.acme.com gateways: - foo http: - match: - uri: prefix: /api/v1 route: - destination: host: blarg port: number: 80
To connect MongoDB with Golang in a Kubernetes environment, you need to follow these steps: Deploy MongoDB as a statefulset or a deployment in your Kubernetes cluster. Create a Service for MongoDB to access the deployed pods from your Golang application. In your Golang application, use the official MongoDB Go driver to establish a connection to the MongoDB service by specifying the service name and port. Verify the connection by running a simple test that inserts and retrieves data from the MongoDB database. Finally, package the Golang application as a Docker image and deploy it as a deployment in the same Kubernetes cluster. Here is a sample Go code to connect to MongoDB: package main import ( "context" "fmt" "log" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" ) func main() { // Set client options clientOptions := options.Client().ApplyURI("mongodb://mongodb-service:27017") // Connect to MongoDB client, err := mongo.Connect(context.TODO(), clientOptions) if err != nil { log.Fatal(err) } // Check the connection err = client.Ping(context.TODO(), nil) if err != nil { log.Fatal(err) } fmt.Println("Connected to MongoDB!") } Here's a sample YAML file for deploying MongoDB as a StatefulSet and a Go application as a Deployment: apiVersion: apps/v1 kind: StatefulSet metadata: name: mongodb spec: serviceName: mongodb-service replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo:4.4 ports: - containerPort: 27017 volumeMounts: - name: mongodb-data mountPath: /data/db volumeClaimTemplates: - metadata: name: mongodb-data annotations: volume.beta.kubernetes.io/storage-class: standard spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi --- apiVersion: v1 kind: Service metadata: name: mongodb-service spec: selector: app: mongodb ports: - name: mongodb port: 27017 targetPort: 27017 clusterIP: None --- apiVersion: apps/v1 kind: Deployment metadata: name: go-app spec: replicas: 1 selector: matchLabels: app: go-app template: metadata: labels: app: go-app spec: containers: - name: go-app image: <your-go-app-image> ports: - containerPort: 8080 Note: You will need to replace your-go-app-image with the actual Docker image of your Go application.
###Instruction: The database and the server are not connected. Attempting to deploy in Kubernetes environment. this is deployment, sevice of mongodb , golang http server apiVersion: apps/v1 kind: Deployment metadata: name: backend labels: app: backend spec: selector: matchLabels: app: backend template: metadata: labels: app: backend spec: containers: - image: royroyee/backend:0.8 name: backend ports: - containerPort: 9001 --- apiVersion: v1 kind: Service metadata: name: backend-service labels: run: backend-service spec: ports: - port: 9001 targetPort: 9001 protocol: TCP selector: app: backend type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: mongo labels: app: mongo spec: selector: matchLabels: app: mongo template: metadata: labels: app: mongo spec: containers: - image: mongo name: mongo-db ports: - containerPort: 27017 --- apiVersion: v1 kind: Service metadata: name: mongo-service labels: run: mongo-service spec: ports: - port: 27017 targetPort: 27017 protocol: TCP selector: app: mongo and my golang code ... mongodb session func getSession() *mgo.Session { s, err := mgo.Dial("mongodb://mongo-service:27017/mongo-db") pls let me know .. also I tried something like this. // mongodb://mongo-service:27017/backend // mongodb://mongo-service:27017/mongo-db // mongodb://mongo-service:27017 ### Response: To connect MongoDB with Golang in a Kubernetes environment, you need to follow these steps: Deploy MongoDB as a statefulset or a deployment in your Kubernetes cluster. Create a Service for MongoDB to access the deployed pods from your Golang application. In your Golang application, use the official MongoDB Go driver to establish a connection to the MongoDB service by specifying the service name and port. Verify the connection by running a simple test that inserts and retrieves data from the MongoDB database. Finally, package the Golang application as a Docker image and deploy it as a deployment in the same Kubernetes cluster. 
Here is a sample Go code to connect to MongoDB: package main import ( "context" "fmt" "log" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" ) func main() { // Set client options clientOptions := options.Client().ApplyURI("mongodb://mongodb-service:27017") // Connect to MongoDB client, err := mongo.Connect(context.TODO(), clientOptions) if err != nil { log.Fatal(err) } // Check the connection err = client.Ping(context.TODO(), nil) if err != nil { log.Fatal(err) } fmt.Println("Connected to MongoDB!") } Here's a sample YAML file for deploying MongoDB as a StatefulSet and a Go application as a Deployment: apiVersion: apps/v1 kind: StatefulSet metadata: name: mongodb spec: serviceName: mongodb-service replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo:4.4 ports: - containerPort: 27017 volumeMounts: - name: mongodb-data mountPath: /data/db volumeClaimTemplates: - metadata: name: mongodb-data annotations: volume.beta.kubernetes.io/storage-class: standard spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi --- apiVersion: v1 kind: Service metadata: name: mongodb-service spec: selector: app: mongodb ports: - name: mongodb port: 27017 targetPort: 27017 clusterIP: None --- apiVersion: apps/v1 kind: Deployment metadata: name: go-app spec: replicas: 1 selector: matchLabels: app: go-app template: metadata: labels: app: go-app spec: containers: - name: go-app image: <your-go-app-image> ports: - containerPort: 8080 Note: You will need to replace your-go-app-image with the actual Docker image of your Go application.
The database and the server are not connected. Attempting to deploy in Kubernetes environment. this is deployment, sevice of mongodb , golang http server apiVersion: apps/v1 kind: Deployment metadata: name: backend labels: app: backend spec: selector: matchLabels: app: backend template: metadata: labels: app: backend spec: containers: - image: royroyee/backend:0.8 name: backend ports: - containerPort: 9001 --- apiVersion: v1 kind: Service metadata: name: backend-service labels: run: backend-service spec: ports: - port: 9001 targetPort: 9001 protocol: TCP selector: app: backend type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: mongo labels: app: mongo spec: selector: matchLabels: app: mongo template: metadata: labels: app: mongo spec: containers: - image: mongo name: mongo-db ports: - containerPort: 27017 --- apiVersion: v1 kind: Service metadata: name: mongo-service labels: run: mongo-service spec: ports: - port: 27017 targetPort: 27017 protocol: TCP selector: app: mongo and my golang code ... mongodb session func getSession() *mgo.Session { s, err := mgo.Dial("mongodb://mongo-service:27017/mongo-db") pls let me know .. also I tried something like this. // mongodb://mongo-service:27017/backend // mongodb://mongo-service:27017/mongo-db // mongodb://mongo-service:27017
My workaround:

# the docker CLI currently points at Docker Desktop's socket
ls -l /var/run/docker.sock   # /var/run/docker.sock -> $HOME/.docker/run/docker.sock
sudo rm /var/run/docker.sock

# forward the minikube VM's Docker socket to a local unix socket
ssh -i ~/.minikube/machines/minikube/id_rsa -L $HOME/.minikube/docker.sock:/var/run/docker.sock docker@$(minikube ip) -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null

# point the default socket path at the forwarded socket
sudo ln -s $HOME/.minikube/docker.sock /var/run/docker.sock

# sanity check that the daemon answers on the socket
if curl -s --unix-socket /var/run/docker.sock http/_ping 2>&1 >/dev/null
then
    echo "Running"
else
    echo "Not running"
fi
# Running
###Instruction: I use the Google Cloud Code extension on VSCode. I have a minikube running on my macbook (using the virtualbox driver). I can run skaffold debug from my terminal just fine; the Helm chart gets deployed, but I haven't done the debugger setup so the breakpoints don't hit (as expected). I want to use the Cloud Code extension to avoid manually doing the debugger setup. However, if I run "debug on Kubernetes" in the Cloud Code extension, I get a prompt saying "Docker was found in the path but does not appear to be running. Start Docker to continue": If I select "start Docker", then Docker Desktop will be started, which I want to avoid. It seems to me that Cloud Code needs to do the equivalent of running eval $(minikube -p minikube docker-env) to use the minikube Docker daemon. Is there a setting to get it to do that? ### Response: my work around. ls -l /var/run/docker.sock # /var/run/docker.sock -> $HOME/.docker/run/docker.sock sudo rm /var/run/docker.sock ssh -i ~/.minikube/machines/minikube/id_rsa -L $HOME/.minikube/docker.sock:/var/run/docker.sock docker@$(minikube ip) -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sudo ln -s $HOME/.minikube/docker.sock /var/run/docker.sock if curl -s --unix-socket /var/run/docker.sock http/_ping 2>&1 >/dev/null then echo "Running" else echo "Not running" fi # Running
I use the Google Cloud Code extension on VSCode. I have a minikube running on my macbook (using the virtualbox driver). I can run skaffold debug from my terminal just fine; the Helm chart gets deployed, but I haven't done the debugger setup so the breakpoints don't hit (as expected). I want to use the Cloud Code extension to avoid manually doing the debugger setup. However, if I run "debug on Kubernetes" in the Cloud Code extension, I get a prompt saying "Docker was found in the path but does not appear to be running. Start Docker to continue": If I select "start Docker", then Docker Desktop will be started, which I want to avoid. It seems to me that Cloud Code needs to do the equivalent of running eval $(minikube -p minikube docker-env) to use the minikube Docker daemon. Is there a setting to get it to do that?
Unfortunately it's not possible to use a field selector for every field of a resource. In your case, for example, you can only use these fields as field selectors. It's also stated in this thread. Alternatively, you can put labels on the pods that are owned by a custom resource and use label selectors. Or you can list all pods and apply a programmatic filter to pick out the ones you need. (I recommend the first approach, since metadata.ownerReferences is an array and the filtering cost is O(n^2).)
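If it helps, here is a minimal sketch of the label-selector route with the controller-runtime client. The owned-by label key and the x variable holding your custom resource are made up for illustration; use whatever label your controller actually stamps onto the pods it creates.

// corev1 is k8s.io/api/core/v1, client is sigs.k8s.io/controller-runtime/pkg/client
childPods := &corev1.PodList{}
err := r.List(ctx, childPods,
    client.InNamespace(x.GetNamespace()),            // restrict to the owner's namespace
    client.MatchingLabels{"owned-by": x.GetName()},  // only pods labelled by your controller
)
if err != nil {
    return err
}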
###Instruction: I want to list the pods that are owned by the resource X from the Kubernetes cluster using Kubuilder's List(ctx context.Context, list ObjectList, opts ...ListOption) method. ListOptions contains options for limiting or filtering results. Here is the the structure of the ListOptions type ListOptions struct { // LabelSelector filters results by label. Use labels.Parse() to // set from raw string form. LabelSelector labels.Selector // FieldSelector filters results by a particular field. In order // to use this with cache-based implementations, restrict usage to // a single field-value pair that's been added to the indexers. FieldSelector fields.Selector // Namespace represents the namespace to list for, or empty for // non-namespaced objects, or to list across all namespaces. Namespace string // Limit specifies the maximum number of results to return from the server. The server may // not support this field on all resource types, but if it does and more results remain it // will set the continue field on the returned list object. This field is not supported if watch // is true in the Raw ListOptions. Limit int64 // Continue is a token returned by the server that lets a client retrieve chunks of results // from the server by specifying limit. The server may reject requests for continuation tokens // it does not recognize and will return a 410 error if the token can no longer be used because // it has expired. This field is not supported if watch is true in the Raw ListOptions. Continue string // Raw represents raw ListOptions, as passed to the API server. Note // that these may not be respected by all implementations of interface, // and the LabelSelector, FieldSelector, Limit and Continue fields are ignored. Raw *metav1.ListOptions } Now, How can I provide the owner information to this ListOptions so the List method will only list the pods that are owned by X? Here is an example from the KubeBuilder book that shows how to filter results by a particular field, listOps := &client.ListOptions{ FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()), Namespace: configMap.GetNamespace(), } err := r.List(context.TODO(), attachedConfigDeployments, listOps) ### Response: Unfortunately it's not possible to use field selector for every field of a resource. In your case for example, you can only use these fields as field selector. It's also stated in this thread. Alternatively, you can put labels to pods that is owned by a custom resource and use label selectors. Or you can get all pods and apply programmatic filter to get necessary pods. (I recommend the first approach since metadata.ownerReferences is an array and the cost is O(n^2))
I want to list the pods that are owned by the resource X from the Kubernetes cluster using Kubuilder's List(ctx context.Context, list ObjectList, opts ...ListOption) method. ListOptions contains options for limiting or filtering results. Here is the the structure of the ListOptions type ListOptions struct { // LabelSelector filters results by label. Use labels.Parse() to // set from raw string form. LabelSelector labels.Selector // FieldSelector filters results by a particular field. In order // to use this with cache-based implementations, restrict usage to // a single field-value pair that's been added to the indexers. FieldSelector fields.Selector // Namespace represents the namespace to list for, or empty for // non-namespaced objects, or to list across all namespaces. Namespace string // Limit specifies the maximum number of results to return from the server. The server may // not support this field on all resource types, but if it does and more results remain it // will set the continue field on the returned list object. This field is not supported if watch // is true in the Raw ListOptions. Limit int64 // Continue is a token returned by the server that lets a client retrieve chunks of results // from the server by specifying limit. The server may reject requests for continuation tokens // it does not recognize and will return a 410 error if the token can no longer be used because // it has expired. This field is not supported if watch is true in the Raw ListOptions. Continue string // Raw represents raw ListOptions, as passed to the API server. Note // that these may not be respected by all implementations of interface, // and the LabelSelector, FieldSelector, Limit and Continue fields are ignored. Raw *metav1.ListOptions } Now, How can I provide the owner information to this ListOptions so the List method will only list the pods that are owned by X? Here is an example from the KubeBuilder book that shows how to filter results by a particular field, listOps := &client.ListOptions{ FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()), Namespace: configMap.GetNamespace(), } err := r.List(context.TODO(), attachedConfigDeployments, listOps)
Good question. Short answer: Because storage is cheaper than processing power, building images "Live" might be complex, time-consuming and it might be unpredictable. On your Kubernetes cluster, for example, you just want to pull "cached" layers of your image that you know that it works, and you just run it... In seconds instead of compiling binaries and downloading things (as you would specify in your Dockerfile). About building images: You don't have to build these images locally, you can use your CI/CD runners and run the docker build and docker push from the pipelines that run when you push your code to a git repository. And also, if the image is too big you should look into ways of reducing its size by using multi-stage building, using lighter/minimal base images, using few layers (for example multiple RUN apt install can be grouped to one apt install command listing multiple packages), and also by using .dockerignore to not ship unnecessary files to your image. And last read more about caching in docker builds as it may reduce the size of the layers you might be pushing when making changes. Long answer: Think of the Dockerfile as the source code, and the Image as the final binary. I know it's a classic example. But just consider how long it would take to build/compile the binary every time you want to use it (either by running it, or importing it as a library in a different piece of software). Then consider how indeterministic it would download the dependencies of that software, or compile them on different machines every time you run them. You can take for example Node.js's Dockerfile: https://github.com/nodejs/docker-node/blob/main/16/alpine3.16/Dockerfile Which is based on Alpine: https://github.com/alpinelinux/docker-alpine You don't want your application to perform all operations specified in these files (and their scripts) on runtime before actually starting your applications as it might be unpredictable, time-consuming, and more complex than it should be (for example you'd require firewall exceptions for an Egress traffic to the internet from the cluster to download some dependencies which you don't know if they would be available). You would instead just ship an image based on the base image you tested and built your code to run on. That image would be built and sent to the registry then k8s will run it as a black box, which might be predictable and deterministic. Then about your point of how annoying it is to push huge docker images every time: You might cut that size down by following some best practices and well designing your Dockerfile, for example: Reduce your layers, for example, pass multiple arguments whenever it's possible to commands, instead of re-running them multiple times. Use multi-stage building, so you will only push the final image, not the stages you needed to build to compile and configure your application. Avoid injecting data into your images, you can pass it later on-runtime to the containers. Order your layers, so you would not have to re-build untouched layers when making changes. Don't include unnecessary files, and use .dockerignore. And last but not least: You don't have to push images from your machine, you can do it with CI/CD runners (for example build-push Github action), or you can use your cloud provider's "Cloud Build" products (like Cloud Build for GCP and AWS CodeBuild)
###Instruction: Having needed several times in the last few days to upload a 1Gb image after some micro change, I can't help but wonder why there isnt a deploy path built into docker and related tech (e.g. k8s) to push just the application files (Dockerfile, docker-compose.yml and app related code) and have it build out the infrastructure from within the (live) docker host? In other words, why do I have to upload an entire linux machine whenever I change my app code? Isn't the whole point of Docker that the configs describe a purely deterministic infrastructure output? I can't even see why one would need to upload the whole container image unless they make changes to it manually, outside of Dockerfile, and then wish to upload that modified image. But that seems like bad practice at the very least... Am I missing something or this just a peculiarity of the system? ### Response: Good question. Short answer: Because storage is cheaper than processing power, building images "Live" might be complex, time-consuming and it might be unpredictable. On your Kubernetes cluster, for example, you just want to pull "cached" layers of your image that you know that it works, and you just run it... In seconds instead of compiling binaries and downloading things (as you would specify in your Dockerfile). About building images: You don't have to build these images locally, you can use your CI/CD runners and run the docker build and docker push from the pipelines that run when you push your code to a git repository. And also, if the image is too big you should look into ways of reducing its size by using multi-stage building, using lighter/minimal base images, using few layers (for example multiple RUN apt install can be grouped to one apt install command listing multiple packages), and also by using .dockerignore to not ship unnecessary files to your image. And last read more about caching in docker builds as it may reduce the size of the layers you might be pushing when making changes. Long answer: Think of the Dockerfile as the source code, and the Image as the final binary. I know it's a classic example. But just consider how long it would take to build/compile the binary every time you want to use it (either by running it, or importing it as a library in a different piece of software). Then consider how indeterministic it would download the dependencies of that software, or compile them on different machines every time you run them. You can take for example Node.js's Dockerfile: https://github.com/nodejs/docker-node/blob/main/16/alpine3.16/Dockerfile Which is based on Alpine: https://github.com/alpinelinux/docker-alpine You don't want your application to perform all operations specified in these files (and their scripts) on runtime before actually starting your applications as it might be unpredictable, time-consuming, and more complex than it should be (for example you'd require firewall exceptions for an Egress traffic to the internet from the cluster to download some dependencies which you don't know if they would be available). You would instead just ship an image based on the base image you tested and built your code to run on. That image would be built and sent to the registry then k8s will run it as a black box, which might be predictable and deterministic. 
Then about your point of how annoying it is to push huge docker images every time: You might cut that size down by following some best practices and well designing your Dockerfile, for example: Reduce your layers, for example, pass multiple arguments whenever it's possible to commands, instead of re-running them multiple times. Use multi-stage building, so you will only push the final image, not the stages you needed to build to compile and configure your application. Avoid injecting data into your images, you can pass it later on-runtime to the containers. Order your layers, so you would not have to re-build untouched layers when making changes. Don't include unnecessary files, and use .dockerignore. And last but not least: You don't have to push images from your machine, you can do it with CI/CD runners (for example build-push Github action), or you can use your cloud provider's "Cloud Build" products (like Cloud Build for GCP and AWS CodeBuild)
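To make the multi-stage point concrete, here is a hedged sketch for a small Go service (the file layout and build command are invented and will differ for your app): the toolchain only exists in the build stage, and the image that actually gets pushed carries little more than the compiled binary.

# build stage: compiler, sources and module cache; never shipped
FROM golang:1.20-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# runtime stage: only these small layers get pushed and pulled
FROM alpine:3.17
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]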
Having needed several times in the last few days to upload a 1Gb image after some micro change, I can't help but wonder why there isnt a deploy path built into docker and related tech (e.g. k8s) to push just the application files (Dockerfile, docker-compose.yml and app related code) and have it build out the infrastructure from within the (live) docker host? In other words, why do I have to upload an entire linux machine whenever I change my app code? Isn't the whole point of Docker that the configs describe a purely deterministic infrastructure output? I can't even see why one would need to upload the whole container image unless they make changes to it manually, outside of Dockerfile, and then wish to upload that modified image. But that seems like bad practice at the very least... Am I missing something or this just a peculiarity of the system?
Solved by quoting the var values in the template, like: - name: NEW_VAR_X value: "${NEW_VAR_VALUE_X}" No errors ever since.
###Instruction: We are deploying with Ansible scripts to Openshift 3 using oc apply. When we change template to add more environment variables, we receive a very vague error: "unrecognized type: string" and status code 500. Setting --loglevel 10 leads to no more details: $ /usr/local/bin/oc_v3.11.715 apply -f \"/tmp/ansible.YtEqVm_deploy/app.yml.json\" -n test-env --loglevel 10 2&> log.log (several GET to get secret, deploymentconfigs, etc.) ... I0127 11:49:05.455217 605 request.go:897] Request Body: {xxxxxxxx} I0127 11:49:05.455280 605 round_trippers.go:386] curl -k -v -XPATCH -H "User-Agent: oc_v3.11.715/v1.11.0+d4cacc0 (linux/amd64) kubernetes/d4cacc0" -H "Authorization: Bearer xxxxxx" -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" 'https://test-env:8443/apis/apps.openshift.io/v1/namespaces/test-app/deploymentconfigs/app' I0127 11:49:05.466278 605 round_trippers.go:405] PATCH https://test-env:8443/apis/apps.openshift.io/v1/namespaces/test-env-app/deploymentconfigs/app 500 Internal Server Error in 10 milliseconds I0127 11:49:05.466287 605 round_trippers.go:411] Response Headers: I0127 11:49:05.466291 605 round_trippers.go:414] Content-Length: 118 I0127 11:49:05.466294 605 round_trippers.go:414] Date: Fri, 27 Jan 2023 09:49:05 GMT I0127 11:49:05.466297 605 round_trippers.go:414] Audit-Id: 1d3f3398-14fc-4bfa-854b-6faf9b105680 I0127 11:49:05.466302 605 round_trippers.go:414] Cache-Control: no-store I0127 11:49:05.466307 605 round_trippers.go:414] Content-Type: application/json I0127 11:49:05.466321 605 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unrecognized type: string","code":500} I0127 11:49:05.466603 605 helpers.go:201] server response object: [{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "unrecognized type: string", "code": 500 }] F0127 11:49:05.466618 605 helpers.go:119] Error from server: unrecognized type: string The request body is like: { "metadata": { "annotations": { "kubectl.kubernetes.io/last-applied-configuration": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" } }, "spec": { "template": { "spec": { "$setElementOrder/containers": [{ "name": "app" }], "containers": [{ "$setElementOrder/env": [{ "name": "OLD_VAR_1" }, { "name": "OLD_VAR_2" }, { "name": "OLD_VAR_3" }, { "name": "OLD_VAR_4" }, { "name": "NEW_VAR_1" }, { "name": "NEW_VAR_2" }, { "name": "NEW_VAR_3" }], "dnsPolicy": "ClusterFirst", "env": [{ "name": "OLD_VAR_4", "value": false }, { "name": "NEW_VAR_1", "value": 10 }, { "name": "NEW_VAR_2", "value": 20 }, { "name": "NEW_VAR_3", "value": 6 }], "name": "app", "restartPolicy": "Always", "terminationGracePeriodSeconds": 300 }] } } } } OLD_VAR_x are old environment variables; we want to add NEW_VAR_[1-3]. Notice strangely that not all old vars are in env, only OLD_VAR_4, but all new vars are in env. This also happens when we use oc patch with the same request body. Same error response. What is wrong? A workaround is first, deployment, fail, and add new vars in Openshift manually, and deploy in Openshift webconsole on top of the last, failed deployment. It works. ### Response: Solved by quoting the var values in the template, like: - name: NEW_VAR_X value: "${NEW_VAR_VALUE_X}" No errors ever since.
We are deploying with Ansible scripts to Openshift 3 using oc apply. When we change template to add more environment variables, we receive a very vague error: "unrecognized type: string" and status code 500. Setting --loglevel 10 leads to no more details: $ /usr/local/bin/oc_v3.11.715 apply -f \"/tmp/ansible.YtEqVm_deploy/app.yml.json\" -n test-env --loglevel 10 2&> log.log (several GET to get secret, deploymentconfigs, etc.) ... I0127 11:49:05.455217 605 request.go:897] Request Body: {xxxxxxxx} I0127 11:49:05.455280 605 round_trippers.go:386] curl -k -v -XPATCH -H "User-Agent: oc_v3.11.715/v1.11.0+d4cacc0 (linux/amd64) kubernetes/d4cacc0" -H "Authorization: Bearer xxxxxx" -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" 'https://test-env:8443/apis/apps.openshift.io/v1/namespaces/test-app/deploymentconfigs/app' I0127 11:49:05.466278 605 round_trippers.go:405] PATCH https://test-env:8443/apis/apps.openshift.io/v1/namespaces/test-env-app/deploymentconfigs/app 500 Internal Server Error in 10 milliseconds I0127 11:49:05.466287 605 round_trippers.go:411] Response Headers: I0127 11:49:05.466291 605 round_trippers.go:414] Content-Length: 118 I0127 11:49:05.466294 605 round_trippers.go:414] Date: Fri, 27 Jan 2023 09:49:05 GMT I0127 11:49:05.466297 605 round_trippers.go:414] Audit-Id: 1d3f3398-14fc-4bfa-854b-6faf9b105680 I0127 11:49:05.466302 605 round_trippers.go:414] Cache-Control: no-store I0127 11:49:05.466307 605 round_trippers.go:414] Content-Type: application/json I0127 11:49:05.466321 605 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unrecognized type: string","code":500} I0127 11:49:05.466603 605 helpers.go:201] server response object: [{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "unrecognized type: string", "code": 500 }] F0127 11:49:05.466618 605 helpers.go:119] Error from server: unrecognized type: string The request body is like: { "metadata": { "annotations": { "kubectl.kubernetes.io/last-applied-configuration": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" } }, "spec": { "template": { "spec": { "$setElementOrder/containers": [{ "name": "app" }], "containers": [{ "$setElementOrder/env": [{ "name": "OLD_VAR_1" }, { "name": "OLD_VAR_2" }, { "name": "OLD_VAR_3" }, { "name": "OLD_VAR_4" }, { "name": "NEW_VAR_1" }, { "name": "NEW_VAR_2" }, { "name": "NEW_VAR_3" }], "dnsPolicy": "ClusterFirst", "env": [{ "name": "OLD_VAR_4", "value": false }, { "name": "NEW_VAR_1", "value": 10 }, { "name": "NEW_VAR_2", "value": 20 }, { "name": "NEW_VAR_3", "value": 6 }], "name": "app", "restartPolicy": "Always", "terminationGracePeriodSeconds": 300 }] } } } } OLD_VAR_x are old environment variables; we want to add NEW_VAR_[1-3]. Notice strangely that not all old vars are in env, only OLD_VAR_4, but all new vars are in env. This also happens when we use oc patch with the same request body. Same error response. What is wrong? A workaround is first, deployment, fail, and add new vars in Openshift manually, and deploy in Openshift webconsole on top of the last, failed deployment. It works.
Even if you are running both MicroK8s and Docker on the same machine, MicroK8s has no knowledge of the images in your local Docker daemon: the local Docker daemon is not part of the MicroK8s Kubernetes cluster. You can export the image and import it into the MicroK8s image cache: docker save <image name> > myimage.tar microk8s ctr image import myimage.tar Ref : doc
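For example, with the image from the question (a sketch; the tar file name is arbitrary):
docker save hasura/graphql-engine:v2.13.2.cli-migrations-v3 > graphql-engine.tar
microk8s ctr image import graphql-engine.tar
microk8s ctr images ls | grep graphql-engine   # confirm the image is now in the MicroK8s cache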
###Instruction: Using microk8s installation. An image is tagged, so should use IfNotPresent policy. apiVersion: apps/v1 2 kind: Deployment 3 metadata: 4 name: lh-graphql 5 labels: 6 app: lh-graphql 7 spec: 8 selector: 9 matchLabels: 10 app: lh-graphql 11 strategy: 12 type: Recreate 13 template: 14 metadata: 15 labels: 16 app: lh-graphql 17 spec: 18 containers: 19 - image: hasura/graphql-engine:v2.13.2.cli-migrations-v3 20 name: lh-graphql 21 ports: 22 - containerPort: 8080 23 name: lh-graphql 24 env: 25 - name: HASURA_GRAPHQL_DATABASE_URL 26 value: postgresql://postgres:postgres@$(ORCH_POSTGRES_IP):5432/lh Image is already pulled to docker: light@siddhalok:~$ docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE postgres 12 3d6880d04326 2 weeks ago 373MB hasura/graphql-engine v2.13.2.cli-migrations-v3 4cd490369623 2 months ago 570MB However, it keeps pulling after a deployment is deleted and created again. Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 112s default-scheduler Successfully assigned default/lh-graphql-6db75544cf-j65wp to siddhalok Normal Pulling 112s kubelet Pulling image "hasura/graphql-engine:v2.13.2.cli-migrations-v3" UPD: The same happens when creating from command line: microk8s kubectl run det2 --image=registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector:current --image-pull-policy=IfNotPresent REPOSITORY TAG IMAGE ID CREATED SIZE postgres 12 3d6880d04326 2 weeks ago 373MB lh-develop.img latest f26c3c667fbe 5 weeks ago 2.82GB dpage/pgadmin4 latest 4d5afde0a02e 6 weeks ago 361MB detector latest e6f7e6567b73 7 weeks ago 3.81GB lh-detetctor.img latest e6f7e6567b73 7 weeks ago 3.81GB registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector current e6f7e6567b73 7 weeks ago 3.81GB ### Response: If you are running the microK8s and Docker still it's necessary microk8s to have an idea about the docker that is running on your machine. local Docker daemon is not part of the MicroK8s Kubernetes cluster. You can export the image and inject to cache docker save <image name> > myimage.tar microk8s ctr image import myimage.tar Ref : doc
Using microk8s installation. An image is tagged, so should use IfNotPresent policy. apiVersion: apps/v1 2 kind: Deployment 3 metadata: 4 name: lh-graphql 5 labels: 6 app: lh-graphql 7 spec: 8 selector: 9 matchLabels: 10 app: lh-graphql 11 strategy: 12 type: Recreate 13 template: 14 metadata: 15 labels: 16 app: lh-graphql 17 spec: 18 containers: 19 - image: hasura/graphql-engine:v2.13.2.cli-migrations-v3 20 name: lh-graphql 21 ports: 22 - containerPort: 8080 23 name: lh-graphql 24 env: 25 - name: HASURA_GRAPHQL_DATABASE_URL 26 value: postgresql://postgres:postgres@$(ORCH_POSTGRES_IP):5432/lh Image is already pulled to docker: light@siddhalok:~$ docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE postgres 12 3d6880d04326 2 weeks ago 373MB hasura/graphql-engine v2.13.2.cli-migrations-v3 4cd490369623 2 months ago 570MB However, it keeps pulling after a deployment is deleted and created again. Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 112s default-scheduler Successfully assigned default/lh-graphql-6db75544cf-j65wp to siddhalok Normal Pulling 112s kubelet Pulling image "hasura/graphql-engine:v2.13.2.cli-migrations-v3" UPD: The same happens when creating from command line: microk8s kubectl run det2 --image=registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector:current --image-pull-policy=IfNotPresent REPOSITORY TAG IMAGE ID CREATED SIZE postgres 12 3d6880d04326 2 weeks ago 373MB lh-develop.img latest f26c3c667fbe 5 weeks ago 2.82GB dpage/pgadmin4 latest 4d5afde0a02e 6 weeks ago 361MB detector latest e6f7e6567b73 7 weeks ago 3.81GB lh-detetctor.img latest e6f7e6567b73 7 weeks ago 3.81GB registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector current e6f7e6567b73 7 weeks ago 3.81GB
Try adding --platform=linux/amd64 to the FROM line in your Dockerfile: FROM --platform=linux/amd64 node:18-alpine as builder Build the image and run it on K8s. The exec format error typically means the image was built for a CPU architecture your K8s cluster does not support, e.g. an image built on an ARM-based Mac running on amd64 nodes. The optional --platform flag can be used to specify the platform of the image in case FROM references a multi-platform image. For example, linux/amd64, linux/arm64, or windows/amd64. By default, the target platform of the build request is used. Doc ref
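If you would rather leave the Dockerfile untouched, the target platform can also be forced at build time; this is only a sketch, the tag is a placeholder, and depending on your setup buildx may also need --load or --push:
docker buildx build --platform linux/amd64 -t ci-dashboard:latest .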
###Instruction: This is my docker file #!/bin/bash FROM node:10.8.0 as build-stage WORKDIR /app COPY package*.json /app/ RUN npm install COPY ./ /app/ ARG configuration=production RUN npm run build -- --output-path=./dist/out --configuration $configuration FROM nginx:1.15 #Copy ci-dashboard-dist COPY --from=build-stage /app/dist/out/ /usr/share/nginx/html #Copy default nginx configuration COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf I am getting error CrashLoopBackOff. When i try to find the log it says exec /usr/sbin/nginx: exec format error. How can i resolve this and get the app running ### Response: Try something like adding --platform=linux/amd64 in your Dockerfile FROM --platform=linux/amd64 node:18-alpine as builder Build the image and run it on K8s. it could be due to the building image on Mac or Arm and your K8s cluster is not supporting that architecture. The optional --platform flag can be used to specify the platform of the image in case FROM references a multi-platform image. For example, linux/amd64, linux/arm64, or windows/amd64. By default, the target platform of the build request is used. Doc ref
This is my docker file #!/bin/bash FROM node:10.8.0 as build-stage WORKDIR /app COPY package*.json /app/ RUN npm install COPY ./ /app/ ARG configuration=production RUN npm run build -- --output-path=./dist/out --configuration $configuration FROM nginx:1.15 #Copy ci-dashboard-dist COPY --from=build-stage /app/dist/out/ /usr/share/nginx/html #Copy default nginx configuration COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf I am getting error CrashLoopBackOff. When i try to find the log it says exec /usr/sbin/nginx: exec format error. How can i resolve this and get the app running
Are there any options for a private IP address that works like letsencrypt? : If you are using a private IP address and do not have DNS, you cannot use LetsEncrypt to obtain an SSL certificate. Instead, you will need a certificate from a Certificate Authority (CA) that can issue certificates for private IPs. To do this, generate a Certificate Signing Request (CSR) and submit it to the CA. The CA will then issue a certificate signed with its private key and send it back to you. You then install this certificate on your Istio gateway and use it to check whether requests to your microservices are using the HTTPS protocol. Additionally, make sure that each microservice has its own virtual service so that requests are routed to the correct microservice. Refer to this doc for more information. To set up an Istio gateway to check whether REST API requests are using the HTTPS protocol: you need to configure a gateway and virtual service in Istio. The gateway should be configured to route traffic on the HTTPS port to the port where your REST API is running. The virtual service should be configured to match requests that have the X-Forwarded-Proto header set to https and route them to the correct service. You can also configure Istio to reject requests whose X-Forwarded-Proto header is not set to https. Once you have configured the gateway and virtual service, you should be able to verify that requests to your REST API are using the HTTPS protocol. Refer to this doc on configuring a TLS ingress gateway.
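As a rough sketch of the gateway side, assuming your private-CA or self-signed certificate has been stored as a TLS secret named api-tls-secret in the namespace where the ingress gateway runs (all names here are placeholders):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: api-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true        # plain-HTTP requests are redirected to HTTPS
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: api-tls-secret
Each microservice's VirtualService would then attach to this gateway through its gateways field.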
###Instruction: What I am going to use: Microk8s istio addon Metallb addon Cert manager (if possible) With microk8s, I want to deploy several micro services. I want to set up istio gateway to check whether rest api requests are using https protocol. Each micro service has its own virtual service. If there is no dns, but can only use a private ip address(ex. 192.168.2xx.xxx), what do I have to do first? If this approach is not possible technically, please let me know. (With dns, letsencrypt would be a solution using cert-manager. Is there any options for a private IP address that works like letsencrypt?) ### Response: Are there any options for a private IP address that works like letsencrypt? : If you are using a private IP address and do not have DNS, you cannot use LetsEncrypt to obtain a SSL certificate. Instead, you will need to use a certificate from a Certificate Authority (CA) that can generate certificates for private IPs. To do this, you will need to generate a Certificate Signing Request (CSR) and submit it to the CA. The CA will then generate a certificate that is signed with its private key and send it back to you. You will then install this certificate on your Istio gateway and use it to check whether requests to your microservices are using HTTPS protocol. Additionally, you will need to ensure that each microservice has its own virtual service to make sure that the requests are routed to the correct microservice. Refer to this doc for more information. To set up istio gateway to check whether rest api requests are using https protocol. To set up an Istio gateway to check whether REST API requests are using the HTTPS protocol, you need to configure a gateway and virtual service in Istio. The gateway should be configured to route traffic on the HTTPS port to the port where your REST API is running. The virtual service should be configured to match requests that have the X-Forwarded-Proto header set to https and route them to the correct service. You can also configure Istio to reject requests that don't have the X-Forwarded-Proto header set to https or that have the X-Forwarded-Proto header set to http. Once you have configured the gateway and virtual service, you should be able to test that requests to your REST API are using the HTTPS protocol. Refer this doc on configuring TLS ingress gateway .
What I am going to use: Microk8s istio addon Metallb addon Cert manager (if possible) With microk8s, I want to deploy several micro services. I want to set up istio gateway to check whether rest api requests are using https protocol. Each micro service has its own virtual service. If there is no dns, but can only use a private ip address(ex. 192.168.2xx.xxx), what do I have to do first? If this approach is not possible technically, please let me know. (With dns, letsencrypt would be a solution using cert-manager. Is there any options for a private IP address that works like letsencrypt?)
I think you may have misunderstood the use of the --enable-helm parameter. It does not allow kustomize to perform helm-style templating on files, so when you write: apiVersion: v1 kind: Namespace metadata: name: {{ .Values.app.namespace }} labels: name: {{ .Values.app.namespace }} That doesn't do anything useful. It just generates invalid YAML output. The --enable-helm option allows you to explode Helm charts using Kustomize; see here for the documentation, but for example it allows you to process a kustomization.yaml file like this: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization helmCharts: - name: traefik repo: https://helm.traefik.io/traefik includeCRDs: true releaseName: example version: 20.8.0 valuesInline: deployment: replicas: 3 logs: access: enabled: true Running kubectl kustomize --enable-helm will cause kustomize to fetch the helm chart and run helm template on it, producing YAML manifests on stdout.
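If the goal is to feed your existing values.yml into a chart, the helmCharts entry can also point at it with valuesFile; a sketch based on the traefik example above, where the relative path is only an assumption about your directory layout:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: traefik
  repo: https://helm.traefik.io/traefik
  releaseName: example
  version: 20.8.0
  includeCRDs: true
  valuesFile: ../../values.yml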
###Instruction: kustomize build --enable-helm .I have the following project structure: project - helm-k8s - values.yml - Chart.yml - templates - base - project-namespace.yml - grafana - grafana-service.yml - grafana-deployment.yml - grafana-datasource-config.yml - prometheus - prometheus-service.yml - prometheus-deployment.yml - prometheus-config.yml - prometheus-roles.yml - kustomization.yml - prod - kustomization.yml - test - kustomization.yml I'm trying to build my kustomization file using helm like below: project/helm-k8s/templates/base/$ kubectl kustomize build . --enable-helm -> dummy.yml I get an error message like this: project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory Is it not possible for kustomize to use the values.yml which is located directly under helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here? EDIT: Here is how my kustomization.yml looks like: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization metadata: name: open-electrons-monitoring-kustomization resources: # 0. Get the namespaces first - project-namespace.yml # 1. Set up monitoring services (prometheus) #- monitoring/prometheus/prometheus-roles.yml - prometheus/prometheus-config.yml - prometheus/prometheus-roles.yml - prometheus/prometheus-deployment.yml - prometheus/prometheus-service.yml # 2. Set up monitoring services (grafana) - grafana/grafana-datasource-config.yml - grafana/grafana-deployment.yml - grafana/grafana-service.yml ### Response: I think you may have misunderstood the use of the --enable-helm parameter. It does not allow kustomize to perform helm-style templating on files, so when you write: apiVersion: v1 kind: Namespace metadata: name: {{ .Values.app.namespace }} labels: name: {{ .Values.app.namespace }} That doesn't do anything useful. It just generates invalid YAML output. The --enable-helm option allows you to explode Helm charts using Kustomize; see here for the documentation, but for example it allows you to process a kustomization.yaml file like this: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization helmCharts: - name: traefik repo: https://helm.traefik.io/traefik includeCRDs: true releaseName: example version: 20.8.0 valuesInline: deployment: replicas: 3 logs: access: enabled: true Running kubectl kustomize --enable-helm will cause kustomize to fetch the helm chart and run helm template on it, producing YAML manifests on stdout.
kustomize build --enable-helm .I have the following project structure: project - helm-k8s - values.yml - Chart.yml - templates - base - project-namespace.yml - grafana - grafana-service.yml - grafana-deployment.yml - grafana-datasource-config.yml - prometheus - prometheus-service.yml - prometheus-deployment.yml - prometheus-config.yml - prometheus-roles.yml - kustomization.yml - prod - kustomization.yml - test - kustomization.yml I'm trying to build my kustomization file using helm like below: project/helm-k8s/templates/base/$ kubectl kustomize build . --enable-helm -> dummy.yml I get an error message like this: project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory Is it not possible for kustomize to use the values.yml which is located directly under helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here? EDIT: Here is how my kustomization.yml looks like: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization metadata: name: open-electrons-monitoring-kustomization resources: # 0. Get the namespaces first - project-namespace.yml # 1. Set up monitoring services (prometheus) #- monitoring/prometheus/prometheus-roles.yml - prometheus/prometheus-config.yml - prometheus/prometheus-roles.yml - prometheus/prometheus-deployment.yml - prometheus/prometheus-service.yml # 2. Set up monitoring services (grafana) - grafana/grafana-datasource-config.yml - grafana/grafana-deployment.yml - grafana/grafana-service.yml
The search may be heavy due to the sheer volume of resources the query has to scan. Have you looked into this possibility? If so, try further reducing the result set with one more label or an additional filter on top of the current one. Also check the performance of your Kubernetes API server while the operation is running and optimize it if needed.
###Instruction: I need to get all resources based on label, I used the following code which works, However, it takes too much time ( ~20sec) to get the response, even which I restrict it to only one namespace (vrf), any idea what im doing wrong here? resource.NewBuilder(flags). Unstructured(). ResourceTypes(res...). NamespaceParam("vrf").AllNamespaces(false). LabelSelectorParam("a=b").SelectAllParam(selector == ""). Flatten(). Latest().Do().Object() https://pkg.go.dev/k8s.io/cli-runtime@v0.26.1/pkg/resource#Builder As I already using label and ns, not sure what should else I do in this case. Ive checked the cluster connection and it seems that everything is ok, running regular kubectl are getting very fast response, just this query took much time. ### Response: The search may be heavy due to the sheer size of the resources the query has to search into. Have you looked into this possibility and further reduce the size using one more label or filter on top of current. Also check the performance of you Kubernetes api server when the operation is being performed and optimize it.
I need to get all resources based on a label. I used the following code, which works; however, it takes too much time (~20 sec) to get the response, even when I restrict it to only one namespace (vrf). Any idea what I'm doing wrong here? resource.NewBuilder(flags). Unstructured(). ResourceTypes(res...). NamespaceParam("vrf").AllNamespaces(false). LabelSelectorParam("a=b").SelectAllParam(selector == ""). Flatten(). Latest().Do().Object() https://pkg.go.dev/k8s.io/cli-runtime@v0.26.1/pkg/resource#Builder As I am already using a label and a namespace, I am not sure what else I should do in this case. I've checked the cluster connection and everything seems fine; regular kubectl commands get a very fast response, it is just this query that takes a long time.
It has been almost 2 years since I posted this question. Now with 20-20 hindsight I thought I would post my solution. I ended up simply provisioning an Azure Cosmos Db in the region where my cluster lives, and hitting the Cosmos Db for all my query-side requirements. (My cluster already lives in the Azure Cloud.) I maintain one Postgres Db in my original cluster for my write-side requirements. And my app scales nicely in the cluster. I have not yet needed to deploy clusters to new regions. When that happens, I will provision a replica of the Cosmos Db in that additional region or regions. But still just one Postgres db for write-side requirements. Not going to bother to try to maintain/sync replicas of the Postgres db. Additional insight #1. By provisioning the Cosmos Db separately from my cluster (but in the same region), I am taking the load off of my cluster nodes. In effect, the Cosmos Db has its own dedicated compute resources. And backup etc. Additional insight #2. It is obvious now, but wasn't back then, that tightly coupling a document db (such as MongoDb) to a particular pod is...a bonkers bad idea. Imagine horizontally scaling your app: with each new instance of your app you would instantiate a new document db. You would quickly bloat up your nodes and crash your cluster. One read-side document db per cluster is an efficient and easy way to roll. Additional insight #3. The read side of any CQRS can get a nice jolt of adrenaline with the help of an in-memory cache like Redis. You can first see if some data is available in the cache before you hit the document db. I use this approach for data such as a checkout cart, where I will leave data in the cache for 24 hours and then let it expire. You could conceivably use Redis for all your read-side requirements, but memory could quickly become bloated. So the idea here is to consider deploying an in-memory cache on your cluster -- only one instance of the cache -- and have all your apps hit it for low latency/high availability, but do not use the cache as a replacement for the document db.
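For the single shared cache idea in insight #3, a minimal sketch of what that could look like on the cluster (names and image tag are placeholders; no persistence or auth configured):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  replicas: 1                      # one cache instance for the whole cluster
  selector:
    matchLabels:
      app: redis-cache
  template:
    metadata:
      labels:
        app: redis-cache
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cache
spec:
  selector:
    app: redis-cache
  ports:
  - port: 6379
    targetPort: 6379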
###Instruction: I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment. I am thinking of using MongoDb as the Query Database. The app is dotnet webapi app. So what would be the best approach: Create a sidecar Pod which containerizes the dotnet app AND the MongoDb together in one pod. Scale as needed. Containerize the MongoDb in its own pod and deploy one MongoDb pod PER REGION. And then have the dotnet containers use the MongoDb pod within its own region. Scale the MongoDb by region. And the dotnet pod as needed within and between Regions. Some other approach I haven't thought of ### Response: It has been almost 2 years since I posted this question. Now with 20-20 hindsight I thought I would post my solution. I ended up simply provisioning an Azure Cosmos Db in the region where my cluster lives, and hitting the Cosmos Db for all my query-side requirements. (My cluster already lives in the Azure Cloud) I maintain one Postges Db in my original cluster for my write-side requirements. And my app scales nicely in the cluster. I have not yet needed to deploy clusters to new regions. When that happens, I will provision a replica of the Cosmos Db to that additional region or regions. But still just one postgres db for write-side requirements. Not going to bother to try to maintain/sync replicas of the postgres db. Additional insight #1. By provisioning the the Cosmos Db separately from my cluster (but in the same region), I am taking the load off of my cluster nodes. In effect, the Cosmos Db has its own dedicated compute resources. And backup etc. Additional insight #2. It is obvious now but wasnt back then, that tightly coupling a document db (such as MongoDb) to a particular pod is...a bonkers bad idea. Imagine horizontally scaling your app and with each new instance of your app you would instantiate a new document db. You would quickly bloat up your nodes and crash your cluster. One read-side document db per cluster is an efficient and easy way to roll. Additional insight #3. The read side of any CQRS can get a nice jolt of adrenaline with the help of an in-memory cache like Redis. You can first see if some data is available in the cache before you hit the docuement db. I use this approach for data such as for a checkout cart, where I will leave data in the cache for 24 hours but then let it expire. You could conceivably use redis for all your read-side requirements, but memory could quickly become bloated. So the idea here is consider deploying an in-memory cache on your cluster -- only one instance of the cache -- and have all your apps hit it for low-latency/high-availability, but do not use the cache as a replacemet for the document db.
I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment. I am thinking of using MongoDb as the Query Database. The app is dotnet webapi app. So what would be the best approach: Create a sidecar Pod which containerizes the dotnet app AND the MongoDb together in one pod. Scale as needed. Containerize the MongoDb in its own pod and deploy one MongoDb pod PER REGION. And then have the dotnet containers use the MongoDb pod within its own region. Scale the MongoDb by region. And the dotnet pod as needed within and between Regions. Some other approach I haven't thought of
I have managed to make it work by deleting the Service definitions from the generated manifests and running kubectl expose deployment kafka-app instead. The issue comes from the labels Kompose generates.
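For reference, a sketch of the expose command with the ports from the manifests above; kubectl expose derives the Service selector from the Deployment's labels, which is presumably what fixes the mismatch:
kubectl expose deployment kafka-app --name=kafka-service --port=9092 --target-port=9092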
###Instruction: to start with - I am a sort of newbie to Kubernetes and I might omit some fundamentals. I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry. The Python app connects to the following address: KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092') kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: kafka name: kafka-app spec: replicas: 1 selector: matchLabels: io.kompose.service: kafka strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: kafka spec: containers: - env: - name: KAFKA_LISTENERS value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092" - name: KAFKA_ADVERTISED_LISTENERS value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092" - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT" - name: KAFKA_INTER_BROKER_LISTENER_NAME value: "LISTENER_INTERNAL" - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS value: "0" - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP image: finnhub-streaming-data-pipeline-kafka:latest imagePullPolicy: Never lifecycle: postStart: exec: command: ["/bin/sh","-c","/kafka-setup-k8s.sh"] name: kafka-app ports: - containerPort: 9092 - containerPort: 29092 resources: {} restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: kafka-service spec: selector: app: kafka ports: - protocol: TCP name: firstport port: 9092 targetPort: 9092 - protocol: TCP name: secondport port: 29092 targetPort: 29092 finnhub-producer.yaml (aka my Python app deployment): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: replicas: 1 selector: matchLabels: io.kompose.service: finnhubproducer strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe 
convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: finnhubproducer spec: containers: - env: - name: KAFKA_PORT value: "9092" - name: KAFKA_SERVER value: kafka-service.default.svc.cluster.local - name: KAFKA_TOPIC_NAME value: market image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest imagePullPolicy: Never name: finnhubproducer ports: - containerPort: 8001 resources: {} restartPolicy: Always status: {} --- apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: ports: - name: "8001" port: 8001 targetPort: 8001 selector: io.kompose.service: finnhubproducer status: loadBalancer: {} pipeline-network-networkpolicy.yaml: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: pipeline-network spec: ingress: - from: - podSelector: matchLabels: io.kompose.network/pipeline-network: "true" podSelector: matchLabels: io.kompose.network/pipeline-network: "true" EDIT: Dockerfile for Kafka image: FROM confluentinc/cp-kafka:6.2.0 COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh kafka-setup-k8s.sh: # blocks until kafka is reachable kafka-topics --bootstrap-server localhost:29092 --list echo -e 'Creating kafka topics' kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1 echo -e 'Successfully created the following topics:' kafka-topics --bootstrap-server localhost:29092 --list ### Response: I have managed to make it work by deleting services from deployment and running kubectl expose deployment kafka-app. The issue comes from Kompose labeling.
to start with - I am a sort of newbie to Kubernetes and I might omit some fundamentals. I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry. The Python app connects to the following address: KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092') kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: kafka name: kafka-app spec: replicas: 1 selector: matchLabels: io.kompose.service: kafka strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: kafka spec: containers: - env: - name: KAFKA_LISTENERS value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092" - name: KAFKA_ADVERTISED_LISTENERS value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092" - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT" - name: KAFKA_INTER_BROKER_LISTENER_NAME value: "LISTENER_INTERNAL" - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS value: "0" - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP image: finnhub-streaming-data-pipeline-kafka:latest imagePullPolicy: Never lifecycle: postStart: exec: command: ["/bin/sh","-c","/kafka-setup-k8s.sh"] name: kafka-app ports: - containerPort: 9092 - containerPort: 29092 resources: {} restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: kafka-service spec: selector: app: kafka ports: - protocol: TCP name: firstport port: 9092 targetPort: 9092 - protocol: TCP name: secondport port: 29092 targetPort: 29092 finnhub-producer.yaml (aka my Python app deployment): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: replicas: 1 selector: matchLabels: io.kompose.service: finnhubproducer strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert 
kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: finnhubproducer spec: containers: - env: - name: KAFKA_PORT value: "9092" - name: KAFKA_SERVER value: kafka-service.default.svc.cluster.local - name: KAFKA_TOPIC_NAME value: market image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest imagePullPolicy: Never name: finnhubproducer ports: - containerPort: 8001 resources: {} restartPolicy: Always status: {} --- apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: ports: - name: "8001" port: 8001 targetPort: 8001 selector: io.kompose.service: finnhubproducer status: loadBalancer: {} pipeline-network-networkpolicy.yaml: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: pipeline-network spec: ingress: - from: - podSelector: matchLabels: io.kompose.network/pipeline-network: "true" podSelector: matchLabels: io.kompose.network/pipeline-network: "true" EDIT: Dockerfile for Kafka image: FROM confluentinc/cp-kafka:6.2.0 COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh kafka-setup-k8s.sh: # blocks until kafka is reachable kafka-topics --bootstrap-server localhost:29092 --list echo -e 'Creating kafka topics' kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1 echo -e 'Successfully created the following topics:' kafka-topics --bootstrap-server localhost:29092 --list
Either way is fine and I wouldn't expect it to make a huge difference, except maybe at large scale. There is nothing architecturally wrong with running multiple worker tasks inside a single container, particularly within a framework that's specifically designed to do it. As you note it's usually considered good form to run one concern per container, and there are various issues with running multiple processes (recovering from failed processes, collecting logs); but if you have a system that's specifically designed to launch and manage subprocesses, running it in a container is fine. The questions I'd start asking here are around how many Resque workers you're planning to run at the same time. If it's "thousands", then you can start hitting limits around the number of pods per node and pressure on the Kubernetes scheduler. In that case, using multiple workers per container to cut down the number of pods can make some sense. If it's "dozens" then limiting it to just one could make it a little easier to visualize and manage. Starting up a new container can be somewhat expensive (I'm used to seeing 30-60s startup times, but depending heavily on the image) but having a running container isn't especially so. It looks like Resque has a manager process on top of some number of workers so you'll have those extra Rubies, but that's probably not significant memory or storage.
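As a sketch of the "a few workers per pod" middle ground, reusing the invocation from the question (image name and worker count are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resque-workers
spec:
  replicas: 2                       # scale out in pod-sized steps
  selector:
    matchLabels:
      app: resque-workers
  template:
    metadata:
      labels:
        app: resque-workers
    spec:
      containers:
      - name: worker
        image: my-app:latest        # placeholder image
        command: ["bundle", "exec", "rake", "resque:work"]
        env:
        - name: COUNT
          value: "4"                # 4 worker processes per pod
        - name: QUEUE
          value: "*"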
###Instruction: Context I am running a job-processing task (specifically, Resque) in a kubernetes setup. This task sets up one or more workers to takes job messages off a queue and processes them. A typical invocation is to set your desired worker count, e.g. COUNT=8 QUEUE=* resque:work. Of course, in Kubernetes, I am going to add autoscaling to the Deployment running this task. There's a prevailing recommendation to run 1 process per pod (see below). My concern is that doing so can be wasteful when the process I wish to run has a built-in multi-process management system to run identical processes. I am trying to understand the theory & docs to inform decisions and experiments. My motivation question is: is there any reason to continue setting worker count, or does it make more sense to use only one worker process per pod? I.e. is there significant overhead in each pod instance compared to letting Resque spawn multiple processes? Question The objective question is: where should I expect / look for overhead in running 1 process per pod vs letting 1 pod's main process spawn multiple children? E.g. IIUC each pod is running its own version of the OS and other utils installed in the container. So that at least is some memory overhead vs running a single container, single OS, mutli-Resque-worker setup; is that correct? What else should I be looking at, prior to simply benchmarking a bunch of guesses, to model resource consumption for this setup? More Context I understand that small process count allows for more granular scaling. I don't consider scaling at a finer resolution than, say, 4 processes at a time to be much benefit, so I'd start there if pod overhead should be considered. Am I overthinking it, and should I forget about pod overhead and just use a worker count of 1 per pod? This question is informed off of many "one process per pod" references out there. Many listed in this similar question and a stack exchange question linked therein. The linked question was concerned with scaling processes inside a pod to optimize node compute usage, which I get is well managed by k8s already The nested links are more about limiting to one concern per pod, which is the case in my question. My question is about overhead of running 4 identical worker processes in 4 pods vs in 1 pod. ### Response: Either way is fine and I wouldn't expect it to make a huge difference, except maybe at large scale. There is nothing architecturally wrong with running multiple worker tasks inside a single container, particularly within a framework that's specifically designed to do it. As you note it's usually considered good form to run one concern per container, and there are various issues with running multiple processes (recovering from failed processes, collecting logs); but if you have a system that's specifically designed to launch and manage subprocesses, running it in a container is fine. The questions I'd start asking here are around how many Resque workers you're planning to run at the same time. If it's "thousands", then you can start hitting limits around the number of pods per node and pressure on the Kubernetes scheduler. In that case, using multiple workers per container to cut down the number of pods can make some sense. If it's "dozens" then limiting it to just one could make it a little easier to visualize and manage. Starting up a new container can be somewhat expensive (I'm used to seeing 30-60s startup times, but depending heavily on the image) but having a running container isn't especially so. 
It looks like Resque has a manager process on top of some number of workers so you'll have those extra Rubies, but that's probably not significant memory or storage.
Context I am running a job-processing task (specifically, Resque) in a kubernetes setup. This task sets up one or more workers to takes job messages off a queue and processes them. A typical invocation is to set your desired worker count, e.g. COUNT=8 QUEUE=* resque:work. Of course, in Kubernetes, I am going to add autoscaling to the Deployment running this task. There's a prevailing recommendation to run 1 process per pod (see below). My concern is that doing so can be wasteful when the process I wish to run has a built-in multi-process management system to run identical processes. I am trying to understand the theory & docs to inform decisions and experiments. My motivation question is: is there any reason to continue setting worker count, or does it make more sense to use only one worker process per pod? I.e. is there significant overhead in each pod instance compared to letting Resque spawn multiple processes? Question The objective question is: where should I expect / look for overhead in running 1 process per pod vs letting 1 pod's main process spawn multiple children? E.g. IIUC each pod is running its own version of the OS and other utils installed in the container. So that at least is some memory overhead vs running a single container, single OS, mutli-Resque-worker setup; is that correct? What else should I be looking at, prior to simply benchmarking a bunch of guesses, to model resource consumption for this setup? More Context I understand that small process count allows for more granular scaling. I don't consider scaling at a finer resolution than, say, 4 processes at a time to be much benefit, so I'd start there if pod overhead should be considered. Am I overthinking it, and should I forget about pod overhead and just use a worker count of 1 per pod? This question is informed off of many "one process per pod" references out there. Many listed in this similar question and a stack exchange question linked therein. The linked question was concerned with scaling processes inside a pod to optimize node compute usage, which I get is well managed by k8s already The nested links are more about limiting to one concern per pod, which is the case in my question. My question is about overhead of running 4 identical worker processes in 4 pods vs in 1 pod.
As the error says: unexpected status: 403 Forbidden. You might be having an issue with the Deployment's image pull secret. For GKE you can use the service account JSON key. Ref doc : https://cloud.google.com/container-registry/docs/advanced-authentication#json-key With Terraform you can create the secret in GKE and reference it from your Deployment: resource "kubernetes_secret" "gcr" { type = "kubernetes.io/dockerconfigjson" metadata { name = "gcr-image-pull" namespace = "default" } data = { ".dockerconfigjson" = jsonencode({ auths = { "gcr.io" = { username = "_json_key" password = base64decode(google_service_account_key.myaccount.private_key) email = google_service_account.main.email auth = base64encode("_json_key:${ base64decode(google_service_account_key.myaccount.private_key) }") } } }) }} Or use kubectl to create the secret: kubectl create secret docker-registry gcr \ --docker-server=gcr.io \ --docker-username=_json_key \ --docker-password="$(cat google-service-account-key.json)" \ --docker-email=<Email address> Now, in your Pod or Deployment, reference the secret like this: apiVersion: v1 kind: Pod metadata: name: uses-private-registry spec: containers: - name: hello-app image: <image URI> imagePullSecrets: - name: secret-that-you-created Update: As per Guillaume's suggestion, for GKE/GCP the best practice is the *workload identity* option; with other, external repositories it might not work. Create the IAM service account in GCP: gcloud iam service-accounts create gke-workload-indentity \ --project=<project-id> Create a service account in the K8s cluster : apiVersion: v1 kind: ServiceAccount metadata: annotations: iam.gke.io/gcp-service-account: gke-workload-indentity@PROJECT-ID.iam.gserviceaccount.com name: gke-sa-workload namespace: default Bind the policy by running the gcloud command below: gcloud iam service-accounts add-iam-policy-binding gke-workload-indentity@PROJECT_ID.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:PROJECT_ID.svc.id.goog[default/K8s_SANAME]" Now you can create the Deployment/Pod with an image from GCR/Artifact Registry; just set the serviceAccountName: spec: containers: - name: container image: IMAGE serviceAccountName: gke-sa-workload Read more at : https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
###Instruction: I cannot pull artifact registry images to a newly created GKE cluster with Terraform and a user-defined service account. The terraform used to stand up the cluster is below. locals { service = "example" resource_prefix = format("%s-%s", local.service, var.env) location = format("%s-b", var.gcp_region) } resource "google_service_account" "main" { account_id = format("%s-sa", local.resource_prefix) display_name = format("%s-sa", local.resource_prefix) project = var.gcp_project } resource "google_container_cluster" "main" { name = local.resource_prefix description = format("Cluster primarily servicing the service %s", local.service) location = local.location remove_default_node_pool = true initial_node_count = 1 } resource "google_container_node_pool" "main" { name = format("%s-node-pool", local.resource_prefix) location = local.location cluster = google_container_cluster.main.name node_count = var.gke_cluster_node_count node_config { preemptible = true machine_type = var.gke_node_machine_type # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles. service_account = google_service_account.main.email oauth_scopes = [ "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/trace.append" ] } autoscaling { min_node_count = var.gke_cluster_autoscaling_min_node_count max_node_count = var.gke_cluster_autoscaling_max_node_count } } I run a helm deployment to deploy an application and get the following issue. default php-5996c7fbfd-d6xf5 0/1 ImagePullBackOff 0 37m Normal Pulling 36m (x4 over 37m) kubelet Pulling image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest" Warning Failed 36m (x4 over 37m) kubelet Failed to pull image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to resolve reference "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 36m (x4 over 37m) kubelet Error: ErrImagePull Warning Failed 35m (x6 over 37m) kubelet Error: ImagePullBackOff Seems to me that I've missed something to do with the service account. Although using cloud ssh I am able to generate an oauth token, but that also does not work using crictl UPDATE: issue resolved I have been able to resolve my problem with the following additional terraform code. resource "google_project_iam_member" "artifact_role" { role = "roles/artifactregistry.reader" member = "serviceAccount:${google_service_account.main.email}" project = var.gcp_project } ### Response: As error says : unexpected status: 403 Forbidden You might be having an issue with the Deployment imagepull secret. 
For GKE you can use the service account JSON Ref doc : https://cloud.google.com/container-registry/docs/advanced-authentication#json-key Terraform create secret in GKE which you can use it to deployment resource "kubernetes_secret" "gcr" { type = "kubernetes.io/dockerconfigjson" metadata { name = "gcr-image-pull" namespace = "default" } data = { ".dockerconfigjson" = jsonencode({ auths = { "gcr.io" = { username = "_json_key" password = base64decode(google_service_account_key.myaccount.private_key) email = google_service_account.main.email auth = base64encode("_json_key:${ base64decode(google_service_account_key.myaccount.private_key) }") } } }) }} Or use the kubectl to create the secret kubectl create secret docker-registry gcr \ --docker-server=gcr.io \ --docker-username=_json_key \ --docker-password="$(cat google-service-account-key.json)" \ --docker-email=<Email address> Now if you have the POD or deployment you can create YAML config like apiVersion: v1 kind: Pod metadata: name: uses-private-registry spec: containers: - name: hello-app image: <image URI> imagePullSecrets: - name: secret-that-you-created Update: As per Guillaume's suggestion for GKE/GCP you can follow *workload identity* option as best practice with other extern repo it might could not work. Create the IAM service account in GCP: gcloud iam service-accounts create gke-workload-indentity \ --project=<project-id> Create a service account in the K8s cluster : apiVersion: v1 kind: ServiceAccount metadata: annotations: iam.gke.io/gcp-service-account: gke-workload-indentity@PROJECT-ID.iam.gserviceaccount.com name: gke-sa-workload namespace: default Policy binding run below Gcloud command : gcloud iam service-accounts add-iam-policy-binding gke-workload-indentity@PROJECT_ID.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:PROJECT_ID.svc.id.goog[default/K8s_SANAME]" Now you can create the deployment POD with image in GCR/astifact repo just update the ServiceAccount spec: containers: - name: container image: IMAGE serviceAccountName: gke-sa-workload Read more at : https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
I cannot pull artifact registry images to a newly created GKE cluster with Terraform and a user-defined service account. The terraform used to stand up the cluster is below. locals { service = "example" resource_prefix = format("%s-%s", local.service, var.env) location = format("%s-b", var.gcp_region) } resource "google_service_account" "main" { account_id = format("%s-sa", local.resource_prefix) display_name = format("%s-sa", local.resource_prefix) project = var.gcp_project } resource "google_container_cluster" "main" { name = local.resource_prefix description = format("Cluster primarily servicing the service %s", local.service) location = local.location remove_default_node_pool = true initial_node_count = 1 } resource "google_container_node_pool" "main" { name = format("%s-node-pool", local.resource_prefix) location = local.location cluster = google_container_cluster.main.name node_count = var.gke_cluster_node_count node_config { preemptible = true machine_type = var.gke_node_machine_type # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles. service_account = google_service_account.main.email oauth_scopes = [ "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/trace.append" ] } autoscaling { min_node_count = var.gke_cluster_autoscaling_min_node_count max_node_count = var.gke_cluster_autoscaling_max_node_count } } I run a helm deployment to deploy an application and get the following issue. default php-5996c7fbfd-d6xf5 0/1 ImagePullBackOff 0 37m Normal Pulling 36m (x4 over 37m) kubelet Pulling image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest" Warning Failed 36m (x4 over 37m) kubelet Failed to pull image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to resolve reference "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 36m (x4 over 37m) kubelet Error: ErrImagePull Warning Failed 35m (x6 over 37m) kubelet Error: ImagePullBackOff Seems to me that I've missed something to do with the service account. Although using cloud ssh I am able to generate an oauth token, but that also does not work using crictl UPDATE: issue resolved I have been able to resolve my problem with the following additional terraform code. resource "google_project_iam_member" "artifact_role" { role = "roles/artifactregistry.reader" member = "serviceAccount:${google_service_account.main.email}" project = var.gcp_project }
Your Service's selector (app: kafka) doesn't match the labels on the Deployment's pod template (io.kompose.service: kafka), so the Service isn't connected to any pods. I suggest you use Strimzi (or Confluent for Kubernetes if you want to use their images) rather than converting your existing Docker Compose file with Kompose, which rarely gets network policies correct. In fact, you can probably remove the network labels and the network policy completely, as they aren't really necessary within the same namespace. Regarding your Python app, you shouldn't need to define the Kafka host and port separately; use one variable such as KAFKA_BOOTSTRAP_SERVERS, which can accept multiple brokers, including their ports.
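A sketch of the corrected Service, reusing the label Kompose already puts on the pod template and the port names from the question:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    io.kompose.service: kafka       # must match the pod template labels, not the Deployment name
  ports:
  - name: firstport
    protocol: TCP
    port: 9092
    targetPort: 9092
  - name: secondport
    protocol: TCP
    port: 29092
    targetPort: 29092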
###Instruction: to start with - I am a sort of newbie to Kubernetes and I might omit some fundamentals. I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry. The Python app connects to the following address: KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092') kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: kafka name: kafka-app spec: replicas: 1 selector: matchLabels: io.kompose.service: kafka strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: kafka spec: containers: - env: - name: KAFKA_LISTENERS value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092" - name: KAFKA_ADVERTISED_LISTENERS value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092" - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT" - name: KAFKA_INTER_BROKER_LISTENER_NAME value: "LISTENER_INTERNAL" - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS value: "0" - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP image: finnhub-streaming-data-pipeline-kafka:latest imagePullPolicy: Never lifecycle: postStart: exec: command: ["/bin/sh","-c","/kafka-setup-k8s.sh"] name: kafka-app ports: - containerPort: 9092 - containerPort: 29092 resources: {} restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: kafka-service spec: selector: app: kafka ports: - protocol: TCP name: firstport port: 9092 targetPort: 9092 - protocol: TCP name: secondport port: 29092 targetPort: 29092 finnhub-producer.yaml (aka my Python app deployment): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: replicas: 1 selector: matchLabels: io.kompose.service: finnhubproducer strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe 
convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: finnhubproducer spec: containers: - env: - name: KAFKA_PORT value: "9092" - name: KAFKA_SERVER value: kafka-service.default.svc.cluster.local - name: KAFKA_TOPIC_NAME value: market image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest imagePullPolicy: Never name: finnhubproducer ports: - containerPort: 8001 resources: {} restartPolicy: Always status: {} --- apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: ports: - name: "8001" port: 8001 targetPort: 8001 selector: io.kompose.service: finnhubproducer status: loadBalancer: {} pipeline-network-networkpolicy.yaml: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: pipeline-network spec: ingress: - from: - podSelector: matchLabels: io.kompose.network/pipeline-network: "true" podSelector: matchLabels: io.kompose.network/pipeline-network: "true" EDIT: Dockerfile for Kafka image: FROM confluentinc/cp-kafka:6.2.0 COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh kafka-setup-k8s.sh: # blocks until kafka is reachable kafka-topics --bootstrap-server localhost:29092 --list echo -e 'Creating kafka topics' kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1 echo -e 'Successfully created the following topics:' kafka-topics --bootstrap-server localhost:29092 --list ### Response: Your service's app selector is kafka, whereas the deployment is kafka-app, so they aren't connected. I suggest you use Strimzi (or Confluent for Kubernetes if you want to use their images), not convert your existing Docker Compose file using Kompose, as it rarely gets network policies correct. If fact, you can probably remove the network labels and remove the network policy completely, as it isn't really necessary in the same namespace. Regarding your Python app, you shouldn't need to separately define Kafka host and port; use one variable for KAFKA_BOOTSTRAP_SERVERS, which can accept multiple brokers, including their ports
to start with - I am a sort of newbie to Kubernetes and I might omit some fundamentals. I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry. The Python app connects to the following address: KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092') kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: kafka name: kafka-app spec: replicas: 1 selector: matchLabels: io.kompose.service: kafka strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: kafka spec: containers: - env: - name: KAFKA_LISTENERS value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092" - name: KAFKA_ADVERTISED_LISTENERS value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092" - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT" - name: KAFKA_INTER_BROKER_LISTENER_NAME value: "LISTENER_INTERNAL" - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS value: "0" - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR value: "1" - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP image: finnhub-streaming-data-pipeline-kafka:latest imagePullPolicy: Never lifecycle: postStart: exec: command: ["/bin/sh","-c","/kafka-setup-k8s.sh"] name: kafka-app ports: - containerPort: 9092 - containerPort: 29092 resources: {} restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: kafka-service spec: selector: app: kafka ports: - protocol: TCP name: firstport port: 9092 targetPort: 9092 - protocol: TCP name: secondport port: 29092 targetPort: 29092 finnhub-producer.yaml (aka my Python app deployment): apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: replicas: 1 selector: matchLabels: io.kompose.service: finnhubproducer strategy: {} template: metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert 
kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.network/pipeline-network: "true" io.kompose.service: finnhubproducer spec: containers: - env: - name: KAFKA_PORT value: "9092" - name: KAFKA_SERVER value: kafka-service.default.svc.cluster.local - name: KAFKA_TOPIC_NAME value: market image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest imagePullPolicy: Never name: finnhubproducer ports: - containerPort: 8001 resources: {} restartPolicy: Always status: {} --- apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert kompose.version: 1.27.0 (b0ed6a2c9) creationTimestamp: null labels: io.kompose.service: finnhubproducer name: finnhubproducer spec: ports: - name: "8001" port: 8001 targetPort: 8001 selector: io.kompose.service: finnhubproducer status: loadBalancer: {} pipeline-network-networkpolicy.yaml: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: pipeline-network spec: ingress: - from: - podSelector: matchLabels: io.kompose.network/pipeline-network: "true" podSelector: matchLabels: io.kompose.network/pipeline-network: "true" EDIT: Dockerfile for Kafka image: FROM confluentinc/cp-kafka:6.2.0 COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh kafka-setup-k8s.sh: # blocks until kafka is reachable kafka-topics --bootstrap-server localhost:29092 --list echo -e 'Creating kafka topics' kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1 echo -e 'Successfully created the following topics:' kafka-topics --bootstrap-server localhost:29092 --list
mdaniel's comment works! - description: Label value jsonPath: '.metadata.labels.a\.b\.c\.com/key' name: LABEL type: string You need to use \. instead of . and wrap the path in single quotes ' '. It doesn't work with double quotes, for reasons I don't understand.
###Instruction: I use kubectl to list Kubernetes custom resources of a kind mykind with an additional table column LABEL that contains the value of a label a.b.c.com/key if present: kubectl get mykind -o=custom-columns=LABEL:.metadata.labels.'a\.b\.c\.com/key' This works, i.e., the label value is properly displayed. Subsequently, I wanted to add a corresponding additional printer column to the custom resource definition of mykind: - description: Label value jsonPath: .metadata.labels.'a\.b\.c\.com/key' name: LABEL type: string Although the additional column is added to kubectl get mykind, it is empty and no label value is shown (in contrast to above kubectl command). My only suspicion were problems with escaping of the special characters - but no variation helped. Are you aware of any difference between the JSON path handling in kubectl and additional printer columns? I expected strongly that they are exactly the same. ### Response: mdaniel's comment works! - description: Label value jsonPath: '.metadata.labels.a\.b\.c\.com/key' name: LABEL type: string You need to use \. instead of . and use single quotes ' '. It doesn't work with double quotes for the reasons I don't understand
I use kubectl to list Kubernetes custom resources of a kind mykind with an additional table column LABEL that contains the value of a label a.b.c.com/key if present: kubectl get mykind -o=custom-columns=LABEL:.metadata.labels.'a\.b\.c\.com/key' This works, i.e., the label value is properly displayed. Subsequently, I wanted to add a corresponding additional printer column to the custom resource definition of mykind: - description: Label value jsonPath: .metadata.labels.'a\.b\.c\.com/key' name: LABEL type: string Although the additional column is added to kubectl get mykind, it is empty and no label value is shown (in contrast to above kubectl command). My only suspicion were problems with escaping of the special characters - but no variation helped. Are you aware of any difference between the JSON path handling in kubectl and additional printer columns? I expected strongly that they are exactly the same.
I might have found the reason why the status was not updated. Before updating the status, I was also updating the spec fields (to give the user some feedback on created resources). The issue is caused by the fact that the spec update triggers a new reconciliation, and the instructions after this update (among them the status update) were not executed. I realized that using the spec to give feedback to the user is not suitable, and that events are more appropriate for this purpose.
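A hedged sketch of the event-based approach (assuming the reconciler is wired up with an EventRecorder from the manager, e.g. mgr.GetEventRecorderFor("haaudit-controller"); the reconciler type name, reason and message strings here are purely illustrative):

// imports assumed:
//   corev1 "k8s.io/api/core/v1"
//   "k8s.io/client-go/tools/record"
//   "sigs.k8s.io/controller-runtime/pkg/client"

type HAAuditReconciler struct {
    client.Client
    Recorder record.EventRecorder
}

// inside Reconcile: report feedback as an event instead of mutating spec fields,
// then persist only the status subresource.
r.Recorder.Event(upCRD, corev1.EventTypeNormal, "ResourcesCreated",
    "created the resources requested by this HAAudit")

upCRD.Status.Created = true
if err := r.Status().Update(ctx, upCRD); err != nil {
    return ctrl.Result{}, client.IgnoreNotFound(err)
}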
###Instruction: I am building a Go Kubernetes operator. I have used kubebuilder to create it. I want to store some internal details in the CRD status. I have tried : To update the whole resource : if err = r.Client.Update(ctx, upCRD); err != nil { return ctrl.Result{}, client.IgnoreNotFound(err) } And to update only the status : if err = r.Status().Update(ctx, upCRD); err != nil { return reconcile.Result{}, client.IgnoreNotFound(err) } The status struct is defined as follows : type HAAuditStatus struct { ChaosStrategyCron cron.EntryID `json:"chaosStrategyCron,omitempty"` TestStatus TestStatus `json:"testStatus,omitempty"` MetricStatus MetricStatus `json:"metricStatus,omitempty"` RoundRobinStrategy RoundRobinStrategy `json:"roundRobinStrategy,omitempty"` FixedStrategy FixedStrategy `json:"fixedStrategy,omitempty"` NextChaosDateTime int64 `json:"nextChaosDateTime,omitempty"` Created bool `json:"created,default=false"` } No error is raised and the specs fields modified are actually persisted but not the status field whose values remain the default at the next reconciling step. I have looked at the other issues on GitHub or StackOverflow but any suggestion made solved my issue and I can't figure out what is the problem. For a a bigger picture, you can refer to the repo where the operator is located. Any suggestion is very welcomed :) ### Response: I might have found the reason why the status were not updated. Before updating the status, I was also updating the spec fields (to give some feedback to the user on created resources). The issue is caused by the fact that the specs updates trigger a new reconcilation, and the instruction after this update (among them the status update) were not execute. I realized that using specs to give feedback to the user is not suitable and the events were more appropriate for this purpose.
I am building a Go Kubernetes operator. I have used kubebuilder to create it. I want to store some internal details in the CRD status. I have tried : To update the whole resource : if err = r.Client.Update(ctx, upCRD); err != nil { return ctrl.Result{}, client.IgnoreNotFound(err) } And to update only the status : if err = r.Status().Update(ctx, upCRD); err != nil { return reconcile.Result{}, client.IgnoreNotFound(err) } The status struct is defined as follows : type HAAuditStatus struct { ChaosStrategyCron cron.EntryID `json:"chaosStrategyCron,omitempty"` TestStatus TestStatus `json:"testStatus,omitempty"` MetricStatus MetricStatus `json:"metricStatus,omitempty"` RoundRobinStrategy RoundRobinStrategy `json:"roundRobinStrategy,omitempty"` FixedStrategy FixedStrategy `json:"fixedStrategy,omitempty"` NextChaosDateTime int64 `json:"nextChaosDateTime,omitempty"` Created bool `json:"created,default=false"` } No error is raised and the specs fields modified are actually persisted but not the status field whose values remain the default at the next reconciling step. I have looked at the other issues on GitHub or StackOverflow but any suggestion made solved my issue and I can't figure out what is the problem. For a a bigger picture, you can refer to the repo where the operator is located. Any suggestion is very welcomed :)
The Kubernetes API is served over HTTP, so you can check RFC 7231 about that A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request. Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent. If you're using kubectl, the delete command will fail on the second run because the resource cannot be found. You can prevent it from failing by using the --ignore-not-found flag. $ kubectl run nginx --image nginx pod/nginx created $ kubectl delete pod nginx pod "nginx" deleted $ kubectl delete pod nginx Error from server (NotFound): pods "nginx" not found $ kubectl delete pod nginx --ignore-not-found So it's idempotent on the server but not on the client.
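On the Go side (this looks like the controller-runtime client), a hedged sketch of the equivalent client-side tolerance, using client.IgnoreNotFound so that a second Delete of an already-removed object is not treated as an error:

// import assumed: "sigs.k8s.io/controller-runtime/pkg/client"
if err := c.Delete(ctx, obj); client.IgnoreNotFound(err) != nil {
    return err // a real failure, not just "already deleted"
}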
###Instruction: Is calling Delete in the Kubernetes go API an idempotent operation, i.e. can it safely be called twice? If so, is there any documentation defining this property? The go code just states Delete deletes the given obj from Kubernetes cluster. Essentially, this statement is what one would expect anyway when looking at the code. ### Response: The api service is based on http, so you can check RFC 7231 about that A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request. Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent. If your're using kubectl the delete command will fail on the second run, because the resource can not be found. You can prevent it from failing by using the --ignore-not-found flag. $ kubectl run nginx --image nginx pod/nginx created $ kubectl delete pod nginx pod "nginx" deleted $ kubectl delete pod nginx Error from server (NotFound): pods "nginx" not found $ kubectl delete pod nginx --ignore-not-found So it's idempotent on the server but not on the client.
Is calling Delete in the Kubernetes go API an idempotent operation, i.e. can it safely be called twice? If so, is there any documentation defining this property? The go code just states Delete deletes the given obj from Kubernetes cluster. Essentially, this statement is what one would expect anyway when looking at the code.
The Deployment has your specified name "nginx-deployment". However, you did not define a Pod with a fixed name; you defined a template for the pods managed by this Deployment. The Deployment manages 3 pods (because of your replicas: 3), so it will use the template to build these three pods. There will also be a ReplicaSet with a hash in its name, and this will manage the Pods, but this is better seen by following the example below. Since a Deployment can manage multiple pods (like your example with 3 replicas) or needs a new Pod when updating them, it will not use exactly the name specified in the template, but will always append a hash value to keep the names unique. But now you would have the problem of putting all Pods behind one Kubernetes Service for load balancing, because they have different names. This is why you define a label "app: nginx" in your template, so all 3 Pods will have this label regardless of their names and the other labels set by Kubernetes. The Service uses its selector to find the correct Pods. In your case it will search for them by the label "app: nginx". So yes, the Service will expose all 3 Pods of your Deployment and will load-balance traffic between them. You can use --show-labels with kubectl get pods to see the names and the assigned labels. For a more complete example see: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
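For example (pod names will differ in your cluster because of the generated hash suffixes), you can verify both points from the command line:

kubectl get pods --show-labels     # pods named nginx-deployment-<hash>-<suffix>, carrying the app=nginx label
kubectl get pods -l app=nginx      # the exact set of pods the Service selector matches
kubectl get endpoints nginx        # confirms the Service picked up all 3 pod IPs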
###Instruction: I have a question about labels and names, in this example manifest file apiVersion: apps/v 1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 I can see that the name of the deployment is "nginx-deployment" and the pod name is "nginx"? or is it the running container? Then I see in the console that the pods would have a hash attached to the end of its name, I believe this is the revision number? I just want to decipher the names from the labels from the matchLables, so for example I can use this service manifest to expose the pods with a certain label: apiVersion: v1 kind: Service metadata: name: nginx spec: type: LoadBalancer selector: app: nginx ports: - protocol: TCP port: 60000 targetPort: 80 will this service expose all pods with the selector : app:nginx ? thanks ### Response: The Deployment has your specifed name "nginx-deployment". However you did not define a Pod with a fixed name, you define a template for the pods managed by this deployment. The Deployment manages 3 pods (because of your replicas 3), so it will use the template to build this three pods. There will also be a Replica Set with a hash and this will manage the Pods, but this is better seen by following the example below. Since a deployment can manage multiple pods (like your example with 3 replicas) or needing one new Pod when updating them, a deployment will not exactly use the name specified in the template, but will always append a hash value to keep them unique. But now you would have the problem to have all Pods loadbalanced behind one Kubernetes Service, because they have different names. This is why you denfine a label "app:nignx" in your template so all 3 Pods will have this label regardless of there name and other labels set by Kubernetes. The Service uses the selector to find the correct Pods. In your case it will search them by label "app:nginx". So yes the Service will expose all 3 Pods of your deployment and will loadbalance trafic between them. You can use --show-labels for kubectl get pods to see the name and the assigned labels. For a more complete example see: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
I have a question about labels and names, in this example manifest file apiVersion: apps/v 1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 I can see that the name of the deployment is "nginx-deployment" and the pod name is "nginx"? or is it the running container? Then I see in the console that the pods would have a hash attached to the end of its name, I believe this is the revision number? I just want to decipher the names from the labels from the matchLables, so for example I can use this service manifest to expose the pods with a certain label: apiVersion: v1 kind: Service metadata: name: nginx spec: type: LoadBalancer selector: app: nginx ports: - protocol: TCP port: 60000 targetPort: 80 will this service expose all pods with the selector : app:nginx ? thanks
This environment variable comes from compatibility with a very old Docker feature. You can disable it in Kubernetes by setting enableServiceLinks: false in the Pod spec (it is a field of the Pod spec itself, a sibling of containers), anywhere a Pod template appears. For example: apiVersion: apps/v1 kind: Deployment spec: template: spec: enableServiceLinks: false containers: - name: ... env: [...] In particular, the syntax is intended to be compatible with the environment variables generated by container links in first-generation Docker networking. Since then Docker has also introduced a DNS system into its core, and in pure Docker using links at all is now considered obsolete. It should be safe to always set this Kubernetes property, especially if it causes conflicts like the one you describe here.
###Instruction: During pod startup Kubernetes is creating some environment variables based on services i created (via downward API?). Problem is that one of them, MY_APPLICATION_PORT, seems to be initialized incorrectly, it looks like: MY_APPLICATION_PORT=tcp://192.168.0.5:7777 whereas i expect it to hold only 7777 value. The problem is that i have a Spring Boot application that has this property in application.properties: my.application.port=7777 So when spring resolves it's properties, it prefers value from environment variable over one from .properties file, thus overwriting it with incorrect value. My question is - do you guys know how to control creation of kubernetes env variables? I can overwrite it in my deployment.yaml, but I wonder if there's another way. EDIT: I've found this as a closest description of my issue I've seen online: https://github.com/kubernetes/kubernetes/issues/65130 ### Response: This environment variable comes from compatibility with a very old Docker feature. You can disable it in Kubernetes by setting enableServiceLinks: false on a Container object in a Pod spec, anywhere that may appear. For example: apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - name: ... enableServiceLinks: false env: [...] In particular the syntax is intended to be compatible with the environment variables generated by container links in first-generation Docker networking. Since then Docker has also introduced a DNS system into its core, and in pure Docker using links at all is now considered obsolete. It should be safe to always set this Kubernetes property, especially if it causes conflicts like what you describe here.
During pod startup Kubernetes is creating some environment variables based on services i created (via downward API?). Problem is that one of them, MY_APPLICATION_PORT, seems to be initialized incorrectly, it looks like: MY_APPLICATION_PORT=tcp://192.168.0.5:7777 whereas i expect it to hold only 7777 value. The problem is that i have a Spring Boot application that has this property in application.properties: my.application.port=7777 So when spring resolves it's properties, it prefers value from environment variable over one from .properties file, thus overwriting it with incorrect value. My question is - do you guys know how to control creation of kubernetes env variables? I can overwrite it in my deployment.yaml, but I wonder if there's another way. EDIT: I've found this as a closest description of my issue I've seen online: https://github.com/kubernetes/kubernetes/issues/65130
The load balancer that gets created with the AKS cluster (usually called kubernetes) is used for egress (not ingress) traffic and is a public LB; it cannot be private. This is part of the outbound type configuration. The "outbound type" of the AKS cluster can be set to "LoadBalancer, UserDefinedRouting or managedNatGateway". If you choose any option other than the load balancer, you will need to configure your network to route the traffic externally. Check this doc for more info. For ingress traffic, you have the choice of a public or private LB. This is configured in the Service resource (of type LoadBalancer) in Kubernetes, where you would use the annotation that you mentioned to create a private LB. Public rules will use the same public LB created with the cluster. You can set the private IP of the LB using annotations as well: annotations: service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25 service.beta.kubernetes.io/azure-load-balancer-internal: "true" This is mentioned in the same doc that you shared.
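A minimal sketch of an internal ingress Service using those annotations (the service name, port, and private IP are placeholders; the IP must be available inside the AKS subnet):

apiVersion: v1
kind: Service
metadata:
  name: internal-app            # hypothetical name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25   # optional fixed private IP
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: internal-app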
###Instruction: I have an Azure kubernetes cluster created using the following Terraform code # Required Provider terraform { required_providers { azurerm = { source = "hashicorp/azurerm" version = "~> 3.0.2" } } required_version = ">= 1.1.0" } data "azurerm_client_config" "current" {} provider "azurerm" { subscription_id = local.subscription_id tenant_id = local.tenant_id client_id = local.client_id client_secret = local.client_secret features {} } resource "random_pet" "rg-name" { prefix = var.resource_group_name_prefix } resource "azurerm_resource_group" "rg" { name = random_pet.rg-name.id location = var.resource_group_location } resource "azurerm_virtual_network" "test" { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } tags = var.tags } data "azurerm_subnet" "kubesubnet" { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource "azurerm_kubernetes_cluster" "k8s" { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix private_cluster_enabled = var.private_cluster resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = "agentpool" node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = "azure" dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr load_balancer_sku = "standard" } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } timeouts { create = "20m" delete = "20m" } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } It creates the Load Balancer with Public IP as shown below However, I don't want to have a public IP for the Load balancer instead it should have the Internal Private IP. What should I to do have this load balancer with Internal Private IP and Service is not exposed over the Internet using the Public IP? Note: As per the Microsoft documentation, even if you annotate with annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true", external IP is still assigned which I am trying to avoid. ### Response: The load balancer that gets created with the AKS cluster (usually called kubernetes) is used for egress (not ingress) traffic and is a public LB, and it cannot be private. This is part of the outbound type configuration. The "outbound type" of the AKS cluster can be set to "LoadBalancer, UserDefinedRouting or managedNatGateway". if you choose any option other than LB, then you would need to configure your network to route the traffic externally. check this doc for more info. 
For ingress traffic, you have the choice to use public or private LB. This is configured in the service resource (of type LoadBalancer) under kubernetes, where you would use the annotation that you mentioned to create a private LB. Public rules will use the same public LB created with the cluster. You can set the private Ip of the LB using annotations as well: annotations: service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25 service.beta.kubernetes.io/azure-load-balancer-internal: "true" This is mentioned in the same doc that you shared.
I have an Azure kubernetes cluster created using the following Terraform code # Required Provider terraform { required_providers { azurerm = { source = "hashicorp/azurerm" version = "~> 3.0.2" } } required_version = ">= 1.1.0" } data "azurerm_client_config" "current" {} provider "azurerm" { subscription_id = local.subscription_id tenant_id = local.tenant_id client_id = local.client_id client_secret = local.client_secret features {} } resource "random_pet" "rg-name" { prefix = var.resource_group_name_prefix } resource "azurerm_resource_group" "rg" { name = random_pet.rg-name.id location = var.resource_group_location } resource "azurerm_virtual_network" "test" { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } tags = var.tags } data "azurerm_subnet" "kubesubnet" { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource "azurerm_kubernetes_cluster" "k8s" { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix private_cluster_enabled = var.private_cluster resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = "agentpool" node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = "azure" dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr load_balancer_sku = "standard" } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } timeouts { create = "20m" delete = "20m" } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } It creates the Load Balancer with Public IP as shown below However, I don't want to have a public IP for the Load balancer instead it should have the Internal Private IP. What should I to do have this load balancer with Internal Private IP and Service is not exposed over the Internet using the Public IP? Note: As per the Microsoft documentation, even if you annotate with annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true", external IP is still assigned which I am trying to avoid.
If you are trying to upgrade RKE and are facing this issue, it could be due to the kube_config_<file>.yml file missing from the local directory when you perform rke up. A similar issue was reported and reproduced in this git link. Can you refer to the workaround, reproduce it using the steps provided in the link, and let me know if this works? Refer to this latest SO and doc for more information.
###Instruction: i'm trying to set up a rke cluster, the connection to the nodes goes well but when it starts to check etcd health returns: failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate ### Response: If you are trying to upgrade the RKE and facing this issue then it could be due to the missing of kube_config_<file>.yml file from the local directory when you perform rke up. This similar kind of issue was reported and reproduced in this git link . Can you refer to the work around and reproduce it by using the steps provided in the link and let me know if this works. Refer to this latest SO and doc for more information.
i'm trying to set up a rke cluster, the connection to the nodes goes well but when it starts to check etcd health returns: failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
Solved using actuator endpoint @Slf4j @Component @ControllerEndpoint(id = "preStopHook") class WebMvcPreStopHookEndpoint { @ResponseStatus(OK) @GetMapping("/{delayInMillis}") public ResponseEntity<Void> preStopHook(@PathVariable("delayInMillis") final long delayInMillis) throws InterruptedException { log.info("[preStopHook] received signal to sleep for {}ms", delayInMillis); Thread.sleep(delayInMillis); return null; } }
###Instruction: Does anyone had success implementing preStop hook with distroless/java17:nonroot ? Here is my (default) deployment: # [...] lifecycle: preStop: exec: command: ["sh", "-c", "sleep 10"] securityContext: readOnlyRootFilesystem: true allowPrivilegeEscalation: false privileged: false runAsUser: 65532 capabilities: drop: - ALL volumeMounts: - name: tmp-volume mountPath: /tmp ### Response: Solved using actuator endpoint @Slf4j @Component @ControllerEndpoint(id = "preStopHook") class WebMvcPreStopHookEndpoint { @ResponseStatus(OK) @GetMapping("/{delayInMillis}") public ResponseEntity<Void> preStopHook(@PathVariable("delayInMillis") final long delayInMillis) throws InterruptedException { log.info("[preStopHook] received signal to sleep for {}ms", delayInMillis); Thread.sleep(delayInMillis); return null; } }
Does anyone had success implementing preStop hook with distroless/java17:nonroot ? Here is my (default) deployment: # [...] lifecycle: preStop: exec: command: ["sh", "-c", "sleep 10"] securityContext: readOnlyRootFilesystem: true allowPrivilegeEscalation: false privileged: false runAsUser: 65532 capabilities: drop: - ALL volumeMounts: - name: tmp-volume mountPath: /tmp
I think this had to do with a specific setting I had enabled for Docker Desktop for Mac: I had changed from the default value (gRPC FUSE) to VirtioFS. When I changed it back to gRPC FUSE a number of issues resolved themselves, this one included.
###Instruction: I am seeing a very strange issue trying to start the official postgres:14.6-alpine image on Kubernetes. For reference the official postgres image allows for configuring the initialization script using the POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB environment variables. I have the following secret and configmap defined: --- apiVersion: v1 kind: Secret metadata: namespace: default name: postgres-credentials data: DATABASE_URL: cG9zdGdyZXM6Ly9sZXRzY2h1cmNoOnBhc3N3b3JkQHBvc3RncmVzOjU0MzIvbGV0c2NodXJjaA== POSTGRES_USER: bGV0c2NodXJjaA== POSTGRES_PASSWORD: cGFzc3dvcmQ= --- apiVersion: v1 kind: ConfigMap metadata: namespace: default name: postgres-config data: POSTGRES_DB: letschurch The value POSTGRES_USER value of bGV0c2NodXJjaA== decodes to letschurch and the POSTGRES_PASSWORD value of cGFzc3dvcmQ= decodes to password. I also have the following deployment: --- apiVersion: apps/v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: restartPolicy: Always containers: - image: postgres:14.6-alpine name: postgres ports: - containerPort: 5432 envFrom: - configMapRef: name: postgres-config - secretRef: name: postgres-credentials When I shell into the running container, I can echo out the environment variables, and they appear to be intact: postgres-74f67b778-lsv4c:/# echo $POSTGRES_USER letschurch postgres-74f67b778-lsv4c:/# echo $POSTGRES_PASSWORD password postgres-74f67b778-lsv4c:/# echo $POSTGRES_DB letschurch postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_USER | wc -c 10 postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_PASSWORD | wc -c 8 postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_DB | wc -c 10 postgres-74f67b778-lsv4c:/# [ "$POSTGRES_USER" = "$POSTGRES_DB" ] && echo 'good!' good! However, I am not able to connect with the role letschurch. I can connect as temporal (another role I have set up with an init script), and when I run \l and \du I see that the role (but not the database name) have a + appended: temporal=> \l List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges ---------------------+------------+----------+------------+------------+-------------------------- letschurch | letschurch+| UTF8 | en_US.utf8 | en_US.utf8 | temporal=> \du List of roles Role name | Attributes | Member of ------------+------------------------------------------------------------+----------- letschurch+| Superuser, Create role, Create DB, Replication, Bypass RLS | {} | | temporal | | {} At first I thought that the base64-encoded POSTGRES_USER environment variable might have some whitespace or something encoded in it, so I double checked that I was encoding the value properly with echo -n letschurch | base64, and as you can see in the shell output above the resulting value is exactly 10 characters long, no extra whitespace. Also, the POSTGRES_USER and POSTGRES_DB environment variables are equal, but they appear to result in different outcomes in postgres. Also, this does not happen with docker-compose. Given the following configuration, everything works as expected: postgres: image: postgres:14.6-alpine environment: POSTGRES_USER: letschurch POSTGRES_PASSWORD: password POSTGRES_DB: letschurch ports: - '5432:5432' What am I missing here? Why does the letschurch role get a + appended to it? Is there something to do with secrets that doesn't apply to configMaps that I'm missing? 
### Response: I think this had to do with a specific setting I had enabled for Docker Desktop for Mac: I had changed from the default value (gRPC FUSE) to VirtioFS. When I changed it back to gRPC FUSE a number of issues resolved themselves, this one included.
I am seeing a very strange issue trying to start the official postgres:14.6-alpine image on Kubernetes. For reference the official postgres image allows for configuring the initialization script using the POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB environment variables. I have the following secret and configmap defined: --- apiVersion: v1 kind: Secret metadata: namespace: default name: postgres-credentials data: DATABASE_URL: cG9zdGdyZXM6Ly9sZXRzY2h1cmNoOnBhc3N3b3JkQHBvc3RncmVzOjU0MzIvbGV0c2NodXJjaA== POSTGRES_USER: bGV0c2NodXJjaA== POSTGRES_PASSWORD: cGFzc3dvcmQ= --- apiVersion: v1 kind: ConfigMap metadata: namespace: default name: postgres-config data: POSTGRES_DB: letschurch The value POSTGRES_USER value of bGV0c2NodXJjaA== decodes to letschurch and the POSTGRES_PASSWORD value of cGFzc3dvcmQ= decodes to password. I also have the following deployment: --- apiVersion: apps/v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: restartPolicy: Always containers: - image: postgres:14.6-alpine name: postgres ports: - containerPort: 5432 envFrom: - configMapRef: name: postgres-config - secretRef: name: postgres-credentials When I shell into the running container, I can echo out the environment variables, and they appear to be intact: postgres-74f67b778-lsv4c:/# echo $POSTGRES_USER letschurch postgres-74f67b778-lsv4c:/# echo $POSTGRES_PASSWORD password postgres-74f67b778-lsv4c:/# echo $POSTGRES_DB letschurch postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_USER | wc -c 10 postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_PASSWORD | wc -c 8 postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_DB | wc -c 10 postgres-74f67b778-lsv4c:/# [ "$POSTGRES_USER" = "$POSTGRES_DB" ] && echo 'good!' good! However, I am not able to connect with the role letschurch. I can connect as temporal (another role I have set up with an init script), and when I run \l and \du I see that the role (but not the database name) have a + appended: temporal=> \l List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges ---------------------+------------+----------+------------+------------+-------------------------- letschurch | letschurch+| UTF8 | en_US.utf8 | en_US.utf8 | temporal=> \du List of roles Role name | Attributes | Member of ------------+------------------------------------------------------------+----------- letschurch+| Superuser, Create role, Create DB, Replication, Bypass RLS | {} | | temporal | | {} At first I thought that the base64-encoded POSTGRES_USER environment variable might have some whitespace or something encoded in it, so I double checked that I was encoding the value properly with echo -n letschurch | base64, and as you can see in the shell output above the resulting value is exactly 10 characters long, no extra whitespace. Also, the POSTGRES_USER and POSTGRES_DB environment variables are equal, but they appear to result in different outcomes in postgres. Also, this does not happen with docker-compose. Given the following configuration, everything works as expected: postgres: image: postgres:14.6-alpine environment: POSTGRES_USER: letschurch POSTGRES_PASSWORD: password POSTGRES_DB: letschurch ports: - '5432:5432' What am I missing here? Why does the letschurch role get a + appended to it? Is there something to do with secrets that doesn't apply to configMaps that I'm missing?
OK, fixed. I had to enable the CSI driver (the Compute Engine persistent disk CSI driver) on our legacy cluster, as described here...
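For reference, a hedged sketch of how that addon is typically enabled on an existing cluster (the cluster name and zone are placeholders):

gcloud container clusters update CLUSTER_NAME \
  --zone ZONE \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED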
###Instruction: Almost two years later, we are experiencing the same issue as described in this SO post. Our workloads had been working without any disruption since 2018, and they suddenly stopped because we had to renew certificates. Then we've not been able to start the workloads again... The failure is caused by the fact that pods try to mount a persistence disk via NFS, and the nfs-server pod (based on gcr.io/google_containers/volume-nfs:0.8) can't mount the persistent disk. We have upgraded from 1.23 to 1.25.5-gke.2000 (experimenting with a few intermediary previous) and hence have also switched to containerd. We have recreated everything multiple times with slight varioations, but no luck. Pods definitely cannot access any persistent disk. We've checked basic things such as: the persistent disks and cluster are in the same zone as the GKE cluster, the service account used by the pods has the necessary permissions to access the disk, etc. No logs are visible on, each pod, which is also strange since logging seems to be correctly configured. Here is the nfs-server.yaml: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: role: nfs-server name: nfs-server spec: replicas: 1 selector: matchLabels: role: nfs-server strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: role: nfs-server spec: containers: - image: gcr.io/google_containers/volume-nfs:0.8 imagePullPolicy: IfNotPresent name: nfs-server ports: - containerPort: 2049 name: nfs protocol: TCP - containerPort: 20048 name: mountd protocol: TCP - containerPort: 111 name: rpcbind protocol: TCP resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /exports name: webapp-disk dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - gcePersistentDisk: fsType: ext4 pdName: webapp-data-disk name: webapp-disk status: {} ### Response: OK, fixed. I had to enable the CI driver on our legacy cluster, as described here...
Almost two years later, we are experiencing the same issue as described in this SO post. Our workloads had been working without any disruption since 2018, and they suddenly stopped because we had to renew certificates. Then we've not been able to start the workloads again... The failure is caused by the fact that pods try to mount a persistence disk via NFS, and the nfs-server pod (based on gcr.io/google_containers/volume-nfs:0.8) can't mount the persistent disk. We have upgraded from 1.23 to 1.25.5-gke.2000 (experimenting with a few intermediary previous) and hence have also switched to containerd. We have recreated everything multiple times with slight varioations, but no luck. Pods definitely cannot access any persistent disk. We've checked basic things such as: the persistent disks and cluster are in the same zone as the GKE cluster, the service account used by the pods has the necessary permissions to access the disk, etc. No logs are visible on, each pod, which is also strange since logging seems to be correctly configured. Here is the nfs-server.yaml: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: role: nfs-server name: nfs-server spec: replicas: 1 selector: matchLabels: role: nfs-server strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: role: nfs-server spec: containers: - image: gcr.io/google_containers/volume-nfs:0.8 imagePullPolicy: IfNotPresent name: nfs-server ports: - containerPort: 2049 name: nfs protocol: TCP - containerPort: 20048 name: mountd protocol: TCP - containerPort: 111 name: rpcbind protocol: TCP resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /exports name: webapp-disk dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - gcePersistentDisk: fsType: ext4 pdName: webapp-data-disk name: webapp-disk status: {}
Here is a workaround; it is not the finest way of doing it, but it may help point you in the right direction: kubectl api-resources -oname |while read r; do echo -n "$r ----> "; kubectl get $r -A -o yaml |grep -oPz '\d+\.\d+\.\d+\.\d+'; echo "" ; done Assuming the IP you are searching for is a cluster resource.
###Instruction: Update: What I am trying to achieve: I have the IP address of a cluster resource (10.1.239.128 in the examples below) and want to figure out to which POD/service/... it belongs. I set up a Kuebernetes cluster using Microk8s running, among others, vault and cert-manager via Helm charts. In two of the PODs created for vault and cert-manager, I get multiple the following log entry: > kubectl logs pod/cert-manager-webhook-6787f645b9-s5fxl http: TLS handshake error from 10.1.239.128:37763: EOF > kubectl logs pod/release-name-vault-0 http: TLS handshake error from 10.1.239.128:48609: remote error: tls: unknown certificate authority I am struggling with finding out where these requests come from. I tried: kubectl get svc --all-namespaces kubectl get endpoints --all-namespaces kubectl get endpointslices --all-namespaces ping -a 10.1.239.128 using from a POD created using netshoot to figure out from where the requests are coming from, but to no luck. The ip address is not listed or resolved to a DNS name to further investigate. Any further ideas of how to get the resource issuing the request or further debug this? Thanks in advance. ### Response: Here is a workaround, not the finest way of doing it, but it may help to get in the right direction: kubectl api-resources -oname |while read r; do echo -n "$r ----> "; kubectl get $r -A -o yaml |grep -oPz '\d+\.\d+\.\d+\.\d+'; echo "" ; done Assuming the IP, you are searching for is a cluster resource.
Update: What I am trying to achieve: I have the IP address of a cluster resource (10.1.239.128 in the examples below) and want to figure out to which POD/service/... it belongs. I set up a Kuebernetes cluster using Microk8s running, among others, vault and cert-manager via Helm charts. In two of the PODs created for vault and cert-manager, I get multiple the following log entry: > kubectl logs pod/cert-manager-webhook-6787f645b9-s5fxl http: TLS handshake error from 10.1.239.128:37763: EOF > kubectl logs pod/release-name-vault-0 http: TLS handshake error from 10.1.239.128:48609: remote error: tls: unknown certificate authority I am struggling with finding out where these requests come from. I tried: kubectl get svc --all-namespaces kubectl get endpoints --all-namespaces kubectl get endpointslices --all-namespaces ping -a 10.1.239.128 using from a POD created using netshoot to figure out from where the requests are coming from, but to no luck. The ip address is not listed or resolved to a DNS name to further investigate. Any further ideas of how to get the resource issuing the request or further debug this? Thanks in advance.
If you need Helm to pick up a specific file and pass it as a value, you should not pass the value itself in the values YAML file, but via another flag when installing or upgrading the release. The command you are using is just applying the Helm values as-is, since the -f flag does not support parsing other files into the values by itself. Instead, use --set-file, which works similarly to --set, but gets the value content from the passed file. Your command would now look like this: helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \ --namespace=monitoring \ --create-namespace \ -f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" \ --set-file promtail.extraScrapeConfigs=extrascrapeconfig.yaml
###Instruction: It seems that excluding logs in a pod using the configuration below does not work. extrascrapeconfig.yaml: - job_name: kubernetes-pods-app pipeline_stages: - docker: {} kubernetes_sd_configs: - role: pod relabel_configs: - action: drop regex: .+ source_labels: - __meta_kubernetes_pod_label_name ### - action: keep regex: ambassador source_labels: - __meta_kubernetes_namespace - __meta_kubernetes_pod_namespace ### To Reproduce Steps to reproduce the behavior: Deployed helm loki-stack : helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \ --namespace=monitoring \ --create-namespace \ -f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" loki-stack-values-v2.4.1.yaml: loki: enabled: true config: promtail: enabled: true extraScrapeConfigs: extrascrapeconfig.yaml fluent-bit: enabled: false grafana: enabled: false prometheus: enabled: false Attach grafana to loki datasource Query: {namespace="kube-system"} in Grafana Loki RESULT: See logs Expected behavior: Not seeing any logs Environment: Infrastructure: Kubernetes Deployment tool: Helm What am I missing? ### Response: If you need Helm to pick up a specific file and pass it as a value, you should not pass the value itself in the values YAML file, but via another flag when installing or upgrading the release. The command you are using is just applying the Helm values as-is, since the -f flag does not support parsing other files into the values by itself. Instead, use --set-file, which works similarly to --set, but gets the value content from the passed file. Your command would now look like this: helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \ --namespace=monitoring \ --create-namespace \ -f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" \ --set-file promtail.extraScrapeConfigs=extrascrapeconfig.yaml
It seems that excluding logs in a pod using the configuration below does not work. extrascrapeconfig.yaml: - job_name: kubernetes-pods-app pipeline_stages: - docker: {} kubernetes_sd_configs: - role: pod relabel_configs: - action: drop regex: .+ source_labels: - __meta_kubernetes_pod_label_name ### - action: keep regex: ambassador source_labels: - __meta_kubernetes_namespace - __meta_kubernetes_pod_namespace ### To Reproduce Steps to reproduce the behavior: Deployed helm loki-stack : helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \ --namespace=monitoring \ --create-namespace \ -f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" loki-stack-values-v2.4.1.yaml: loki: enabled: true config: promtail: enabled: true extraScrapeConfigs: extrascrapeconfig.yaml fluent-bit: enabled: false grafana: enabled: false prometheus: enabled: false Attach grafana to loki datasource Query: {namespace="kube-system"} in Grafana Loki RESULT: See logs Expected behavior: Not seeing any logs Environment: Infrastructure: Kubernetes Deployment tool: Helm What am I missing?
This does what you want; note that --sort-by sorts ascending (oldest first), so the most recent pod is the last matching line, hence tail -1: kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -1
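A slightly more reusable variant of the same idea, as a sketch: the PREFIX variable and the anchored grep are illustrative additions, not part of the original answer.

PREFIX=service-job-
kubectl get pods --sort-by=.metadata.creationTimestamp \
  -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' \
  | grep "^${PREFIX}" | tail -1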
###Instruction: In my namespace, I have several pods named with the same prefix, followed by a random string. There are also other pods, named differently. The result of kubectl get pods would look something like this: service-job-12345abc service-job-abc54321 other-job-54321cba I need to find the name of the most recently created pod starting with "service-job-". I found this thread, which helps with getting the name of the most recent pod in general. This one gets me the complete names of pods starting with a specific prefix. What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one. Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state. ### Response: This does what you want; note that --sort-by sorts ascending (oldest first), so the most recent pod is the last matching line, hence tail -1: kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -1
In my namespace, I have several pods named with the same prefix, followed by a random string. There are also other pods, named differently. The result of kubectl get pods would look something like this: service-job-12345abc service-job-abc54321 other-job-54321cba I need to find the name of the most recently created pod starting with "service-job-". I found this thread, which helps with getting the name of the most recent pod in general. This one gets me the complete names of pods starting with a specific prefix. What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one. Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state.
Turning the comment into an answer, as it resolved @David's issue. Because the user-defined service account is used for the node_pool, the appropriate roles need to be bound to this service account. In this case: roles/artifactregistry.reader (see "Configuring artifact registry permissions"). Best practice is to grant the minimum required roles.
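If you want to grant the binding outside Terraform, a hedged gcloud equivalent (PROJECT_ID and the service-account email are placeholders for the values created above):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:example-prod-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"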
###Instruction: I cannot pull artifact registry images to a newly created GKE cluster with Terraform and a user-defined service account. The terraform used to stand up the cluster is below. locals { service = "example" resource_prefix = format("%s-%s", local.service, var.env) location = format("%s-b", var.gcp_region) } resource "google_service_account" "main" { account_id = format("%s-sa", local.resource_prefix) display_name = format("%s-sa", local.resource_prefix) project = var.gcp_project } resource "google_container_cluster" "main" { name = local.resource_prefix description = format("Cluster primarily servicing the service %s", local.service) location = local.location remove_default_node_pool = true initial_node_count = 1 } resource "google_container_node_pool" "main" { name = format("%s-node-pool", local.resource_prefix) location = local.location cluster = google_container_cluster.main.name node_count = var.gke_cluster_node_count node_config { preemptible = true machine_type = var.gke_node_machine_type # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles. service_account = google_service_account.main.email oauth_scopes = [ "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/trace.append" ] } autoscaling { min_node_count = var.gke_cluster_autoscaling_min_node_count max_node_count = var.gke_cluster_autoscaling_max_node_count } } I run a helm deployment to deploy an application and get the following issue. default php-5996c7fbfd-d6xf5 0/1 ImagePullBackOff 0 37m Normal Pulling 36m (x4 over 37m) kubelet Pulling image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest" Warning Failed 36m (x4 over 37m) kubelet Failed to pull image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to resolve reference "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 36m (x4 over 37m) kubelet Error: ErrImagePull Warning Failed 35m (x6 over 37m) kubelet Error: ImagePullBackOff Seems to me that I've missed something to do with the service account. Although using cloud ssh I am able to generate an oauth token, but that also does not work using crictl UPDATE: issue resolved I have been able to resolve my problem with the following additional terraform code. resource "google_project_iam_member" "artifact_role" { role = "roles/artifactregistry.reader" member = "serviceAccount:${google_service_account.main.email}" project = var.gcp_project } ### Response: Turning comment to answer as it resolved @David's issue. Because the user defined service account is being used for the node_pool the appropriate roles need to be bound to this service account. In this case: roles/artifactregistry.reader Configuring artifact registry permissions Best practice is to grant the minimum required roles.
I cannot pull artifact registry images to a newly created GKE cluster with Terraform and a user-defined service account. The terraform used to stand up the cluster is below. locals { service = "example" resource_prefix = format("%s-%s", local.service, var.env) location = format("%s-b", var.gcp_region) } resource "google_service_account" "main" { account_id = format("%s-sa", local.resource_prefix) display_name = format("%s-sa", local.resource_prefix) project = var.gcp_project } resource "google_container_cluster" "main" { name = local.resource_prefix description = format("Cluster primarily servicing the service %s", local.service) location = local.location remove_default_node_pool = true initial_node_count = 1 } resource "google_container_node_pool" "main" { name = format("%s-node-pool", local.resource_prefix) location = local.location cluster = google_container_cluster.main.name node_count = var.gke_cluster_node_count node_config { preemptible = true machine_type = var.gke_node_machine_type # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles. service_account = google_service_account.main.email oauth_scopes = [ "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/cloud-platform", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/trace.append" ] } autoscaling { min_node_count = var.gke_cluster_autoscaling_min_node_count max_node_count = var.gke_cluster_autoscaling_max_node_count } } I run a helm deployment to deploy an application and get the following issue. default php-5996c7fbfd-d6xf5 0/1 ImagePullBackOff 0 37m Normal Pulling 36m (x4 over 37m) kubelet Pulling image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest" Warning Failed 36m (x4 over 37m) kubelet Failed to pull image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to resolve reference "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 36m (x4 over 37m) kubelet Error: ErrImagePull Warning Failed 35m (x6 over 37m) kubelet Error: ImagePullBackOff Seems to me that I've missed something to do with the service account. Although using cloud ssh I am able to generate an oauth token, but that also does not work using crictl UPDATE: issue resolved I have been able to resolve my problem with the following additional terraform code. resource "google_project_iam_member" "artifact_role" { role = "roles/artifactregistry.reader" member = "serviceAccount:${google_service_account.main.email}" project = var.gcp_project }
I don't know anything about AKS, but some HTTP server implementations (for instance nginx) don't accept underscores in HTTP header names. As for why the server API key contains a trailing newline: it may be a platform thing. I would just be pragmatic about it and remove it before use: process.env.API_KEY.trim()
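A minimal sketch of the comparison with both quirks handled; it assumes the client sends the key in an apikey header (no underscore) and that the env var may carry a trailing newline. Names are taken from the question, and this is not the original middleware:

// Hedged sketch, not the original middleware:
const clientApiKey = (req.get('apikey') || '').trim();     // underscore-free header name
const serverApiKey = (process.env.API_KEY || '').trim();   // strip the trailing newline from the secret
if (clientApiKey && clientApiKey === serverApiKey) { return next(); }
return res.status(400).send({ status: false, message: 'Invalid Api Key' });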
###Instruction: I'm deploying my Node.js server on AKS and I'm experiencing a few unexpected behaviours not happening on my local machine. I've tried various things to debug it but I'm not succeeding at all. I use a Middleware as exports.clientApiKeyValidation = (req, res, next) => { try { const clientApiKey = req.get('api_key'); console.log(`received api_key is ${clientApiKey} and expected API_KEY is ${process.env.API_KEY}`); const receivedTypeof = console.log('clientApiKey ', typeof clientApiKey); const expectedTypeof = console.log('expected ', typeof process.env.API_KEY); console.log('req.headers is: ', req.headers); if (!clientApiKey) { return res.status(400).send({ status: false, message: "Missing Api Key" }); } if (clientApiKey === process.env.API_KEY) { // console.log('Api key correct'); next(); } else { return res.status(400).send({ status: false, message: "Invalid Api Key" }); } } catch (error) { res.status(401).json({ error: new Error('Invalid request!') }); } } used on routes like router.get('/users', auth.clientApiKeyValidation, userController.findUsers); In app.js I also set a few things, including a middleware function to allow this params as app.disable('x-powered-by'); app.use(express.json({ limit: '50mb' })); app.use(function (req, res, next) { // either res.setHeader or res.header works. // res.setHeader('Access-Control-Allow-Origin', 'http://localhost:5000'); res.setHeader( 'Access-Control-Allow-Origin', 'https://xxx.westeurope.cloudapp.azure.com' ); res.setHeader( 'Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE' ); // res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE'); res.setHeader( 'Access-Control-Allow-Headers', 'content-type, api_key, AuthToken, apikey' ); // res.setHeader('Access-Control-Allow-Credentials', true); // res.header('Access-Control-Allow-Origin', 'http://localhost:5000'); // res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE'); // res.header('Access-Control-Allow-Headers', 'Origin, Content-Type, api_key, Accept'); // console.log(' res headers added: ', res.getHeaders()); next(); }); app.use(express.urlencoded({ extended: true })); app.use(cors()); app.use(express.json()); app.use(express.json({ type: 'application/json' })); I'm sending requests to it from my Flutter app adding headers like await _firebaseAuth.currentUser?.getIdToken().then((idToken) { headers = { 'Content-Type': 'application/json', 'api_key': Environment.dbApiKey, 'AuthToken': idToken }; }); FixitUser? userDetails; // final Uri uri = Uri.http(Environment.dbUrl, '/api/users'); final Uri uri = Uri.https(Environment.dbUrl, '/server/api/users'); log('headers are $headers'); await get(uri, headers: headers).then((resp) { log('UserMongoDBRepository.downloadUserDetails resp.body is : ${resp.body}'); ... 
When running the server directly middleware logs show it all works as expected received api_key is hjkdiu-slia7h-si9udd989jw-ols8dh and expected API_KEY is hjkdiu-slia7h-si9udd989jw-ols8dh clientApiKey string expected string req.headers is: { 'user-agent': 'Dart/2.18 (dart:io)', 'accept-encoding': 'gzip', api_key: 'hjkdiu-slia7h-si9udd989jw-ols8dh', host: '192.168.1.48:3000', 'content-type': 'application/json', authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjc1Nzc1MTg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU4NDY4MDAsImV4cCI6MTY3NTg1MDQwMCwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.ZNJFrxWlycMVgg4VAdWt6Q0WR5yrWPar5_UJwhQ9-hVX25aKC69yDpoM2adx3OIQ-hlsGz1bNvVEUSfvWUWRWF-TaX2TVeLb5z0blAtl8A1cGcvnaryffr0jpCtN_nhDg3WTtHw4B2USDe432TxSXx0ICrk8bx_fum8jsfBvAh_xU8bnr6lLsc0pltcmU-zun0rhZcC6jpxua2d0jIwWyWSXurjNBkLkduzOpUbw6KCLur7wVcxz-HmuD67D0qx5dr37malLbOBt5mcfOtdJEJcEq55XqXqUanopT_OWTWxByMrD_CXyZnLu_q1DgSBMM4kaLrSA14ETD_EvIeqHOQ', } When instead is running on AKS cluster api_key is missing from received request headers as the logs show, so the middleware responds with {"status":false,"message":"Missing Api Key"} received api_key is undefined and expected API_KEY is some api key req.headers is: { host: 'xxx.westeurope.cloudapp.azure.com', 'x-request-id': '2ecc2ec74c808cf40f816921374f72d4', 'x-real-ip': '81.56.11.23', 'x-forwarded-for': '81.56.11.23', 'x-forwarded-host': 'xxx.westeurope.cloudapp.azure.com', 'x-forwarded-port': '443', 'x-forwarded-proto': 'https', 'x-forwarded-scheme': 'https', 'x-scheme': 'https', 'user-agent': 'Dart/2.18 (dart:io)', 'accept-encoding': 'gzip', 'content-type': 'application/json', authtoken: 
'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjcxODc0Nzg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU3NjkyNzksImV4cCI6MTY3NTc3Mjg3OSwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.d8emU2BKNBV5oe3YQVHT8M8otFos_RvEmeyutdmYBDhnzyPgMZTAn_l3JikRAbcTNdDOAVoutZgTb5s8d6di3plAoE240OwwZTuSwxVpSaS7fDPt_rjQf9k2RmVsRa-fq1SWIP2ejdEbma_QngLSpXO0-PSPx4wa7mThjv2enP00TpUB9RDsOPK2QKlwOX9i1gc1_7kOPGJwouG3S3W4_kOXIxSoVjAT0P9k2xtHa99W-_gwn-9YqM1UoHrLkEs-ONKpe5SWLIet9r_PvI2l1zqb-1fGBmoeBhyzSijw_cOLJSayEoImtkCOmAA0rhMNYc--Yxuzd8EMyyp1U9dThg' } The headers prints from Flutter shows the api_key being correctly set. headers are {Content-Type: application/json, api_key: some api key, AuthToken: eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjcxODc0Nzg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU3NjkyNzksImV4cCI6MTY3NTc3Mjg3OSwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.d8emU2BKNBV5oe3YQVHT8M8otFos_RvEmeyutdmYBDhnzyPgMZTAn_l3JikRAbcTNdDOAVoutZgTb5s8d6di3plAoE240OwwZTuSwxVpSaS7fDPt_rjQf9k2RmVsRa-fq1SWIP2ejdEbma_QngLSpXO0-PSPx4wa7mThjv2enP00TpUB9RDsOPK2QKlwOX9i1gc1_7kOPGJwouG3S3W4_kOXIxSoVjAT0P9k2xtHa99W-_gwn-9YqM1UoHrLkEs-ONKpe5SWLIet9r_PvI2l1zqb-1fGBmoeBhyzSijw_cOLJSayEoImtkCOmAA0rhMNYc--Yxuzd8EMyyp1U9dThg} So as a test instead of api_key I used apikey and it gets retrieved in the request, but then the === check fails so the middleware responds with {"status":false,"message":"Invalid Api Key"} even it their type and value are identical, as checked here https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Strict_equality received api_key is hjkdiu-slia7h-si9udd989jw-ols8dh and expected API_KEY is hjkdiu-slia7h-si9udd989jw-ols8dh clientApiKey string expected string req.headers is: { host: 'fixit1.westeurope.cloudapp.azure.com', 'x-request-id': '515ad2a00b1a3db24a69e09f6d181036', 'x-real-ip': '81.56.11.23', 'x-forwarded-for': '81.56.11.23', 'x-forwarded-host': 'fixit1.westeurope.cloudapp.azure.com', 'x-forwarded-port': '443', 'x-forwarded-proto': 'https', 'x-forwarded-scheme': 'https', 'x-scheme': 'https', 'user-agent': 'Dart/2.18 (dart:io)', 'accept-encoding': 
'gzip', 'content-type': 'application/json', authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjc1Nzc1MTg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU4NDMxNzgsImV4cCI6MTY3NTg0Njc3OCwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.lDMgmTJkrm6ZnoyEU1F7Sjoo7Y8or8ZKzIrwBJ9ssQiR8yN5KD2ZhyM6yyR_Arscmyg1ZV_6RsTnFgGsVsmjiMzyX6TOXmYkmRlvvMjjjFsV8rW_W_gIdVld6vSg-JMrOlLcCeBknFDJC50bbNGYBSwQ2_C_MZIKlbFWWrqME988MOiUBlyT86t5Oofc5uVMETrpBf0a-wsFRdyEX-3uj-T3MRHza62PTcpHURoptQdIzYsBSc6WxR6WCINVjx__DbWlWrGt612Mw4iLv1XReiGriQEjTDc9cXbG0ngbiRsn0ojvZ-Jb8Pb6kj7gWRYDRsKFg2nxxFMhVeSDuIeO-w', apikey: 'hjkdiu-slia7h-si9udd989jw-ols8dh' } As a second test I changed the === operator to == in the middleware but still it returns false and responds with {"status":false,"message":"Invalid Api Key"}. Then I tried to use const serverApiKey = JSON.stringify(process.env.API_KEY); and const clientApiKey = JSON.stringify(req.get('apikey')); for the comparison and they actually do yield different results clientApiKey json string is: "hjkdiu-slia7h-si9udd989jw-ols8dh" serverApiKey json string is: "hjkdiu-slia7h-si9udd989jw-ols8dh\n" env.API_KEY does get it's value from a k8s secret, base64 encoded with echo -n hjkdiu-slia7h-si9udd989jw-ols8dh |base64 command. To see if it was a matter of an empty space in the encoded value from the secret I tried encoding it with and without the -n flag but they result in the same json encoded string. I'm not sure it could have something to do with the docker image I create so here's the Dockrfile FROM node:18.3.0 WORKDIR /usr/app # where available (npm@5+) COPY ./ package.json ./ ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true # Istall the dependencies RUN npm install # If you are building your code for production # RUN npm ci --only=production COPY ./ ./ # Add the following lines for create-react-app bug, workaround ENV CI=true ENV WDS_SOCKET_PORT=0 CMD ["npm", "start"] Why api_key param is not present while apikey param is in the received request headers? Why the \n in the serverApiKey but not in the clientApiKey ? Why is this happening only on AKS and not on my local machine? Thank you very much as always. ### Response: I don't know anything about AKS, but some HTTP server implementations (for instance nginx) don't accept underscores in HTTP header names. As for why the server API keys contains a trailing newline: it may be a platform thing. I would just be pragmatic about it and remove it before use: process.API_KEY.trim()
I'm deploying my Node.js server on AKS and I'm experiencing a few unexpected behaviours not happening on my local machine. I've tried various things to debug it but I'm not succeeding at all. I use a Middleware as exports.clientApiKeyValidation = (req, res, next) => { try { const clientApiKey = req.get('api_key'); console.log(`received api_key is ${clientApiKey} and expected API_KEY is ${process.env.API_KEY}`); const receivedTypeof = console.log('clientApiKey ', typeof clientApiKey); const expectedTypeof = console.log('expected ', typeof process.env.API_KEY); console.log('req.headers is: ', req.headers); if (!clientApiKey) { return res.status(400).send({ status: false, message: "Missing Api Key" }); } if (clientApiKey === process.env.API_KEY) { // console.log('Api key correct'); next(); } else { return res.status(400).send({ status: false, message: "Invalid Api Key" }); } } catch (error) { res.status(401).json({ error: new Error('Invalid request!') }); } } used on routes like router.get('/users', auth.clientApiKeyValidation, userController.findUsers); In app.js I also set a few things, including a middleware function to allow this params as app.disable('x-powered-by'); app.use(express.json({ limit: '50mb' })); app.use(function (req, res, next) { // either res.setHeader or res.header works. // res.setHeader('Access-Control-Allow-Origin', 'http://localhost:5000'); res.setHeader( 'Access-Control-Allow-Origin', 'https://xxx.westeurope.cloudapp.azure.com' ); res.setHeader( 'Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE' ); // res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE'); res.setHeader( 'Access-Control-Allow-Headers', 'content-type, api_key, AuthToken, apikey' ); // res.setHeader('Access-Control-Allow-Credentials', true); // res.header('Access-Control-Allow-Origin', 'http://localhost:5000'); // res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE'); // res.header('Access-Control-Allow-Headers', 'Origin, Content-Type, api_key, Accept'); // console.log(' res headers added: ', res.getHeaders()); next(); }); app.use(express.urlencoded({ extended: true })); app.use(cors()); app.use(express.json()); app.use(express.json({ type: 'application/json' })); I'm sending requests to it from my Flutter app adding headers like await _firebaseAuth.currentUser?.getIdToken().then((idToken) { headers = { 'Content-Type': 'application/json', 'api_key': Environment.dbApiKey, 'AuthToken': idToken }; }); FixitUser? userDetails; // final Uri uri = Uri.http(Environment.dbUrl, '/api/users'); final Uri uri = Uri.https(Environment.dbUrl, '/server/api/users'); log('headers are $headers'); await get(uri, headers: headers).then((resp) { log('UserMongoDBRepository.downloadUserDetails resp.body is : ${resp.body}'); ... 
When running the server directly middleware logs show it all works as expected received api_key is hjkdiu-slia7h-si9udd989jw-ols8dh and expected API_KEY is hjkdiu-slia7h-si9udd989jw-ols8dh clientApiKey string expected string req.headers is: { 'user-agent': 'Dart/2.18 (dart:io)', 'accept-encoding': 'gzip', api_key: 'hjkdiu-slia7h-si9udd989jw-ols8dh', host: '192.168.1.48:3000', 'content-type': 'application/json', authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjc1Nzc1MTg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU4NDY4MDAsImV4cCI6MTY3NTg1MDQwMCwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.ZNJFrxWlycMVgg4VAdWt6Q0WR5yrWPar5_UJwhQ9-hVX25aKC69yDpoM2adx3OIQ-hlsGz1bNvVEUSfvWUWRWF-TaX2TVeLb5z0blAtl8A1cGcvnaryffr0jpCtN_nhDg3WTtHw4B2USDe432TxSXx0ICrk8bx_fum8jsfBvAh_xU8bnr6lLsc0pltcmU-zun0rhZcC6jpxua2d0jIwWyWSXurjNBkLkduzOpUbw6KCLur7wVcxz-HmuD67D0qx5dr37malLbOBt5mcfOtdJEJcEq55XqXqUanopT_OWTWxByMrD_CXyZnLu_q1DgSBMM4kaLrSA14ETD_EvIeqHOQ', } When instead is running on AKS cluster api_key is missing from received request headers as the logs show, so the middleware responds with {"status":false,"message":"Missing Api Key"} received api_key is undefined and expected API_KEY is some api key req.headers is: { host: 'xxx.westeurope.cloudapp.azure.com', 'x-request-id': '2ecc2ec74c808cf40f816921374f72d4', 'x-real-ip': '81.56.11.23', 'x-forwarded-for': '81.56.11.23', 'x-forwarded-host': 'xxx.westeurope.cloudapp.azure.com', 'x-forwarded-port': '443', 'x-forwarded-proto': 'https', 'x-forwarded-scheme': 'https', 'x-scheme': 'https', 'user-agent': 'Dart/2.18 (dart:io)', 'accept-encoding': 'gzip', 'content-type': 'application/json', authtoken: 
'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjcxODc0Nzg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU3NjkyNzksImV4cCI6MTY3NTc3Mjg3OSwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.d8emU2BKNBV5oe3YQVHT8M8otFos_RvEmeyutdmYBDhnzyPgMZTAn_l3JikRAbcTNdDOAVoutZgTb5s8d6di3plAoE240OwwZTuSwxVpSaS7fDPt_rjQf9k2RmVsRa-fq1SWIP2ejdEbma_QngLSpXO0-PSPx4wa7mThjv2enP00TpUB9RDsOPK2QKlwOX9i1gc1_7kOPGJwouG3S3W4_kOXIxSoVjAT0P9k2xtHa99W-_gwn-9YqM1UoHrLkEs-ONKpe5SWLIet9r_PvI2l1zqb-1fGBmoeBhyzSijw_cOLJSayEoImtkCOmAA0rhMNYc--Yxuzd8EMyyp1U9dThg' } The headers prints from Flutter shows the api_key being correctly set. headers are {Content-Type: application/json, api_key: some api key, AuthToken: eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjcxODc0Nzg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU3NjkyNzksImV4cCI6MTY3NTc3Mjg3OSwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.d8emU2BKNBV5oe3YQVHT8M8otFos_RvEmeyutdmYBDhnzyPgMZTAn_l3JikRAbcTNdDOAVoutZgTb5s8d6di3plAoE240OwwZTuSwxVpSaS7fDPt_rjQf9k2RmVsRa-fq1SWIP2ejdEbma_QngLSpXO0-PSPx4wa7mThjv2enP00TpUB9RDsOPK2QKlwOX9i1gc1_7kOPGJwouG3S3W4_kOXIxSoVjAT0P9k2xtHa99W-_gwn-9YqM1UoHrLkEs-ONKpe5SWLIet9r_PvI2l1zqb-1fGBmoeBhyzSijw_cOLJSayEoImtkCOmAA0rhMNYc--Yxuzd8EMyyp1U9dThg} So as a test instead of api_key I used apikey and it gets retrieved in the request, but then the === check fails so the middleware responds with {"status":false,"message":"Invalid Api Key"} even it their type and value are identical, as checked here https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Strict_equality received api_key is hjkdiu-slia7h-si9udd989jw-ols8dh and expected API_KEY is hjkdiu-slia7h-si9udd989jw-ols8dh clientApiKey string expected string req.headers is: { host: 'fixit1.westeurope.cloudapp.azure.com', 'x-request-id': '515ad2a00b1a3db24a69e09f6d181036', 'x-real-ip': '81.56.11.23', 'x-forwarded-for': '81.56.11.23', 'x-forwarded-host': 'fixit1.westeurope.cloudapp.azure.com', 'x-forwarded-port': '443', 'x-forwarded-proto': 'https', 'x-forwarded-scheme': 'https', 'x-scheme': 'https', 'user-agent': 'Dart/2.18 (dart:io)', 'accept-encoding': 
'gzip', 'content-type': 'application/json', authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjc1Nzc1MTg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU4NDMxNzgsImV4cCI6MTY3NTg0Njc3OCwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.lDMgmTJkrm6ZnoyEU1F7Sjoo7Y8or8ZKzIrwBJ9ssQiR8yN5KD2ZhyM6yyR_Arscmyg1ZV_6RsTnFgGsVsmjiMzyX6TOXmYkmRlvvMjjjFsV8rW_W_gIdVld6vSg-JMrOlLcCeBknFDJC50bbNGYBSwQ2_C_MZIKlbFWWrqME988MOiUBlyT86t5Oofc5uVMETrpBf0a-wsFRdyEX-3uj-T3MRHza62PTcpHURoptQdIzYsBSc6WxR6WCINVjx__DbWlWrGt612Mw4iLv1XReiGriQEjTDc9cXbG0ngbiRsn0ojvZ-Jb8Pb6kj7gWRYDRsKFg2nxxFMhVeSDuIeO-w', apikey: 'hjkdiu-slia7h-si9udd989jw-ols8dh' } As a second test I changed the === operator to == in the middleware but still it returns false and responds with {"status":false,"message":"Invalid Api Key"}. Then I tried to use const serverApiKey = JSON.stringify(process.env.API_KEY); and const clientApiKey = JSON.stringify(req.get('apikey')); for the comparison and they actually do yield different results clientApiKey json string is: "hjkdiu-slia7h-si9udd989jw-ols8dh" serverApiKey json string is: "hjkdiu-slia7h-si9udd989jw-ols8dh\n" env.API_KEY does get it's value from a k8s secret, base64 encoded with echo -n hjkdiu-slia7h-si9udd989jw-ols8dh |base64 command. To see if it was a matter of an empty space in the encoded value from the secret I tried encoding it with and without the -n flag but they result in the same json encoded string. I'm not sure it could have something to do with the docker image I create so here's the Dockrfile FROM node:18.3.0 WORKDIR /usr/app # where available (npm@5+) COPY ./ package.json ./ ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true # Istall the dependencies RUN npm install # If you are building your code for production # RUN npm ci --only=production COPY ./ ./ # Add the following lines for create-react-app bug, workaround ENV CI=true ENV WDS_SOCKET_PORT=0 CMD ["npm", "start"] Why api_key param is not present while apikey param is in the received request headers? Why the \n in the serverApiKey but not in the clientApiKey ? Why is this happening only on AKS and not on my local machine? Thank you very much as always.
This could be because the GKE Autopilot service account does not have the necessary permissions to access the Artifact Registry. You can grant the needed permissions by adding the roles/artifactregistry.reader role to the service account that the GKE Autopilot node pool is configured to use. Additionally, you may need to adjust the IAM permissions for the service account so that it has access to the private Artifact Registry. gcloud artifacts repositories add-iam-policy-binding <repository-name> \ --location=<location> \ --member=serviceAccount:<nnn>-compute@developer.gserviceaccount.com \ --role="roles/artifactregistry.reader" Alternatively, try creating a new service account, granting it the necessary permissions to pull the image, and pulling the image with it once. Simple troubleshooting steps: ensure that your GKE cluster is configured to allow access to the Artifact Registry, for example by checking in the GKE dashboard that the “Allow access to Artifact Registry” option is enabled. Also confirm that the container image you are trying to pull actually exists in the Artifact Registry; check the registry to make sure that the image was uploaded correctly and can be accessed. Finally, look into the error logs to get more information on what is causing the issue, and check the GKE documentation for more troubleshooting guidance.
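Two hedged checks before changing anything; the cluster and repository names are taken from the question, so adjust them if yours differ:

# Which service account do the Autopilot nodes use? (usually the Compute Engine default SA)
gcloud container clusters describe test --region asia-northeast2 \
  --format='value(nodeConfig.serviceAccount)'
# Does that account already hold the reader role on the repository?
gcloud artifacts repositories get-iam-policy test --location=asia-northeast2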
###Instruction: According to articles below, it seems we can pull container image to GKE from Artifact Registry without any additional authentication when these in same project. https://cloud.google.com/artifact-registry/docs/integrate-gke https://www.youtube.com/watch?v=BfS7mvPA-og Error: ImagePullBackOff and Error: ErrImagePull errors with GKE But when I try it, I faced ImagePullBackOff error. Is there any mistake? misunderstanding? Or should I need use another authentication? Reproduce It's convenient to use Google Cloud Shell in some project on https://console.cloud.google.com . Create Artifact Registry gcloud artifacts repositories create test \ --repository-format=docker \ --location=asia-northeast2 Push sample image gcloud auth configure-docker asia-northeast2-docker.pkg.dev docker pull nginx docker tag nginx asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image docker push asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Create GKE Autopilot cluster Create GKE Autopilot cluster by using GUI console. Almost all options is default but I changed these 2. Set cluster name as test. Set region same as registry's one. (In this case, asia-northeast2) Enabled Anthos Service Mesh. Deploy container image to GKE from Artifact Registry gcloud container clusters get-credentials test --zone asia-northeast2 kubectl run test --image asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Check Pod state kubectl describe po test Name: test Namespace: default Priority: 0 Service Account: default Node: xxxxxxxxxxxxxxxxxxx Start Time: Wed, 08 Feb 2023 12:38:08 +0000 Labels: run=test Annotations: autopilot.gke.io/resource-adjustment: {"input":{"containers":[{"name":"test"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"reque... 
seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Pending IP: 10.73.0.25 IPs: IP: 10.73.0.25 Containers: test: Container ID: Image: asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ErrImagePull Ready: False Restart Count: 0 Limits: cpu: 500m ephemeral-storage: 1Gi memory: 2Gi Requests: cpu: 500m ephemeral-storage: 1Gi memory: 2Gi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szq85 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-szq85: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Guaranteed Node-Selectors: <none> Tolerations: kubernetes.io/arch=amd64:NoSchedule node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 19s gke.io/optimize-utilization-scheduler Successfully assigned default/test to xxxxxxxxxxxxxxxxxxx Normal Pulling 16s kubelet Pulling image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image" Warning Failed 16s kubelet Failed to pull image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image": rpc error: code = Unknown desc = failed to pull and unpack image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest": failed to resolve reference "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 16s kubelet Error: ErrImagePull Normal BackOff 15s kubelet Back-off pulling image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image" Warning Failed 15s kubelet Error: ImagePullBackOff then, I got ImagePullBackOff. ### Response: This could be because the GKE Autopilot service account does not have the necessary permissions to access the Artifact Registry. You can grant the needed permissions by adding the roles/artifactregistry.reader role to the service account that the GKE Autopilot node pool is configured to use. Additionally, you may need to adjust the IAM permissions for the service account so that it has access to the private Artifact Registry. gcloud artifacts repositories add-iam-policy-binding <repository-name> \ --location=<location> \ --member=serviceAccount:<nnn>-compute@developer.gserviceaccount.com \ --role="roles/artifactregistry.reader" Can you try creating a new service account and granting it the necessary permissions to pull the image and try to pull the image once. Simple troubleshooting steps are: you should ensure that your GKE cluster is configured to allow access to the Artifact Registry. You can do this by going to the GKE dashboard and making sure that the “Allow access to Artifact Registry” option is enabled. The container image you are trying to pull does not exist in the Artifact Registry. You should check the registry to make sure that the container image is correctly uploaded and can be accessed. you can look into the error logs to get more information on what is causing this issue. Additionally, you can check the GKE documentation for more information on troubleshooting this issue.
According to articles below, it seems we can pull container image to GKE from Artifact Registry without any additional authentication when these in same project. https://cloud.google.com/artifact-registry/docs/integrate-gke https://www.youtube.com/watch?v=BfS7mvPA-og Error: ImagePullBackOff and Error: ErrImagePull errors with GKE But when I try it, I faced ImagePullBackOff error. Is there any mistake? misunderstanding? Or should I need use another authentication? Reproduce It's convenient to use Google Cloud Shell in some project on https://console.cloud.google.com . Create Artifact Registry gcloud artifacts repositories create test \ --repository-format=docker \ --location=asia-northeast2 Push sample image gcloud auth configure-docker asia-northeast2-docker.pkg.dev docker pull nginx docker tag nginx asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image docker push asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Create GKE Autopilot cluster Create GKE Autopilot cluster by using GUI console. Almost all options is default but I changed these 2. Set cluster name as test. Set region same as registry's one. (In this case, asia-northeast2) Enabled Anthos Service Mesh. Deploy container image to GKE from Artifact Registry gcloud container clusters get-credentials test --zone asia-northeast2 kubectl run test --image asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Check Pod state kubectl describe po test Name: test Namespace: default Priority: 0 Service Account: default Node: xxxxxxxxxxxxxxxxxxx Start Time: Wed, 08 Feb 2023 12:38:08 +0000 Labels: run=test Annotations: autopilot.gke.io/resource-adjustment: {"input":{"containers":[{"name":"test"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"reque... 
seccomp.security.alpha.kubernetes.io/pod: runtime/default Status: Pending IP: 10.73.0.25 IPs: IP: 10.73.0.25 Containers: test: Container ID: Image: asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ErrImagePull Ready: False Restart Count: 0 Limits: cpu: 500m ephemeral-storage: 1Gi memory: 2Gi Requests: cpu: 500m ephemeral-storage: 1Gi memory: 2Gi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szq85 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-szq85: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Guaranteed Node-Selectors: <none> Tolerations: kubernetes.io/arch=amd64:NoSchedule node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 19s gke.io/optimize-utilization-scheduler Successfully assigned default/test to xxxxxxxxxxxxxxxxxxx Normal Pulling 16s kubelet Pulling image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image" Warning Failed 16s kubelet Failed to pull image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image": rpc error: code = Unknown desc = failed to pull and unpack image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest": failed to resolve reference "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden Warning Failed 16s kubelet Error: ErrImagePull Normal BackOff 15s kubelet Back-off pulling image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image" Warning Failed 15s kubelet Error: ImagePullBackOff then, I got ImagePullBackOff.
You should install sqlalchemy-trino to make the Trino driver available. Add these lines to your values.yaml file: additionalRequirements: - sqlalchemy-trino bootstrapScript: | #!/bin/bash pip install sqlalchemy-trino &&\ if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi If you want more details about the problem, see this GitHub issue. I added two options that do the same thing because in some chart versions the additionalRequirements option doesn't work and you may need the bootstrapScript option to install the driver.
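After editing values.yaml, the release needs to be upgraded so the new requirements are installed; a sketch assuming the official chart is added as superset/superset and the release, namespace and deployment are all called superset (adjust to your setup):

helm upgrade --install superset superset/superset -n superset -f values.yaml
kubectl -n superset rollout restart deployment superset   # assumed deployment name; the restart re-runs the bootstrap script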
###Instruction: I have currently Trino deployed in my Kubernetes cluster using the official Trino(trinodb) Helm Chart. In the same way I deployed Apache superset. Using port forwarding of trino to 8080 and superset to 8088, I am able to access the UI for both from localhost but also I am able to use the trino command line API to query trino using: ./trino --server http:localhost:8080 I don't have any authentication set mysql is setup correctly as Trino catalog when I try to add Trino as dataset for Superset using either of the following sqlalchemy URLs: trino://trino@localhost:8080/mysql trino://localhost:8080/mysql When I test the connection from Superset UI, I get the following error: ERROR: Could not load database driver: TrinoEngineSpec Please advise how I could solve this issue. ### Response: You should install sqlalchemy-trino to make the trino driver available. Add these lines to your values.yaml file: additionalRequirements: - sqlalchemy-trino bootstrapScript: | #!/bin/bash pip install sqlalchemy-trino &&\ if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi If you want more details about the problem, see this Github issue. I added two options that do the same thing because in some versions the additionalRequirements doesn't work and you may need the bootstrapScript option to install the driver.
I have currently Trino deployed in my Kubernetes cluster using the official Trino(trinodb) Helm Chart. In the same way I deployed Apache superset. Using port forwarding of trino to 8080 and superset to 8088, I am able to access the UI for both from localhost but also I am able to use the trino command line API to query trino using: ./trino --server http:localhost:8080 I don't have any authentication set mysql is setup correctly as Trino catalog when I try to add Trino as dataset for Superset using either of the following sqlalchemy URLs: trino://trino@localhost:8080/mysql trino://localhost:8080/mysql When I test the connection from Superset UI, I get the following error: ERROR: Could not load database driver: TrinoEngineSpec Please advise how I could solve this issue.
GitOps is nothing but extending CI/CD principles beyond application code to infrastructure code. Simple. Think of Git as the single source of truth, combined with, for example, Terraform (provisioning), Ansible (config management) and Kubernetes (orchestration), all with the goal of keeping Git a 1:1 reflection of your infrastructure. It's nothing new, so don't be put off by the fancy term...
###Instruction: In normal kubernetes CI/CD process , following process occurs : clone code from git build and push docker image update kubernetes deployment with updated code According to gitops definition GitOps is a new approach to Continuous Deployment that leverages Git as a single source of truth for declarative infrastructure and applications, providing both revision and change control. With GitOps, a system is run by submitting pull requests (and subsequent merges) to achieve the desired state of the system represented in a Git repository what my understanding of gitops is , when you update anything in git (as this is the source of truth) , the desired state of kubernetes changes to the latest one and the latest code is deployed. End Result of traditional CI/CD without gitops: new code is deployed as kubernetes deployment End Result of gitops: new code is deployed as kubernetes deployment I am unable to figure the difference . sorry if it sounds weird to you. But I am kinda new and exploring gitops . Thanks in advance for your response ### Response: GitOps is nothing but extending CI/CD principles beyond application code: to infra code. Simple. You can think of using Git as source of truth that combines with Terraform (provisioning), Ansible (config mgmt) and Kubernetes (orchestration) as an example ...in order to achieve the goal of keeping Git as 1:1 reflection of your infra. It's nothing new and do not be worried about such fancy terms...
In a normal Kubernetes CI/CD process, the following happens: clone code from git, build and push a docker image, update the kubernetes deployment with the updated code. According to the gitops definition: GitOps is a new approach to Continuous Deployment that leverages Git as a single source of truth for declarative infrastructure and applications, providing both revision and change control. With GitOps, a system is run by submitting pull requests (and subsequent merges) to achieve the desired state of the system represented in a Git repository. My understanding of gitops is that when you update anything in git (as this is the source of truth), the desired state of kubernetes changes to the latest one and the latest code is deployed. End result of traditional CI/CD without gitops: new code is deployed as a kubernetes deployment. End result of gitops: new code is deployed as a kubernetes deployment. I am unable to figure out the difference. Sorry if it sounds weird to you, but I am kinda new and exploring gitops. Thanks in advance for your response
As of kubectl v1.24, it is possible to patch subresources with an additional flag e.g. --subresource=status. This flag is considered "Alpha" but does not require enabling the feature. As an example, with a yaml merge: kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}' The Sysdig "What's New?" for v1.24 includes some more words about this flag: Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources. You now can stop using complex curl commands to directly update subresources. The --subresource flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through KEP-2590: graduate kubectl subresource support to beta. The lifecycle of this feature can be tracked in #2590 Add subresource support to kubectl.
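Applied to the exact resource from the question, a JSON-patch variant (kubectl v1.24+; the resource, namespace and field names are the ones shown in the question):

kubectl -n acme patch myresource default --subresource=status --type=json \
  -p='[{"op": "replace", "path": "/status/state", "value": "newState"}]'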
###Instruction: I am trying to update status subresource for a Custom Resource and I see a discrepency with curl and kubectl patch commands. when I use curl call it works perfectly fine but when I use kubectl patch command it says patched but with no change. Here are the command that I used Using Curl: When I connect to kubectl proxy and run the below curl call, it's successful and updates status subresource on my CR. curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status' Kubectl patch command: Using kubectl patch says the CR is patch but with no change and the status sub-resource is updated. $ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]' myresource.acme.com/default patched (no change) However when I do the kubectl patch on the other sub-resources like spec it works fine. Am i missing something here? ### Response: As of kubectl v1.24, it is possible to patch subresources with an additional flag e.g. --subresource=status. This flag is considered "Alpha" but does not require enabling the feature. As an example, with a yaml merge: kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}' The Sysdig "What's New?" for v1.24 includes some more words about this flag: Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources. You now can stop using complex curl commands to directly update subresources. The --subresource flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through KEP-2590: graduate kubectl subresource support to beta. The lifecycle of this feature can be tracked in #2590 Add subresource support to kubectl.
I am trying to update status subresource for a Custom Resource and I see a discrepency with curl and kubectl patch commands. when I use curl call it works perfectly fine but when I use kubectl patch command it says patched but with no change. Here are the command that I used Using Curl: When I connect to kubectl proxy and run the below curl call, it's successful and updates status subresource on my CR. curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status' Kubectl patch command: Using kubectl patch says the CR is patch but with no change and the status sub-resource is updated. $ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]' myresource.acme.com/default patched (no change) However when I do the kubectl patch on the other sub-resources like spec it works fine. Am i missing something here?
Everything you're looking for is covered in this document. As GKE ingress is essentially a GCP Load Balancer, the path key uses a url-map to configure and route the traffic to what you've specified in the config. As you can see there, regexes are not allowed in path keys. One option if you're using Helm is to make use of the templates to generate this automatically from a variable. Given the following variable in your values.yaml file: paths: - name: /api - name: /admin - name: /auth Then in your ingress YAML definition you can do the following: spec: rules: - http: paths: {{ range $paths := .Values.paths }} - backend: service: name: my-api-service port: number: 80 path: {{ .name }} pathType: Prefix {{ end }}
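Before applying it, you can render the template locally to check the generated paths; a hedged example where the chart directory and template filename are assumptions:

helm template my-chart ./my-chart --show-only templates/ingress.yaml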
###Instruction: I want to prevent unsafe requested to reach my application running in GCP GKE with Google Ingress (not nginx) and trying to do this using path rules. I know nginx Ingress can configure paths using regex but I don know the best way to do with Google Ingress. Right now I am just duplicating the same rules change the path prefix like this: spec: rules: - http: paths: - backend: service: name: my-api-service port: number: 80 path: /api pathType: Prefix - backend: service: name: my-api-service port: number: 80 path: /auth pathType: Prefix - backend: service: name: my-api-service port: number: 80 path: /admin pathType: Prefix Is there a better way to do this? ### Response: Everything you're looking for is covered in this document. As GKE ingress is essentially a GCP Load Balancer, the path key is using a url-map to configure and route the traffic to what you've specified in the config. As you'd be able to see there, regexs are not allowed in Path keys. One option if you're using Helm is to make use of the templates to generate this automatically from a variable. Given the following variable in your values.yaml file: paths: - name: /api - name: /admin - name: /auth Then in your ingress YAML definition you can do the following: spec: rules: - http: paths: {{ range $paths := .Values.paths }} - backend: service: name: my-api-service port: number: 80 path: {{ .name }} pathType: Prefix {{ end }}
I want to prevent unsafe requested to reach my application running in GCP GKE with Google Ingress (not nginx) and trying to do this using path rules. I know nginx Ingress can configure paths using regex but I don know the best way to do with Google Ingress. Right now I am just duplicating the same rules change the path prefix like this: spec: rules: - http: paths: - backend: service: name: my-api-service port: number: 80 path: /api pathType: Prefix - backend: service: name: my-api-service port: number: 80 path: /auth pathType: Prefix - backend: service: name: my-api-service port: number: 80 path: /admin pathType: Prefix Is there a better way to do this?
There are no fields inserted. The processor only replaces values that already exist with a different serialization that has the same semantics. test3: in YAML without a value is parsed as having an empty scalar value. The YAML Core Schema defines the following for empty values: the regular expression null | Null | NULL | ~ resolves to the tag tag:yaml.org,2002:null, and an empty value (/* Empty */) also resolves to tag:yaml.org,2002:null. Since the empty value is resolved to have the tag !!null (which is a shorthand for the full form shown above), it is loaded as nil into Go. When toYaml receives your data, it doesn't know that the nil values originated from empty scalars. It needs to choose one of the possible serializations and chooses null. This adheres to the YAML spec and is therefore correct behavior. Any downstream processor that supports the Core Schema should process test3: null in the same way it processes test3: without value. Therefore there should be no problem. If you want test3: to specifically have the empty string as value instead of null, write test3: "" If you want it to contain an empty mapping, write test3: {}
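A small side-by-side illustration (hypothetical keys, annotated with how toYaml renders each form):

receivers:
  test3:       # empty scalar -> !!null  -> rendered as: test3: null
  test4: ""    # empty string            -> rendered as: test4: ""
  test5: {}    # empty mapping           -> rendered as: test5: {}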
###Instruction: I am trying to pass given part of values.yaml into helm template: receivers: test1: test2: test3: test4: using function: {{ .Values.receivers | toYaml | nindent 2}} Code is placed in correct format, however empty fields get filled with 'null': receivers: test1: test2: test3: null test4: null Is there any way to prevent this? I am expecting correct templating without insterted null fields. ### Response: There are no fields inserted. The processor only replaces values that already exist with a different serialization that has the same semantics. test3: in YAML without a value is parsed as having an empty scalar value. The YAML Core Schema defines the following for empty values: Regular expression Resolved to tag null | Null | NULL | ~ tag:yaml.org,2002:null /* Empty */ tag:yaml.org,2002:null Since the empty value is resolved to have the tag !!null (which is a shorthand for the full form shown above), it is loaded as nil into Go. When toYaml receives your data, it doesn't know that the nil values originated from empty scalars. It needs to choose one of the possible serializations and chooses null. This adheres to the YAML spec and is therefore correct behavior. Any downstream processor that supports the Core Schema should process test3: null in the same way it processes test3: without value. Therefore there should be no problem. If you want test3: to specifically have the empty string as value instead of null, write test3: "" If you want it to contain an empty mapping, write test3: {}
I am trying to pass the given part of values.yaml into a helm template: receivers: test1: test2: test3: test4: using the function: {{ .Values.receivers | toYaml | nindent 2}} The code is placed in the correct format, however empty fields get filled with 'null': receivers: test1: test2: test3: null test4: null Is there any way to prevent this? I am expecting correct templating without inserted null fields.
Make sure you have updated the ingress file with the proper IP of minikube. Also check with http instead https & KEYCLOAK_HOSTNAME value Try below YAML : apiVersion: v1 kind: Service metadata: name: keycloak labels: app: keycloak spec: ports: - name: http port: 8080 targetPort: 8080 selector: app: keycloak type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:20.0.3 args: ["start-dev"] env: - name: KEYCLOAK_ADMIN value: "admin" - name: KEYCLOAK_ADMIN_PASSWORD value: "admin" - name: KC_PROXY value: "edge" ports: - name: http containerPort: 8080 readinessProbe: httpGet: path: /realms/master port: 8080 it will creat the LB service for you so you will be able to access it without ingress config. Run kubectl get svc -n <namespace-name> and check External IP and try opening that in browser. Extra : You can refer to this YAML if the default one is not working. i am using Postgres & Dpeloying the Keycloak with that. GitHub repo path : https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment Ref : https://faun.pub/keycloak-kubernetes-deployment-409d6ccd8a39
###Instruction: I just follow the Keycloak Documentation for Kubernetes. https://www.keycloak.org/getting-started/getting-started-kube But After deployed it like exactly how they are saying in the documentation. When I try to load the keyclaok page, I'm getting this, if you can give me a solution or explain why this is happening, Really appreciate it! My ingress config (keycloak-ingress.yaml) is, apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: keycloak spec: tls: - hosts: - keycloak.192.168.49.2.nip.io rules: - host: keycloak.192.168.49.2.nip.io http: paths: - path: / pathType: Prefix backend: service: name: keycloak port: number: 8080 ### Response: Make sure you have updated the ingress file with the proper IP of minikube. Also check with http instead https & KEYCLOAK_HOSTNAME value Try below YAML : apiVersion: v1 kind: Service metadata: name: keycloak labels: app: keycloak spec: ports: - name: http port: 8080 targetPort: 8080 selector: app: keycloak type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:20.0.3 args: ["start-dev"] env: - name: KEYCLOAK_ADMIN value: "admin" - name: KEYCLOAK_ADMIN_PASSWORD value: "admin" - name: KC_PROXY value: "edge" ports: - name: http containerPort: 8080 readinessProbe: httpGet: path: /realms/master port: 8080 it will creat the LB service for you so you will be able to access it without ingress config. Run kubectl get svc -n <namespace-name> and check External IP and try opening that in browser. Extra : You can refer to this YAML if the default one is not working. i am using Postgres & Dpeloying the Keycloak with that. GitHub repo path : https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment Ref : https://faun.pub/keycloak-kubernetes-deployment-409d6ccd8a39
I just followed the Keycloak Documentation for Kubernetes. https://www.keycloak.org/getting-started/getting-started-kube But after deploying it exactly how they describe in the documentation, when I try to load the Keycloak page, I'm getting this. If you can give me a solution or explain why this is happening, I'd really appreciate it! My ingress config (keycloak-ingress.yaml) is: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: keycloak spec: tls: - hosts: - keycloak.192.168.49.2.nip.io rules: - host: keycloak.192.168.49.2.nip.io http: paths: - path: / pathType: Prefix backend: service: name: keycloak port: number: 8080
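A side note on the LoadBalancer suggestion above: the nip.io address in the question implies minikube, and on minikube a Service of type LoadBalancer only receives an external IP while minikube tunnel is running. A rough sketch of checking it, reusing the service and ingress names from the answer and question (assuming the default namespace):
minikube tunnel
kubectl get svc keycloak        # External IP should no longer be <pending>
kubectl get ingress keycloak    # host should resolve via nip.io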
The issue you're describing is answered exactly by the error message. Your second Kubernetes Service manifest, named promtail, does not have any specification. For services, at least spec.ports is required. You should add a label selector as well, so the Service can pick up the Deployment's pods properly. apiVersion: v1 kind: Service metadata: name: promtail namespace: monitoring spec: selector: app: promtail ports: - port: <ServicePort> targetPort: <PodPort> However, if you do not need to communicate with the Promtail pods from external services, then simply skip creating the Service itself. May I add, if you need to expose these logs to a service running outside of your cluster, such as Grafana Cloud, you should create a Service of LoadBalancer type for Loki instead. This will request a public IP for it, making it accessible worldwide - assuming your Kubernetes cluster is managed by some cloud provider. Making Loki public is insecure, but a good first step towards consuming these logs externally.
###Instruction: I am still new to K8S infrastructure but I am trying to convert VM infrastructure to K8S on GCP/GKE and I am stuck at forwarding the logs properly after getting Prometheus metrics forwarded correctly. I am also trying to do this without helm, to better understand K8S. The logs of the loki pod, look as expected when comparing to a docker format in a VM setup. But I do not know how to start the promtail service without a port, since in a docker format promtail does not have to expose a port. I get the following error: The Service "promtail" is invalid: spec.ports: Required value My configuration files look like: loki-config.yml auth_enabled: false server: http_listen_port: 3100 grpc_listen_port: 9096 ingester: wal: enabled: true dir: /tmp/wal lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h chunk_target_size: 1048576 # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) max_transfer_retries: 0 # Chunk transfers disabled schema_config: configs: - from: 2020-10-24 store: boltdb-shipper object_store: filesystem schema: v11 index: prefix: index_ period: 24h storage_config: boltdb_shipper: active_index_directory: /tmp/loki/boltdb-shipper-active cache_location: /tmp/loki/boltdb-shipper-cache cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space shared_store: filesystem filesystem: directory: /tmp/loki/chunks compactor: working_directory: /tmp/loki/boltdb-shipper-compactor shared_store: filesystem limits_config: reject_old_samples: true reject_old_samples_max_age: 168h ingestion_burst_size_mb: 16 ingestion_rate_mb: 16 chunk_store_config: max_look_back_period: 0s table_manager: retention_deletes_enabled: false retention_period: 0s ruler: storage: type: local local: directory: /tmp/loki/rules rule_path: /tmp/loki/rules-temp alertmanager_url: http://localhost:9093 ring: kvstore: store: inmemory enable_api: true promtail-config.yml server: http_listen_port: 9080 grpc_listen_port: 0 # this is the place where promtail will store the progress about how far it has read the logs positions: filename: /tmp/positions.yaml # address of loki server to which promtail should push the logs clients: - url: https://999999:...=@logs-prod3.grafana.net/api/prom/push # which logs to read/scrape scrape_configs: - job_name: system static_configs: - targets: - localhost labels: job: varlogs __path__: /var/log/*log - job_name: node static_configs: - targets: - localhost labels: job: node # label-1 host: localhost # label-2 __path__: /var/lib/docker/containers/*/*log Then the deployment files: loki-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: loki spec: selector: matchLabels: app: loki network: cluster-1 replicas: 1 template: metadata: labels: app: loki network: cluster-1 spec: containers: - name: loki image: grafana/loki ports: - containerPort: 3100 volumeMounts: - name: loki-config-volume mountPath: /etc/loki/loki.yml subPath: loki.yml volumes: - name: loki-config-volume configMap: name: "loki-config" --- apiVersion: v1 kind: Service metadata: name: loki namespace: monitoring spec: selector: app: loki type: NodePort ports: - 
name: loki protocol: TCP port: 3100 And finally promtail-deploy.yaml: apiVersion: apps/v1 kind: Deployment metadata: name: promtail spec: selector: matchLabels: app: promtail network: cluster-1 replicas: 1 template: metadata: labels: app: promtail network: cluster-1 spec: containers: - name: promtail image: grafana/promtail volumeMounts: - name: promtail-config-volume mountPath: /mnt/config/promtail-config.yml subPath: promtail.yml volumes: - name: promtail-config-volume configMap: name: "promtail-config" --- apiVersion: v1 kind: Service metadata: name: promtail namespace: monitoring ### Response: The issue you're describing is answered exactly by the error message. Your second Kubernetes Service manifest, named promtail, does not have any specification. For services, at least spec.ports is required. You should add a label selector as well, so the Service can pick up the Deployment's pods properly. apiVersion: v1 kind: Service metadata: name: promtail namespace: monitoring spec: selector: app: promtail ports: - port: <ServicePort> targetPort: <PodPort> However, if you do not need to communicate with the Promtail pods from external services, then simply skip creating the Service itself. May I add, if you need to expose these logs to a service running outside of your cluster, such as Grafana Cloud, you should create a Service of LoadBalancer type for Loki instead. This will request a public IP for it, making it accessible worldwide - assuming your Kubernetes cluster is managed by some cloud provider. Making Loki public is insecure, but a good first step towards consuming these logs externally.
I am still new to K8S infrastructure but I am trying to convert VM infrastructure to K8S on GCP/GKE and I am stuck at forwarding the logs properly after getting Prometheus metrics forwarded correctly. I am also trying to do this without helm, to better understand K8S. The logs of the loki pod, look as expected when comparing to a docker format in a VM setup. But I do not know how to start the promtail service without a port, since in a docker format promtail does not have to expose a port. I get the following error: The Service "promtail" is invalid: spec.ports: Required value My configuration files look like: loki-config.yml auth_enabled: false server: http_listen_port: 3100 grpc_listen_port: 9096 ingester: wal: enabled: true dir: /tmp/wal lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h chunk_target_size: 1048576 # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) max_transfer_retries: 0 # Chunk transfers disabled schema_config: configs: - from: 2020-10-24 store: boltdb-shipper object_store: filesystem schema: v11 index: prefix: index_ period: 24h storage_config: boltdb_shipper: active_index_directory: /tmp/loki/boltdb-shipper-active cache_location: /tmp/loki/boltdb-shipper-cache cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space shared_store: filesystem filesystem: directory: /tmp/loki/chunks compactor: working_directory: /tmp/loki/boltdb-shipper-compactor shared_store: filesystem limits_config: reject_old_samples: true reject_old_samples_max_age: 168h ingestion_burst_size_mb: 16 ingestion_rate_mb: 16 chunk_store_config: max_look_back_period: 0s table_manager: retention_deletes_enabled: false retention_period: 0s ruler: storage: type: local local: directory: /tmp/loki/rules rule_path: /tmp/loki/rules-temp alertmanager_url: http://localhost:9093 ring: kvstore: store: inmemory enable_api: true promtail-config.yml server: http_listen_port: 9080 grpc_listen_port: 0 # this is the place where promtail will store the progress about how far it has read the logs positions: filename: /tmp/positions.yaml # address of loki server to which promtail should push the logs clients: - url: https://999999:...=@logs-prod3.grafana.net/api/prom/push # which logs to read/scrape scrape_configs: - job_name: system static_configs: - targets: - localhost labels: job: varlogs __path__: /var/log/*log - job_name: node static_configs: - targets: - localhost labels: job: node # label-1 host: localhost # label-2 __path__: /var/lib/docker/containers/*/*log Then the deployment files: loki-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: loki spec: selector: matchLabels: app: loki network: cluster-1 replicas: 1 template: metadata: labels: app: loki network: cluster-1 spec: containers: - name: loki image: grafana/loki ports: - containerPort: 3100 volumeMounts: - name: loki-config-volume mountPath: /etc/loki/loki.yml subPath: loki.yml volumes: - name: loki-config-volume configMap: name: "loki-config" --- apiVersion: v1 kind: Service metadata: name: loki namespace: monitoring spec: selector: app: loki type: NodePort ports: - name: loki protocol: 
TCP port: 3100 And finally promtail-deploy.yaml: apiVersion: apps/v1 kind: Deployment metadata: name: promtail spec: selector: matchLabels: app: promtail network: cluster-1 replicas: 1 template: metadata: labels: app: promtail network: cluster-1 spec: containers: - name: promtail image: grafana/promtail volumeMounts: - name: promtail-config-volume mountPath: /mnt/config/promtail-config.yml subPath: promtail.yml volumes: - name: promtail-config-volume configMap: name: "promtail-config" --- apiVersion: v1 kind: Service metadata: name: promtail namespace: monitoring
Your code depends on libstdc++.so.6, but it would seem that version isn't installed. In your Dockerfile, add the yum install command; that should do it. It would depend on what operating system you're using, but for RHEL 7, for example, you could do: RUN yum install -y libstdc++ With more details of the operating system I can give a more specific command. In this specific example the Dockerfile could look something like this: FROM centos:7 RUN yum install -y libstdc++ CMD ["/bin/bash"]
###Instruction: I am creating a custom Builder Image using S2i dotnet core. This will run in OpenShift linux container I have modified the custom builder image and included few lines to copy few dlls and ".so" files When running the container in OpenShift I am facing the below error error says "unable to load shared library 'CustomCppWrapper' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libWrapperName: cannot open shared object file: No such file or directory" I have set the LD_DEBUG environment variable and found below few errors /lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by /opt/app-root/app/libCWrappeNamer.so) (fatal) /lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so) (fatal) I did below command and found below ldd libCWrappeNamer.so ./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so) ./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libabc.so) ./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libxmlc.so) Below is my Custom Docker file builder image FROM dotnet/dotnet-31-runtime-rhel7 # This image provides a .NET Core 3.1 environment you can use to run your .NET # applications. ENV PATH=/opt/app-root/src/.local/bin:/opt/app-root/src/bin:/opt/app-root/node_modules/.bin:${PATH} \ STI_SCRIPTS_PATH=/usr/libexec/s2i LABEL io.k8s.description="Platform for building and running .NET Core 3.1 applications" \ io.openshift.tags="builder,.net,dotnet,dotnetcore,rh-dotnet31" # Labels consumed by Red Hat build service LABEL name="dotnet/dotnet-31-rhel7" \ com.redhat.component="rh-dotnet31-container" \ version="3.1" \ release="1" \ architecture="x86_64" #-------------------------- COPY CPP LIBS COPY CustomCppWrapper.lib /opt/app-root/app COPY libCWrappeNamer.so /opt/app-root/app #---------------------------------- # Labels consumed by Eclipse JBoss OpenShift plugin LABEL com.redhat.dev-mode="DEV_MODE:false" \ com.redhat.deployments-dir="/opt/app-root/src" # Switch to root for package installs USER 0 # Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH. COPY ./s2i/bin/ /usr/libexec/s2i RUN INSTALL_PKGS="rh-nodejs10-npm rh-nodejs10-nodejs-nodemon rh-dotnet31-dotnet-sdk-3.1 rsync" && \ yum install -y --setopt=tsflags=nodocs --disablerepo=\* \ --enablerepo=rhel-7-server-rpms,rhel-server-rhscl-7-rpms,rhel-7-server-dotnet-rpms \ $INSTALL_PKGS && \ rpm -V $INSTALL_PKGS && \ yum clean all -y && \ # yum cache files may still exist (and quite large in size) rm -rf /var/cache/yum/* # Directory with the sources is set as the working directory. RUN mkdir /opt/app-root/src WORKDIR /opt/app-root/src # Trigger first time actions. RUN scl enable rh-dotnet31 'dotnet help' # Build the container tool. RUN /usr/libexec/s2i/container-tool build-tool # Since $HOME is set to /opt/app-root, the yum install may have created config # directories (such as ~/.pki/nssdb) there. These will be owned by root and can # cause actions that work on all of /opt/app-root to fail. So we need to fix # the permissions on those too. RUN chown -R 1001:0 /opt/app-root && fix-permissions /opt/app-root ENV ENABLED_COLLECTIONS="$ENABLED_COLLECTIONS rh-nodejs10" \ # Needed for the `dotnet watch` to detect changes in a container. 
DOTNET_USE_POLLING_FILE_WATCHER=true # Run container by default as user with id 1001 (default) USER 1001 # Set the default CMD to print the usage of the language image. CMD /usr/libexec/s2i/usage ### Response: Your code depends on libstdc++.so.6 but it would seem that version isn't installed In your Dockerfile, add the yum install command that should do it. It would depend on what operating system you're using, but for RHEL 7, for example, you could do: RUN yum install -y libstdc++ With more details of the operating system I can give a more specific command In this specific examples the Dockerfile could look something like this: FROM centos:7 RUN yum install -y libstdc++ CMD ["/bin/bash"]
I am creating a custom Builder Image using S2i dotnet core. This will run in OpenShift linux container I have modified the custom builder image and included few lines to copy few dlls and ".so" files When running the container in OpenShift I am facing the below error error says "unable to load shared library 'CustomCppWrapper' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libWrapperName: cannot open shared object file: No such file or directory" I have set the LD_DEBUG environment variable and found below few errors /lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by /opt/app-root/app/libCWrappeNamer.so) (fatal) /lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so) (fatal) I did below command and found below ldd libCWrappeNamer.so ./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so) ./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libabc.so) ./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libxmlc.so) Below is my Custom Docker file builder image FROM dotnet/dotnet-31-runtime-rhel7 # This image provides a .NET Core 3.1 environment you can use to run your .NET # applications. ENV PATH=/opt/app-root/src/.local/bin:/opt/app-root/src/bin:/opt/app-root/node_modules/.bin:${PATH} \ STI_SCRIPTS_PATH=/usr/libexec/s2i LABEL io.k8s.description="Platform for building and running .NET Core 3.1 applications" \ io.openshift.tags="builder,.net,dotnet,dotnetcore,rh-dotnet31" # Labels consumed by Red Hat build service LABEL name="dotnet/dotnet-31-rhel7" \ com.redhat.component="rh-dotnet31-container" \ version="3.1" \ release="1" \ architecture="x86_64" #-------------------------- COPY CPP LIBS COPY CustomCppWrapper.lib /opt/app-root/app COPY libCWrappeNamer.so /opt/app-root/app #---------------------------------- # Labels consumed by Eclipse JBoss OpenShift plugin LABEL com.redhat.dev-mode="DEV_MODE:false" \ com.redhat.deployments-dir="/opt/app-root/src" # Switch to root for package installs USER 0 # Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH. COPY ./s2i/bin/ /usr/libexec/s2i RUN INSTALL_PKGS="rh-nodejs10-npm rh-nodejs10-nodejs-nodemon rh-dotnet31-dotnet-sdk-3.1 rsync" && \ yum install -y --setopt=tsflags=nodocs --disablerepo=\* \ --enablerepo=rhel-7-server-rpms,rhel-server-rhscl-7-rpms,rhel-7-server-dotnet-rpms \ $INSTALL_PKGS && \ rpm -V $INSTALL_PKGS && \ yum clean all -y && \ # yum cache files may still exist (and quite large in size) rm -rf /var/cache/yum/* # Directory with the sources is set as the working directory. RUN mkdir /opt/app-root/src WORKDIR /opt/app-root/src # Trigger first time actions. RUN scl enable rh-dotnet31 'dotnet help' # Build the container tool. RUN /usr/libexec/s2i/container-tool build-tool # Since $HOME is set to /opt/app-root, the yum install may have created config # directories (such as ~/.pki/nssdb) there. These will be owned by root and can # cause actions that work on all of /opt/app-root to fail. So we need to fix # the permissions on those too. RUN chown -R 1001:0 /opt/app-root && fix-permissions /opt/app-root ENV ENABLED_COLLECTIONS="$ENABLED_COLLECTIONS rh-nodejs10" \ # Needed for the `dotnet watch` to detect changes in a container. 
DOTNET_USE_POLLING_FILE_WATCHER=true # Run container by default as user with id 1001 (default) USER 1001 # Set the default CMD to print the usage of the language image. CMD /usr/libexec/s2i/usage
You need to configure nginx to forward the X-Forwarded-For and X-Forwarded-Proto headers. Example: server { listen 80; server_name example.com *.example.com; location / { proxy_pass http://127.0.0.1:5000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection keep-alive; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Then setup the UseForwardedHeaders middleware. This middleware will update HttpContext.Connection.RemoteIpAddress using the X-Forwarded-For header value. var builder = WebApplication.CreateBuilder(args); // ... builder.Services.Configure<ForwardedHeadersOptions>(options => { options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto; }); var app = builder.Build(); app.UseForwardedHeaders(); // ... Check: Configure ASP.NET Core to work with proxy servers and load balancers
###Instruction: I have an asp.net core 7.0 api application in a docker container hosted in Kubernetes behind an Nginx ingress controller. To get the client ip address i'm using context.HttpContext.Connection.RemoteIpAddress for all user requests I get a Private Ip address like '10.244.0.1' In such instances i'm expecting Public IP address ### Response: You need to configure nginx to forward the X-Forwarded-For and X-Forwarded-Proto headers. Example: server { listen 80; server_name example.com *.example.com; location / { proxy_pass http://127.0.0.1:5000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection keep-alive; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } Then setup the UseForwardedHeaders middleware. This middleware will update HttpContext.Connection.RemoteIpAddress using the X-Forwarded-For header value. var builder = WebApplication.CreateBuilder(args); // ... builder.Services.Configure<ForwardedHeadersOptions>(options => { options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto; }); var app = builder.Build(); app.UseForwardedHeaders(); // ... Check: Configure ASP.NET Core to work with proxy servers and load balancers
I have an asp.net core 7.0 api application in a docker container hosted in Kubernetes behind an Nginx ingress controller. To get the client IP address I'm using context.HttpContext.Connection.RemoteIpAddress. For all user requests I get a private IP address like '10.244.0.1'. In such instances I'm expecting the public IP address.
You can use kubectl cp --retries=10 if the connection is timing out while copying the file. If you are in the Redis pod and just want to get the data out, you can install the AWS or GCP CLI and upload the whole file from the pod to a bucket. Touching Redis data files is fine, I have played with it a lot too, but I would recommend checking out Riot-Redis. I used it a lot when migrating large production Redis data from one cluster to another. Read more in the Riot-Redis doc: https://developer.redis.com/riot/riot-redis/ RIOT-Redis is a data migration tool for Redis. Most Redis migration tools available today are offline in nature. Migrating data from AWS ElastiCache to Redis Enterprise Cloud for example means backing up your Elasticache data to an AWS S3 bucket and importing it into Redis Enterprise Cloud using its UI. RIOT-Redis allows for live data migration between any Redis databases.
###Instruction: I have this redis instance in a pod running on 1.21 cluster. Running kubectl cp -c redis --request-timeout=1m redis-pod:data/dump.rdb /tmp/data/dump.rdb or kubectl exec --request-timeout=1m -i redis-pod -- cat /data/dump.rdb > /tmp/data/dump.rdb I get the following two errors: The kubectl cp approach generates: error: unexpected EOF The second approach creates the file but incomplete, with a smaller size and when loading it with another redis instance I got a server message like the DB is corrupted and wont start. I've ran redis SAVE command before. Also copied the dump.rdb to another directory a tried to copy that one. Any help would be appreciated. ### Response: You can use the kubectl cp --retries=10 if the connection is getting timeout while copying the file. If you are in Redis POD and just want to take out the data you can install the aws, gcp CLI and upload the whole file from POD to the bucket. Touching Redis data files is fine, i to have played lot but would recommend checking out the Riot-Redis Used it a lot in migrating large production system Redis data from one to another cluster. Read more about Riot-Redis doc : https://developer.redis.com/riot/riot-redis/ RIOT-Redis is a data migration tool for Redis. Most Redis migration tools available today are offline in nature. Migrating data from AWS ElastiCache to Redis Enterprise Cloud for example means backing up your Elasticache data to an AWS S3 bucket and importing it into Redis Enterprise Cloud using its UI. RIOT-Redis allows for live data migration between any Redis databases.
I have this redis instance in a pod running on a 1.21 cluster. Running kubectl cp -c redis --request-timeout=1m redis-pod:data/dump.rdb /tmp/data/dump.rdb or kubectl exec --request-timeout=1m -i redis-pod -- cat /data/dump.rdb > /tmp/data/dump.rdb I get the following two errors: The kubectl cp approach generates: error: unexpected EOF The second approach creates the file but it is incomplete, with a smaller size, and when loading it with another redis instance I get a server message saying the DB is corrupted and it won't start. I've run the redis SAVE command before. I also copied the dump.rdb to another directory and tried to copy that one. Any help would be appreciated.
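As a concrete sketch of the retry suggestion above, reusing the pod name and paths from the question (the --retries flag needs a reasonably recent kubectl client):
kubectl exec redis-pod -c redis -- redis-cli SAVE    # make sure the dump on disk is fresh
kubectl cp -c redis --retries=10 redis-pod:/data/dump.rdb /tmp/data/dump.rdb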
Is it able to give extra memory but only for the first X minutes for each pod start? You do get this behavior when you set the limit to a higher value than the request. This allows pods to burst, unless they all need the memory at the same time. If not, than what is the best practice to handle memory shortage, if I have limited memory (16GB) and wants to run 35-40 pod? It is common to use some form of cluster autoscaler to add more nodes to your cluster if it needs more capacity. This is easy if you run in the cloud. In general, Java and JVM is memory hungry, consider some other technology if you want to use less memory. How much memory an application needs/uses totally depends on your application, e.g what data structures are used.
###Instruction: What is the correct way of memory handling in OpenShift/Kubernetes? If I create a project in OKD, how can I determine optimal memory usage of pods? For example, if I use 1 deployment for 1-2 pods and each pod uses 300-500 Mb of RAM - Spring Boot apps. So technically, 20 pods uses around 6-10GB RAM, but as I see, sometimes each project could have around 100-150 containers which needs at least 30-50Gb of RAM. I also tried with horizontal scale, and/or request/limits but still lot of memory used by each micro-service. However, to start a pod, it requires around 500-700MB RAM, after spring container has been started they can live with around 300MB as mentioned. So, I have 2 questions: Is it able to give extra memory but only for the first X minutes for each pod start? If not, than what is the best practice to handle memory shortage, if I have limited memory (16GB) and wants to run 35-40 pod? Thanks for the answer in advance! ### Response: Is it able to give extra memory but only for the first X minutes for each pod start? You do get this behavior when you set the limit to a higher value than the request. This allows pods to burst, unless they all need the memory at the same time. If not, than what is the best practice to handle memory shortage, if I have limited memory (16GB) and wants to run 35-40 pod? It is common to use some form of cluster autoscaler to add more nodes to your cluster if it needs more capacity. This is easy if you run in the cloud. In general, Java and JVM is memory hungry, consider some other technology if you want to use less memory. How much memory an application needs/uses totally depends on your application, e.g what data structures are used.
What is the correct way of memory handling in OpenShift/Kubernetes? If I create a project in OKD, how can I determine the optimal memory usage of pods? For example, if I use 1 deployment for 1-2 pods and each pod uses 300-500 MB of RAM - Spring Boot apps. So technically, 20 pods use around 6-10GB RAM, but as I see, sometimes each project could have around 100-150 containers which need at least 30-50GB of RAM. I also tried horizontal scaling and/or requests/limits, but a lot of memory is still used by each micro-service. However, to start a pod it requires around 500-700MB RAM; after the spring container has started they can live with around 300MB as mentioned. So, I have 2 questions: Is it possible to give extra memory, but only for the first X minutes after each pod starts? If not, then what is the best practice to handle memory shortage if I have limited memory (16GB) and want to run 35-40 pods? Thanks for the answer in advance!
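A minimal sketch of the request/limit shape described in the answer above, where the limit is set higher than the request so a Spring Boot pod can burst during startup; the numbers are illustrative only, not a recommendation:
resources:
  requests:
    memory: "350Mi"   # what the pod needs once warmed up
  limits:
    memory: "700Mi"   # headroom for the JVM during startup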
Solution if you use prometheus-community/prometheus which includes this alertmanager chart as a dependency, then you can do the following: create secret in the same namespace where your alertmanager pod is running: k create secret generic alertmanager-secrets \ --from-literal="opsgenie-api-key=YOUR-OPSGENIE-API-KEY" \ --from-literal="slack-api-url=https://hooks.slack.com/services/X03R2856W/A14T19TKEGM/...." mount that secret via use of extraSecretMounts alertmanager: enabled: true service: annotations: prometheus.io/scrape: "true" # contains secret values for opsgenie and slack receivers extraSecretMounts: - name: secret-files mountPath: /etc/secrets subPath: "" secretName: alertmanager-secrets readOnly: true use them in your receivers: receivers: - name: slack-channel slack_configs: - channel: '#client-ccf-ccl-alarms' api_url_file: /etc/secrets/slack-api-url <-------------------THIS title: '{{ template "default.title" . }}' text: '{{ template "default.description" . }}' pretext: '{{ template "slack.pretext" . }}' color: '{{ template "slack.color" . }}' footer: '{{ template "slack.footer" . }}' send_resolved: true actions: - type: button text: "Query :mag:" url: '{{ template "alert_query_url" . }}' - type: button text: "Silence :no_bell:" url: '{{ template "alert_silencer_url" . }}' - type: button text: "Karma UI :mag:" url: '{{ template "alert_karma_url" . }}' - type: button text: "Runbook :green_book:" url: '{{ template "alert_runbook_url" . }}' - type: button text: "Grafana :chart_with_upwards_trend:" url: '{{ template "alert_grafana_url" . }}' - type: button text: "KB :mag:" url: '{{ template "alert_kb_url" . }}' - name: opsgenie opsgenie_configs: - send_resolved: true api_key_file: /etc/secrets/opsgenie-api-key <-------------------THIS message: '{{ template "default.title" . }}' description: '{{ template "default.description" . }}' source: '{{ template "opsgenie.default.source" . }}' priority: '{{ template "opsgenie.default.priority" . }}' tags: '{{ template "opsgenie.default.tags" . }}' If you want to use email functionality of email_config then simply use the same approach with: [ auth_password_file: <string> | default = global.smtp_auth_password_file ]
###Instruction: I provisioned alertmanager using Helm (and ArgoCD). I need to insert smtp_auth_password value but not as a plain text. smtp_auth_username: 'apikey' smtp_auth_password: $API_KEY How can I achieve it? I heard about "external secret" but this should be the easiest way? ### Response: Solution if you use prometheus-community/prometheus which includes this alertmanager chart as a dependency, then you can do the following: create secret in the same namespace where your alertmanager pod is running: k create secret generic alertmanager-secrets \ --from-literal="opsgenie-api-key=YOUR-OPSGENIE-API-KEY" \ --from-literal="slack-api-url=https://hooks.slack.com/services/X03R2856W/A14T19TKEGM/...." mount that secret via use of extraSecretMounts alertmanager: enabled: true service: annotations: prometheus.io/scrape: "true" # contains secret values for opsgenie and slack receivers extraSecretMounts: - name: secret-files mountPath: /etc/secrets subPath: "" secretName: alertmanager-secrets readOnly: true use them in your receivers: receivers: - name: slack-channel slack_configs: - channel: '#client-ccf-ccl-alarms' api_url_file: /etc/secrets/slack-api-url <-------------------THIS title: '{{ template "default.title" . }}' text: '{{ template "default.description" . }}' pretext: '{{ template "slack.pretext" . }}' color: '{{ template "slack.color" . }}' footer: '{{ template "slack.footer" . }}' send_resolved: true actions: - type: button text: "Query :mag:" url: '{{ template "alert_query_url" . }}' - type: button text: "Silence :no_bell:" url: '{{ template "alert_silencer_url" . }}' - type: button text: "Karma UI :mag:" url: '{{ template "alert_karma_url" . }}' - type: button text: "Runbook :green_book:" url: '{{ template "alert_runbook_url" . }}' - type: button text: "Grafana :chart_with_upwards_trend:" url: '{{ template "alert_grafana_url" . }}' - type: button text: "KB :mag:" url: '{{ template "alert_kb_url" . }}' - name: opsgenie opsgenie_configs: - send_resolved: true api_key_file: /etc/secrets/opsgenie-api-key <-------------------THIS message: '{{ template "default.title" . }}' description: '{{ template "default.description" . }}' source: '{{ template "opsgenie.default.source" . }}' priority: '{{ template "opsgenie.default.priority" . }}' tags: '{{ template "opsgenie.default.tags" . }}' If you want to use email functionality of email_config then simply use the same approach with: [ auth_password_file: <string> | default = global.smtp_auth_password_file ]
I provisioned alertmanager using Helm (and ArgoCD). I need to insert the smtp_auth_password value, but not as plain text. smtp_auth_username: 'apikey' smtp_auth_password: $API_KEY How can I achieve this? I heard about "external secret", but is this the easiest way?
We just solved this one after a lot of off-and-on hunting. Turns out KEDA has an option called metricType that you can specify under triggers. TLDR you need to set that to "Value". To understand why you need to dive into how HPA works in Kubernetes. When you define a kind: HorizontalPodAutoscaler you specify the metrics that are used for scaling. KEDA does this for you and creates an external metric like this: metrics: - external: metric: name: ... selector: matchLabels: scaledobject.keda.sh/name: ... target: type: AverageValue averageValue: ... type: External There are Value and AverageValue metric types. AverageValue is the default, meant for metrics like http-requests-per-second, which would need to be divided by the number of replicas before compared to the target. Value, on the other hand, takes the direct value from your metric without averaging it. Since your Prometheus query is returning an average across pods already, you need to use Value. The clue is in your kubectl get hpa output: 3500m/70 (avg). See docs on HPA with external metrics. In KEDA that is specified using the metricType option under the triggers field. See KEDA: Scaling Deployments
###Instruction: Below are the setup details: keda installed, prometheus deployed, through application using below scaled object template for hpa: keda: triggers: - metadata: metricName: container_memory_usage query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100)) serverAddress: <serveraddress> threshold: '70' type: prometheus basically we want to scale the deployment based on the given prom query.(based on container memory utilisation of particular pods..if it exceeds 70% then hpa will scale the pods. ) when we try the above query on Prometheus it returns the results as 8., 10.. , 25.3. Basically single element response But though keda it gives the result as below: kubectl get hpa -n integration keda-pythonb NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE keda-pythonb Deployment/pythonb 3500m/70 (avg), 34%/87% + 1 more... 2 10 2 14m Instead of single value it gives 3500m as current value. does keda convert the data returned from prom query? Any pointers would be helpful. I hope the prom query is correct. ### Response: We just solved this one after a lot of off-and-on hunting. Turns out KEDA has an option called metricType that you can specify under triggers. TLDR you need to set that to "Value". To understand why you need to dive into how HPA works in Kubernetes. When you define a kind: HorizontalPodAutoscaler you specify the metrics that are used for scaling. KEDA does this for you and creates an external metric like this: metrics: - external: metric: name: ... selector: matchLabels: scaledobject.keda.sh/name: ... target: type: AverageValue averageValue: ... type: External There are Value and AverageValue metric types. AverageValue is the default, meant for metrics like http-requests-per-second, which would need to be divided by the number of replicas before compared to the target. Value, on the other hand, takes the direct value from your metric without averaging it. Since your Prometheus query is returning an average across pods already, you need to use Value. The clue is in your kubectl get hpa output: 3500m/70 (avg). See docs on HPA with external metrics. In KEDA that is specified using the metricType option under the triggers field. See KEDA: Scaling Deployments
Below are the setup details: keda installed, prometheus deployed, through application using below scaled object template for hpa: keda: triggers: - metadata: metricName: container_memory_usage query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100)) serverAddress: <serveraddress> threshold: '70' type: prometheus basically we want to scale the deployment based on the given prom query.(based on container memory utilisation of particular pods..if it exceeds 70% then hpa will scale the pods. ) when we try the above query on Prometheus it returns the results as 8., 10.. , 25.3. Basically single element response But though keda it gives the result as below: kubectl get hpa -n integration keda-pythonb NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE keda-pythonb Deployment/pythonb 3500m/70 (avg), 34%/87% + 1 more... 2 10 2 14m Instead of single value it gives 3500m as current value. does keda convert the data returned from prom query? Any pointers would be helpful. I hope the prom query is correct.
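A minimal sketch of the trigger from the question with the metricType field set, assuming a KEDA version recent enough to support it; the query, server address and threshold are copied from the question:
triggers:
  - type: prometheus
    metricType: Value
    metadata:
      serverAddress: <serveraddress>
      metricName: container_memory_usage
      threshold: '70'
      query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100))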
Either you need to delete the corresponding Argo CD application or you need to roll back the deployment. If you delete the application it will remove all the resources created by the application and it will stop the operation. If you roll back to the previous version it will undo the changes you have made in the current deployment and bring all your resources back to their previous versions. You can use the Argo CD CLI command argocd app rollback to roll back to the particular revision you want. You can also roll back from the Argo CD UI. If your finalizer is still present you need to manually remove the finalizer and then re-apply the resource definitions. Please check this document
###Instruction: Lets say you use Argocd to deploy helm charts to Kubernetes. Things work great but you have a kubernetes resource finalizer on a resource. Somebody deletes the resource and now Argo just waits in the state of 'Progressing' or 'Deleting'. It can't actually do the delete due to the finalizer. This is a good protection mechanism for very important files like aws iam files. But I am hopeful somebody can help me figure out. Is there anyway to stop the operation given to argo and instead just let it sync again as normal? Maybe somebody made a mistake and the finalizer worked as intended. Instead of clearing the finalizer and dealing with the consequences. Can the consequences be prevented by undoing argocds operation? Thank you ### Response: Either you need to delete the corresponding Argocd application or you need to roll back the deployment. If you delete the application it will remove all the resources created by the application and it will stop the operation. If you roll back to the previous version it will undo the changes you have made in the current deployment and bring all your resources to previous versions. You can use Argo CD CLI argocd app rollback’ -r ’, to roll back to the particular version you want. You can also roll back from Argo CD UI. If your finalizer is still present you need to manually remove the finalizer and then re-apply the resource definitions. Please check this document
Let's say you use Argocd to deploy helm charts to Kubernetes. Things work great, but you have a kubernetes resource finalizer on a resource. Somebody deletes the resource and now Argo just waits in the state of 'Progressing' or 'Deleting'. It can't actually do the delete due to the finalizer. This is a good protection mechanism for very important files like aws iam files. But I am hopeful somebody can help me figure this out. Is there any way to stop the operation given to argo and instead just let it sync again as normal? Maybe somebody made a mistake and the finalizer worked as intended. Instead of clearing the finalizer and dealing with the consequences, can the consequences be prevented by undoing argocd's operation? Thank you
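A hedged sketch of the CLI flow from the answer above, with a placeholder application name and revision ID; check argocd app rollback --help on your installed version for the exact arguments:
argocd app history myapp       # list deployed revision IDs
argocd app rollback myapp 3    # roll back to revision ID 3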
You can use the downward API of Kubernetes as environment variables: - name: pod_name valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name Feel free to refer to my GitHub YAML: https://github.com/harsh4870/OCI-public-logging-uma-agent/blob/main/deployment.yaml apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: app image: <App image> env: - name: pod_name valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name volumeMounts: - mountPath: /app_logs name: app-logs subPathExpr: $(pod_name) volumes: - name: app-logs hostPath: path: /var/log/app_logs type: DirectoryOrCreate Here is the full article where I have used the downwardAPI for logging: https://medium.com/@harsh.manvar111/oke-logging-with-uma-agent-in-oci-d6f55a8bcc02
###Instruction: I have a use-case where I need to mount a hostpath to a pod but that hostpath should have a subdirectory with PODNAME, I know something like this can be used with statefulsets but due to some constraints a hostpath mount is needed in deployment where the root directory will have sub directories as podnames. Can this be achieved? ### Response: You can use the downward API of Kubernetes as Environment variables : - name: pod_name valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name Feel free to refer my GitHub YAML : https://github.com/harsh4870/OCI-public-logging-uma-agent/blob/main/deployment.yaml apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: app image: <App image> env: - name: pod_name valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name volumeMounts: - mountPath: /app_logs name: app-logs subPathExpr: $(pod_name) volumes: - name: mydir hostPath: path: /var/log/app_logs type: DirectoryOrCreate Here is full article i have used downwardAPI for logging : https://medium.com/@harsh.manvar111/oke-logging-with-uma-agent-in-oci-d6f55a8bcc02
I have a use-case where I need to mount a hostpath to a pod, but that hostpath should have a subdirectory with PODNAME. I know something like this can be done with statefulsets, but due to some constraints a hostpath mount is needed in a deployment where the root directory will have subdirectories named after the pods. Can this be achieved?
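Since the question asks about a Deployment rather than a bare Pod, the same downward API and subPathExpr fields go under the pod template; a trimmed sketch, with the image name as a placeholder:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: <app-image>
          env:
            - name: pod_name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: app-logs
              mountPath: /app_logs
              subPathExpr: $(pod_name)   # each pod writes under its own name
      volumes:
        - name: app-logs
          hostPath:
            path: /var/log/app_logs
            type: DirectoryOrCreate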
Stop a kubernetes namespace from terminating by updating the finalizers associated with it. The finalizers of a namespace determine the conditions that must be met before the namespace can be deleted. If a namespace is in a "Terminating" state, it means that there are still finalizers associated with it that need to be resolved. To stop the termination, you need to remove the finalizers from the namespace's resource definition. You can do it by: retrieving the current definition of the namespace: kubectl get namespace <namespace-name> -o yaml > namespace.yaml then in the definition file, look for the finalizers field and remove the items associated with it: # vi/nano/emacs namespace.yaml # open with your favorite editor # and replace: apiVersion: v1 kind: Namespace metadata: name: <namespace-name> finalizers: - kubernetes save it. and then update the namespace definition with the edited file: kubectl replace -f namespace.yaml After that, the namespace should no longer be in a "Terminating" state and will remain intact.
###Instruction: I have a kubernetes namespace in phase: Terminating (waiting on finalizers or something) and I want it to stop terminating, without deleting it. Is there a way to do this? ### Response: Stop a kubernetes namespace from terminating by updating the finalizers associated with it. The finalizers of a namespace determine the conditions that must be met before the namespace can be deleted. If a namespace is in a "Terminating" state, it means that there are still finalizers associated with it that need to be resolved. To stop the termination, you need to remove the finalizers from the namespace's resource definition. You can do it by: retrieving the current definition of the namespace: kubectl get namespace <namespace-name> -o yaml > namespace.yaml then in the definition file, look for the finalizers field and remove the items associated with it: # vi/nano/emacs namespace.yaml # open with your favorite editor # and replace: apiVersion: v1 kind: Namespace metadata: name: <namespace-name> finalizers: - kubernetes save it. and then update the namespace definition with the edited file: kubectl replace -f namespace.yaml After that, the namespace should no longer be in a "Terminating" state and will remain intact.
I have a kubernetes namespace in phase: Terminating (waiting on finalizers or something) and I want it to stop terminating, without deleting it. Is there a way to do this?
You're hitting problems with whitespace in the first line of the block scalar. You should check two things: The template block containing indent must not itself be indented, it must start at the first column of its line; and The template block containing indent must not have a - inside the opening curly braces. {{- $configTpl := .Values.configTpl | default (.Files.Get "files/config.tpl") }} config.tpl: | {{ tpl $configTpl . | indent 4 }} The templating language isn't really aware of YAML syntax as it runs. If you have spaces in front of the indent line, they will get emitted, and then indent adds its own leading space, resulting in the last output you get where the first line is indented extra. The - whitespace control marker will also consume the preceding newline, resulting in the first line of the output being on the same line as the YAML block scalar marker.
###Instruction: I have a basic kubernetes helm template like below apiVersion: v1 kind: ConfigMap metadata: name: test namespace: {{ .Release.Namespace }} labels: app.kubernetes.io/version: {{ .Chart.AppVersion }} app.kubernetes.io/managed-by: {{ .Release.Service }} app.kubernetes.io/instance: {{ .Release.Name }} data: config.tpl: | {{- default ((tpl .Values.configTpl .) | indent 4) (tpl (.Files.Get "files/config.tpl") . | indent 4) -}} and the values.yml file configTpl: | {{ x=8gwifi.org }} When i apply the helm chart it throw me an error ❯ helm upgrade mychart . --namespace=test --create-namespace --debug upgrade.go:142: [debug] preparing upgrade for mychart Error: UPGRADE FAILED: YAML parse error on 8gwifi.org-test/templates/configmap-logfilegen.yaml: error converting YAML to JSON: yaml: line 11: did not find expected comment or line break helm.go:84: [debug] error converting YAML to JSON: yaml: line 11: did not find expected comment or line break I tried different configuration config.tpl: | {{- default (tpl .Values.configTpl . | indent 4) (tpl (.Files.Get "files/config.tpl") . | indent 4) -}} still resulting in same error, Is there a way to specify a config value if none is passed then used the hardcoded one I'm sure it's an YAML syntx issue couldn't figure it out checked all cases Based on David suggestion Template debug is showing this data: config.tpl: |- x := 8gwifi.org y := "functions" I can cleary see y is not indent and throwing YAML syntax error, not sure how to fix this This is the updated definition data: config.tpl: |- {{ (tpl .Values.configTpl . | indent 4) | default (tpl (.Files.Get "files/config.tpl") . | indent 4) -}} values.yml configTpl: | x := "8gwifi.org" y := "function" ### Response: You're hitting problems with whitespace in the first line of the block scalar. You should check two things: The template block containing indent must not itself be indented, it must start at the first column of its line; and The template block containing indent must not have a - inside the opening curly braces. {{- $configTpl := .Values.configTpl | default (.Files.Get "tiles/config.tpl") }} config.tpl: | {{ tpl $configTpl . | indent 4 }} The templating language isn't really aware of YAML syntax as it runs. If you have spaces in front of the indent line, they will get emitted, and then indent adds its own leading space, resulting in the last output you get where the first line is indented extra. The - whitespace control marker will also consume the preceding newline, resulting in the first line of the output being on the same line as the YAML block scalar marker.
I have a basic kubernetes helm template like below apiVersion: v1 kind: ConfigMap metadata: name: test namespace: {{ .Release.Namespace }} labels: app.kubernetes.io/version: {{ .Chart.AppVersion }} app.kubernetes.io/managed-by: {{ .Release.Service }} app.kubernetes.io/instance: {{ .Release.Name }} data: config.tpl: | {{- default ((tpl .Values.configTpl .) | indent 4) (tpl (.Files.Get "files/config.tpl") . | indent 4) -}} and the values.yml file configTpl: | {{ x=8gwifi.org }} When i apply the helm chart it throw me an error ❯ helm upgrade mychart . --namespace=test --create-namespace --debug upgrade.go:142: [debug] preparing upgrade for mychart Error: UPGRADE FAILED: YAML parse error on 8gwifi.org-test/templates/configmap-logfilegen.yaml: error converting YAML to JSON: yaml: line 11: did not find expected comment or line break helm.go:84: [debug] error converting YAML to JSON: yaml: line 11: did not find expected comment or line break I tried different configuration config.tpl: | {{- default (tpl .Values.configTpl . | indent 4) (tpl (.Files.Get "files/config.tpl") . | indent 4) -}} still resulting in same error, Is there a way to specify a config value if none is passed then used the hardcoded one I'm sure it's an YAML syntx issue couldn't figure it out checked all cases Based on David suggestion Template debug is showing this data: config.tpl: |- x := 8gwifi.org y := "functions" I can cleary see y is not indent and throwing YAML syntax error, not sure how to fix this This is the updated definition data: config.tpl: |- {{ (tpl .Values.configTpl . | indent 4) | default (tpl (.Files.Get "files/config.tpl") . | indent 4) -}} values.yml configTpl: | x := "8gwifi.org" y := "function"
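An alternative sketch that produces the same output but lets the template line itself stay indented: nindent prepends the newline and the indentation, and the left-chomping tag removes the whitespace in front of it, so the block scalar stays well formed. The variable follows the answer above and the file path matches the question's files/config.tpl:
{{- $configTpl := .Values.configTpl | default (.Files.Get "files/config.tpl") }}
data:
  config.tpl: |-
    {{- tpl $configTpl . | nindent 4 }}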
Update Even better would be to get the new_pod id and exec directly into that. Also possible, yes. Try this: k rollout status deploy/myapp >/dev/null && \ k get po -l app=myapp | grep Running | awk '{print $1}' | xargs -I{} kubectl exec {} -- show_version I would love to know what controls that 30s time. This can be configured using the terminationGracePeriodSeconds field in the pod's spec. The value defaults to, you guessed it right, 30s. If you're not concerned about data loss (due to the immediate shutdown), it can be set to 0. After that you can directly exec into the new pod: spec: terminationGracePeriodSeconds: 0 k rollout status deploy/myapp >/dev/null && k exec deploy/myapp -- show_version While being "Terminated" the old pod is still in phase Running, and the kubectl exec deploy/myapp seems to use the first Running pod of the deployment . I would suggest: Retrieve and store the name of the currently running pod in a temp variable prior to deployment (assuming the pod has the label app=myapp) $ old_pod=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}') Deploy $ k apply -f Deployment.yaml Wait until the rollout is done $ k rollout status deploy/myapp Wait until the old_pod is deleted $ k wait --for=delete pod/$old_pod --timeout -1s Check the new pod $ k exec deploy/myapp -- show_version
###Instruction: We use kubectl set image to rollout a new version 2.0.0 of an existing application. We then use kubectl rollout status to wait for the new pod to become ready so that we can run some basic tests. The problem is, kubectl rollout status returns (implying the new v2 pod is ready) but when we use kubectl exec we ALWAYS land in the old v1 pod. $ date Mon 13 Feb 2023 02:33:50 PM CET $ k set image deploy/myapp myapp=myapp:2.0.0 && k rollout status deploy/myapp deployment.apps/myapp image updated Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination... Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination... deployment "myapp" successfully rolled out Here, we assume the new version is running. Let's check: $ k exec deploy/myapp -- show_version 1.0.0 Nope, it's still the old version. Check the deplyoment: $ k get deploy/myapp NAME READY UP-TO-DATE AVAILABLE AGE myapp 1/1 1 1 273d Looks ready (K9S shows 1 pod "Terminating" and 1 pod ready). Check again: $date Mon 13 Feb 2023 02:34:00 PM CET $ k exec deploy/myapp -- show_version 1.0.0 Nope, check the pods: kubectl get pod | grep myapp- myapp-79454d746f-zw5kg 1/1 Running 0 31s myapp-6c484f86d4-2zsk5 1/1 Terminating 0 3m5s So our pod is running, we just can't exec into it - it always "picks" the terminating pod: $date Mon 13 Feb 2023 02:34:10 PM CET $ k exec deploy/myapp -- show_version 1.0.0 Wait 20-30s: $ date Mon 13 Feb 2023 02:34:25 PM CET $ k exec deploy/myapp -- show_version 2.0.0 Finally we have exec on the correct pod. Why/how can we wait for the old pod to terminate? OR How can we ensure we exec into the correct pod for testing? ### Response: Update Even better would be to get the new_pod id and exec directly into that. Also possible, yes. Try this: k rollout status deploy/myapp >/dev/null && \ k get po -l app=myapp | grep Running | awk '{print $1}' | xargs -I{} kubectl exec {} -- show_version I would love to know what controls that 30s time. This can be configured using the terminationGracePeriodSeconds field in the pod's spec. The value defaults to, you guessed it right, 30s. If you're not concerned about data loss (due to the immediate shutdown), it can be set to 0. After that you can directly exec into the new pod: spec: terminationGracePeriodSeconds: 0 k rollout status deploy/myapp >/dev/null && k exec deploy/myapp -- show_version While being "Terminated" the old pod is still in phase Running, and the kubectl exec deploy/myapp seems to use the first Running pod of the deployment . I would suggest: Retrieve and store the name of the currently running pod in a temp variable prior to deployment (assuming the pod has the label app=myapp) $ old_pod=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}') Deploy $ k apply -f Deployment.yaml Wait until the rollout is done $ k rollout status deploy/myapp Wait until the old_pod is deleted $ k wait --for=delete pod/$old_pod --timeout -1s Check the new pod $ k exec deploy/myapp -- show_version
We use kubectl set image to rollout a new version 2.0.0 of an existing application. We then use kubectl rollout status to wait for the new pod to become ready so that we can run some basic tests. The problem is, kubectl rollout status returns (implying the new v2 pod is ready) but when we use kubectl exec we ALWAYS land in the old v1 pod. $ date Mon 13 Feb 2023 02:33:50 PM CET $ k set image deploy/myapp myapp=myapp:2.0.0 && k rollout status deploy/myapp deployment.apps/myapp image updated Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination... Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination... deployment "myapp" successfully rolled out Here, we assume the new version is running. Let's check: $ k exec deploy/myapp -- show_version 1.0.0 Nope, it's still the old version. Check the deplyoment: $ k get deploy/myapp NAME READY UP-TO-DATE AVAILABLE AGE myapp 1/1 1 1 273d Looks ready (K9S shows 1 pod "Terminating" and 1 pod ready). Check again: $date Mon 13 Feb 2023 02:34:00 PM CET $ k exec deploy/myapp -- show_version 1.0.0 Nope, check the pods: kubectl get pod | grep myapp- myapp-79454d746f-zw5kg 1/1 Running 0 31s myapp-6c484f86d4-2zsk5 1/1 Terminating 0 3m5s So our pod is running, we just can't exec into it - it always "picks" the terminating pod: $date Mon 13 Feb 2023 02:34:10 PM CET $ k exec deploy/myapp -- show_version 1.0.0 Wait 20-30s: $ date Mon 13 Feb 2023 02:34:25 PM CET $ k exec deploy/myapp -- show_version 2.0.0 Finally we have exec on the correct pod. Why/how can we wait for the old pod to terminate? OR How can we ensure we exec into the correct pod for testing?
I would recommend creating the wildcard certificate using an issuer/clusterissuer. That way you have a single secret with the wildcard cert that you can reuse across all ingresses. As you are already using DNS verification it will work well, since wildcard certificates are not supported by the HTTP-01 challenge.

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: test123@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector:
        dnsZones:
          - "devops.example.in"
      dns01:
        route53:
          region: us-east-1
          hostedZoneID: Z0152EXAMPLE
          accessKeyID: AKIA5EXAMPLE
          secretAccessKeySecretRef:
            name: route53-secret
            key: secret-access-key
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: Issuer
    name: letsencrypt-prod
  commonName: "*.devops.example.in"
  dnsNames:
    - "*.devops.example.in"

Read my full article: https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2

Ingress & secret example

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: dns01
    certmanager.k8s.io/acme-dns01-provider: route53
  name: ingress-resource-tls
  namespace: default
spec:
  rules:
  - host: "hello.devops.example.in"
    http:
      paths:
      - backend:
          serviceName: hello-app
          servicePort: 8080
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
      - "hello.devops.example.in"
    secretName: tls-secret
###Instruction: I was under the impression that the main point of cluster-issuer is that its namespaced and doesn't have to be recreated across different resources, in general there could be one main cluster-issuer that will manage all ingresses across the cluster. From what I am seeing the cluster-issuer can only create one secret and if its in use by one ingress the second wont wont be created properly cause its already taken. Is there anyway to create one cluster-issuer to manage all ingresses across the cluster? Code included below Cluster-issuer.yaml apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-grafana namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: foo@gmail.com privateKeySecretRef: name: letsencrypt-grafana solvers: - selector: dnsZones: - "foo.com" dns01: route53: region: eu-central-1 hostedZoneID: foo accessKeyID: foo secretAccessKeySecretRef: name: aws-route53-creds key: password.txt Ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: grafana-ingress namespace: loki annotations: cert-manager.io/cluster-issuer: letsencrypt-grafana kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/proxy-body-size: "125m" nginx.ingress.kubernetes.io/rewrite-target: / spec: ingressClassName: nginx tls: - hosts: - grafana.foo.com secretName: letsencrypt-grafana # < cert-manager will store the created certificate in this secret. rules: - host: grafana.foo.com http: paths: - path: / pathType: Prefix backend: service: name: loki-grafana port: number: 80 ### Response: i would recommend creating the wildcard certificate using issuer/clusterissuer. So you will be having the single secret with wildcard cert so you can use that across all ingress. As you are already using DNS verification it will work well, as wildcard not supports the HTTP apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-prod spec: acme: email: test123@gmail.com server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-prod solvers: - selector: dnsZones: - "devops.example.in" dns01: route53: region: us-east-1 hostedZoneID: Z0152EXAMPLE accessKeyID: AKIA5EXAMPLE secretAccessKeySecretRef: name: route53-secret key: secret-access-key --- apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: le-crt spec: secretName: tls-secret issuerRef: kind: Issuer name: letsencrypt-prod commonName: "*.devops.example.in" dnsNames: - "*.devops.example.in" Read my full article : https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2 Ingress & secret example apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" certmanager.k8s.io/issuer: "letsencrypt-prod" certmanager.k8s.io/acme-challenge-type: dns01 certmanager.k8s.io/acme-dns01-provider: route53 name: ingress-resource-tls namespace: default spec: rules: - host: "hello.devops.example.in" http: paths: - backend: serviceName: hello-app servicePort: 8080 path: / pathType: ImplementationSpecific tls: - hosts: - "hello.devops.example.in" secretName: tls-secret
I was under the impression that the main point of cluster-issuer is that its namespaced and doesn't have to be recreated across different resources, in general there could be one main cluster-issuer that will manage all ingresses across the cluster. From what I am seeing the cluster-issuer can only create one secret and if its in use by one ingress the second wont wont be created properly cause its already taken. Is there anyway to create one cluster-issuer to manage all ingresses across the cluster? Code included below Cluster-issuer.yaml apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-grafana namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: foo@gmail.com privateKeySecretRef: name: letsencrypt-grafana solvers: - selector: dnsZones: - "foo.com" dns01: route53: region: eu-central-1 hostedZoneID: foo accessKeyID: foo secretAccessKeySecretRef: name: aws-route53-creds key: password.txt Ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: grafana-ingress namespace: loki annotations: cert-manager.io/cluster-issuer: letsencrypt-grafana kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/proxy-body-size: "125m" nginx.ingress.kubernetes.io/rewrite-target: / spec: ingressClassName: nginx tls: - hosts: - grafana.foo.com secretName: letsencrypt-grafana # < cert-manager will store the created certificate in this secret. rules: - host: grafana.foo.com http: paths: - path: / pathType: Prefix backend: service: name: loki-grafana port: number: 80
There's a pretty simple way to let your main chart install the objects defined by your dependency chart's CRDs. Just make them installed and upgraded with the post-install and post-upgrade hooks. Just a simple example for you. Let's imagine, you need to install a cert-manager as a subchart and then your main chart needs to install an Issuer. Obviously, the initial installation fails, as the CRDs aren't installed yet, so the Issuer doesn't pass validation. But if you use the hook (by adding the following annotation to the template of your Issuer: "helm.sh/hook": post-install,post-upgrade), then the Issuer will be installed only in the very end of the installation procedure, when cert-manager is up and kicking.
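A minimal sketch of what that can look like in the parent chart's templates (the Issuer name and spec here are just placeholders; the essential part is the hook annotation):

# templates/issuer.yaml in chart A
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-issuer   # placeholder name
  annotations:
    # defer creation until the cert-manager subchart and its CRDs are installed
    "helm.sh/hook": post-install,post-upgrade
spec:
  selfSigned: {}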
###Instruction: I have a helm chart A which depends on a third-party subchart B. Chart B defines some CRDs which are used by chart A. However, when I install chart A (hence B too), I get an error saying that the CRDs are not recognized. It seems that the CRs are being stored before CRDs. Helm documentation about CRD describes two ways to handle this order, either put the CRDs in a folder called crds, or use two separate charts and install them one after the other. My questions are the following: Why doesn't Helm apply first the CRDs regardless of where they are? Why is the crds folder needed? What if the CRDs are in a chart that you don't wish to modify (like in my case)? Doesn't the second option make the dependencies specification useless. Can't there be an order of execution for dependencies? Is there a way, that I might have overlooked, to still keep 1 chart with a dependency and somehow make sure that the CRDs defined in the dependency are stored before being used? (hooks?) (you don't have to answer all the questions, an answer to any of them is appreciated) ### Response: There's a pretty simple way to let your main chart install the objects defined by your dependency chart's CRDs. Just make them installed and upgraded with the post-install and post-upgrade hooks. Just a simple example for you. Let's imagine, you need to install a cert-manager as a subchart and then your main chart needs to install an Issuer. Obviously, the initial installation fails, as the CRDs aren't installed yet, so the Issuer doesn't pass validation. But if you use the hook (by adding the following annotation to the template of your Issuer: "helm.sh/hook": post-install,post-upgrade), then the Issuer will be installed only in the very end of the installation procedure, when cert-manager is up and kicking.
I have a helm chart A which depends on a third-party subchart B. Chart B defines some CRDs which are used by chart A. However, when I install chart A (hence B too), I get an error saying that the CRDs are not recognized. It seems that the CRs are being stored before CRDs. Helm documentation about CRD describes two ways to handle this order, either put the CRDs in a folder called crds, or use two separate charts and install them one after the other. My questions are the following: Why doesn't Helm apply first the CRDs regardless of where they are? Why is the crds folder needed? What if the CRDs are in a chart that you don't wish to modify (like in my case)? Doesn't the second option make the dependencies specification useless. Can't there be an order of execution for dependencies? Is there a way, that I might have overlooked, to still keep 1 chart with a dependency and somehow make sure that the CRDs defined in the dependency are stored before being used? (hooks?) (you don't have to answer all the questions, an answer to any of them is appreciated)
The accepted solution does work -- a different issuer is one way to go. Though if you want to use the ACME issuer, you'll need to solve challenges. This can be done via either a HTTP01 solver or a DNS01 solver. If you choose to go with the DNS01 solver, you'll either need: to move your DNS hosting from GoDaddy to one of the supported providers. or you can try using this GoDaddy Webhook provider, which you may already be aware of. Though I can't guarantee that the project is in working status.
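If you go the HTTP01 route instead (which works for individual hostnames but cannot issue wildcard certificates), a minimal ACME ClusterIssuer sketch looks roughly like this; the email and ingress class are placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx              # requires an ingress controller in the cluster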
###Instruction: I'm trying to apply SSL to my kubernetes clusters (production & staging environment), but for now only on staging. I successfully installed the cert-manager, and since I have a 5 subdomains, I want to use wildcards, so I want to configure it with dns01. The problem is, we us GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (https://github.com/jetstack/cert-manager/issues/1083) and also a PR to support this, but I was wondering if there is a workaround for this to use godaddy with cert-manager since there is not a lot of activity on this subject? I want to use ACME so I can use let's encrypt for certificates. I'm fairly new to kubernetes, so if I missed something let me know. Is it possible to use let's encrypt with other type of issuers than ACME? Is there any other way where I can use GoDaddy DNS & let's encrypt with kubernetes? For now I don't have any Ingresses but only 2 services that are external faced. One frontend and one API gateway as LoadBalancer services. Thanks in advance! ### Response: The accepted solution does work -- a different issuer is one way to go. Though if you want to use the ACME issuer, you'll need to solve challenges. This can be done via either a HTTP01 solver or a DNS01 solver. If you choose to go with the DNS01 solver, you'll either need: to move your DNS hosting from GoDaddy to one of the supported providers. or you can try using this GoDaddy Webhook provider, which you may already be aware of. Though I can't guarantee that the project is in working status.
I'm trying to apply SSL to my kubernetes clusters (production & staging environment), but for now only on staging. I successfully installed the cert-manager, and since I have a 5 subdomains, I want to use wildcards, so I want to configure it with dns01. The problem is, we us GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (https://github.com/jetstack/cert-manager/issues/1083) and also a PR to support this, but I was wondering if there is a workaround for this to use godaddy with cert-manager since there is not a lot of activity on this subject? I want to use ACME so I can use let's encrypt for certificates. I'm fairly new to kubernetes, so if I missed something let me know. Is it possible to use let's encrypt with other type of issuers than ACME? Is there any other way where I can use GoDaddy DNS & let's encrypt with kubernetes? For now I don't have any Ingresses but only 2 services that are external faced. One frontend and one API gateway as LoadBalancer services. Thanks in advance!
"If x is truthy, then use its value, otherwise use y" is what the Helm (Sprig) default function does. You could replace the inner conditional with namespace: {{ .Values.global.namespace | default .Values.namespace }} The outer conditional is trickier. The problem you're trying to work around here is, if .Values.global isn't defined, it will evaluate to nil, and then .Values.global.namespace is an error. The usual approach I use here is to again use default to get an empty dictionary if it isn't defined, at which point you can successfully do a lookup. So you should be able to replace the entire block with {{- $global := .Values.global | default dict }} namespace: {{ $global.namespace | default .Values.namespace }}
###Instruction: I need to reduce my 'if else code' in my helm chart template How can I do that ?. {{- if .Values.global }} {{- if .Values.global.namespace }} namespace: {{ .Values.global.namespace }} {{- else }} namespace: {{ .Values.namespace }} {{- end }} {{- else }} namespace: {{ .Values.namespace }} {{- end}} name: {{.Values.name}} ### Response: "If x is truthy, then use its value, otherwise use y" is what the Helm (Sprig) default function does. You could replace the inner conditional with namespace: {{ .Values.global.namespace | default .Values.namespace }} The outer conditional is trickier. The problem you're trying to work around here is, if .Values.global isn't defined, it will evaluate to nil, and then .Values.global.namespace is an error. The usual approach I use here is to again use default to get an empty dictionary if it isn't defined, at which point you can successfully do a lookup. So you should be able to replace the entire block with {{- $global := .Values.global | default dict }} namespace: {{ $global.namespace | default .Values.namespace }}
I need to reduce my 'if else code' in my helm chart template How can I do that ?. {{- if .Values.global }} {{- if .Values.global.namespace }} namespace: {{ .Values.global.namespace }} {{- else }} namespace: {{ .Values.namespace }} {{- end }} {{- else }} namespace: {{ .Values.namespace }} {{- end}} name: {{.Values.name}}
You could use a variable and also {{with}} (which sets the dot), e.g.:

{{- $ns := .Values.namespace -}}
{{- with .Values.global }}{{ with .namespace }}{{ $ns = . }}{{ end }}{{ end -}}
namespace: {{ $ns }}
name: {{.Values.name}}
###Instruction: I need to reduce my 'if else code' in my helm chart template How can I do that ?. {{- if .Values.global }} {{- if .Values.global.namespace }} namespace: {{ .Values.global.namespace }} {{- else }} namespace: {{ .Values.namespace }} {{- end }} {{- else }} namespace: {{ .Values.namespace }} {{- end}} name: {{.Values.name}} ### Response: You could use a variable and also {{with}} (which sets the dot), e.g.: {{- $ns := .Values.namespace -}} {{- with .Values.global }}{{ with .namespace }}{{ $ns = . }}{{ end }}{{ end -}} namespace: {{ $ns }} name: {{.Values.name}}
I need to reduce my 'if else code' in my helm chart template How can I do that ?. {{- if .Values.global }} {{- if .Values.global.namespace }} namespace: {{ .Values.global.namespace }} {{- else }} namespace: {{ .Values.namespace }} {{- end }} {{- else }} namespace: {{ .Values.namespace }} {{- end}} name: {{.Values.name}}
The startup probe is optional. Originally, there were two types of probes in Kubernetes: readiness and liveness. However, people have encountered issues with slow-start containers. When a container takes a long time to start, Kubernetes does the first check on the liveness probe after initialDelaySeconds. If the check fails, Kubernetes attempts failureThreshold times with an interval of periodSeconds. If the liveness probe still fails, Kubernetes assumes that the container is not alive and restarts it. Unfortunately, the container will likely fail again, resulting in an endless cycle of restarting. You may want to increase failureThreshold and periodSeconds to avoid the endless restarting, but it can cause longer detection and recovery times in case of a thread deadlock. You may want to make the initialDelaySeconds longer to allow sufficient time for the container to start. However, it can be challenging to determine the appropriate delay since your application can run on various hardware. For instance, increasing initialDelaySeconds to 60 seconds to avoid this problem in one environment may cause unnecessary slow startup when deploying the service to a more advanced hardware that only requires 20 seconds to start. In such a scenario, Kubernetes waits for 60 seconds for the first liveness check, causing the pod to be idle for 40 seconds, and it still takes 60 seconds to serve. To address this issue, Kubernetes introduced the startup probe in 1.16, which defers all other probes until a pod completes its startup process. For slow-starting pods, the startup probe can poll at short intervals with a high failure threshold until it is satisfied, at which point the other probes can begin. If a container’s components take a long time to be ready except for the API component, the container can simply report 200 in the liveness probe, and the startup probe is not needed. Because the API component will be ready and report 200 very soon, Kubernetes will not restart the container endlessly, it will patiently wait until all the readiness probes indicate that the containers are all “ready” then take traffic to the pod. The startup probe can be implemented in the same way as the liveness probe. Once the startup probe confirms that the container is initialized, the liveness probe will immediately report that the container is alive, leaving no room for Kubernetes to mistakenly restart the container. Regarding the initialDelaySeconds, periodSeconds, failureThreshold, and timeout, it is really a balance between sensitivity and false positives. e.g., if you have a high failureThreshold and a high periodSeconds for the readiness probe, k8s cannot timely detect issues in the container and your pod continues to take traffic, hence many requests will fail. If you put a low failureThreshold and a low periodSeconds for the readiness probe, a temporary problem could take the pod out of traffic, that's a false positive. I kind of prefer default failureThreshold to 3 and periodSeconds to 5, successThreshold to 1 or 2. BTW, don't use the default health check from Spring Boot, you always need to customize them. More details here: https://danielw.cn/health-check-probes-in-k8s
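As a starting point (not a universal recommendation), a container spec wired to the Spring Boot actuator probe endpoints could look like the sketch below; the port and the timing values are assumptions you should tune to your own startup time:

containers:
  - name: app
    image: my-spring-boot-app:latest   # placeholder image
    ports:
      - containerPort: 8080
    startupProbe:
      httpGet:
        path: /actuator/health/liveness
        port: 8080
      periodSeconds: 5
      failureThreshold: 30        # allows up to ~150s for a slow start
    livenessProbe:
      httpGet:
        path: /actuator/health/liveness
        port: 8080
      periodSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /actuator/health/readiness
        port: 8080
      periodSeconds: 5
      failureThreshold: 3
      successThreshold: 1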
###Instruction: I'm configuring startup/liveness/readiness probes for kubernetes deployments serving spring boot services. As of the spring boot documentation it's best practice to use the corresponding liveness & readiness actuator endpoints as describes here: https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot What do you use for your startup probe? What are your recommendations for failureThreshold, delay, period and timeout values? Did you encounter issues when deploying isito sidecars to an existing setup? ### Response: The startup probe is optional. Originally, there were two types of probes in Kubernetes: readiness and liveness. However, people have encountered issues with slow-start containers. When a container takes a long time to start, Kubernetes does the first check on the liveness probe after initialDelaySeconds. If the check fails, Kubernetes attempts failureThreshold times with an interval of periodSeconds. If the liveness probe still fails, Kubernetes assumes that the container is not alive and restarts it. Unfortunately, the container will likely fail again, resulting in an endless cycle of restarting. You may want to increase failureThreshold and periodSeconds to avoid the endless restarting, but it can cause longer detection and recovery times in case of a thread deadlock. You may want to make the initialDelaySeconds longer to allow sufficient time for the container to start. However, it can be challenging to determine the appropriate delay since your application can run on various hardware. For instance, increasing initialDelaySeconds to 60 seconds to avoid this problem in one environment may cause unnecessary slow startup when deploying the service to a more advanced hardware that only requires 20 seconds to start. In such a scenario, Kubernetes waits for 60 seconds for the first liveness check, causing the pod to be idle for 40 seconds, and it still takes 60 seconds to serve. To address this issue, Kubernetes introduced the startup probe in 1.16, which defers all other probes until a pod completes its startup process. For slow-starting pods, the startup probe can poll at short intervals with a high failure threshold until it is satisfied, at which point the other probes can begin. If a container’s components take a long time to be ready except for the API component, the container can simply report 200 in the liveness probe, and the startup probe is not needed. Because the API component will be ready and report 200 very soon, Kubernetes will not restart the container endlessly, it will patiently wait until all the readiness probes indicate that the containers are all “ready” then take traffic to the pod. The startup probe can be implemented in the same way as the liveness probe. Once the startup probe confirms that the container is initialized, the liveness probe will immediately report that the container is alive, leaving no room for Kubernetes to mistakenly restart the container. Regarding the initialDelaySeconds, periodSeconds, failureThreshold, and timeout, it is really a balance between sensitivity and false positives. e.g., if you have a high failureThreshold and a high periodSeconds for the readiness probe, k8s cannot timely detect issues in the container and your pod continues to take traffic, hence many requests will fail. If you put a low failureThreshold and a low periodSeconds for the readiness probe, a temporary problem could take the pod out of traffic, that's a false positive. 
I kind of prefer default failureThreshold to 3 and periodSeconds to 5, successThreshold to 1 or 2. BTW, don't use the default health check from Spring Boot, you always need to customize them. More details here: https://danielw.cn/health-check-probes-in-k8s
I'm configuring startup/liveness/readiness probes for kubernetes deployments serving spring boot services. As of the spring boot documentation it's best practice to use the corresponding liveness & readiness actuator endpoints as describes here: https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot What do you use for your startup probe? What are your recommendations for failureThreshold, delay, period and timeout values? Did you encounter issues when deploying isito sidecars to an existing setup?
The download of the jar happens in the flink-kubernetes-operator pod. So, when you apply a FlinkSessionJob, the flink-operator recognizes the CRD and tries to download the jar from the jarURI location, construct a JobGraph and submit the session job to the JobDeployment. The Flink Kubernetes Operator also has Flink running inside it to build the JobGraph.

So, you will have to add flink-s3-fs-hadoop-1.15.3.jar at /opt/flink/plugins/s3-fs-hadoop/ inside the flink-kubernetes-operator.

You can add the jar either by extending the ghcr.io/apache/flink-kubernetes-operator image (curl the jar and copy it to the plugins location), or you can write an initContainer which downloads the jar to a volume and mounts that volume:

volumes:
  - name: s3-plugin
    emptyDir: { }
initContainers:
  - name: busybox
    image: busybox:latest
    # add a command here that downloads flink-s3-fs-hadoop-1.15.3.jar into /opt/flink/plugins/s3-fs-hadoop
    volumeMounts:
      - mountPath: /opt/flink/plugins/s3-fs-hadoop
        name: s3-plugin
containers:
  - image: 'ghcr.io/apache/flink-kubernetes-operator:95128bf'
    name: flink-kubernetes-operator
    volumeMounts:
      - mountPath: /opt/flink/plugins/s3-fs-hadoop
        name: s3-plugin

Also, if you are using a serviceAccount for S3 authentication, give the below config in flinkConfig

fs.s3a.aws.credentials.provider: com.amazonaws.auth.WebIdentityTokenCredentialsProvider
###Instruction: I'm experimenting with the new Flink Kubernetes operator and I've been able to do pretty much everything that I need besides one thing: getting a JAR file from the S3 file system. Context I have a Flink application running in a EKS cluster in AWS and have all the information saved in a S3 buckets. Things like savepoints, checkpoints, high availability and JARs files are all stored there. I've been able to save the savepoints, checkpoints and high availability information in the bucket, but when trying to get the JAR file from the same bucket I get the error: Could not find a file system implementation for scheme 's3'. The scheme is directly supported by Flink through the following plugins: flink-s3-fs-hadoop, flink-s3-fs-presto. I was able to get to this thread, but I wasn't able to get the resource fetcher to work correctly. Also the solution is not ideal and I was searching for a more direct approach. Deployment files Here's the files that I'm deploying in the cluster: deployment.yml apiVersion: flink.apache.org/v1beta1 kind: FlinkDeployment metadata: name: flink-deployment spec: podTemplate: apiVersion: v1 kind: Pod metadata: name: pod-template spec: containers: - name: flink-main-container env: - name: ENABLE_BUILT_IN_PLUGINS value: flink-s3-fs-presto-1.15.3.jar;flink-s3-fs-hadoop-1.15.3.jar volumeMounts: - mountPath: /flink-data name: flink-volume volumes: - name: flink-volume hostPath: path: /tmp type: Directory image: flink:1.15 flinkVersion: v1_15 flinkConfiguration: state.checkpoints.dir: s3://kubernetes-operator/checkpoints state.savepoints.dir: s3://kubernetes-operator/savepoints high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory high-availability.storageDir: s3://kubernetes-operator/ha jobManager: resource: memory: "2048m" cpu: 1 taskManager: resource: memory: "2048m" cpu: 1 serviceAccount: flink session-job.yml apiVersion: flink.apache.org/v1beta1 kind: FlinkSessionJob metadata: name: flink-session-job spec: deploymentName: flink-deployment job: jarURI: s3://kubernetes-operator/savepoints/flink.jar parallelism: 3 upgradeMode: savepoint savepointTriggerNonce: 0 The Flink Kubernetes operator version that I'm using is 1.3.1 Is there anything that I'm missing or doing wrong? ### Response: The download of the jar happens in flink-kubernetes-operator pod. So, when you apply FlinkSessionJob, the fink-operator would recognize the Crd and will try to download the jar from jarUri location and construct a JobGraph and submit the sessionJob to JobDeployment. Flink Kubernetes Operator will also have flink running inside it to build a JobGraph. So, You will have to add flink-s3-fs-hadoop-1.15.3.jar in location /opt/flink/plugins/s3-fs-hadoop/ inside flink-kubernetes-operator You can add the jar either by extending the ghcr.io/apache/flink-kubernetes-operator image, curl the jar and copy it to plugins location or You can write an initContainer which will download the jar to a volume and mount that volume volumes: - name: s3-plugin emptyDir: { } initContainers: - name: busybox image: busybox:latest volumeMounts: - mountPath: /opt/flink/plugins/s3-fs-hadoop name: s3-plugin containers: - image: 'ghcr.io/apache/flink-kubernetes-operator:95128bf' name: flink-kubernetes-operator volumeMounts: - mountPath: /opt/flink/plugins/s3-fs-hadoop name: s3-plugin Also, if you are using serviceAccount for S3 authentication, give below config in flinkConfig fs.s3a.aws.credentials.provider: com.amazonaws.auth.WebIdentityTokenCredentialsProvider
I'm experimenting with the new Flink Kubernetes operator and I've been able to do pretty much everything that I need besides one thing: getting a JAR file from the S3 file system. Context I have a Flink application running in a EKS cluster in AWS and have all the information saved in a S3 buckets. Things like savepoints, checkpoints, high availability and JARs files are all stored there. I've been able to save the savepoints, checkpoints and high availability information in the bucket, but when trying to get the JAR file from the same bucket I get the error: Could not find a file system implementation for scheme 's3'. The scheme is directly supported by Flink through the following plugins: flink-s3-fs-hadoop, flink-s3-fs-presto. I was able to get to this thread, but I wasn't able to get the resource fetcher to work correctly. Also the solution is not ideal and I was searching for a more direct approach. Deployment files Here's the files that I'm deploying in the cluster: deployment.yml apiVersion: flink.apache.org/v1beta1 kind: FlinkDeployment metadata: name: flink-deployment spec: podTemplate: apiVersion: v1 kind: Pod metadata: name: pod-template spec: containers: - name: flink-main-container env: - name: ENABLE_BUILT_IN_PLUGINS value: flink-s3-fs-presto-1.15.3.jar;flink-s3-fs-hadoop-1.15.3.jar volumeMounts: - mountPath: /flink-data name: flink-volume volumes: - name: flink-volume hostPath: path: /tmp type: Directory image: flink:1.15 flinkVersion: v1_15 flinkConfiguration: state.checkpoints.dir: s3://kubernetes-operator/checkpoints state.savepoints.dir: s3://kubernetes-operator/savepoints high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory high-availability.storageDir: s3://kubernetes-operator/ha jobManager: resource: memory: "2048m" cpu: 1 taskManager: resource: memory: "2048m" cpu: 1 serviceAccount: flink session-job.yml apiVersion: flink.apache.org/v1beta1 kind: FlinkSessionJob metadata: name: flink-session-job spec: deploymentName: flink-deployment job: jarURI: s3://kubernetes-operator/savepoints/flink.jar parallelism: 3 upgradeMode: savepoint savepointTriggerNonce: 0 The Flink Kubernetes operator version that I'm using is 1.3.1 Is there anything that I'm missing or doing wrong?
I had the same issue with this configuration and it seems like the pipelineStages at this level is being ignored. I solved my problem by moving it to snippets. promtail: enabled: true config: snippets: pipelineStages: - docker: {} - drop: source: namespace expression: "kube-.*" This worked for me and I hope it helps someone else who might run into the same problem. For more details, please check out this link: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
###Instruction: I am using loki-stack helm chart I am doing following configuration to disable kube-system namespace logs in promtail so that loki doesnt use it promtail: enabled: true # # Enable Promtail service monitoring # serviceMonitor: # enabled: true # # User defined pipeline stages pipelineStages: - docker: {} - drop: source: namespace expression: "kube-.*" Please help in solving inside container this values are not getting updated The configuration is already mentioned above ### Response: I had the same issue with this configuration and it seems like the pipelineStages at this level is being ignored. I solved my problem by moving it to snippets. promtail: enabled: true config: snippets: pipelineStages: - docker: {} - drop: source: namespace expression: "kube-.*" This worked for me and I hope it helps someone else who might run into the same problem. For more details, please check out this link: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
I am using loki-stack helm chart I am doing following configuration to disable kube-system namespace logs in promtail so that loki doesnt use it promtail: enabled: true # # Enable Promtail service monitoring # serviceMonitor: # enabled: true # # User defined pipeline stages pipelineStages: - docker: {} - drop: source: namespace expression: "kube-.*" Please help in solving inside container this values are not getting updated The configuration is already mentioned above
The correct way to initialise the client can be found here: https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.14.4/pkg/client#example-New cl, err := client.New(config.GetConfigOrDie(), client.Options{}) if err != nil { fmt.Println("failed to create client") os.Exit(1) }
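Putting it together with the snippet from the question, a minimal runnable sketch (error handling kept short; the pod spec is just the nginx example):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	// Build the client from the local kubeconfig (or in-cluster config)
	c, err := client.New(config.GetConfigOrDie(), client.Options{})
	if err != nil {
		fmt.Println("failed to create client:", err)
		os.Exit(1)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "default",
			Name:      "nginx",
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Image: "nginx", Name: "nginx"},
			},
		},
	}

	// c is initialised, so Create no longer dereferences a nil client
	if err := c.Create(context.Background(), pod); err != nil {
		fmt.Println("failed to create pod:", err)
		os.Exit(1)
	}
}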
###Instruction: Following the k8s/controller-runtime/client example code (see here), which goes a bit like this var c client.Client func main() { // Using a typed object. pod := &corev1.Pod{ ObjectMeta: metav1.ObjectMeta{ Namespace: "namespace", Name: "name", }, Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Image: "nginx", Name: "nginx", }, }, }, } // c is a created client. _ = c.Create(context.Background(), pod) // nil deref here } I get a nullptr dereference on _ = c.Create(context.Background(), pod). To me this makes sense, since I declared c, but never initialised it. However the example code also does that. What is going on here? ### Response: The correct way to initialise the client can be found here: https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.14.4/pkg/client#example-New cl, err := client.New(config.GetConfigOrDie(), client.Options{}) if err != nil { fmt.Println("failed to create client") os.Exit(1) }
Following the k8s/controller-runtime/client example code (see here), which goes a bit like this var c client.Client func main() { // Using a typed object. pod := &corev1.Pod{ ObjectMeta: metav1.ObjectMeta{ Namespace: "namespace", Name: "name", }, Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Image: "nginx", Name: "nginx", }, }, }, } // c is a created client. _ = c.Create(context.Background(), pod) // nil deref here } I get a nullptr dereference on _ = c.Create(context.Background(), pod). To me this makes sense, since I declared c, but never initialised it. However the example code also does that. What is going on here?
Found the answer: go to Options / Legend and include the field to be used as legend :) In this case: {{pod}} Thanks Fabio Vieira
###Instruction: I created this chart using Grafana and the query rate(container_memory_usage_bytes{ image!="", container_name!="POD"}[5m]) /1024 / 1024. When highlighting the data points I receive all the details about the pod that is too much information. How can I show only the pod field? ### Response: Found the answer: go to Options / Legend and include the field to be used as legend :) In this case: {{pod}} Thanks Fabio Vieira
I created this chart using Grafana and the query rate(container_memory_usage_bytes{ image!="", container_name!="POD"}[5m]) /1024 / 1024. When highlighting the data points I receive all the details about the pod that is too much information. How can I show only the pod field?
The answer is No. Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned. Documentation is here. The reason is that garbage collection relies on this marker: in foreground cascading deletion, the owner object you're deleting first enters a "deletion in progress" state. Read through this for a detailed understanding: Foreground cascading deletion
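For illustration, a finalizer is just an entry under metadata; after you delete such an object, deletionTimestamp is set and the object stays in a terminating state until a controller removes the finalizer (the finalizer name below is a made-up example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example
  finalizers:
    - example.com/cleanup   # deletion blocks until a controller removes this entry
data:
  key: value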
###Instruction: In Kubernetes, there is deletionTimestamp to signal an ongoing deletion and there are finalizers to model tasks during the process of deletion. However, it could be, that during the deletion, the specification of a parent object changes in a way that would effective make cancelling the deletion the most desirable solution. I'd expect a clear and complete documentation of deletionTimestamp and finalization covering the entire lifecycle of deletionTimestamp. It seems that most people seem to assume that it is either zero or nonzero and cannot be changed while it is nonzero. However, there seems to be no documentation on that. I also do not want to "just check", because just check is subject to change and may stop working tomorrow. ### Response: The answer is No, Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned. Documentation is here Reason being garbage collection used this identifier, In foreground cascading deletion, the owner object you're deleting first enters a deletion in progress state Read through this for detailed understanding Foreground cascading deletion
In Kubernetes, there is deletionTimestamp to signal an ongoing deletion and there are finalizers to model tasks during the process of deletion. However, it could be, that during the deletion, the specification of a parent object changes in a way that would effective make cancelling the deletion the most desirable solution. I'd expect a clear and complete documentation of deletionTimestamp and finalization covering the entire lifecycle of deletionTimestamp. It seems that most people seem to assume that it is either zero or nonzero and cannot be changed while it is nonzero. However, there seems to be no documentation on that. I also do not want to "just check", because just check is subject to change and may stop working tomorrow.
Not sure how you have set up Redis - whether you are running Redis Cluster or a Redis master-slave setup with the sentinel option. Official doc

If you have used the helm chart, it has an option to enable sentinel:

sentinel:
  enabled: true

Check this Github Doc - there are two helm charts/topologies available; check which one you have used.

The idea behind sentinel is that it gives you back the current IP of the master and the slaves. For example, with the Python snippet below, when you hit the Redis sentinel service it returns both the master and slave IPs, and you can use whichever you need for writes or reads.

from redis import Sentinel

sentinel = Sentinel([('<**Single K8s service Name**>', 26379)], socket_timeout=0.1)
sentinel.discover_master('mymaster')
('127.0.0.1', 6379)
sentinel.discover_slaves('mymaster')
[('127.0.0.1', 6380)]

Reference : https://github.com/redis/redis-py#sentinel-support

A few more answers that might be helpful:
https://stackoverflow.com/a/68551193/5525824
https://stackoverflow.com/a/70271427/5525824
https://stackoverflow.com/a/70454335/5525824
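Building on that, redis-py can also hand you ready-made connections so that writes go to the master and reads go to a replica; the sentinel service name is a placeholder and 'mymaster' is the master set name from the snippet above:

from redis.sentinel import Sentinel

# one Kubernetes Service in front of the sentinel pods is enough;
# sentinel tells the client who the current master/replicas are
sentinel = Sentinel([('<redis-sentinel-service>', 26379)], socket_timeout=0.5)

# connection that always points at the current master - use it for writes
master = sentinel.master_for('mymaster', socket_timeout=0.5)
master.set('greeting', 'hello')

# connection that points at a replica - use it for reads
replica = sentinel.slave_for('mymaster', socket_timeout=0.5)
print(replica.get('greeting'))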
###Instruction: I have a redis statefulset running on my cluster, with 3 nodes (master, slave, slave), the master node is for write only, and the other 2 nodes for read, they replication is working fine, but I want to connect a backend application to this redis-cluster and somehow manage the requests, in which the write requests goes to master and the reads goes to slaves, any idea how I can achieve this ? Does it have to be controlled in application level, or is there any solution in Kubernetes to achieve this? In addition, I want to know about the situation when I am scaling up the statefulset replicas or scale down, then the traffic direction still should be going to appropriate pod. My Kubernetes cluster is running on premises. ### Response: Not sure how you have setup the Redis and you are running the Redis cluster or running Redis master-slave with sentinel option. Official doc If you have used helm chart they have the option to enable the sentinel. sentinel: enabled: true Check this Github Doc there is two helm charts or topology available and which one you have used. So the idea behind sentinel is it writes or gives you back the current IP of the master and slave. For example below the python code snippet, when you will hit the service of Redis it will give you both master and slave IP, you can use it as per need either write or read. from redis import Sentinel sentinel = Sentinel([('<**Single K8s service Name**>', 26379)], socket_timeout=0.1) sentinel.discover_master('mymaster') ('127.0.0.1', 6379) sentinel.discover_slaves('mymaster') [('127.0.0.1', 6380)] Reference : https://github.com/redis/redis-py#sentinel-support Ref for few more answer might be helpful : https://stackoverflow.com/a/68551193/5525824 https://stackoverflow.com/a/70271427/5525824 https://stackoverflow.com/a/70454335/5525824
I have a redis statefulset running on my cluster, with 3 nodes (master, slave, slave), the master node is for write only, and the other 2 nodes for read, they replication is working fine, but I want to connect a backend application to this redis-cluster and somehow manage the requests, in which the write requests goes to master and the reads goes to slaves, any idea how I can achieve this ? Does it have to be controlled in application level, or is there any solution in Kubernetes to achieve this? In addition, I want to know about the situation when I am scaling up the statefulset replicas or scale down, then the traffic direction still should be going to appropriate pod. My Kubernetes cluster is running on premises.
I have faced the same issue, so I purged the cache here:

C:\Users\user\.kube\cache

Then I modified the path of the pinniped program in the config file below, at the command: line (obviously, the pinniped executable itself has to be present at this path):

C:\Users\user\.kube\config

name: cluster-np-a2op-cluster-pinniped
user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    .....
    .....
    .....
    --upstream-identity-provider-flow=browser_authcode
    command: C:\Program Files\Pinniped\pinniped.exe
    env: []
    .....
    .....
    .....

Hope this will help. ;)
###Instruction: pinniped cli is not working in widnows. pinniped-cli-windows-amd64.exe is downloaded but when i type pinniped, it's not recognized. C:\Users\hello>pinniped pinniped is not recognized as a internal command or external command, operable program or batch file. Seem windows is not recognizing this .exe file as published by a valid publisher. pinniped should show the pinniped cli options and be recognized as command. I created a folder called pinniped and copied .exe file and tried ...that did work. ### Response: I have faced the same issue, so i had purged the cache here: C:\Users\user.kube\cache And then, i have modified the path of the program pinniped in the config file below, at the line command (obviously, the program pinniped itself has to be present in this path) : C:\Users\user.kube\config name: cluster-np-a2op-cluster-pinniped user: exec: apiVersion: client.authentication.k8s.io/v1beta1 ..... ..... ..... --upstream-identity-provider-flow=browser_authcode command: C:\Program Files\Pinniped\pinniped.exe env: [] ..... ..... ..... Hope this will help. ;)
pinniped cli is not working in widnows. pinniped-cli-windows-amd64.exe is downloaded but when i type pinniped, it's not recognized. C:\Users\hello>pinniped pinniped is not recognized as a internal command or external command, operable program or batch file. Seem windows is not recognizing this .exe file as published by a valid publisher. pinniped should show the pinniped cli options and be recognized as command. I created a folder called pinniped and copied .exe file and tried ...that did work.
You can edit the CoreDNS ConfigMap and map the domain to the service name using a rewrite rule:

rewrite name example.io service.default.svc.cluster.local

In the Corefile it sits next to the other plugins:

rewrite name example.io service.default.svc.cluster.local
loadbalance round_robin
prometheus {$POD_IP}:9153
forward . /etc/resolv.conf
reload

YAML for ref:

apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
  name: coredns
  namespace: kube-system

Other answers for ref :
https://stackoverflow.com/a/73078010/5525824
https://stackoverflow.com/a/70672297/5525824
###Instruction: I'm using rke2 cluster, i.e. a k8s distribution. And I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I should change the core file of coredns like below. .:53 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 172.16.0.1 cache 30 loop reload loadbalance } example.org:53 { #加一个block errors cache 30 forward . 10.150.0.1 } However, rke2 install coredns with helm system, so I should change the helm values to add somethings to the corefile. How should I achieve this. Thank you a lot. ### Response: You map or edit the configmap like you can map the domain to the service name using rewrite, rewrite name example.io service.default.svc.cluster.local rewrite name example.io service.default.svc.cluster.local loadbalance round_robin prometheus {$POD_IP}:9153 forward . /etc/resolv.conf reload YAML for ref apiVersion: v1 data: Corefile: |- .:5353 { bind {$POD_IP} cache 30 errors health {$POD_IP}:8080 kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa } rewrite name example.io service.default.svc.cluster.local loadbalance round_robin prometheus {$POD_IP}:9153 forward . /etc/resolv.conf reload } kind: ConfigMap metadata: labels: app: coredns k8s-app: coredns name: coredns namespace: kube-system Other answers for ref : https://stackoverflow.com/a/73078010/5525824 https://stackoverflow.com/a/70672297/5525824
I'm using rke2 cluster, i.e. a k8s distribution. And I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I should change the core file of coredns like below. .:53 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 172.16.0.1 cache 30 loop reload loadbalance } example.org:53 { #加一个block errors cache 30 forward . 10.150.0.1 } However, rke2 install coredns with helm system, so I should change the helm values to add somethings to the corefile. How should I achieve this. Thank you a lot.
First of all, I suggest backing up all your objects (certs, orders, issuers, clusterissuers, etc. - with velero for example)!

The cert-manager documentation suggests using the cert-manager CLI:

cmctl upgrade migrate-api-version

You may need --skip-stored-version-check if you already tried to fix the CRD API manually (as described in this issue):

cmctl upgrade migrate-api-version --skip-stored-version-check

If it is still failing with the same message, install v1.6.3 (if the CRDs were installed manually, upgrade the CRDs to 1.6.3) and repeat the command cmctl upgrade migrate-api-version.

Lastly, you can upgrade minor versions one by one (1.7, 1.8, etc.) as recommended.
###Instruction: I removing other versions from cert-manager. After that i install the new version with helm using. Installation works fine. but when i use the command: $ kubectl get orders,challenges,clusterissuers Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2 Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2 Error from server: request to convert CR from an invalid group/version: cert-manager.io/v1alpha2 The CRDs: Name: orders.acme.cert-manager.io Namespace: Labels: app=cert-manager app.kubernetes.io/instance=cert-manager app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=cert-manager app.kubernetes.io/version=v1.8.2 helm.sh/chart=cert-manager-v1.8.2 Annotations: cert-manager.io/inject-ca-from-secret: cert-manager/cert-manager-webhook-ca meta.helm.sh/release-name: cert-manager meta.helm.sh/release-namespace: cert-manager API Version: apiextensions.k8s.io/v1 Kind: CustomResourceDefinition .... Last Transition Time: 2022-06-24T15:25:23Z Message: no conflicts found Reason: NoConflicts Status: True Type: NamesAccepted Last Transition Time: 2022-06-24T15:25:23Z Message: the initial names have been accepted Reason: InitialNamesAccepted Status: True Type: Established Stored Versions: v1 i dont found the CR where still using cert-manager.io/v1alpha2 and acme.cert-manager.io/v1alpha2 ### Response: First of all, suggest to backup all your objects (certs, orders, issuer, clusterissuer, etc. with velero for example) ! Cert-manger documentation suggests using the cert-manager cli as: cmctl upgrade migrate-api-version You may need --skip-stored-version-check if you already tried to fix CRD api manually (like described in this issue): cmctl upgrade migrate-api-version --skip-stored-version-check Finally, if it's failing with the same message, install the 1.6.3 (if CRD is manually installed, upgrade CRD with 1.6.3) and repeat the command cmctl upgrade migrate-api-version. At last, you can upgrade minor version one by one (1.7, 1.8, etc.) as recommended
I removing other versions from cert-manager. After that i install the new version with helm using. Installation works fine. but when i use the command: $ kubectl get orders,challenges,clusterissuers Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2 Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2 Error from server: request to convert CR from an invalid group/version: cert-manager.io/v1alpha2 The CRDs: Name: orders.acme.cert-manager.io Namespace: Labels: app=cert-manager app.kubernetes.io/instance=cert-manager app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=cert-manager app.kubernetes.io/version=v1.8.2 helm.sh/chart=cert-manager-v1.8.2 Annotations: cert-manager.io/inject-ca-from-secret: cert-manager/cert-manager-webhook-ca meta.helm.sh/release-name: cert-manager meta.helm.sh/release-namespace: cert-manager API Version: apiextensions.k8s.io/v1 Kind: CustomResourceDefinition .... Last Transition Time: 2022-06-24T15:25:23Z Message: no conflicts found Reason: NoConflicts Status: True Type: NamesAccepted Last Transition Time: 2022-06-24T15:25:23Z Message: the initial names have been accepted Reason: InitialNamesAccepted Status: True Type: Established Stored Versions: v1 i dont found the CR where still using cert-manager.io/v1alpha2 and acme.cert-manager.io/v1alpha2
Your output text: 2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" Implies the request is timing out on the IP 10.244.0.228:3000 Things to check: Is the service IP 10.244.0.228: kubectl get svc equmedia-api (it will likely be of type ClusterIP) Port forward to the service directly: kubectl port-forward svc/equmedia-api 3000:3000 and then try to access localhost:3000 in another terminal or in your browser. Does it respond, does it error or does it timeout? Check the pods your service is trying to match: kubectl get pods -l io.kompose.service=equmedia-api -- does this return any pods? If so, are they in Ready state or are they erroring? Do they have a value greater than 0 in the Restarts count? Check the logs of the pod(s) kubectl logs -f {pod-name} and see if it is unable to start up, or if it is repeatedly starting. UPDATE 1 Please add the output of the following commands to your question. Wrap the output with three backticks (`) on a single line before and after to preserve formatting: kubectl get pods -l io.kompose.service=equmedia-api kubectl get svc UPDATE 2 Since the IP that your controller is 10.244.0.228 see if any of your pods or services actually have that IP. Please add the output of these commands: kubectl get pods -A -o wide | grep 10.244.0.228 kubectl get svc -A | grep 10.244.0.228 UPDATE 3 I've yet to try deploying the gist, but I have noticed something You have networkpolicies setup and you have labelled your pod apiVersion: v1 kind: Pod metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.network/backend: "true" # <<--- HERE io.kompose.service: equmedia-api name: equmedia-api-pod spec: ... This matches your network policy here: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend spec: ingress: - from: - podSelector: matchLabels: io.kompose.network/backend: "true" podSelector: matchLabels: io.kompose.network/backend: "true" Now, this network policy reads (based in the information off this link) "Allow connections from Pods with the label io.kompose.network/backend="true" (last three lines) to pods that match the labels io.kompose.network/backend="true" (the ingress.from.podSelector bit) Sooo.... assuming I'm reading this correct, the reason the ingress controller is not able to talk to the pod, is because the controller pod does not have a label io.kompose.network/backend="true", and since you did not include that in your gist, I'm assuming you're using the ingress controller chart as a subchart/dependency. And if so, then out of the box, that chart won't have this label. 
This would explain why we were able to port-forward to the pod and the service directly, but the controller pod was not able to talk to the pod. And easy way to verify this is to either delete the backend networkpolicy, or modify it to allow all ingress traffic as a test (something like the example here) If this works, it will confirm the networkpolicy is blocking the traffic.
###Instruction: I’m quite new to k8s in general, only been using for smaller projects but made it work. I hope btw this is the right channel to ask questions (in this case about ingress-nginx). I’m trying to setup a cluster with a gateway-api and a few microservices (all written in NestJs). To give a little background, I first had everything in docker-compose and my entry was also a Nginx container with letsencrypt. The whole docker, works great locally. This was the config used for my NGinx Docker: upstream equmedia-api { server equmedia-api:3000; } server { listen 80 default_server; listen [::]:80 default_server; server_name localhost; return 301 https://$server_name$request_uri; } server { listen 80; listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; keepalive_timeout 70; server_name subdomain.example.com; ssl_session_cache shared:SSR:10m; ssl_session_timeout 10m; ssl_certificate /etc/letsencrypt/live/equmedia.pixeliner.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/equmedia.pixeliner.com/privkey.pem; access_log /var/log/nginx/nginx.access.log; error_log /var/log/nginx/nginx.error.log; location / { proxy_pass http://equmedia-api; # proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } } As you can see, it upstreamed to my api container. Eventually I wanted to turn the whole deployment into k8s. Seemed like a good followup practice after the small projects. I learned about ingress-nginx and gave it my first try, but I seem to have struck a wall. Here is the setup I'm trying to achieve: Through DigitalOcean the setup will be behind a LoadBalancer. Here is my Ingress NGinx controller: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: equmedia-ingress-api annotations: kubernetes.io/ingress.class: "nginx" cert-manager.io/issuer: "letsencrypt-prod" nginx.ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/proxy-protocol: "true" nginx.ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https" spec: tls: - hosts: - subdomain.example.com secretName: quickstart-example-tls rules: - host: subdomain.example.com http: paths: - path: /api pathType: Prefix backend: service: name: equmedia-api port: number: 3000 And my api service: apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.service: equmedia-api name: equmedia-api spec: ports: - port: 3000 targetPort: 3000 selector: io.kompose.service: equmedia-api status: loadBalancer: {} When I try to access "https://subdomain.example.com/api/health", I get a 504 Gateway Time-out. 
Looking at the ingress controller logs I get the following: 2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" Anyone that can point me into the right direction, to fix this issue? EDIT The outcome for kubectl get pods -l io.kompose.service=equmedia-api: NAME READY STATUS RESTARTS AGE equmedia-api 1/1 Running 0 2d2h kubectl get svc: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE equmedia-api ClusterIP 10.245.173.11 <none> 3000/TCP 23h equmedia-api-rabbitmq ClusterIP 10.245.17.225 <none> 5672/TCP,15673/TCP 2d17h equmedia-api-redis ClusterIP 10.245.120.11 <none> 6379/TCP 2d17h equmedia-auth-db ClusterIP 10.245.94.21 <none> 5432/TCP 2d17h kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d17h quickstart-ingress-nginx-controller LoadBalancer 10.245.36.216 179.128.139.106 80:31194/TCP,443:31609/TCP 2d16h quickstart-ingress-nginx-controller-admission ClusterIP 10.245.232.77 <none> 443/TCP 2d16h EDIT2: I've requested my domain https://subdomain.example.com/api/health through browser, curl and postman. All of them return timeouts. kubectl get pods -A -o wide | grep 10.244.0.228 returns: default equmedia-api 1/1 Running 0 2d4h 10.244.0.228 temp-pool-qyhii <none> <none> kubectl get svc -A | grep 10.244.0.228 returns nothing EDIT3: Here is the logs of my API: [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [NestFactory] Starting Nest application... 
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AppModule dependencies initialized +136ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] RedisCacheModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] UtilsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AxiosWrapperModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PassportModule dependencies initialized +32ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] JwtModule dependencies initialized +3ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] TerminusModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] DiscoveryModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +0ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ScheduleModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +61ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +17ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +7ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HealthModule dependencies initialized +8ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CacheModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] MailModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HttpModule dependencies initialized +3ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +24ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullQueueModule dependencies initialized +7ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PaymentModule dependencies initialized +8ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CustomerModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ContentModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AdserveModule dependencies initialized +3ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AuthModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] OpenIdModule dependencies initialized +65ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] HealthController {/api/health}: +18ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health, GET} route +5ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-ping, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped 
{/api/health/check-disk, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-memory, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-microservice/:name, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] OpenIdController {/api/open-id}: +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/login, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/user, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/callback, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/logout, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AuthController {/api/auth}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signup, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signin, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signout, POST} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/refresh, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UserController {/api/user}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-user-id/email?, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-authenticated-user, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/change-user-password, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/delete-user-account, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/confirm/:token, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/forgot-password, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/set-new-password/:token, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UsersController {/api/users}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/users, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PaymentController {/api/payment}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/create/:id, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CustomerController {/api/customer}: +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/profile/:id, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/create, POST} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/delete/:id, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/update/:id, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] ContentController {/api/content}: +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG 
[RouterExplorer] Mapped {/api/content, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/create, POST} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/update/:contentId, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/delete/:contentId, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/category/:categoryId, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/slug/:slug, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CategoryController {/api/category}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/update/:categoryId, PUT} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/delete/:categoryId, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] WidgetController {/api/widget}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/create, POST} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/update/:widgetId, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/delete/:widgetId, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdvertiserController {/api/adserve/advertiser}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/assign, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdserveController {/api/adserve}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/serve, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/redirect, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PublisherController {/api/adserve}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId/zone/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 
AM LOG [RoutesResolver] ReportController {/api/adserve/report}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/report, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [NestApplication] Nest application successfully started +58ms -- API GATEWAY RUNNING - PORT: 3000 -- No errors are logged, and through a port-forward I also see my api working. EDIT4: Here is the gist with all pods/services/claims/... https://gist.github.com/pixeliner/2c89048294197155b0d4833ab4045f3c ### Response: Your output text: 2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" Implies the request is timing out on the IP 10.244.0.228:3000 Things to check: Is the service IP 10.244.0.228: kubectl get svc equmedia-api (it will likely be of type ClusterIP) Port forward to the service directly: kubectl port-forward svc/equmedia-api 3000:3000 and then try to access localhost:3000 in another terminal or in your browser. Does it respond, does it error or does it timeout? Check the pods your service is trying to match: kubectl get pods -l io.kompose.service=equmedia-api -- does this return any pods? If so, are they in Ready state or are they erroring? Do they have a value greater than 0 in the Restarts count? Check the logs of the pod(s) kubectl logs -f {pod-name} and see if it is unable to start up, or if it is repeatedly starting. UPDATE 1 Please add the output of the following commands to your question. Wrap the output with three backticks (`) on a single line before and after to preserve formatting: kubectl get pods -l io.kompose.service=equmedia-api kubectl get svc UPDATE 2 Since the IP that your controller is 10.244.0.228 see if any of your pods or services actually have that IP. Please add the output of these commands: kubectl get pods -A -o wide | grep 10.244.0.228 kubectl get svc -A | grep 10.244.0.228 UPDATE 3 I've yet to try deploying the gist, but I have noticed something You have networkpolicies setup and you have labelled your pod apiVersion: v1 kind: Pod metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.network/backend: "true" # <<--- HERE io.kompose.service: equmedia-api name: equmedia-api-pod spec: ... 
This matches your network policy here: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend spec: ingress: - from: - podSelector: matchLabels: io.kompose.network/backend: "true" podSelector: matchLabels: io.kompose.network/backend: "true" Now, this network policy reads (based in the information off this link) "Allow connections from Pods with the label io.kompose.network/backend="true" (last three lines) to pods that match the labels io.kompose.network/backend="true" (the ingress.from.podSelector bit) Sooo.... assuming I'm reading this correct, the reason the ingress controller is not able to talk to the pod, is because the controller pod does not have a label io.kompose.network/backend="true", and since you did not include that in your gist, I'm assuming you're using the ingress controller chart as a subchart/dependency. And if so, then out of the box, that chart won't have this label. This would explain why we were able to port-forward to the pod and the service directly, but the controller pod was not able to talk to the pod. And easy way to verify this is to either delete the backend networkpolicy, or modify it to allow all ingress traffic as a test (something like the example here) If this works, it will confirm the networkpolicy is blocking the traffic.
I’m quite new to k8s in general, only been using for smaller projects but made it work. I hope btw this is the right channel to ask questions (in this case about ingress-nginx). I’m trying to setup a cluster with a gateway-api and a few microservices (all written in NestJs). To give a little background, I first had everything in docker-compose and my entry was also a Nginx container with letsencrypt. The whole docker, works great locally. This was the config used for my NGinx Docker: upstream equmedia-api { server equmedia-api:3000; } server { listen 80 default_server; listen [::]:80 default_server; server_name localhost; return 301 https://$server_name$request_uri; } server { listen 80; listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; keepalive_timeout 70; server_name subdomain.example.com; ssl_session_cache shared:SSR:10m; ssl_session_timeout 10m; ssl_certificate /etc/letsencrypt/live/equmedia.pixeliner.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/equmedia.pixeliner.com/privkey.pem; access_log /var/log/nginx/nginx.access.log; error_log /var/log/nginx/nginx.error.log; location / { proxy_pass http://equmedia-api; # proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } } As you can see, it upstreamed to my api container. Eventually I wanted to turn the whole deployment into k8s. Seemed like a good followup practice after the small projects. I learned about ingress-nginx and gave it my first try, but I seem to have struck a wall. Here is the setup I'm trying to achieve: Through DigitalOcean the setup will be behind a LoadBalancer. Here is my Ingress NGinx controller: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: equmedia-ingress-api annotations: kubernetes.io/ingress.class: "nginx" cert-manager.io/issuer: "letsencrypt-prod" nginx.ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/proxy-protocol: "true" nginx.ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https" spec: tls: - hosts: - subdomain.example.com secretName: quickstart-example-tls rules: - host: subdomain.example.com http: paths: - path: /api pathType: Prefix backend: service: name: equmedia-api port: number: 3000 And my api service: apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.service: equmedia-api name: equmedia-api spec: ports: - port: 3000 targetPort: 3000 selector: io.kompose.service: equmedia-api status: loadBalancer: {} When I try to access "https://subdomain.example.com/api/health", I get a 504 Gateway Time-out. 
Looking at the ingress controller logs I get the following: 2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" 2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com" Anyone that can point me into the right direction, to fix this issue? EDIT The outcome for kubectl get pods -l io.kompose.service=equmedia-api: NAME READY STATUS RESTARTS AGE equmedia-api 1/1 Running 0 2d2h kubectl get svc: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE equmedia-api ClusterIP 10.245.173.11 <none> 3000/TCP 23h equmedia-api-rabbitmq ClusterIP 10.245.17.225 <none> 5672/TCP,15673/TCP 2d17h equmedia-api-redis ClusterIP 10.245.120.11 <none> 6379/TCP 2d17h equmedia-auth-db ClusterIP 10.245.94.21 <none> 5432/TCP 2d17h kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d17h quickstart-ingress-nginx-controller LoadBalancer 10.245.36.216 179.128.139.106 80:31194/TCP,443:31609/TCP 2d16h quickstart-ingress-nginx-controller-admission ClusterIP 10.245.232.77 <none> 443/TCP 2d16h EDIT2: I've requested my domain https://subdomain.example.com/api/health through browser, curl and postman. All of them return timeouts. kubectl get pods -A -o wide | grep 10.244.0.228 returns: default equmedia-api 1/1 Running 0 2d4h 10.244.0.228 temp-pool-qyhii <none> <none> kubectl get svc -A | grep 10.244.0.228 returns nothing EDIT3: Here is the logs of my API: [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [NestFactory] Starting Nest application... 
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AppModule dependencies initialized +136ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] RedisCacheModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] UtilsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AxiosWrapperModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PassportModule dependencies initialized +32ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] JwtModule dependencies initialized +3ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] TerminusModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] DiscoveryModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +0ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ScheduleModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +61ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +17ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +7ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HealthModule dependencies initialized +8ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CacheModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] MailModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HttpModule dependencies initialized +3ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +24ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullQueueModule dependencies initialized +7ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PaymentModule dependencies initialized +8ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CustomerModule dependencies initialized +1ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ContentModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AdserveModule dependencies initialized +3ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AuthModule dependencies initialized +2ms [Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] OpenIdModule dependencies initialized +65ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] HealthController {/api/health}: +18ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health, GET} route +5ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-ping, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped 
{/api/health/check-disk, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-memory, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-microservice/:name, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] OpenIdController {/api/open-id}: +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/login, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/user, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/callback, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/logout, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AuthController {/api/auth}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signup, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signin, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signout, POST} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/refresh, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UserController {/api/user}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-user-id/email?, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-authenticated-user, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/change-user-password, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/delete-user-account, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/confirm/:token, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/forgot-password, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/set-new-password/:token, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UsersController {/api/users}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/users, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PaymentController {/api/payment}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/create/:id, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CustomerController {/api/customer}: +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/profile/:id, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/create, POST} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/delete/:id, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/update/:id, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] ContentController {/api/content}: +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG 
[RouterExplorer] Mapped {/api/content, GET} route +2ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/create, POST} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/update/:contentId, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/delete/:contentId, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/category/:categoryId, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/slug/:slug, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CategoryController {/api/category}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/update/:categoryId, PUT} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/delete/:categoryId, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] WidgetController {/api/widget}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/create, POST} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/update/:widgetId, PUT} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/delete/:widgetId, DELETE} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdvertiserController {/api/adserve/advertiser}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/assign, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdserveController {/api/adserve}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/serve, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/redirect, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PublisherController {/api/adserve}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId, GET} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId/zone/create, POST} route +1ms [Nest] 1 - 02/17/2023, 10:52:28 
AM LOG [RoutesResolver] ReportController {/api/adserve/report}: +1ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/report, GET} route +0ms [Nest] 1 - 02/17/2023, 10:52:28 AM LOG [NestApplication] Nest application successfully started +58ms -- API GATEWAY RUNNING - PORT: 3000 -- No errors are logged, and through a port-forward I also see my api working. EDIT4: Here is the gist with all pods/services/claims/... https://gist.github.com/pixeliner/2c89048294197155b0d4833ab4045f3c
PowerShell completions for an alias are supported in cobra only from version 1.6.0, and kubectl depends on cobra 1.6.0 since v1.26. So you should use kubectl v1.26+ and PowerShell 5.0+. I use it with PowerShell 7.3 and kubectl v1.26.1 on Windows 10, but it suggests variants only after typing the first letter (it doesn't show a selection menu without typing any letter): Set-Alias -Name k -Value kubectl kubectl completion powershell | Out-String | Invoke-Expression Register-ArgumentCompleter -CommandName k -ScriptBlock $__kubectlCompleterBlock
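To make that survive new sessions, one option (just a sketch) is to put the same three lines into your PowerShell profile, in this order, so the completer block exists before it is registered for the alias:

# contents of $PROFILE (run notepad $PROFILE to edit it)
Set-Alias -Name k -Value kubectl
kubectl completion powershell | Out-String | Invoke-Expression
Register-ArgumentCompleter -CommandName k -ScriptBlock $__kubectlCompleterBlock

Piping the completion script through Invoke-Expression on every start-up adds a little to shell start-up time; writing the script once to a .ps1 file and dot-sourcing that file from $PROFILE is a common alternative.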
###Instruction: Trying to get the auto-complete feature of kubectl working with an alias of k, using Windows PowerShell and PowerShell Core on Windows Server 2022, but can't get it to work tested with echo 'Set-Alias -Name k -Value kubectl' >> $PROFILE kubectl completion powershell >> $PROFILE or even simply this to try to get it working in the same session, without worrying about saving it to the PowerShell profile Set-Alias -Name k -Value kubectl kubectl completion powershell | Out-String | Invoke-Expression Results so far: Powershell 5.1 Tab completion with kubectl: OK Tab completion with k: Fails Powershell Core 7.3 Tab completion with kubectl: Fails Tab completion with k: Fails The results above are tested with both Windows Terminal and cmd Am I missing something obvious? (like I normally do) Thanks! ### Response: PowerShell completions by alias in cobra supported only from version 1.6.0. kubectl depends on cobra 1.6.0 since v1.26 version. So, you should use kubectl v1.26+ and powershell 5.0+ I use it with powershell 7.3, kubectl v1.26.1 in windows 10 but it suggest variants only after typing first letter (it doesn't show select menu without typing any letter) Set-Alias -Name k -Value kubectl kubectl completion powershell | Out-String | Invoke-Expression Register-ArgumentCompleter -CommandName k -ScriptBlock $__kubectlCompleterBlock
Trying to get the auto-complete feature of kubectl working with an alias of k, using Windows PowerShell and PowerShell Core on Windows Server 2022, but can't get it to work tested with echo 'Set-Alias -Name k -Value kubectl' >> $PROFILE kubectl completion powershell >> $PROFILE or even simply this to try to get it working in the same session, without worrying about saving it to the PowerShell profile Set-Alias -Name k -Value kubectl kubectl completion powershell | Out-String | Invoke-Expression Results so far: Powershell 5.1 Tab completion with kubectl: OK Tab completion with k: Fails Powershell Core 7.3 Tab completion with kubectl: Fails Tab completion with k: Fails The results above are tested with both Windows Terminal and cmd Am I missing something obvious? (like I normally do) Thanks!
Upgrade Helm. I had a similar issue where my k8s cluster was on 1.25 but helm complained it was 1.20. I tried everything else; upgrading Helm is what finally worked.
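If it is useful, a rough sketch of checking what you actually have and pulling a newer Helm (the install script URL is the one from the Helm docs; adjust if you use a package manager instead):

helm version --short        # client version in your PATH
kubectl version --short     # what the client and cluster report
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash   # installs the latest Helm 3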
###Instruction: How to deploy on K8 via Pulumi using the ArgoCD Helm Chart? Pulumi up Diagnostics: kubernetes:helm.sh/v3:Release (argocd): error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0 THE CLUSTER VERSION IS: v1.23.0 verified on AWS. And NOT 1.20.0 ArgoCD install yaml used with CRD2Pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml Source: ... cluster = eks.Cluster("argo-example") # version="1.23" # Cluster provider provider = k8s.Provider( "eks", kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k)) #kubeconfig=cluster.kubeconfig ) ns = k8s.core.v1.Namespace( 'argocd', metadata={ "name": "argocd", }, opts=pulumi.ResourceOptions( provider=provider ) ) argo = k8s.helm.v3.Release( "argocd", args=k8s.helm.v3.ReleaseArgs( chart="argo-cd", namespace=ns.metadata.name, repository_opts=k8s.helm.v3.RepositoryOptsArgs( repo="https://argoproj.github.io/argo-helm" ), values={ "server": { "service": { "type": "LoadBalancer", } } }, ), opts=pulumi.ResourceOptions(provider=provider, parent=ns), ) Any ideas as to fixing this oddity between the version error and the actual cluster version? I've tried: Deleting everything and starting over. Updating to the latest ArgoCD install yaml. ### Response: Upgrade helm. I had a similar issue where my k8s was 1.25 but helm complained it was 1.20. Tried everything else, upgrading helm worked.
How to deploy on K8 via Pulumi using the ArgoCD Helm Chart? Pulumi up Diagnostics: kubernetes:helm.sh/v3:Release (argocd): error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0 THE CLUSTER VERSION IS: v1.23.0 verified on AWS. And NOT 1.20.0 ArgoCD install yaml used with CRD2Pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml Source: ... cluster = eks.Cluster("argo-example") # version="1.23" # Cluster provider provider = k8s.Provider( "eks", kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k)) #kubeconfig=cluster.kubeconfig ) ns = k8s.core.v1.Namespace( 'argocd', metadata={ "name": "argocd", }, opts=pulumi.ResourceOptions( provider=provider ) ) argo = k8s.helm.v3.Release( "argocd", args=k8s.helm.v3.ReleaseArgs( chart="argo-cd", namespace=ns.metadata.name, repository_opts=k8s.helm.v3.RepositoryOptsArgs( repo="https://argoproj.github.io/argo-helm" ), values={ "server": { "service": { "type": "LoadBalancer", } } }, ), opts=pulumi.ResourceOptions(provider=provider, parent=ns), ) Any ideas as to fixing this oddity between the version error and the actual cluster version? I've tried: Deleting everything and starting over. Updating to the latest ArgoCD install yaml.
Sorted - switching the current context back to Docker Desktop fixed it: kubectl config use-context docker-desktop
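For completeness, the general pattern when kubectl seems to point at nothing (just a sketch of the usual commands):

kubectl config get-contexts      # list the contexts in your kubeconfig and see which is current
kubectl config use-context docker-desktop
kubectl config current-context   # confirm the switch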
###Instruction: I was using kubectl with Docker Desktop - everything was fine I think logged into Openshift via the Command line and kubectl seemed to point to OC cluster I then logged out of Openshift and now kubectl doesn't seem to point at anything? How do I get to point it at docker desktop again? ### Response: sorted - kubectl config use-context docker-desktop
I was using kubectl with Docker Desktop - everything was fine I think logged into Openshift via the Command line and kubectl seemed to point to OC cluster I then logged out of Openshift and now kubectl doesn't seem to point at anything? How do I get to point it at docker desktop again?
There might be multiple reasons for this; initially let’s check whether the pod is really created or not. As ehmad11 suggested, use kubectl get pods --all-namespaces for listing pods in all the namespaces. However, in your case it might not work because your application is getting deployed directly in the default namespace. Regarding the error “Response Status: 503 Service Unavailable in 8 milliseconds”: once you are able to locate the pod, use kubectl describe <pod> for finding details and events specific to your pod and follow the troubleshooting steps provided in this document for rectifying it. Note: the reference document is from the komodor site, where each troubleshooting step is articulated in a highly detailed and understandable manner.
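Since the manifest creates a SparkApplication custom resource rather than a bare Pod (the spark operator is what spawns the driver pod from it), it can also help to inspect the custom resource itself. A rough sketch, with resource names taken from the question and the operator deployment name left as a placeholder because it depends on how the operator was installed:

kubectl get sparkapplications -n default                      # is the CR there and what state is it in
kubectl describe sparkapplication tpcds-25-query -n default   # events/status recorded by the operator
kubectl get pods -n default | grep tpcds                      # any driver pod the operator has created
kubectl logs -n default deploy/<spark-operator-deployment>    # operator logs; substitute the real deployment name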
###Instruction: I am a beginner in Kubernetes and have been using the kubectl command to create pods for several months. However, I recently encountered a problem where Kubernetes did not create a pod after I executed the kubectl create -f mypod.yaml command. When I run kubectl get pods, the mypod does not appear in the list of pods and I am unable to access it by name as if it does not exist. However, if I try to create it again, I receive a message saying that the pod has already been created. To illustrate my point, let me give you an example. I frequently generate pods using a YAML file called tpcds-25-query.yaml. The contents of this file are as follows: apiVersion: "sparkoperator.k8s.io/v1beta2" kind: SparkApplication metadata: name: tpcds-25-query namespace: default spec: type: Scala mode: cluster image: registry.cn-beijing.aliyuncs.com/kube-ai/ack-spark-benchmark:1.0.1 imagePullPolicy: Always sparkVersion: 2.4.5 mainClass: com.aliyun.spark.benchmark.tpcds.BenchmarkSQL mainApplicationFile: "local:///opt/spark/jars/ack-spark-benchmark-assembly-0.1.jar" arguments: # TPC-DS data localtion - "oss://spark/data/tpc-ds-data/150g" # results location - "oss://spark/result/tpcds-25-query" # Path to kit in the docker image - "/tmp/tpcds-kit/tools" # Data Format - "parquet" # Scale factor (in GB) - "150" # Number of iterations - "1" # Optimize queries - "false" # Filter queries, will run all if empty - "q70-v2.4,q82-v2.4,q64-v2.4" - "q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4" # Logging set to WARN - "true" hostNetwork: true dnsPolicy: ClusterFirstWithHostNet restartPolicy: type: Never timeToLiveSeconds: 86400 hadoopConf: # OSS "fs.oss.impl": "OSSFileSystem" "fs.oss.endpoint": "oss.com" "fs.oss.accessKeyId": "DFDSMGDNDFMSNGDFMNGCU" "fs.oss.accessKeySecret": "secret" sparkConf: "spark.kubernetes.allocation.batch.size": "200" "spark.sql.adaptive.join.enabled": "true" "spark.eventLog.enabled": "true" "spark.eventLog.dir": "oss://spark/spark-events" driver: cores: 4 memory: "8192m" labels: version: 2.4.5 spark-app: spark-tpcds role: driver serviceAccount: spark nodeSelector: beta.kubernetes.io/instance-type: ecs.g6.13xlarge executor: cores: 48 instances: 1 memory: "160g" memoryOverhead: "16g" labels: version: 2.4.5 role: executor nodeSelector: beta.kubernetes.io/instance-type: ecs.g6.13xlarge After I executed kubectl create --validate=false -f tpcds-25-query.yaml command, k8s returns this: sparkapplication.sparkoperator.k8s.io/tpcds-25-query created which means the pod has been created. However, when I executed kubectl get pods, it gave me this: No resources found in default namespace. When I created the pod again, it gave me this: Error from server (AlreadyExists): error when creating "tpcds-25-query.yaml": sparkapplications.sparkoperator.k8s.io "tpcds-25-query" already exists I know the option -v=8 can print more detailed logs. 
So I execute the command kubectl create --validate=false -f tpcds-25-query.yaml -v=8, its output is: I0219 05:50:17.121661 2148722 loader.go:372] Config loaded from file: /root/.kube/config I0219 05:50:17.124735 2148722 round_trippers.go:432] GET https://172.16.0.212:6443/apis/metrics.k8s.io/v1beta1?timeout=32s I0219 05:50:17.124747 2148722 round_trippers.go:438] Request Headers: I0219 05:50:17.124753 2148722 round_trippers.go:442] Accept: application/json, */* I0219 05:50:17.124759 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577 I0219 05:50:17.132864 2148722 round_trippers.go:457] Response Status: 503 Service Unavailable in 8 milliseconds I0219 05:50:17.132876 2148722 round_trippers.go:460] Response Headers: I0219 05:50:17.132881 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1 I0219 05:50:17.132890 2148722 round_trippers.go:463] Content-Length: 20 I0219 05:50:17.132894 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT I0219 05:50:17.132898 2148722 round_trippers.go:463] Audit-Id: 3ab06f73-0c88-469a-834d-54ec06e910f1 I0219 05:50:17.132902 2148722 round_trippers.go:463] Cache-Control: no-cache, private I0219 05:50:17.132906 2148722 round_trippers.go:463] Content-Type: text/plain; charset=utf-8 I0219 05:50:17.132909 2148722 round_trippers.go:463] X-Content-Type-Options: nosniff I0219 05:50:17.132913 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede I0219 05:50:17.134365 2148722 request.go:1181] Response Body: service unavailable I0219 05:50:17.135255 2148722 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0219 05:50:17.135265 2148722 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0219 05:50:17.136050 2148722 request.go:1181] Request Body: {"apiVersion":"sparkoperator.k8s.io/v1beta2","kind":"SparkApplication","metadata":{"name":"tpcds-25-query","namespace":"default"},"spec":{"arguments":["oss://lfpapertest/spark/data/tpc-ds-data/150g","oss://lfpapertest/spark/result/tpcds-runc-150g-48core-160g-1pod-25-query","/tmp/tpcds-kit/tools","parquet","150","1","false","q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4","true"],"dnsPolicy":"ClusterFirstWithHostNet","driver":{"cores":4,"labels":{"role":"driver","spark-app":"spark-tpcds","version":"2.4.5"},"memory":"8192m","nodeSelector":{"beta.kubernetes.io/instance-type":"ecs.g6.13xlarge"},"serviceAccount":"spark"},"executor":{"cores":48,"instances":1,"labels":{"role":"executor","version":"2.4.5"},"memory":"160g","memoryOverhead":"16g","nodeSelector":{"beta.kubernetes.io/instance-type":"ecs.g6.13xlarge"}},"hadoopConf":{"fs.oss.acce [truncated 802 chars] I0219 05:50:17.136091 2148722 round_trippers.go:432] POST https://172.16.0.212:6443/apis/sparkoperator.k8s.io/v1beta2/namespaces/default/sparkapplications?fieldManager=kubectl-create I0219 05:50:17.136098 2148722 round_trippers.go:438] Request Headers: I0219 05:50:17.136104 2148722 round_trippers.go:442] Accept: application/json I0219 05:50:17.136108 2148722 round_trippers.go:442] 
Content-Type: application/json I0219 05:50:17.136113 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577 I0219 05:50:17.144313 2148722 round_trippers.go:457] Response Status: 201 Created in 8 milliseconds I0219 05:50:17.144327 2148722 round_trippers.go:460] Response Headers: I0219 05:50:17.144332 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1 I0219 05:50:17.144337 2148722 round_trippers.go:463] Content-Length: 2989 I0219 05:50:17.144341 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT I0219 05:50:17.144345 2148722 round_trippers.go:463] Audit-Id: 8eef9d08-04c0-44f7-87bf-e820853cd9c6 I0219 05:50:17.144349 2148722 round_trippers.go:463] Cache-Control: no-cache, private I0219 05:50:17.144352 2148722 round_trippers.go:463] Content-Type: application/json I0219 05:50:17.144356 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede I0219 05:50:17.144396 2148722 request.go:1181] Response Body: {"apiVersion":"sparkoperator.k8s.io/v1beta2","kind":"SparkApplication","metadata":{"creationTimestamp":"2023-02-19T05:50:17Z","generation":1,"managedFields":[{"apiVersion":"sparkoperator.k8s.io/v1beta2","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{".":{},"f:arguments":{},"f:driver":{".":{},"f:cores":{},"f:labels":{".":{},"f:role":{},"f:spark-app":{},"f:version":{}},"f:memory":{},"f:nodeSelector":{".":{},"f:beta.kubernetes.io/instance-type":{}},"f:serviceAccount":{}},"f:executor":{".":{},"f:cores":{},"f:instances":{},"f:labels":{".":{},"f:role":{},"f:version":{}},"f:memory":{},"f:memoryOverhead":{},"f:nodeSelector":{".":{},"f:beta.kubernetes.io/instance-type":{}}},"f:hadoopConf":{".":{},"f:fs.oss.accessKeyId":{},"f:fs.oss.accessKeySecret":{},"f:fs.oss.endpoint":{},"f:fs.oss.impl":{}},"f:image":{},"f:imagePullPolicy":{},"f:mainApplicationFile":{},"f:mainClass":{},"f:mode":{},"f:restartPolicy":{".":{},"f:type":{}},"f:sparkConf":{".":{},"f:spark.eventLog.dir":{},"f:spark.eventLog.enabled":{},"f:spark.kubernetes. [truncated 1965 chars] sparkapplication.sparkoperator.k8s.io/tpcds-25-query created From the logs, we can see the only error "Response Status: 503 Service Unavailable in 8 milliseconds", I don't know what it means. So I want to ask what may cause this, and how would I diagnose the problem? Any help is appreciated! ### Response: There might be multiple reasons for this, initially let’s check whether the pod is really created or not. Like ehmad11 suggested use kubectl get pods --all-namespaces for listing pods in all the namespaces. However in your case it might not work because your application is getting directly deployed in defaulf namespace. Regarding the error “Response Status: 503 Service Unavailable in 8 milliseconds” once you are able to locate the pod use kubectl describe <pod> for finding logs specific to your pod and follow the troubleshooting steps provided in this document for rectifying it. Note: The reference document is provided from komodor site, here they have articulated each troubleshooting step in highly detailed and understandable manner.
I am a beginner in Kubernetes and have been using the kubectl command to create pods for several months. However, I recently encountered a problem where Kubernetes did not create a pod after I executed the kubectl create -f mypod.yaml command. When I run kubectl get pods, the mypod does not appear in the list of pods and I am unable to access it by name as if it does not exist. However, if I try to create it again, I receive a message saying that the pod has already been created. To illustrate my point, let me give you an example. I frequently generate pods using a YAML file called tpcds-25-query.yaml. The contents of this file are as follows: apiVersion: "sparkoperator.k8s.io/v1beta2" kind: SparkApplication metadata: name: tpcds-25-query namespace: default spec: type: Scala mode: cluster image: registry.cn-beijing.aliyuncs.com/kube-ai/ack-spark-benchmark:1.0.1 imagePullPolicy: Always sparkVersion: 2.4.5 mainClass: com.aliyun.spark.benchmark.tpcds.BenchmarkSQL mainApplicationFile: "local:///opt/spark/jars/ack-spark-benchmark-assembly-0.1.jar" arguments: # TPC-DS data localtion - "oss://spark/data/tpc-ds-data/150g" # results location - "oss://spark/result/tpcds-25-query" # Path to kit in the docker image - "/tmp/tpcds-kit/tools" # Data Format - "parquet" # Scale factor (in GB) - "150" # Number of iterations - "1" # Optimize queries - "false" # Filter queries, will run all if empty - "q70-v2.4,q82-v2.4,q64-v2.4" - "q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4" # Logging set to WARN - "true" hostNetwork: true dnsPolicy: ClusterFirstWithHostNet restartPolicy: type: Never timeToLiveSeconds: 86400 hadoopConf: # OSS "fs.oss.impl": "OSSFileSystem" "fs.oss.endpoint": "oss.com" "fs.oss.accessKeyId": "DFDSMGDNDFMSNGDFMNGCU" "fs.oss.accessKeySecret": "secret" sparkConf: "spark.kubernetes.allocation.batch.size": "200" "spark.sql.adaptive.join.enabled": "true" "spark.eventLog.enabled": "true" "spark.eventLog.dir": "oss://spark/spark-events" driver: cores: 4 memory: "8192m" labels: version: 2.4.5 spark-app: spark-tpcds role: driver serviceAccount: spark nodeSelector: beta.kubernetes.io/instance-type: ecs.g6.13xlarge executor: cores: 48 instances: 1 memory: "160g" memoryOverhead: "16g" labels: version: 2.4.5 role: executor nodeSelector: beta.kubernetes.io/instance-type: ecs.g6.13xlarge After I executed kubectl create --validate=false -f tpcds-25-query.yaml command, k8s returns this: sparkapplication.sparkoperator.k8s.io/tpcds-25-query created which means the pod has been created. However, when I executed kubectl get pods, it gave me this: No resources found in default namespace. When I created the pod again, it gave me this: Error from server (AlreadyExists): error when creating "tpcds-25-query.yaml": sparkapplications.sparkoperator.k8s.io "tpcds-25-query" already exists I know the option -v=8 can print more detailed logs. 
So I execute the command kubectl create --validate=false -f tpcds-25-query.yaml -v=8, its output is: I0219 05:50:17.121661 2148722 loader.go:372] Config loaded from file: /root/.kube/config I0219 05:50:17.124735 2148722 round_trippers.go:432] GET https://172.16.0.212:6443/apis/metrics.k8s.io/v1beta1?timeout=32s I0219 05:50:17.124747 2148722 round_trippers.go:438] Request Headers: I0219 05:50:17.124753 2148722 round_trippers.go:442] Accept: application/json, */* I0219 05:50:17.124759 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577 I0219 05:50:17.132864 2148722 round_trippers.go:457] Response Status: 503 Service Unavailable in 8 milliseconds I0219 05:50:17.132876 2148722 round_trippers.go:460] Response Headers: I0219 05:50:17.132881 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1 I0219 05:50:17.132890 2148722 round_trippers.go:463] Content-Length: 20 I0219 05:50:17.132894 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT I0219 05:50:17.132898 2148722 round_trippers.go:463] Audit-Id: 3ab06f73-0c88-469a-834d-54ec06e910f1 I0219 05:50:17.132902 2148722 round_trippers.go:463] Cache-Control: no-cache, private I0219 05:50:17.132906 2148722 round_trippers.go:463] Content-Type: text/plain; charset=utf-8 I0219 05:50:17.132909 2148722 round_trippers.go:463] X-Content-Type-Options: nosniff I0219 05:50:17.132913 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede I0219 05:50:17.134365 2148722 request.go:1181] Response Body: service unavailable I0219 05:50:17.135255 2148722 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I0219 05:50:17.135265 2148722 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request I0219 05:50:17.136050 2148722 request.go:1181] Request Body: {"apiVersion":"sparkoperator.k8s.io/v1beta2","kind":"SparkApplication","metadata":{"name":"tpcds-25-query","namespace":"default"},"spec":{"arguments":["oss://lfpapertest/spark/data/tpc-ds-data/150g","oss://lfpapertest/spark/result/tpcds-runc-150g-48core-160g-1pod-25-query","/tmp/tpcds-kit/tools","parquet","150","1","false","q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4","true"],"dnsPolicy":"ClusterFirstWithHostNet","driver":{"cores":4,"labels":{"role":"driver","spark-app":"spark-tpcds","version":"2.4.5"},"memory":"8192m","nodeSelector":{"beta.kubernetes.io/instance-type":"ecs.g6.13xlarge"},"serviceAccount":"spark"},"executor":{"cores":48,"instances":1,"labels":{"role":"executor","version":"2.4.5"},"memory":"160g","memoryOverhead":"16g","nodeSelector":{"beta.kubernetes.io/instance-type":"ecs.g6.13xlarge"}},"hadoopConf":{"fs.oss.acce [truncated 802 chars] I0219 05:50:17.136091 2148722 round_trippers.go:432] POST https://172.16.0.212:6443/apis/sparkoperator.k8s.io/v1beta2/namespaces/default/sparkapplications?fieldManager=kubectl-create I0219 05:50:17.136098 2148722 round_trippers.go:438] Request Headers: I0219 05:50:17.136104 2148722 round_trippers.go:442] Accept: application/json I0219 05:50:17.136108 2148722 round_trippers.go:442] 
Content-Type: application/json I0219 05:50:17.136113 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577 I0219 05:50:17.144313 2148722 round_trippers.go:457] Response Status: 201 Created in 8 milliseconds I0219 05:50:17.144327 2148722 round_trippers.go:460] Response Headers: I0219 05:50:17.144332 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1 I0219 05:50:17.144337 2148722 round_trippers.go:463] Content-Length: 2989 I0219 05:50:17.144341 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT I0219 05:50:17.144345 2148722 round_trippers.go:463] Audit-Id: 8eef9d08-04c0-44f7-87bf-e820853cd9c6 I0219 05:50:17.144349 2148722 round_trippers.go:463] Cache-Control: no-cache, private I0219 05:50:17.144352 2148722 round_trippers.go:463] Content-Type: application/json I0219 05:50:17.144356 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede I0219 05:50:17.144396 2148722 request.go:1181] Response Body: {"apiVersion":"sparkoperator.k8s.io/v1beta2","kind":"SparkApplication","metadata":{"creationTimestamp":"2023-02-19T05:50:17Z","generation":1,"managedFields":[{"apiVersion":"sparkoperator.k8s.io/v1beta2","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{".":{},"f:arguments":{},"f:driver":{".":{},"f:cores":{},"f:labels":{".":{},"f:role":{},"f:spark-app":{},"f:version":{}},"f:memory":{},"f:nodeSelector":{".":{},"f:beta.kubernetes.io/instance-type":{}},"f:serviceAccount":{}},"f:executor":{".":{},"f:cores":{},"f:instances":{},"f:labels":{".":{},"f:role":{},"f:version":{}},"f:memory":{},"f:memoryOverhead":{},"f:nodeSelector":{".":{},"f:beta.kubernetes.io/instance-type":{}}},"f:hadoopConf":{".":{},"f:fs.oss.accessKeyId":{},"f:fs.oss.accessKeySecret":{},"f:fs.oss.endpoint":{},"f:fs.oss.impl":{}},"f:image":{},"f:imagePullPolicy":{},"f:mainApplicationFile":{},"f:mainClass":{},"f:mode":{},"f:restartPolicy":{".":{},"f:type":{}},"f:sparkConf":{".":{},"f:spark.eventLog.dir":{},"f:spark.eventLog.enabled":{},"f:spark.kubernetes. [truncated 1965 chars] sparkapplication.sparkoperator.k8s.io/tpcds-25-query created From the logs, we can see the only error "Response Status: 503 Service Unavailable in 8 milliseconds", I don't know what it means. So I want to ask what may cause this, and how would I diagnose the problem? Any help is appreciated!
If you are running in cluster mode, your input files need to be on a shared filesystem such as HDFS or S3 rather than on a local FS, since both the driver and the executors must be able to access the input file.
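A minimal sketch of that approach, assuming the downloaded JSON is pushed to the MinIO instance the job already writes to; the bucket name, endpoint and credentials below are placeholders, not taken from the original post:

import boto3

def upload_to_shared_storage(local_path: str, bucket: str, key: str) -> str:
    # Copy the driver-local file to object storage that every executor can reach.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio.demo.svc:9000",   # hypothetical in-cluster MinIO endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    s3.upload_file(local_path, bucket, key)
    return f"s3a://{bucket}/{key}"                    # read this path instead of the local one

# Usage inside the driver code, reusing the existing load() helper:
# shared_path = upload_to_shared_storage(response_filename, "raw-data", response_filename)
# df = spark.read.option("multiline", "true").json(shared_path)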
###Instruction: I'm new to spark and doing on POC to download a file and then read it. However, I am facing issue that the file doesn't exists. java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist But when I printed the path of the file I find out the file exists and the path is also correct. This is the output 23/02/19 13:10:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.14.142.21:37515 with 2.2 GiB RAM, BlockManagerId(1, 10.14.142.21, 37515, None) FILE IS DOWNLOADED ['/app/data-Feb-19-2023_131049.json'] 23/02/19 13:10:49 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir. 23/02/19 13:10:49 INFO SharedState: Warehouse path is 'file:/app/spark-warehouse'. 23/02/19 13:10:50 INFO InMemoryFileIndex: It took 39 ms to list leaf files for 1 paths. 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 206.6 KiB, free 1048.6 MiB) 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 35.8 KiB, free 1048.6 MiB) 23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 35.8 KiB, free: 1048.8 MiB) 23/02/19 13:10:51 INFO SparkContext: Created broadcast 0 from json at <unknown>:0 23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1 23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1 23/02/19 13:10:51 INFO SparkContext: Starting job: json at <unknown>:0 23/02/19 13:10:51 INFO DAGScheduler: Got job 0 (json at <unknown>:0) with 1 output partitions 23/02/19 13:10:51 INFO DAGScheduler: Final stage: ResultStage 0 (json at <unknown>:0) 23/02/19 13:10:51 INFO DAGScheduler: Parents of final stage: List() 23/02/19 13:10:51 INFO DAGScheduler: Missing parents: List() 23/02/19 13:10:51 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0), which has no missing parents 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 9.0 KiB, free 1048.6 MiB) 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.8 KiB, free 1048.5 MiB) 23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 4.8 KiB, free: 1048.8 MiB) 23/02/19 13:10:51 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1513 23/02/19 13:10:51 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0) (first 15 tasks are for partitions Vector(0)) 23/02/19 13:10:51 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0 23/02/19 13:10:51 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.14.142.21:37515 (size: 4.8 KiB, free: 2.2 GiB) 23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.14.142.21:37515 (size: 35.8 KiB, free: 2.2 GiB) 23/02/19 13:10:52 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86) at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at scala.collection.Iterator.isEmpty(Iterator.scala:387) at scala.collection.Iterator.isEmpty$(Iterator.scala:387) at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431) at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249) at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248) at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431) at scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256) at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256) at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.base/java.lang.Thread.run(Unknown Source) 23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File 
file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 1] 23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 2] 23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 3] 23/02/19 13:10:52 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job 23/02/19 13:10:52 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 23/02/19 13:10:52 INFO TaskSchedulerImpl: Cancelling stage 0 23/02/19 13:10:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled 23/02/19 13:10:52 INFO DAGScheduler: ResultStage 0 (json at <unknown>:0) failed in 1.128 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86) at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at scala.collection.Iterator.isEmpty(Iterator.scala:387) at scala.collection.Iterator.isEmpty$(Iterator.scala:387) at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431) at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249) at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248) at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431) at 
scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256) at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256) at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.base/java.lang.Thread.run(Unknown Source) This is my code to download the file and and print its path def find_files(self, filename, search_path): result = [] # Wlaking top-down from the root for root, dir, files in os.walk(search_path): if filename in files: result.append(os.path.join(root, filename)) return result def downloadData(self, access_token, data): headers = { 'Content-Type': 'application/json', 'Charset': 'UTF-8', 'Authorization': f'Bearer {access_token}' } try: response = requests.post(self.kyc_url, data=json.dumps(data), headers=headers) response.raise_for_status() logger.debug("received kyc data") response_filename = ("data-" + time.strftime('%b-%d-%Y_%H%M%S', time.localtime()) + ".json") with open(response_filename, 'w', encoding='utf-8') as f: json.dump(response.json(), f, ensure_ascii=False, indent=4) f.close() print("FILE IS DOWNLOADED") print(self.find_files(response_filename, "/")) except requests.exceptions.HTTPError as err: logger.error("failed to fetch kyc data") raise SystemExit(err) return response_filename This is my code to read the file and upload to minio def load(spark: SparkSession, json_file_path: str, destination_path: str) -> None: df = spark.read.option("multiline", "true").json(json_file_path) df.write.format("delta").save(f"s3a://{destination_path}") I'm running spark in k8s with spark operator. This is my SparkApplication manifest apiVersion: "sparkoperator.k8s.io/v1beta2" kind: SparkApplication metadata: name: myApp namespace: demo spec: type: Python pythonVersion: "3" mode: cluster image: "myImage" imagePullPolicy: Always mainApplicationFile: local:///app/main.py sparkVersion: "3.3.1" restartPolicy: type: OnFailure onFailureRetries: 3 onFailureRetryInterval: 10 onSubmissionFailureRetries: 5 onSubmissionFailureRetryInterval: 20 timeToLiveSeconds: 86400 deps: packages: - io.delta:delta-core_2.12:2.2.0 - org.apache.hadoop:hadoop-aws:3.3.1 driver: env: - name: NAMESPACE value: demo cores: 2 coreLimit: "2000m" memory: "2048m" labels: version: 3.3.1 serviceAccount: spark-driver executor: cores: 4 instances: 1 memory: "4096m" coreRequest: "500m" coreLimit: "4000m" labels: version: 3.3.1 dynamicAllocation: enabled: false Can someone please point out what I am doing wrong ? 
Thank you ### Response: If you are running in cluster mode, your input files need to be on a shared filesystem such as HDFS or S3 rather than on a local FS, since both the driver and the executors must be able to access the input file.
I'm new to spark and doing on POC to download a file and then read it. However, I am facing issue that the file doesn't exists. java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist But when I printed the path of the file I find out the file exists and the path is also correct. This is the output 23/02/19 13:10:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.14.142.21:37515 with 2.2 GiB RAM, BlockManagerId(1, 10.14.142.21, 37515, None) FILE IS DOWNLOADED ['/app/data-Feb-19-2023_131049.json'] 23/02/19 13:10:49 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir. 23/02/19 13:10:49 INFO SharedState: Warehouse path is 'file:/app/spark-warehouse'. 23/02/19 13:10:50 INFO InMemoryFileIndex: It took 39 ms to list leaf files for 1 paths. 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 206.6 KiB, free 1048.6 MiB) 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 35.8 KiB, free 1048.6 MiB) 23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 35.8 KiB, free: 1048.8 MiB) 23/02/19 13:10:51 INFO SparkContext: Created broadcast 0 from json at <unknown>:0 23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1 23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1 23/02/19 13:10:51 INFO SparkContext: Starting job: json at <unknown>:0 23/02/19 13:10:51 INFO DAGScheduler: Got job 0 (json at <unknown>:0) with 1 output partitions 23/02/19 13:10:51 INFO DAGScheduler: Final stage: ResultStage 0 (json at <unknown>:0) 23/02/19 13:10:51 INFO DAGScheduler: Parents of final stage: List() 23/02/19 13:10:51 INFO DAGScheduler: Missing parents: List() 23/02/19 13:10:51 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0), which has no missing parents 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 9.0 KiB, free 1048.6 MiB) 23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.8 KiB, free 1048.5 MiB) 23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 4.8 KiB, free: 1048.8 MiB) 23/02/19 13:10:51 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1513 23/02/19 13:10:51 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0) (first 15 tasks are for partitions Vector(0)) 23/02/19 13:10:51 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0 23/02/19 13:10:51 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.14.142.21:37515 (size: 4.8 KiB, free: 2.2 GiB) 23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.14.142.21:37515 (size: 35.8 KiB, free: 2.2 GiB) 23/02/19 13:10:52 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86) at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at scala.collection.Iterator.isEmpty(Iterator.scala:387) at scala.collection.Iterator.isEmpty$(Iterator.scala:387) at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431) at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249) at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248) at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431) at scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256) at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256) at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.base/java.lang.Thread.run(Unknown Source) 23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File 
file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 1] 23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 2] 23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map() 23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 3] 23/02/19 13:10:52 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job 23/02/19 13:10:52 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 23/02/19 13:10:52 INFO TaskSchedulerImpl: Cancelling stage 0 23/02/19 13:10:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled 23/02/19 13:10:52 INFO DAGScheduler: ResultStage 0 (json at <unknown>:0) failed in 1.128 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40) at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199) at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86) at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86) at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492) at scala.collection.Iterator.isEmpty(Iterator.scala:387) at scala.collection.Iterator.isEmpty$(Iterator.scala:387) at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431) at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249) at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248) at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431) at 
scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256) at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256) at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431) at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.base/java.lang.Thread.run(Unknown Source) This is my code to download the file and and print its path def find_files(self, filename, search_path): result = [] # Wlaking top-down from the root for root, dir, files in os.walk(search_path): if filename in files: result.append(os.path.join(root, filename)) return result def downloadData(self, access_token, data): headers = { 'Content-Type': 'application/json', 'Charset': 'UTF-8', 'Authorization': f'Bearer {access_token}' } try: response = requests.post(self.kyc_url, data=json.dumps(data), headers=headers) response.raise_for_status() logger.debug("received kyc data") response_filename = ("data-" + time.strftime('%b-%d-%Y_%H%M%S', time.localtime()) + ".json") with open(response_filename, 'w', encoding='utf-8') as f: json.dump(response.json(), f, ensure_ascii=False, indent=4) f.close() print("FILE IS DOWNLOADED") print(self.find_files(response_filename, "/")) except requests.exceptions.HTTPError as err: logger.error("failed to fetch kyc data") raise SystemExit(err) return response_filename This is my code to read the file and upload to minio def load(spark: SparkSession, json_file_path: str, destination_path: str) -> None: df = spark.read.option("multiline", "true").json(json_file_path) df.write.format("delta").save(f"s3a://{destination_path}") I'm running spark in k8s with spark operator. This is my SparkApplication manifest apiVersion: "sparkoperator.k8s.io/v1beta2" kind: SparkApplication metadata: name: myApp namespace: demo spec: type: Python pythonVersion: "3" mode: cluster image: "myImage" imagePullPolicy: Always mainApplicationFile: local:///app/main.py sparkVersion: "3.3.1" restartPolicy: type: OnFailure onFailureRetries: 3 onFailureRetryInterval: 10 onSubmissionFailureRetries: 5 onSubmissionFailureRetryInterval: 20 timeToLiveSeconds: 86400 deps: packages: - io.delta:delta-core_2.12:2.2.0 - org.apache.hadoop:hadoop-aws:3.3.1 driver: env: - name: NAMESPACE value: demo cores: 2 coreLimit: "2000m" memory: "2048m" labels: version: 3.3.1 serviceAccount: spark-driver executor: cores: 4 instances: 1 memory: "4096m" coreRequest: "500m" coreLimit: "4000m" labels: version: 3.3.1 dynamicAllocation: enabled: false Can someone please point out what I am doing wrong ? Thank you
This one visually looks better for me: kubectl delete job --field-selector=status.phase==Failed
###Instruction: How to delete the failed jobs in the kubernetes cluster using a cron job in gke?. when i tried to delete the failed jobs using following YAML, it has deleted all the jobs (including running) apiVersion: batch/v1beta1 kind: CronJob metadata: name: XXX namespace: XXX spec: schedule: "*/30 * * * *" failedJobsHistoryLimit: 1 successfulJobsHistoryLimit: 1 jobTemplate: spec: template: spec: serviceAccountName: XXX containers: - name: kubectl-runner image: bitnami/kubectl:latest command: ["sh", "-c", "kubectl delete jobs $(kubectl get jobs | awk '$2 ~ 1/1' | awk '{print $1}')"] restartPolicy: OnFailure ### Response: This one visually looks better for me: kubectl delete job --field-selector=status.phase==Failed
How to delete the failed jobs in the kubernetes cluster using a cron job in gke?. when i tried to delete the failed jobs using following YAML, it has deleted all the jobs (including running) apiVersion: batch/v1beta1 kind: CronJob metadata: name: XXX namespace: XXX spec: schedule: "*/30 * * * *" failedJobsHistoryLimit: 1 successfulJobsHistoryLimit: 1 jobTemplate: spec: template: spec: serviceAccountName: XXX containers: - name: kubectl-runner image: bitnami/kubectl:latest command: ["sh", "-c", "kubectl delete jobs $(kubectl get jobs | awk '$2 ~ 1/1' | awk '{print $1}')"] restartPolicy: OnFailure
This can be achieved with a Python script or an IaC pipeline: you can collect the metrics of the Kubernetes cluster and, whenever these metrics exceed a certain threshold, trigger the Python code to create a new pod with the required resources and shut down the old pod. Follow this GitHub link for more info on the Python plugin for Kubernetes. Ansible can also be used for performing this operation: trigger your Ansible playbook whenever the threshold breaches a certain limit, and specify the new sizes of the pods that need to be created. Follow this official Ansible document for more information. However, both of these procedures involve manual analysis for selecting the desired pod size for scaling, so if you don’t want to use vertical scaling you can go for horizontal scaling instead. Note: The information is gathered from the official Ansible and GitHub pages, and the URLs are referred to in the post.
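As a rough illustration of the scripted approach, and only as a sketch: the official Kubernetes Python client can patch a Deployment's pod template so that pods with the larger requests are rolled out before the old ones are removed. The deployment/container names, namespace and sizes below are assumptions:

from kubernetes import client, config

def resize_deployment(name: str, namespace: str, cpu: str, memory: str) -> None:
    config.load_incluster_config()   # use config.load_kube_config() when running outside the cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": name,   # assumes the container is named like the deployment
                        "resources": {"requests": {"cpu": cpu, "memory": memory}},
                    }]
                }
            }
        }
    }
    # Changing the pod template triggers a rolling update, so the new pod
    # comes up before the old one is terminated (per the rollout strategy).
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)

# resize_deployment("my-service", "default", cpu="500m", memory="1Gi")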
###Instruction: I am trying to use VPA for autoscaling my deployed services. Due to limitation in resources in my cluster I set the min_replica option to 1. The workflow of VPA that have seen so far is that it first deletes the existing pod and then re-create the pod. This approach will cause a downtime to my services. What I want is that the VPA first create the new pod and then deletes the old pod, completely similar to the rolling updates for deployments. Is there an option or hack to reverse the flow to the desired order in my case? ### Response: This can be achieved with a Python script or an IaC pipeline: you can collect the metrics of the Kubernetes cluster and, whenever these metrics exceed a certain threshold, trigger the Python code to create a new pod with the required resources and shut down the old pod. Follow this GitHub link for more info on the Python plugin for Kubernetes. Ansible can also be used for performing this operation: trigger your Ansible playbook whenever the threshold breaches a certain limit, and specify the new sizes of the pods that need to be created. Follow this official Ansible document for more information. However, both of these procedures involve manual analysis for selecting the desired pod size for scaling, so if you don’t want to use vertical scaling you can go for horizontal scaling instead. Note: The information is gathered from the official Ansible and GitHub pages, and the URLs are referred to in the post.
I am trying to use VPA for autoscaling my deployed services. Due to limitation in resources in my cluster I set the min_replica option to 1. The workflow of VPA that have seen so far is that it first deletes the existing pod and then re-create the pod. This approach will cause a downtime to my services. What I want is that the VPA first create the new pod and then deletes the old pod, completely similar to the rolling updates for deployments. Is there an option or hack to reverse the flow to the desired order in my case?
Default Nginx Ingress controller doesn't support having different Ingress resources with the same hostname. You can have one Ingress resource that contains multiple paths, but in this case all apps should live in one namespace. Like this: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app1-ingress namespace: app1 spec: ingressClassName: nginx rules: - host: web.example.com http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-service port: number: 80 - path: /app2 pathType: Prefix backend: service: name: app2-service port: number: 80 Splitting ingresses between namespaces is currently not supported by standard Nginx Ingress controller. You may however take a look at an alternative implementation of Nginx Ingress by Nginx Inc. They have support for Mergeable Ingresses.
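If you do switch to the NGINX Inc. controller, a sketch of its mergeable-Ingress approach for the two namespaces in the question could look like the following; note the nginx.org annotations are specific to that controller and are not understood by the community ingress-nginx:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-master
  namespace: app1
  annotations:
    nginx.org/mergeable-ingress-type: "master"   # owns web.example.com, defines no paths itself
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-minion
  namespace: app2
  annotations:
    nginx.org/mergeable-ingress-type: "minion"   # contributes its paths to the master's host
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80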
###Instruction: I'm trying to setup Nginx-ingress controller to manage two paths on the same hostname in bare metal based cluster. In the app1 namespace i have below nginx resource:- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app1-ingress namespace: app1 spec: ingressClassName: nginx rules: - host: web.example.com http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-service port: number: 80 And in the app2 namespace i have below nginx resource:- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app2-ingress namespace: app2 spec: ingressClassName: nginx rules: - host: web.example.com http: paths: - path: /app2 pathType: Prefix backend: service: name: app2-service port: number: 80 My app1-service applied first and it is running fine, now when i applied the second app2-service it shows below warning and not able to access it on browser. Annotations: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources How do i configure my nginx ingress resource to connect multiple service paths on the same hostname? ### Response: Default Nginx Ingress controller doesn't support having different Ingress resources with the same hostname. You can have one Ingress resource that contains multiple paths, but in this case all apps should live in one namespace. Like this: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app1-ingress namespace: app1 spec: ingressClassName: nginx rules: - host: web.example.com http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-service port: number: 80 - path: /app2 pathType: Prefix backend: service: name: app2-service port: number: 80 Splitting ingresses between namespaces is currently not supported by standard Nginx Ingress controller. You may however take a look at an alternative implementation of Nginx Ingress by Nginx Inc. They have support for Mergeable Ingresses.
I'm trying to setup Nginx-ingress controller to manage two paths on the same hostname in bare metal based cluster. In the app1 namespace i have below nginx resource:- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app1-ingress namespace: app1 spec: ingressClassName: nginx rules: - host: web.example.com http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-service port: number: 80 And in the app2 namespace i have below nginx resource:- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app2-ingress namespace: app2 spec: ingressClassName: nginx rules: - host: web.example.com http: paths: - path: /app2 pathType: Prefix backend: service: name: app2-service port: number: 80 My app1-service applied first and it is running fine, now when i applied the second app2-service it shows below warning and not able to access it on browser. Annotations: <none> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources How do i configure my nginx ingress resource to connect multiple service paths on the same hostname?
This one worked!!! kubectl get pods | grep -P 'my-pod-[0-9]+' Thank you all (and especially to @Timur who enlightened me about the Perl regexp option)
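If an exact, anchored match is ever needed, note that the earlier '^my-pod-\d+$' attempts failed mainly because the default kubectl output appends READY/STATUS columns after the name, so '$' never matches; printing bare pod names first avoids that. A small sketch:

# print only the pod names, then anchor the pattern to the whole line
kubectl get pods -o custom-columns=NAME:.metadata.name --no-headers | grep -P '^my-pod-\d+$'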
###Instruction: I tried the following but failed to get a list of all pods like these: my-pod-1 my-pod-19 my-pod-24 etc. kgp | grep -E my-pod-[0-9]+ kgp | grep -E my-pod-[0-9] kgp | grep -i -E my-pod-[0-9] kgp | grep -i -E "my-pod-[0-9]" kgp | grep -i -E "^my-pod-[0-9]" kgp | grep "^my-pod-[0-9]" kgp | grep -v 'my-pod-[0-9]' kgp == kubectl get pods Also following kind Timor's suggestion I tried the following: user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my- pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod my-pod-0 1/1 Running 0 4d9h my-pod-1 1/1 Running 0 22h my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h Please advise. ### Response: This one worked!!! kubectl get pods | grep -P 'my-pod-[0-9]+' Thank you all (and especially to @Timur who enlightened me about the Perl regexp option)
I tried the following but failed to get a list of all pods like these: my-pod-1 my-pod-19 my-pod-24 etc. kgp | grep -E my-pod-[0-9]+ kgp | grep -E my-pod-[0-9] kgp | grep -i -E my-pod-[0-9] kgp | grep -i -E "my-pod-[0-9]" kgp | grep -i -E "^my-pod-[0-9]" kgp | grep "^my-pod-[0-9]" kgp | grep -v 'my-pod-[0-9]' kgp == kubectl get pods Also following kind Timor's suggestion I tried the following: user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my- pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod my-pod-0 1/1 Running 0 4d9h my-pod-1 1/1 Running 0 22h my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h Please advise.
Check the official Keda docs: When running your agents as a deployment you have no control on which pod gets killed when scaling down. So, to solve it you need to use ScaledJob: If you run your agents as a Job, KEDA will start a Kubernetes job for each job that is in the agent pool queue. The agents will accept one job when they are started and terminate afterwards. Since an agent is always created for every pipeline job, you can achieve fully isolated build environments by using Kubernetes jobs. See there how to implement it.
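A trimmed sketch of what a ScaledJob could look like for the deployment in the question, reusing its placeholder values; the exact trigger metadata should be verified against the KEDA azure-pipelines scaler docs, and the agent image is assumed to be configured to run a single job and exit (for example via the agent's --once flag):

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: azdo-agent-scaledjob
spec:
  jobTargetRef:
    template:
      spec:
        nodeSelector:
          agentpool: linux
        containers:
        - name: aks-linux
          image: <My image here>
          env:
          - name: AZP_URL
            value: "<myURL>"
          - name: AZP_TOKEN
            value: "<MyToken>"
          - name: AZP_POOL
            value: "<MyPool>"
        restartPolicy: Never
  pollingInterval: 30
  maxReplicaCount: 10
  triggers:
  - type: azure-pipelines
    metadata:
      poolName: "<MyPool>"
      organizationURLFromEnv: "AZP_URL"
      personalAccessTokenFromEnv: "AZP_TOKEN"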
###Instruction: I tried Keda with AKS and I really appreciate when pod are automatically instanciate based on Azure Dev Ops queue job for release & build. However I noticed something strange and often AKS/Keda remove pod while processing which makes workflow failed. Message reads We stopped hearing from agent aks-linux-768d6647cc-ntmh4. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error. For more information, see: https://go.microsoft.com/fwlink/?linkid=846610 Expected behavior: pods must complete the tasks then Keda/AKS can remove this pod. I share with you my keda yml file: # deployment.yaml apiVersion: apps/v1 # The API resource where this workload resides kind: Deployment # The kind of workload we're creating metadata: name: aks-linux # This will be the name of the deployment spec: selector: # Define the wrapping strategy matchLabels: # Match all pods with the defined labels app: aks-linux # Labels follow the `name: value` template replicas: 1 template: # This is the template of the pod inside the deployment metadata: # Metadata for the pod labels: app: aks-linux spec: nodeSelector: agentpool: linux containers: # Here we define all containers - image: <My image here> name: aks-linux env: - name: "AZP_URL" value: "<myURL>" - name: "AZP_TOKEN" value: "<MyToken>" - name: "AZP_POOL" value: "<MyPool>" resources: requests: # Minimum amount of resources requested cpu: 2 memory: 4096Mi limits: # Maximum amount of resources requested cpu: 4 memory: 8192Mi I used latest version of AKS and Keda. Any idea ? ### Response: Check the official Keda docs: When running your agents as a deployment you have no control on which pod gets killed when scaling down. So, to solve it you need to use ScaledJob: If you run your agents as a Job, KEDA will start a Kubernetes job for each job that is in the agent pool queue. The agents will accept one job when they are started and terminate afterwards. Since an agent is always created for every pipeline job, you can achieve fully isolated build environments by using Kubernetes jobs. See there how to implement it.
I tried Keda with AKS and I really appreciate when pod are automatically instanciate based on Azure Dev Ops queue job for release & build. However I noticed something strange and often AKS/Keda remove pod while processing which makes workflow failed. Message reads We stopped hearing from agent aks-linux-768d6647cc-ntmh4. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error. For more information, see: https://go.microsoft.com/fwlink/?linkid=846610 Expected behavior: pods must complete the tasks then Keda/AKS can remove this pod. I share with you my keda yml file: # deployment.yaml apiVersion: apps/v1 # The API resource where this workload resides kind: Deployment # The kind of workload we're creating metadata: name: aks-linux # This will be the name of the deployment spec: selector: # Define the wrapping strategy matchLabels: # Match all pods with the defined labels app: aks-linux # Labels follow the `name: value` template replicas: 1 template: # This is the template of the pod inside the deployment metadata: # Metadata for the pod labels: app: aks-linux spec: nodeSelector: agentpool: linux containers: # Here we define all containers - image: <My image here> name: aks-linux env: - name: "AZP_URL" value: "<myURL>" - name: "AZP_TOKEN" value: "<MyToken>" - name: "AZP_POOL" value: "<MyPool>" resources: requests: # Minimum amount of resources requested cpu: 2 memory: 4096Mi limits: # Maximum amount of resources requested cpu: 4 memory: 8192Mi I used latest version of AKS and Keda. Any idea ?
You can create the wildcard certificate with cert-manager. Here is my article on requesting a wildcard certificate with DNS verification, since HTTP verification is not supported for wildcards: https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2 For GCP DNS verification you can follow the official guide: https://cert-manager.io/docs/configuration/acme/dns01/google/ Once auth is successful you will be able to request the certificate and it will get stored in a K8s secret. Create a service account: PROJECT_ID=myproject-id gcloud iam service-accounts create dns01-solver --display-name "dns01-solver" Binding policy: gcloud projects add-iam-policy-binding $PROJECT_ID \ --member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com \ --role roles/dns.admin K8s secret: gcloud iam service-accounts keys create key.json \ --iam-account dns01-solver@$PROJECT_ID.iam.gserviceaccount.com kubectl create secret generic clouddns-dns01-solver-svc-acct \ --from-file=key.json Issuer: apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: gcp-issuer spec: acme: ... solvers: - dns01: cloudDNS: # The ID of the GCP project project: $PROJECT_ID # This is the secret used to access the service account serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct key: key.json --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: le-crt spec: secretName: tls-secret issuerRef: kind: Issuer name: gcp-issuer commonName: "*.devops.example.in" dnsNames: - "*.devops.example.in" You can attach this newly auto-created secret to an Ingress or to a Gateway in Istio as per need. That secret will be storing your wildcard certificate. apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway annotations: cert-manager.io/issuer: gcp-issuer spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE credentialName: tls-secret # This should match the Certificate secretName hosts: - "*.devops.example.in"
###Instruction: I have already a google managed SSL certificate created (with dns verification option). I want to use same certificate in my istio-ingress for SSL. Is there any possible annotations available ? We can create ManagedCertificate resource in GKE, but it is uses the loadbalancer verification option which does not support wildcard certificate. What to do if I want to create certificate like (*.example.com) and attached it with istio-ingress or gke ingress ? ### Response: You can create the wildcard certificate with cert-manager. Here is my article on requesting a wildcard certificate with DNS verification, since HTTP verification is not supported for wildcards: https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2 For GCP DNS verification you can follow the official guide: https://cert-manager.io/docs/configuration/acme/dns01/google/ Once auth is successful you will be able to request the certificate and it will get stored in a K8s secret. Create a service account: PROJECT_ID=myproject-id gcloud iam service-accounts create dns01-solver --display-name "dns01-solver" Binding policy: gcloud projects add-iam-policy-binding $PROJECT_ID \ --member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com \ --role roles/dns.admin K8s secret: gcloud iam service-accounts keys create key.json \ --iam-account dns01-solver@$PROJECT_ID.iam.gserviceaccount.com kubectl create secret generic clouddns-dns01-solver-svc-acct \ --from-file=key.json Issuer: apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: gcp-issuer spec: acme: ... solvers: - dns01: cloudDNS: # The ID of the GCP project project: $PROJECT_ID # This is the secret used to access the service account serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct key: key.json --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: le-crt spec: secretName: tls-secret issuerRef: kind: Issuer name: gcp-issuer commonName: "*.devops.example.in" dnsNames: - "*.devops.example.in" You can attach this newly auto-created secret to an Ingress or to a Gateway in Istio as per need. That secret will be storing your wildcard certificate. apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway annotations: cert-manager.io/issuer: gcp-issuer spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE credentialName: tls-secret # This should match the Certificate secretName hosts: - "*.devops.example.in"
I have already a google managed SSL certificate created (with dns verification option). I want to use same certificate in my istio-ingress for SSL. Is there any possible annotations available ? We can create ManagedCertificate resource in GKE, but it is uses the loadbalancer verification option which does not support wildcard certificate. What to do if I want to create certificate like (*.example.com) and attached it with istio-ingress or gke ingress ?
You can write a YAML manifest that creates both the pod and the service in their respective namespaces: specify the ‘namespace’ field in the ‘metadata’ section of both the ‘pod’ and ‘service’ objects to control where each one is created. Keep in mind that a Service only selects pods within its own namespace, so if you want to point your Service at a Pod or Service in a different namespace or on another cluster, you can use a Service without a pod selector and manage its endpoints yourself. Refer to this link on Understanding Kubernetes Objects for more information.
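For illustration, a selector-less Service plus a hand-managed Endpoints object is sketched below; the namespace, port numbers and the pod IP are placeholders for the myweb example:

apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
  namespace: ns-a            # namespace where the Service lives
spec:                        # no selector, so no Endpoints are created automatically
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myweb-svc            # must match the Service name
  namespace: ns-a
subsets:
- addresses:
  - ip: 10.244.1.23          # IP of myweb-pod running in the other namespace (placeholder)
  ports:
  - port: 8080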
###Instruction: Is it possible to create a Kubernetes service and pod in different namespaces, for example, having myweb-svc pointing to the actual running myweb-pod, while myweb-svc and myweb-pod are in different namespaces? ### Response: You can write a YAML manifest that creates both the pod and the service in their respective namespaces: specify the ‘namespace’ field in the ‘metadata’ section of both the ‘pod’ and ‘service’ objects to control where each one is created. Keep in mind that a Service only selects pods within its own namespace, so if you want to point your Service at a Pod or Service in a different namespace or on another cluster, you can use a Service without a pod selector and manage its endpoints yourself. Refer to this link on Understanding Kubernetes Objects for more information.
Is it possible to create a Kubernetes service and pod in different namespaces, for example, having myweb-svc pointing to the actual running myweb-pod, while myweb-svc and myweb-pod are in different namespaces?
Ok, posting the solution and journey to it here, so just in case someone comes across similar issue. There was a network issue in the cluster after the upgrade. This was the reason all pods had DNS issue. Due to these network issues metrics-server was not in running state and it's pdb allowed disruption remained at 0 and it was causing PodDrainFaulure errors while upgrading the node. I was able to force the upgrade to upgrade all nodes to 1.25.5 by running the command az aks nodepool upgrade -n agentpool -g rg_name --cluster-name aks_name --node-image-only . However, after executing this, I had to keep on deleting the pdb to get all the nodes to upgrade. This helped to get control plane and all nodes to reach 1.25.5 version, however overall status still remained in failed(Running) state. This was solved by triggering another upgrade with --cluster-plane-only flag --resource-group <ResourceGroupName> --name <AKSClusterName> \ --control-plane-only \ --kubernetes-version <KubernetesVersion> However, this did not solve the core networking issues, as starting from metrics server to application pods, all were failing trying to resolve hostnames. Interesting thing was that internal services were not reachable at all, however outside network e.g. github.com, microsoft.com etc would work intermittently. Based on AKS issue 2903 and related ingress-nginx issue 8501 found that after k8s 1.24 ingress-nginx needs special annotation to keep health probes running properly. had to update helm with below command helm upgrade ingress-nginx ingress-nginx/ingress-nginx \ --reuse-values \ --namespace <NAMESPACE> \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz This did get the Azure aks health dashboard and metrics server to show as running state, but it did not solve the underlying network issue. used all the scenarios and commands mentioned in all sections of MS troubleshooting guide for outbound connections to narrow down that issue is somehow with kube-dns service and CoreDNS pods. as DNS resolution would work if upstreaming nameserver directly to coreDNS pod IP (need to run on the same node) OR public DNS, but it will fail if it was using kube-dns service ip as configured in resolv.conf jovyan@elyra-web-59f899c447-xw5x2:~$ host -a microsoft.com Trying "microsoft.com.elyra-airflow.svc.cluster.local" ;; connection timed out; no servers could be reached jovyan@elyra-web-59f899c447-xw5x2:~$ nslookup microsoft.com ;; connection timed out; no servers could be reached jovyan@elyra-web-59f899c447-xw5x2:~$ nslookup microsoft.com 1.1.1.1 Server: 1.1.1.1 Address: 1.1.1.1#53 Non-authoritative answer: Name: microsoft.com Address: 20.103.85.33 Name: microsoft.com Address: 20.112.52.29 Name: microsoft.com Address: 20.81.111.85 Name: microsoft.com Address: 20.84.181.62 Name: microsoft.com Address: 20.53.203.50 I restarted coreDNS, konnectivity-agent and so on, but no help. At the end found a hint from AKS issue 1320 which helped solved the issue. Even though this issue is related to k8s version 1.13 thus suggesting it was not a version specific problem. I deleted ALL pods from namespace kube-system at once. Immediately after these pods were up and running, DNS issue was gone and all is working as before. Phew, this was some journey of 5 days to get it solved. Looking forward to next upgrade in March now !
###Instruction: I upgraded AKS using the Azure portal from 1.23.5 to 1.24.9. This part finished properly (or so I assumed) based on the status shown in the Azure portal. I continued with 1.24.9 to 1.25.5. This time it only partly worked: the Azure portal shows 1.25.5 for the nodepool with provisioning state "Failed", while the nodes are still at 1.24.9. I found that some nodes were having issues connecting to the network, both to the outside (e.g. github) and to internal "services". For some reason it is an intermittent issue; on the same node it sometimes works and sometimes doesn't. (I had pods running on each node with Python.) Each node has the cluster IP in resolv.conf. One of the questions on SO had a hint about ingress-nginx compatibility. I found that I had an incompatible version, so I upgraded it to 1.6.4, which is compatible with both 1.24 and 1.25, but this network issue still persists. I am not sure if this is because of the AKS provisioning state of "Failed". The connectivity check for this cluster in the Azure portal is Success. The only issue reported in Azure portal diagnostics is the nodepool provisioning state. Is there anything I need to do after the ingress-nginx upgrade for all nodes/pods to pick up the new config? Or is there a way to re-trigger this upgrade? I am not sure why, but I am assuming that it may reset the configs on all nodes and might work.
### Response: Ok, posting the solution and the journey to it here, just in case someone comes across a similar issue. There was a network issue in the cluster after the upgrade, and this was the reason all pods had DNS issues. Due to these network issues metrics-server was not in a running state, its PDB's allowed disruptions remained at 0, and that was causing PodDrainFailure errors while upgrading the nodes. I was able to force all nodes to upgrade to 1.25.5 by running:

az aks nodepool upgrade -n agentpool -g rg_name --cluster-name aks_name --node-image-only

However, after executing this, I had to keep deleting the PDB to get all the nodes to upgrade. This got the control plane and all nodes to version 1.25.5, however the overall status still remained in the Failed (Running) state. That was solved by triggering another upgrade with the --control-plane-only flag:

az aks upgrade --resource-group <ResourceGroupName> --name <AKSClusterName> \
  --control-plane-only \
  --kubernetes-version <KubernetesVersion>

However, this did not solve the core networking issue: everything from metrics-server to the application pods was failing to resolve hostnames. The interesting thing was that internal services were not reachable at all, while the outside network, e.g. github.com, microsoft.com etc., would work intermittently. Based on AKS issue 2903 and the related ingress-nginx issue 8501 I found that after k8s 1.24 ingress-nginx needs a special annotation to keep health probes running properly, so I had to update the Helm release with the command below:

helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --reuse-values \
  --namespace <NAMESPACE> \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz

This did get the Azure AKS health dashboard and metrics-server to show a running state, but it did not solve the underlying network issue. I used all the scenarios and commands mentioned in all sections of the MS troubleshooting guide for outbound connections to narrow the issue down to the kube-dns service and the CoreDNS pods.
DNS resolution would work when pointing the nameserver directly at a CoreDNS pod IP (it had to run on the same node) or at a public DNS server, but it failed when using the kube-dns service IP as configured in resolv.conf:

jovyan@elyra-web-59f899c447-xw5x2:~$ host -a microsoft.com
Trying "microsoft.com.elyra-airflow.svc.cluster.local"
;; connection timed out; no servers could be reached

jovyan@elyra-web-59f899c447-xw5x2:~$ nslookup microsoft.com
;; connection timed out; no servers could be reached

jovyan@elyra-web-59f899c447-xw5x2:~$ nslookup microsoft.com 1.1.1.1
Server:         1.1.1.1
Address:        1.1.1.1#53

Non-authoritative answer:
Name:   microsoft.com
Address: 20.103.85.33
Name:   microsoft.com
Address: 20.112.52.29
Name:   microsoft.com
Address: 20.81.111.85
Name:   microsoft.com
Address: 20.84.181.62
Name:   microsoft.com
Address: 20.53.203.50

I restarted CoreDNS, konnectivity-agent and so on, but it didn't help. In the end I found a hint in AKS issue 1320 which helped solve the issue, even though that issue relates to k8s version 1.13, which suggests it was not a version-specific problem. I deleted ALL pods from the kube-system namespace at once. Immediately after these pods were back up and running, the DNS issue was gone and everything was working as before. Phew, this was some journey of 5 days to get it solved. Looking forward to the next upgrade in March now!
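In case someone wants to reproduce the same narrowing-down, this is roughly the kind of check I mean. It is a sketch only: the pod name is illustrative and it assumes nslookup is available inside that pod.

# Find the kube-dns service ClusterIP and the CoreDNS pod IPs
kubectl get svc kube-dns -n kube-system -o wide
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# From an application pod, query the service IP, a CoreDNS pod IP (same node) and a public resolver
kubectl exec -it <app-pod> -- nslookup microsoft.com <kube-dns-service-ip>   # failed for me
kubectl exec -it <app-pod> -- nslookup microsoft.com <coredns-pod-ip>        # worked
kubectl exec -it <app-pod> -- nslookup microsoft.com 1.1.1.1                 # worked

# The eventual fix from AKS issue 1320: recycle everything in kube-system at once
kubectl delete pods --all -n kube-system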
I upgraded AKS using the Azure portal from 1.23.5 to 1.24.9. This part finished properly (or so I assumed) based on the status shown in the Azure portal. I continued with 1.24.9 to 1.25.5. This time it only partly worked: the Azure portal shows 1.25.5 for the nodepool with provisioning state "Failed", while the nodes are still at 1.24.9. I found that some nodes were having issues connecting to the network, both to the outside (e.g. github) and to internal "services". For some reason it is an intermittent issue; on the same node it sometimes works and sometimes doesn't. (I had pods running on each node with Python.) Each node has the cluster IP in resolv.conf. One of the questions on SO had a hint about ingress-nginx compatibility. I found that I had an incompatible version, so I upgraded it to 1.6.4, which is compatible with both 1.24 and 1.25, but this network issue still persists. I am not sure if this is because of the AKS provisioning state of "Failed". The connectivity check for this cluster in the Azure portal is Success. The only issue reported in Azure portal diagnostics is the nodepool provisioning state. Is there anything I need to do after the ingress-nginx upgrade for all nodes/pods to pick up the new config? Or is there a way to re-trigger this upgrade? I am not sure why, but I am assuming that it may reset the configs on all nodes and might work.
I'm not a K3S expert, but I think I found a piece of documentation that addresses your issue. Take a look:

Service Load Balancer

Any service load balancer (LB) can be used in your K3s cluster. By default, K3s provides a load balancer known as ServiceLB (formerly Klipper Load Balancer) that uses available host ports. Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain Pending until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration.

How the Service LB Works

The ServiceLB controller watches Kubernetes Services with the spec.type field set to LoadBalancer. For each LoadBalancer Service, a DaemonSet is created in the kube-system namespace. This DaemonSet in turn creates Pods with a svc- prefix, on each node. These Pods use iptables to forward traffic from the Pod's NodePort to the Service's ClusterIP address and port. If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's status.loadBalancer.ingress address list. Otherwise, the node's internal IP is used. If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service. It is possible to expose multiple Services on the same node, as long as they use different ports. If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending.

-- Docs.k3s.io: Networking

As a possible solution, I'd recommend using Traefik, as it is the default Ingress controller within K3S. The Pending status on your LoadBalancer is most likely caused by another service already using that port (Traefik). If you still wish to use NGINX, the same documentation page explains how you can disable Traefik.

UPDATE

I'd be more careful about deleting resources the way you did. The following command:

k delete all --all -n ingress-nginx

will not delete all of the resources that were created. The better way, in my opinion, is to use the same manifest that you used to create them, and instead of:

kubectl create -f ...

use:

kubectl delete -f ...

I assume that you did not modify your Ingress definition, hence you receive the error and kubectl get ingress shows incorrect results. What you will need to do:

spec:
  ingressClassName: nginx # <-- DELETE IT OR CHANGE TO "traefik"

Either deleting or changing it should work, as traefik is set to be the default IngressClass for this setup.
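To see which ingress class is actually available after removing NGINX, and to switch the existing Ingress over, something along these lines should work (a sketch; the Ingress name myapp-ing and namespace myapp are taken from your manifests):

# Show the IngressClasses in the cluster and which one is marked as default
kubectl get ingressclass

# Point the existing Ingress at traefik instead of nginx ...
kubectl -n myapp patch ingress myapp-ing --type=merge -p '{"spec":{"ingressClassName":"traefik"}}'

# ... or remove the field entirely and rely on the default IngressClass
kubectl -n myapp patch ingress myapp-ing --type=json -p '[{"op":"remove","path":"/spec/ingressClassName"}]'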
###Instruction: I have a k3s (lightweight k8s) cluster running on my Raspberry Pi. So I am not using any cloud-hosted cluster, but a bare-metal one on my Raspberry Pi. I have deployed an application with this manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: bashofmann/rancher-demo:1.0.0
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP

I also created a service to forward traffic to the application pod. Its manifest is:

apiVersion: v1
kind: Service
metadata:
  name: demo-app-svc
  namespace: myapp
spec:
  selector:
    app: hello-world
  ports:
  - name: web
    protocol: TCP
    port: 31113
    targetPort: 8080

Then, I created an Ingress for the routing rules:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: demo-app-svc
            port:
              number: 31113

I successfully deployed the above application pod, service & Ingress to my k3s cluster. As the manifests indicate, they are under the namespace myapp. The next thing I would like to do is to deploy the Kubernetes Nginx Ingress Controller so that clients outside the cluster are able to access the deployed application. So, I deployed it with:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml

The above command successfully deployed the Ingress Controller under the namespace ingress-nginx along with other objects, as shown below with the command k get all -n ingress-nginx:

As you can see above, the LoadBalancer type service's external IP has the value <pending>. So, clients outside the cluster still cannot access the application pod. Why is that & what am I missing when deploying the Nginx Ingress Controller on a bare-metal machine? The goal is to have an external IP that can be used to access the application pod from outside the cluster; how can I achieve that?

===== Update =====

Based on the answer below from @Dawid Kruk, I decided to use the k3s default Traefik Ingress Controller. So, I deleted all the deployed Nginx Ingress Controller resources with k delete all --all -n ingress-nginx. Then, I checked the Traefik Ingress related LoadBalancer type service:

The external IP of that Traefik service is exactly my Raspberry Pi's IP address! So, I added this IP to /etc/hosts to map it to the hostname defined in my Ingress object:

192.168.10.203 myapp.com

I opened a browser & used the address http://myapp.com; with the routing rules defined in my Ingress object (see the manifest for my ingress above), I hoped I could see my deployed web application now. But I get 404 Page Not Found. What am I missing now to access my deployed application?

Another side question: I noticed that when I check the deployed Ingress object, its IP address is empty. Am I supposed to see an IP address for this object or not when the Traefik Ingress Controller takes effect?

Another issue: Now, when I re-deploy my ingress manifest with k apply -f ingress.yaml, I get the error:

Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress" ...
for: "ingress.yaml": error when patching "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found

It looks like even though I decided to use the Traefik Ingress Controller, I still need to install the Nginx Ingress Controller. I am confused now; can anyone explain it?
### Response: I'm not a K3S expert, but I think I found a piece of documentation that addresses your issue. Take a look:

Service Load Balancer

Any service load balancer (LB) can be used in your K3s cluster. By default, K3s provides a load balancer known as ServiceLB (formerly Klipper Load Balancer) that uses available host ports. Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain Pending until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration.

How the Service LB Works

The ServiceLB controller watches Kubernetes Services with the spec.type field set to LoadBalancer. For each LoadBalancer Service, a DaemonSet is created in the kube-system namespace. This DaemonSet in turn creates Pods with a svc- prefix, on each node. These Pods use iptables to forward traffic from the Pod's NodePort to the Service's ClusterIP address and port. If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's status.loadBalancer.ingress address list. Otherwise, the node's internal IP is used. If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service. It is possible to expose multiple Services on the same node, as long as they use different ports. If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending.

-- Docs.k3s.io: Networking

As a possible solution, I'd recommend using Traefik, as it is the default Ingress controller within K3S. The Pending status on your LoadBalancer is most likely caused by another service already using that port (Traefik). If you still wish to use NGINX, the same documentation page explains how you can disable Traefik.

UPDATE

I'd be more careful about deleting resources the way you did. The following command:

k delete all --all -n ingress-nginx

will not delete all of the resources that were created. The better way, in my opinion, is to use the same manifest that you used to create them, and instead of:

kubectl create -f ...

use:

kubectl delete -f ...

I assume that you did not modify your Ingress definition, hence you receive the error and kubectl get ingress shows incorrect results. What you will need to do:

spec:
  ingressClassName: nginx # <-- DELETE IT OR CHANGE TO "traefik"

Either deleting or changing it should work, as traefik is set to be the default IngressClass for this setup.
I have a k3s (lightweight k8s) cluster running on my Raspberry Pi. So I am not using any cloud-hosted cluster, but a bare-metal one on my Raspberry Pi. I have deployed an application with this manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: bashofmann/rancher-demo:1.0.0
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP

I also created a service to forward traffic to the application pod. Its manifest is:

apiVersion: v1
kind: Service
metadata:
  name: demo-app-svc
  namespace: myapp
spec:
  selector:
    app: hello-world
  ports:
  - name: web
    protocol: TCP
    port: 31113
    targetPort: 8080

Then, I created an Ingress for the routing rules:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: demo-app-svc
            port:
              number: 31113

I successfully deployed the above application pod, service & Ingress to my k3s cluster. As the manifests indicate, they are under the namespace myapp. The next thing I would like to do is to deploy the Kubernetes Nginx Ingress Controller so that clients outside the cluster are able to access the deployed application. So, I deployed it with:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml

The above command successfully deployed the Ingress Controller under the namespace ingress-nginx along with other objects, as shown below with the command k get all -n ingress-nginx:

As you can see above, the LoadBalancer type service's external IP has the value <pending>. So, clients outside the cluster still cannot access the application pod. Why is that & what am I missing when deploying the Nginx Ingress Controller on a bare-metal machine? The goal is to have an external IP that can be used to access the application pod from outside the cluster; how can I achieve that?

===== Update =====

Based on the answer below from @Dawid Kruk, I decided to use the k3s default Traefik Ingress Controller. So, I deleted all the deployed Nginx Ingress Controller resources with k delete all --all -n ingress-nginx. Then, I checked the Traefik Ingress related LoadBalancer type service:

The external IP of that Traefik service is exactly my Raspberry Pi's IP address! So, I added this IP to /etc/hosts to map it to the hostname defined in my Ingress object:

192.168.10.203 myapp.com

I opened a browser & used the address http://myapp.com; with the routing rules defined in my Ingress object (see the manifest for my ingress above), I hoped I could see my deployed web application now. But I get 404 Page Not Found. What am I missing now to access my deployed application?

Another side question: I noticed that when I check the deployed Ingress object, its IP address is empty. Am I supposed to see an IP address for this object or not when the Traefik Ingress Controller takes effect?

Another issue: Now, when I re-deploy my ingress manifest with k apply -f ingress.yaml, I get the error:

Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress" ...
for: "ingress.yaml": error when patching "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found

It looks like even though I decided to use the Traefik Ingress Controller, I still need to install the Nginx Ingress Controller. I am confused now; can anyone explain it?