Dataset columns: input (string, 1 class), output (string, 30 to 28.3k characters), text (string, 151 to 31.8k characters), instruction (string, 49 to 30.3k characters).
###Instruction: I am using the non HA version of ArgoCD (v2.6.5) installed in a single node k3s cluster. The goal is to deploy a sample application together with kube-prometheus-stack, loki, tempo & minIO via Helm. However, when I create an "Application" in Github and reference it in Argocd, all of them are in "Out of sync" state. Once it tries to re-sync, they change the status to "Unknown". The installation of ArgoCD was done with the next command. (Basic install) kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml And, as example, the kube-prometheus-stack Application I create in Github looks this way: apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: kube-prometheus-stack namespace: argocd spec: project: default source: chart: kube-prometheus-stack repoURL: https://prometheus-community.github.io/helm-charts targetRevision: 44.4.1 helm: releaseName: kube-prometheus-stack destination: server: "https://kubernetes.default.svc" namespace: observability Any idea what I could be missing? Thanks! ### Response: Try changing: FROM repoURL: https://prometheus-community.github.io/helm-charts TO repoURL: git@github.com:prometheus-community/helm-charts.git OR repoURL: https://github.com/prometheus-community/helm-charts.git FROM targetRevision: 44.4.1 TO targetRevision: kube-prometheus-stack-44.4.1 And under the targetRevision field, add: path: charts/kube-prometheus-stack
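For reference, a sketch of the Application from the question with the suggested change applied, assuming the chart lives at charts/kube-prometheus-stack in the Git repository and that the Git tag for chart version 44.4.1 is kube-prometheus-stack-44.4.1; note that the chart: field is dropped, since path: identifies the chart when the source is a Git repository rather than a Helm repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    # Git repository instead of the Helm chart repository index
    repoURL: https://github.com/prometheus-community/helm-charts.git
    # Git tag corresponding to chart version 44.4.1
    targetRevision: kube-prometheus-stack-44.4.1
    path: charts/kube-prometheus-stack
    helm:
      releaseName: kube-prometheus-stack
  destination:
    server: https://kubernetes.default.svc
    namespace: observability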
###Instruction: I have superset installed via helmchart in my kubernetes environment, I took everything from official documentation and repository: https://github.com/apache/superset I'm trying to archive a data autorefresh of the dashboard every 12 hours via helmchart and not via the UI; I read that this can be done enabling superset cache so data will be cached for 12 hours and then dynamically refreshed and everyone that access superset UI can see the same values. My problem now is one.... I can see the cache configuration on the superset/config.py file: # Default cache for Superset objects CACHE_CONFIG: CacheConfig = {"CACHE_TYPE": "NullCache"} # Cache for datasource metadata and query results DATA_CACHE_CONFIG: CacheConfig = {"CACHE_TYPE": "NullCache"} # Cache for dashboard filter state (`CACHE_TYPE` defaults to `SimpleCache` when # running in debug mode unless overridden) FILTER_STATE_CACHE_CONFIG: CacheConfig = { "CACHE_DEFAULT_TIMEOUT": int(timedelta(days=90).total_seconds()), # should the timeout be reset when retrieving a cached value "REFRESH_TIMEOUT_ON_RETRIEVAL": True, } # Cache for explore form data state (`CACHE_TYPE` defaults to `SimpleCache` when # running in debug mode unless overridden) EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = { "CACHE_DEFAULT_TIMEOUT": int(timedelta(days=7).total_seconds()), # should the timeout be reset when retrieving a cached value "REFRESH_TIMEOUT_ON_RETRIEVAL": True, } As per documentation I'm using the configOverrides section of the helmchart to overwrite the default values and enable the cache of config, data, filter ad explore but I can't find any example of how to do it and everything I try always fail in helmrelease. I try to read the helmchart but looks that it take all the configOverrides section and I was not able to find where it overwrite those specific values. Some example of what I try to overwrite, for example enabling some flag works without problem: configOverrides: enable_flags: | FEATURE_FLAGS = { "DASHBOARD_NATIVE_FILTERS": True, "ENABLE_TEMPLATE_PROCESSING": True, "DASHBOARD_CROSS_FILTERS": True, "DYNAMIC_PLUGINS": True, "VERSIONED_EXPORT": True, "DASHBOARD_RBAC": True, } But if I try to overwrite one or more cache value it fail (config.py https://github.com/apache/superset/blob/master/superset/config.py), this is one of the different way I try to overwrite checking the helm value file, the template and the superser config.py (and checkign other articles): configOverrides: cache_config: | CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_cache_' } data_cache_config: | DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_data_' } filter_cache_config: | FILTER_STATE_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_filter_' } explore_cache_config: | EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_explore_' } Any help pls? Or redirect to some good documentation that has example! Ps the redis installation I have it's the default one created by the helmchart, I didn't change anything on it. 
### Response: TL;DR; your configOverrides should look like this: configOverrides: cache_config: | from datetime import timedelta from superset.superset_typing import CacheConfig CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_cache_' } DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_data_' } FILTER_STATE_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_filter_' } EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_explore_' } Details: After running a helm install with your settings, your config file will look a bit like this: import os from cachelib.redis import RedisCache ... CACHE_CONFIG = { 'CACHE_TYPE': 'redis', 'CACHE_DEFAULT_TIMEOUT': 300, 'CACHE_KEY_PREFIX': 'superset_', 'CACHE_REDIS_HOST': env('REDIS_HOST'), 'CACHE_REDIS_PORT': env('REDIS_PORT'), 'CACHE_REDIS_PASSWORD': env('REDIS_PASSWORD'), 'CACHE_REDIS_DB': env('REDIS_DB', 1), } DATA_CACHE_CONFIG = CACHE_CONFIG ... # Overrides # cache_config CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_cache_' } # data_cache_config DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_data_' } # enable_flags FEATURE_FLAGS = { "DASHBOARD_NATIVE_FILTERS": True, "ENABLE_TEMPLATE_PROCESSING": True, "DASHBOARD_CROSS_FILTERS": True, "DYNAMIC_PLUGINS": True, "VERSIONED_EXPORT": True, "DASHBOARD_RBAC": True, } # explore_cache_config EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_explore_' } # filter_cache_config FILTER_STATE_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_filter_' } When I looked at the pod logs, there were a lot of errors due to the function timedelta not being defined, here is a sample of the logs I can see: File "/app/pythonpath/superset_config.py", line 42, in <module> 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), NameError: name 'timedelta' is not defined The file in question, /app/pythonpath/superset_config.py , is loaded via an import here as mentioned in the comment at the top of the file. Notice that you're writing a fresh new .py file; which means that you need to add from datetime import timedelta at the top in the configOverrides section. 
However, since the doc in the helm chart states the following warning WARNING: the order is not guaranteed Files can be passed as helm --set-file configOverrides.my-override=my-file.py, and you clearly want to use the function timedelta, we must combine all three blocks under the same section like this: configOverrides: cache_config: | from datetime import timedelta CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_cache_' } DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_data_' } FILTER_STATE_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_filter_' } EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = { 'CACHE_TYPE': 'RedisCache', 'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()), 'CACHE_KEY_PREFIX': 'superset_explore_' } Furthermore, you wanted to use the type CacheConfig, so we should also include an import for it at the top.
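As a usage note, the combined override block is applied like any other values change. A minimal sketch, assuming the chart came from the official Apache Superset Helm repository and that both the release and the namespace are named superset (adjust to your environment), with the configOverrides block above saved in values.yaml:

helm repo add superset https://apache.github.io/superset
helm repo update
helm upgrade --install superset superset/superset \
  --namespace superset \
  --values values.yaml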
###Instruction: I am running airflow via MWAA on aws and the worker nodes are running k8s. The pods are getting scheduled just fine but I am trying to use pod_template_file with KubernetesPodOperator, it's giving me a bunch of uncertain behavior. My template file stored in S3 apiVersion: v1 kind: Pod metadata: name: app1 namespace: app1 spec: containers: - name: base image: "alpine:latest" command: ["/bin/sh"] args: ["-c", "while true; do echo hi>> /data/app.log; sleep 5; done"] volumeMounts: - name: persistent-storage mountPath: /data volumes: - name: persistent-storage persistentVolumeClaim: claimName: claim1 My DAG file t_1 = KubernetesPodOperator( task_id=job_name_1, namespace="app", name=job_name_1, get_logs=True, is_delete_operator_pod=False, in_cluster=False, config_file=kube_config_path, startup_timeout_seconds=240, cluster_context='test', pod_template_file="/opt/airflow/pod_template_file_example-1.yaml", dag=dag) When I go with this, I get an error that the pod spec is invalid as it's missing image field. This is surprising as image is present in pod-template. I also tried below, it works, but it totally ignores the pod-template file and spins up a alpine container and exits. So looks like it's totally ignoring the pod_template_file param. full_pod_spec = k8s.V1Pod( metadata=metadata_2, spec=k8s.V1PodSpec(containers=[ k8s.V1Container( name="base", image="alpine:latest", ) ], )) t_1 = KubernetesPodOperator( task_id=job_name_1, namespace="mlops", name=job_name_1, get_logs=True, is_delete_operator_pod=False, in_cluster=False, config_file=kube_config_path, startup_timeout_seconds=240, cluster_context='aws', full_pod_spec=full_pod_spec, pod_template_file="/opt/airflow/pod_template_file_example-1.yaml", dag=dag) What is the correct way to reference a pod_template_file in KubernetesPodOperator in airflow? References : medium ### Response: Never used MWAA on aws before. But your problem seems to be: Is your template file stored in S3 available to the DAG script file through path "/opt/airflow/pod_template_file_example-1.yaml"? Check out the docs for Difference between KubernetesPodOperator and Kubernetes object spec. So KubernetesPodOperator just creates the equivalent YAML/JSON object spec for the Pod, and it contains 4 types of arguments for this purpose with the following order of precedence: KubernetesPodOperator arguments, like name/namespace/image...; full pod spec, which is arg: full_pod_spec; pod template file, which is arg: pod_template_file; airflow connection;
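To make the path concern concrete, here is a hedged sketch of the operator call without full_pod_spec, so nothing with higher precedence overrides the template. It assumes the template YAML is uploaded next to the DAG in the S3 dags folder (so it is synced to the local dags directory), that kube_config_path and dag are defined as in the original DAG file, and that the import path matches your cncf-kubernetes provider version:

import os
# older provider versions instead use:
# from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

# Build the path relative to this DAG file instead of hard-coding /opt/airflow/...,
# so it points at wherever the scheduler and workers actually see the synced S3 folder.
template_path = os.path.join(os.path.dirname(__file__), "pod_template_file_example-1.yaml")

t_1 = KubernetesPodOperator(
    task_id="job_1",                  # illustrative task id
    namespace="app",
    name="job_1",
    get_logs=True,
    is_delete_operator_pod=False,
    in_cluster=False,
    config_file=kube_config_path,     # from the original DAG
    cluster_context="test",
    startup_timeout_seconds=240,
    pod_template_file=template_path,  # used as the pod base, since full_pod_spec is not set
    dag=dag,                          # from the original DAG
)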
###Instruction: I have a kafka topic that has 5 partitions. I have 5 pods currently consuming these 5 partitions. However due to a particular need, I need each pod to only consume from its assigned partition. But due to pods all having the identical configuration on kubernetes I can't tell each pod which partition to consumer. What's the best way to achieve this? Should I just have 5 independent deployments instead? That seems odd with all 5 having the same code. I've seen the use of stateful set, but that seems to rely on an index. Not sure how robust that approach is. ### Response: If you're able to change your deployment to a statefulset, each pod of your replica will contain a sequence number starting from 0. The podname is fixed like my-kafka-consumer-0, my-kafka-consumer-1, my-kafka-consumer-(N-1), and so on. Next you should use the podname as an environment variable inside your pod which can be read by your application. You can do it by set the env variable like this: env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name If this is done, your application should be able to read this information from the environment variables. This is needed to identify a consumer inside the consumer group. So the next thing what you're looking for it's called a static membership in your kafka client configuration. You can reach this by setting the group.id and the group.instance.id 'group.id': 'consumer-group' 'group.instance.id': '<MY_POD_NAME FROM THE ENVIRONMENT VARS>' Not tested, but i think this should do the trick.
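A hedged application-side sketch of how the injected pod name can feed the static membership settings, shown with the confluent-kafka Python client; the broker address and topic name are placeholders:

import os
from confluent_kafka import Consumer, TopicPartition

pod_name = os.environ["MY_POD_NAME"]        # e.g. my-kafka-consumer-3, set via the Downward API
ordinal = int(pod_name.rsplit("-", 1)[-1])  # the StatefulSet ordinal, 0..N-1

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",      # placeholder
    "group.id": "consumer-group",
    "group.instance.id": pod_name,          # static membership: one fixed identity per pod
})

# If a hard one-partition-per-pod guarantee is required, the ordinal can also be used
# to assign that pod's partition directly instead of relying on group rebalancing:
consumer.assign([TopicPartition("my-topic", ordinal)])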
###Instruction: I want to change the kube-apiserver log level using the --v parameter. I checked all the documents, and none of them gave the value range of this parameter. Checking the code, the value is of type int32; I tried 0, 1, 2, and even 9999, and all of them pass validation and start kube-apiserver normally. So I want to know the value range of this parameter. Is the value 9999 really meaningful? ### Response: 9999 isn't a meaningful value: only roughly the first 10 verbosity levels are actually used by the code (see the references below), so any larger number just behaves like the maximum. https://docs.openshift.com/container-platform/4.8/rest_api/editing-kubelet-log-level-verbosity.html#log-verbosity-descriptions_editing-kubelet-log-level-verbosity https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md I suggest not increasing log verbosity in the production environment; enable higher levels only when necessary, otherwise you may run into performance problems.
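For completeness, a sketch of one declarative way to set the flag, assuming a kubeadm-managed control plane (kubeadm v1beta3 API):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # rendered as --v=2 on the kube-apiserver command line
    v: "2"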
###Instruction: I am having a problem in my Kubernetes cluster. Currently I am running my Laravel application in kubernetes with success. Now I am trying to make the storage folder in my app a persistant volume, because it can be used to store images and stuff. My deployment looks like this now: apiVersion: apps/v1 kind: Deployment metadata: name: laravel-api-app namespace: my-project labels: app.kubernetes.io/name: laravel-api-app spec: replicas: 1 selector: matchLabels: app: laravel-api-app template: metadata: labels: app: laravel-api-app spec: containers: - name: laravel-api-app image: me/laravel-api:v1.0.0 ports: - name: laravel containerPort: 8080 imagePullPolicy: Always envFrom: - secretRef: name: laravel-api-secret - configMapRef: name: laravel-api-config volumeMounts: - name: storage mountPath: /var/www/html/storage imagePullSecrets: - name: regcred volumes: - name: storage persistentVolumeClaim: claimName: laravel-api-persistant-volume-claim As you can see my claim is mounted to the /var/www/html/storage folder. Now in my Dockerfile I set all my folders to the user nobody like this: USER nobody COPY --chown=nobody . /var/www/html However, using this results in the following folder rights in my pod (ls -la): drwxrwxrwx 1 www-data www-data 4096 Mar 14 18:24 . drwxr-xr-x 1 root root 4096 Feb 26 17:43 .. -rw-rw-rw- 1 nobody nobody 48 Mar 12 22:27 .dockerignore -rw-rw-rw- 1 nobody nobody 220 Mar 12 22:27 .editorconfig -rw-r--r-- 1 nobody nobody 718 Mar 14 18:22 .env -rw-rw-rw- 1 nobody nobody 660 Mar 14 18:22 .env.example -rw-rw-rw- 1 nobody nobody 718 Mar 14 12:10 .env.pipeline -rw-rw-rw- 1 nobody nobody 111 Mar 12 22:27 .gitattributes -rw-rw-rw- 1 nobody nobody 171 Mar 14 12:10 .gitignore drwxrwxrwx 2 nobody nobody 4096 Mar 14 12:30 .gitlab-ci-scripts -rw-rw-rw- 1 nobody nobody 2336 Mar 14 01:13 .gitlab-ci.yml -rw-rw-rw- 1 nobody nobody 174 Mar 12 22:27 .styleci.yml -rw-rw-rw- 1 nobody nobody 691 Mar 14 10:02 Makefile drwxrwxrwx 6 nobody nobody 4096 Mar 12 22:27 app -rwxrwxrwx 1 nobody nobody 1686 Mar 12 22:27 artisan drwxrwxrwx 1 nobody nobody 4096 Mar 12 22:27 bootstrap -rw-rw-rw- 1 nobody nobody 1476 Mar 12 22:27 composer.json -rw-rw-rw- 1 nobody nobody 261287 Mar 12 22:27 composer.lock drwxrwxrwx 2 nobody nobody 4096 Mar 14 12:10 config drwxrwxrwx 5 nobody nobody 4096 Mar 12 22:27 database drwxrwxrwx 5 nobody nobody 4096 Mar 13 09:45 docker -rw-rw-rw- 1 nobody nobody 569 Mar 14 12:27 docker-compose-test.yml -rw-rw-rw- 1 nobody nobody 584 Mar 14 12:27 docker-compose.yml -rw-rw-rw- 1 nobody nobody 1013 Mar 14 18:24 package.json -rw-rw-rw- 1 nobody nobody 1405 Mar 12 22:27 phpunit.xml drwxrwxrwx 5 nobody nobody 4096 Mar 14 18:23 public -rw-rw-rw- 1 nobody nobody 3496 Mar 12 22:27 readme.md drwxrwxrwx 6 nobody nobody 4096 Mar 12 22:27 resources drwxrwxrwx 2 nobody nobody 4096 Mar 12 22:27 routes drwxrwxrwx 2 nobody nobody 4096 Mar 12 22:27 scripts -rw-rw-rw- 1 nobody nobody 563 Mar 12 22:27 server.php drwxr-xr-x 2 root root 4096 Mar 14 18:18 storage drwxrwxrwx 4 nobody nobody 4096 Mar 12 22:27 tests drwxr-xr-x 38 nobody nobody 4096 Mar 14 18:22 vendor -rw-rw-rw- 1 nobody nobody 538 Mar 12 22:27 webpack.mix.js As you can see, my storage folder has root/root which I also want to be nobody/nobody. 
I thought about creating an initContainer like this: initContainers: - name: setup-storage image: busybox command: ['sh', '-c', '/path/to/setup-script.sh'] volumeMounts: - name: storage mountPath: /path/to/storage/directory With setup-script.sh containing: #!/bin/sh chown -R nobody:nobody /path/to/storage/directory chmod -R 755 /path/to/storage/directory But I have a feeling that there should be (or is) something much simpler to get the result I want. I already tried adding securityContext with id: 65534 like so: securityContext: runAsUser: 65534 runAsGroup: 65534 fsGroup: 65534 But that resulted in the same root/root owner/group. The last thing I tried was creating a initContainer like this: initContainers: - name: laravel-api-init image: me/laravel-api:v1.0.0 args: - /bin/bash - -c - cp -Rnp /var/www/html/storage/* /mnt imagePullPolicy: Always envFrom: - secretRef: name: laravel-api-secret - configMapRef: name: laravel-api-config volumeMounts: - name: storage mountPath: /mnt This "should" copy all the content to /mnt which is the mounted location for the storage and then start the real deployment which mounts the copied data in the app. Unfortunatly this returns the error: Init:ExitCode:127 kubernetes, which is weird, because both of those locations do exist. One other thing with this approach that should not happen (I don't know if it will) is that once the volume contains data from a previous session (maybe after server reboot), that it doesn't tamper with the already existing data of the app. In short So after this explanation and my tries, here is what I am trying to achieve. I want my Laravel application to have a Persistant Volume (the storage folder), so that I limit the developers of that Laravel app to a given storage. For instance, when I create a PV of 5GB, they cannot store more than 5GB of data for their application. This storage has to be persistant, so that after a server reboot, the storage is still there! Update Here is the updated yaml with security context: apiVersion: apps/v1 kind: Deployment metadata: name: laravel-api-app namespace: my-project labels: app.kubernetes.io/name: laravel-api-app spec: replicas: 1 selector: matchLabels: app: laravel-api-app template: metadata: labels: app: laravel-api-app spec: containers: - name: laravel-api-init image: docker.argoplan.nl/clients/opus-volvere/laravel-api/production:v1.0.0 args: - /bin/sh - -c - cp -Rnp /var/www/html/storage/* /mnt imagePullPolicy: Always envFrom: - secretRef: name: laravel-api-secret - configMapRef: name: laravel-api-config volumeMounts: - name: storage mountPath: /mnt securityContext: fsGroup: 65534 fsGroupChangePolicy: "OnRootMismatch" imagePullSecrets: - name: regcred volumes: - name: storage persistentVolumeClaim: claimName: laravel-api-persistant-volume-claim For debugging purpose I copied my initContainer as actual container, so I can see my container logs in ArgoCD. If is is an initContainer, I can't see any logs. 
Using the yaml above, I see this in the logs: cp: can't create directory '/mnt/app': Permission denied cp: can't create directory '/mnt/framework': Permission denied This is the live manifest, which apparantly does not contain the new security context, while I generated the app just now: apiVersion: v1 kind: Pod metadata: annotations: cni.projectcalico.org/containerID: 0a4ce0e873c92442fdaf1ac8a1313966bd995ae65471b34f70b9de2634edecf9 cni.projectcalico.org/podIP: 10.1.10.55/32 cni.projectcalico.org/podIPs: 10.1.10.55/32 creationTimestamp: '2023-03-17T09:17:58Z' generateName: laravel-api-app-74b7d9584c- labels: app: laravel-api-app pod-template-hash: 74b7d9584c name: laravel-api-app-74b7d9584c-4dc9h namespace: my-project ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: laravel-api-app-74b7d9584c uid: d2e2ab4d-0916-43fc-b294-3e5eb2778c0d resourceVersion: '4954636' uid: 12327d67-cdf9-4387-afe8-3cf536531dd2 spec: containers: - args: - /bin/sh - '-c' - cp -Rnp /var/www/html/storage/* /mnt envFrom: - secretRef: name: laravel-api-secret - configMapRef: name: laravel-api-config image: 'me/laravel-api:v1.0.0' imagePullPolicy: Always name: laravel-api-init resources: {} securityContext: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mnt name: storage - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8cfg8 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: regcred nodeName: tohatsu preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: storage persistentVolumeClaim: claimName: laravel-api-persistant-volume-claim - name: kube-api-access-8cfg8 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: '2023-03-17T09:17:58Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2023-03-17T09:17:58Z' message: 'containers with unready status: [laravel-api-init]' reason: ContainersNotReady status: 'False' type: Ready - lastProbeTime: null lastTransitionTime: '2023-03-17T09:17:58Z' message: 'containers with unready status: [laravel-api-init]' reason: ContainersNotReady status: 'False' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2023-03-17T09:17:58Z' status: 'True' type: PodScheduled containerStatuses: - containerID: >- containerd://eaf8e09f0e2aceec6cb26e09406518a5d9851f94dfb8f8be3ce3e65ee47e282c image: 'me/laravel-api:v1.0.0' imageID: >- me/laravel-api@secret lastState: terminated: containerID: >- containerd://eaf8e09f0e2aceec6cb26e09406518a5d9851f94dfb8f8be3ce3e65ee47e282c exitCode: 1 finishedAt: '2023-03-17T09:20:53Z' reason: Error startedAt: '2023-03-17T09:20:53Z' name: laravel-api-init ready: false restartCount: 5 started: false state: waiting: message: >- back-off 2m40s restarting failed container=laravel-api-init 
pod=laravel-api-app-74b7d9584c-4dc9h_my-project(12327d67-cdf9-4387-afe8-3cf536531dd2) reason: CrashLoopBackOff hostIP: 192.168.1.8 phase: Running podIP: 10.1.10.55 podIPs: - ip: 10.1.10.55 qosClass: BestEffort startTime: '2023-03-17T09:17:58Z' ### Response: You didn't mention your k8s version; my answer might not be suitable for you if you're running a Kubernetes version below v1.23. Kubernetes can set up the permissions for you: use fsGroup and fsGroupChangePolicy and k8s will take over the job for you. containers: - name: laravel-api-app image: me/laravel-api:v1.0.0 ports: - name: laravel containerPort: 8080 imagePullPolicy: Always envFrom: - secretRef: name: laravel-api-secret - configMapRef: name: laravel-api-config volumeMounts: - name: storage mountPath: /var/www/html/storage # this part is new securityContext: # user/group of nobody should have the highest possible id fsGroup: 65534 fsGroupChangePolicy: "OnRootMismatch" Related configuration specs from k8s
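One detail worth spelling out, since the live manifest in the question shows an empty securityContext: {} at the pod level: fsGroup and fsGroupChangePolicy are fields of the pod-level securityContext (spec.template.spec.securityContext), not of an individual container. A minimal sketch of the relevant part of the Deployment, reusing the names from the question:

spec:
  template:
    spec:
      securityContext:
        fsGroup: 65534                        # gid of "nobody"
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: laravel-api-app
          image: me/laravel-api:v1.0.0
          volumeMounts:
            - name: storage
              mountPath: /var/www/html/storage
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: laravel-api-persistant-volume-claim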
###Instruction: i got a configuration of a postgres deployment, something like this.. on the volumeMounts level volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres-persistance-storage-new on the volumes level volumes: - name: postgres-persistance-storage-new persistentVolumeClaim: claimName: postgres-pv-claim-new the PersistentVolumeClaim spec apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgres-pv-claim-new # name of PVC essential for identifying the storage data labels: app: postgres tier: database spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi My problem is the following : when i delete persistentVolumeClaim via a kubectl command, and then launch again my postgres deployment yaml spec, the persistentVolume seems to be always here, as logs in postgres container say : PostgreSQL Database directory appears to contain a database; Skipping initialization How is it possible ? ### Response: When you delete a PVC, if there is a resource that uses it (for example if the volume is attached to a Deployments with running Pods) this remains ACTIVE. This is the reason: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection When you reapply the YAML describing the Deployment, Kubernetes will upgrade to rolling-update mode. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources. https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/ This means that your "old" Pod will remain active until the "new" becomes Up&Running (but continuing to fail, the "old" will never be killed and consequently the PVC will never be destroyed, continuing to maintain the application configurations). To conclude, I suggest you delete the resources (postgresql Deployment in this case) that use the deleted PVC before re-installing them.
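A sketch of the suggested order of operations; the Deployment and manifest file names here are assumptions, substitute your own:

# 1. Remove the workload that still uses the claim
kubectl delete deployment postgres
# 2. The PVC is no longer protected and can actually be removed
kubectl delete pvc postgres-pv-claim-new
# 3. Check the PV: with reclaim policy Delete it disappears; with Retain the old data is still kept
kubectl get pv
# 4. Re-apply the PVC and Deployment manifests for a fresh start
kubectl apply -f postgres.yaml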
###Instruction: Our project is undergoing a refactor to a micro-services architecture, and we are currently considering different API gateway solutions. We did our research, looked at the official sites for several solutions, went over some technical comparisons of different solutions, and read articles about our top picks. So far our main contenders are Apachee APISIX and Kong, but we are quite torn between them and would like to get a general opinion from actual users. Below are outlined the different properties and requirements of the project, I would appreciate it if any of you can point out some pros and cons of a solution you are familiar with in regard to them, and it would be great if someone facing similar requirements could share their experience with actually integrating one. General Info The project is of medium scale, has an active user base, and sees daily use around the clock with an incoming traffic count of a few thousand per minute on the backend. The project is hosted in a private network, and no cloud services are utilized, so we are looking for a good on-prem solution. Looking for a rather lightweight solution. Technical Info and Requirements AD FS-based authentication. Significant reliance on JWT. Using WebSocket in some micro-services, specifically Socket.io. Kubernetes deployment, supported by Helm. Full-stack under Monorepo. Repository and CI/CD are hosted and managed on GitLab. The team is trained in several coding languages but prefers working mainly with Typescript as we use React for the front-end, and NestJS for the back-end. Thank you! ### Response: Both Kong and Apache APISIX are popular and feature-rich API gateway solutions. Choosing the right one depends on your specific requirements and use case. API Management Features: Both Kong and Apache APISIX provide a wide range of API management features including API authentication, rate limiting, caching, SSL/TLS termination, request/response transformations, and more. Scalability: Both solutions are built to scale horizontally and vertically. However, Apache APISIX uses a more lightweight and efficient architecture, making it a better option for high performance and low-latency workloads. Both solutions have a rich ecosystem of plugins and extensions, and can be installed and configured easily. In summary, for use cases with large-scale, high performance, and low-latency workloads, Apache APISIX might be a better fit. There has one comparison page may help you: https://api7.ai/apisix-vs-kong
Our project is undergoing a refactor to a micro-services architecture, and we are currently considering different API gateway solutions. We did our research, looked at the official sites for several solutions, went over some technical comparisons of different solutions, and read articles about our top picks. So far our main contenders are Apachee APISIX and Kong, but we are quite torn between them and would like to get a general opinion from actual users. Below are outlined the different properties and requirements of the project, I would appreciate it if any of you can point out some pros and cons of a solution you are familiar with in regard to them, and it would be great if someone facing similar requirements could share their experience with actually integrating one. General Info The project is of medium scale, has an active user base, and sees daily use around the clock with an incoming traffic count of a few thousand per minute on the backend. The project is hosted in a private network, and no cloud services are utilized, so we are looking for a good on-prem solution. Looking for a rather lightweight solution. Technical Info and Requirements AD FS-based authentication. Significant reliance on JWT. Using WebSocket in some micro-services, specifically Socket.io. Kubernetes deployment, supported by Helm. Full-stack under Monorepo. Repository and CI/CD are hosted and managed on GitLab. The team is trained in several coding languages but prefers working mainly with Typescript as we use React for the front-end, and NestJS for the back-end. Thank you!
It refers to the Kubernetes Horizontal Pod Autoscaler. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ Example: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli
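To make that concrete, maxReplicas is simply the upper bound the Horizontal Pod Autoscaler may scale one workload to. A minimal sketch (the Deployment name and CPU target are placeholders, not taken from the linked pipeline):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                 # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                   # the Deployment being autoscaled
  minReplicas: 1                   # never run fewer than 1 pod
  maxReplicas: 10                  # never run more than 10 pods of this Deployment
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # add pods when average CPU utilization exceeds 50%
So maxReplicas: 10 caps the number of replicas (pods) of that single workload at 10; it does not duplicate clusters, and it is not a limit on the total number of pods in the cluster.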
###Instruction: What does the maxReplicas property mean in the pipeline yaml in Azure in context of the k8s deployment? E.g. in this documentation the maxReplicas: 10 is mentioned. But there is no explanation about what it means. At least I was not able to find one. Would be grateful if someone will help me to find the documentation on that. I have two assumptions. First, it means that we need to duplicate pods. I.e. with the maxReplicas: 10 we may have up to 10 clusters with identical pods. Second assumption, the maxReplicas: 10 means that in a k8s cluster we can have no more than 10 pods. ### Response: It refers to the Kubernetes Horizontal Pod Autoscaler. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ Example: https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli
What does the maxReplicas property mean in the pipeline yaml in Azure in context of the k8s deployment? E.g. in this documentation the maxReplicas: 10 is mentioned. But there is no explanation about what it means. At least I was not able to find one. Would be grateful if someone will help me to find the documentation on that. I have two assumptions. First, it means that we need to duplicate pods. I.e. with the maxReplicas: 10 we may have up to 10 clusters with identical pods. Second assumption, the maxReplicas: 10 means that in a k8s cluster we can have no more than 10 pods.
From EKS 1.23 on a Container Storage Interface (CSI) driver is needed in order to get your PersistentVolumeClaims served by a PersistentVolume as you are used to from earlier EKS versions. The docs tell us what needs to be configured: Solution: Configure Amazon EBS CSI driver for working PersistentVolumes in EKS In essence we need to enable the AWS EBS CSI driver as an EKS addon. But beforehand we need to enable the IAM OIDC provider and create the IAM role for the EBS CSI driver. The easiest way to do both is to use eksctl (other ways like using plain aws cli or the AWS GUI are described in the docs). 1.) Install eksctl We assume here that the aws cli is installed and configured - and you have access to your EKS cluster. To use eksctl we need to install it first. On a Mac use brew like: brew tap weaveworks/tap brew install weaveworks/tap/eksctl or on Linux use: curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp sudo mv /tmp/eksctl /usr/local/bin 2.) Enable IAM OIDC provider A prerequisite for the EBS CSI driver to work is to have an existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. This IAM OIDC provider can be enabled with the following command: eksctl utils associate-iam-oidc-provider --region=eu-central-1 --cluster=YourClusterNameHere --approve 3.) Create Amazon EBS CSI driver IAM role Now having eksctl in place, create the IAM role: eksctl create iamserviceaccount \ --name ebs-csi-controller-sa \ --namespace kube-system \ --cluster YourClusterNameHere \ --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \ --approve \ --role-only \ --role-name AmazonEKS_EBS_CSI_DriverRole As you can see, AWS maintains a managed policy for us that we can simply use (available at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy). Only if you use encrypted EBS drives do you need to add additional configuration to the policy. The command... ...deploys an AWS CloudFormation stack that creates an IAM role, attaches the IAM policy to it, and annotates the existing ebs-csi-controller-sa service account with the Amazon Resource Name (ARN) of the IAM role. 4.) Add the Amazon EBS CSI add-on Now we can finally add the EBS CSI add-on. Therefore we also need the AWS Account id, which we can obtain by running aws sts get-caller-identity --query Account --output text (see Quick way to get AWS Account number from the AWS CLI tools?). Now the eksctl create addon command looks like this: eksctl create addon --name aws-ebs-csi-driver --cluster YourClusterNameHere --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/AmazonEKS_EBS_CSI_DriverRole --force Now your PersistentVolumeClaim should get the status Bound while an EBS volume gets created for you - and the Tekton Pipeline should run again.
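As a quick way to sanity-check the result, you can create a StorageClass that explicitly uses the CSI provisioner. This is only a sketch, since the pre-existing gp2 default class keeps working once the add-on is installed; the class name and volume type here are just examples:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi-gp3                        # example name
provisioner: ebs.csi.aws.com               # the driver installed by the add-on
parameters:
  type: gp3                                # EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer    # provision the volume in the AZ where the pod is scheduled
Reference it from the PVC via storageClassName: ebs-csi-gp3, or simply leave the PVC as it is and keep using the default gp2 class.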
###Instruction: We have a EKS setup provisioned where we use a Cloud Native Buildpacks based Tekton Pipeline our PipelineRuns are stuck and pending forever without getting a Pod resource. We created a PersistentVolumeClaim like this: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: buildpacks-source-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi Having a look into the events of this PVC after creation there are the following events indicating something is wrong with our EKS setup: The command kubectl describe pvc buildpacks-source-pvc gives the following event messages: Name: buildpacks-source-pvc Namespace: default StorageClass: gp2 Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com volume.kubernetes.io/selected-node: ip-999-99-99-99.eu-central-1.compute.internal volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Used By: affinity-assistant-0b3d266b91-0 affinity-assistant-53a7c08baf-0 affinity-assistant-a375f28de3-0 affinity-assistant-e8cb1a6e15-0 buildpacks-test-pipeline-run-9rz4l-fetch-repository-pod Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalProvisioning 3m43s (x561 over 143m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator What is this EBS CSI thing and how do we get our cluster working as before? ### Response: From EKS 1.23 on a Container Storage Interface (CSI) driver is needed in order to get your PersisentVolumeClaims served by a PersistentVolume as you are used to from earlier EKS versions. The docs tell us, what needs to be configured: Solution: Configure Amazon EBS CSI driver for working PersistentVolumes in EKS In essence we need to enable the AWS EBS CSI driver as an EKS addon. But beforehand we need to enable the IAM OIDC provider and create the IAM role for the EBS CSI driver. The easiest way to do both is to use eksctl (other ways like using plain aws cli or the AWS GUI are described in the docs). 1.) Install eksctl We assume here that the aws cli is installed and configured - and you have access to your EKS cluster. To use eksctl we need to install it first. On a Mac use brew like: brew tap weaveworks/tap brew install weaveworks/tap/eksctl or on Linux use: curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp sudo mv /tmp/eksctl /usr/local/bin 2.) Enable IAM OIDC provider A prerequisite for the EBS CSI driver to work is to have an existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. This IAM OIDC provider can be enabled with the following command: eksctl utils associate-iam-oidc-provider --region=eu-central-1 --cluster=YourClusterNameHere --approve 3.) Create Amazon EBS CSI driver IAM role Now having eksctl in place, create the IAM role: eksctl create iamserviceaccount \ --name ebs-csi-controller-sa \ --namespace kube-system \ --cluster YourClusterNameHere \ --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \ --approve \ --role-only \ --role-name AmazonEKS_EBS_CSI_DriverRole As you can see AWS maintains a managed policy for us we can simply use (AWS maintains a managed policy, available at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy). 
Only if you use encrypted EBS drives do you need to add additional configuration to the policy. The command... ...deploys an AWS CloudFormation stack that creates an IAM role, attaches the IAM policy to it, and annotates the existing ebs-csi-controller-sa service account with the Amazon Resource Name (ARN) of the IAM role. 4.) Add the Amazon EBS CSI add-on Now we can finally add the EBS CSI add-on. Therefore we also need the AWS Account id, which we can obtain by running aws sts get-caller-identity --query Account --output text (see Quick way to get AWS Account number from the AWS CLI tools?). Now the eksctl create addon command looks like this: eksctl create addon --name aws-ebs-csi-driver --cluster YourClusterNameHere --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/AmazonEKS_EBS_CSI_DriverRole --force Now your PersistentVolumeClaim should get the status Bound while an EBS volume gets created for you - and the Tekton Pipeline should run again.
We have a EKS setup provisioned where we use a Cloud Native Buildpacks based Tekton Pipeline our PipelineRuns are stuck and pending forever without getting a Pod resource. We created a PersistentVolumeClaim like this: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: buildpacks-source-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi Having a look into the events of this PVC after creation there are the following events indicating something is wrong with our EKS setup: The command kubectl describe pvc buildpacks-source-pvc gives the following event messages: Name: buildpacks-source-pvc Namespace: default StorageClass: gp2 Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com volume.kubernetes.io/selected-node: ip-999-99-99-99.eu-central-1.compute.internal volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Used By: affinity-assistant-0b3d266b91-0 affinity-assistant-53a7c08baf-0 affinity-assistant-a375f28de3-0 affinity-assistant-e8cb1a6e15-0 buildpacks-test-pipeline-run-9rz4l-fetch-repository-pod Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalProvisioning 3m43s (x561 over 143m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator What is this EBS CSI thing and how do we get our cluster working as before?
Try using the jdbc:mariadb://host/database format, i.e. jdbc:mariadb://mariadb-galera.default.svc.cluster.local/bitnami_keycloak, as the KC_DB_URL value. See https://www.keycloak.org/server/containers (chapter “Relevant options”) and https://github.com/keycloak/keycloak/blob/fb315b57c3c308d5d5e6646b8cce1f86abf1d523/docs/tests-db.md#mariadb (“Run tests:” step).
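Applied to the values file from the question, only the KC_DB_URL entry needs to change; a sketch of the relevant snippet (the port can be stated explicitly or omitted, since 3306 is the MariaDB default):
extraEnvVars:
  - name: KC_DB
    value: 'mariadb'
  - name: KC_DB_URL
    value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local:3306/bitnami_keycloak'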
###Instruction: I am trying to install using Helm Chart Repository image of Keycloak so that MariaDB Galera is used as database. Installation helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update helm upgrade keycloak bitnami/keycloak --create-namespace --install --namespace default --values values-keycloak.yaml --version 13.3.0 **values-keycloak.yaml ** global: storageClass: "hcloud-volumes" auth: adminUser: user adminPassword: "user" tls: enabled: true autoGenerated: true production: true extraEnvVars: - name: KC_DB value: 'mariadb' - name: KC_DB_URL value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;' replicaCount: 1 service: type: ClusterIP ingress: enabled: true hostname: example.com annotations: cert-manager.io/cluster-issuer: letsencrypt-staging kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-buffer-size: 128k tls: true postgresql: enabled: false externalDatabase: host: "mariadb-galera.default.svc.cluster.local" port: 3306 user: bn_keycloak database: bitnami_keycloak password: "password" Error kubectl logs -n default keycloak-0 keycloak 23:50:06.59 keycloak 23:50:06.59 Welcome to the Bitnami keycloak container keycloak 23:50:06.60 Subscribe to project updates by watching https://github.com/bitnami/containers keycloak 23:50:06.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues keycloak 23:50:06.60 keycloak 23:50:06.60 INFO ==> ** Starting keycloak setup ** keycloak 23:50:06.62 INFO ==> Validating settings in KEYCLOAK_* env vars... keycloak 23:50:06.66 INFO ==> Trying to connect to PostgreSQL server mariadb-galera.default.svc.cluster.local... keycloak 23:50:06.69 INFO ==> Found PostgreSQL server listening at mariadb-galera.default.svc.cluster.local:3306 keycloak 23:50:06.70 INFO ==> Configuring database settings keycloak 23:50:06.78 INFO ==> Enabling statistics keycloak 23:50:06.79 INFO ==> Configuring http settings keycloak 23:50:06.82 INFO ==> Configuring hostname settings keycloak 23:50:06.83 INFO ==> Configuring cache count keycloak 23:50:06.85 INFO ==> Configuring log level keycloak 23:50:06.89 INFO ==> Configuring proxy keycloak 23:50:06.91 INFO ==> Configuring Keycloak HTTPS settings keycloak 23:50:06.94 INFO ==> ** keycloak setup finished! ** keycloak 23:50:06.96 INFO ==> ** Starting keycloak ** Appending additional Java properties to JAVA_OPTS: -Djgroups.dns.query=keycloak-headless.default.svc.cluster.local Changes detected in configuration. Updating the server image. Updating the configuration and installing your custom providers, if any. Please wait. 2023-03-18 23:50:13,551 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice 2023-03-18 23:50:14,494 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice 2023-03-18 23:50:25,703 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 15407ms Server configuration updated and persisted. 
Run the following command to review the configuration: kc.sh show-config Next time you run the server, just run: kc.sh start --optimized -cf=/opt/bitnami/keycloak/conf/keycloak.conf 2023-03-18 23:50:28,160 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true 2023-03-18 23:50:30,398 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly 2023-03-18 23:50:31,267 WARN [io.agroal.pool] (agroal-11) Datasource '<default>': Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; 2023-03-18 23:50:31,269 WARN [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator] (JPA Startup Thread: keycloak-default) HHH000342: Could not obtain connection to query metadata: java.sql.SQLNonTransientConnectionException: Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:136) at org.mariadb.jdbc.client.impl.StandardClient.<init>(StandardClient.java:103) at org.mariadb.jdbc.Driver.connect(Driver.java:70) at org.mariadb.jdbc.MariaDbDataSource.getXAConnection(MariaDbDataSource.java:225) at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:232) at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:535) at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:516) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.net.UnknownHostException: mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:567) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327) at java.base/java.net.Socket.connect(Socket.java:633) at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:130) ... 11 more I would like to get the correct connection. Perhaps this link will help (https://passe-de-mode.uedasoft.com/tips/software/server/keycloak/keycloak01.html#conclusion), but I can't figure it out. ### Response: Try using jdbc:mariadb://host/database jdbc:mariadb://mariadb-galera.default.svc.cluster.local/bitnami_keycloak as KC_DB_URL value. https://www.keycloak.org/server/containers (chapter “Relevant options”) https://github.com/keycloak/keycloak/blob/fb315b57c3c308d5d5e6646b8cce1f86abf1d523/docs/tests-db.md#mariadb (“Run tests:” step)
I am trying to install using Helm Chart Repository image of Keycloak so that MariaDB Galera is used as database. Installation helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update helm upgrade keycloak bitnami/keycloak --create-namespace --install --namespace default --values values-keycloak.yaml --version 13.3.0 **values-keycloak.yaml ** global: storageClass: "hcloud-volumes" auth: adminUser: user adminPassword: "user" tls: enabled: true autoGenerated: true production: true extraEnvVars: - name: KC_DB value: 'mariadb' - name: KC_DB_URL value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;' replicaCount: 1 service: type: ClusterIP ingress: enabled: true hostname: example.com annotations: cert-manager.io/cluster-issuer: letsencrypt-staging kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-buffer-size: 128k tls: true postgresql: enabled: false externalDatabase: host: "mariadb-galera.default.svc.cluster.local" port: 3306 user: bn_keycloak database: bitnami_keycloak password: "password" Error kubectl logs -n default keycloak-0 keycloak 23:50:06.59 keycloak 23:50:06.59 Welcome to the Bitnami keycloak container keycloak 23:50:06.60 Subscribe to project updates by watching https://github.com/bitnami/containers keycloak 23:50:06.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues keycloak 23:50:06.60 keycloak 23:50:06.60 INFO ==> ** Starting keycloak setup ** keycloak 23:50:06.62 INFO ==> Validating settings in KEYCLOAK_* env vars... keycloak 23:50:06.66 INFO ==> Trying to connect to PostgreSQL server mariadb-galera.default.svc.cluster.local... keycloak 23:50:06.69 INFO ==> Found PostgreSQL server listening at mariadb-galera.default.svc.cluster.local:3306 keycloak 23:50:06.70 INFO ==> Configuring database settings keycloak 23:50:06.78 INFO ==> Enabling statistics keycloak 23:50:06.79 INFO ==> Configuring http settings keycloak 23:50:06.82 INFO ==> Configuring hostname settings keycloak 23:50:06.83 INFO ==> Configuring cache count keycloak 23:50:06.85 INFO ==> Configuring log level keycloak 23:50:06.89 INFO ==> Configuring proxy keycloak 23:50:06.91 INFO ==> Configuring Keycloak HTTPS settings keycloak 23:50:06.94 INFO ==> ** keycloak setup finished! ** keycloak 23:50:06.96 INFO ==> ** Starting keycloak ** Appending additional Java properties to JAVA_OPTS: -Djgroups.dns.query=keycloak-headless.default.svc.cluster.local Changes detected in configuration. Updating the server image. Updating the configuration and installing your custom providers, if any. Please wait. 2023-03-18 23:50:13,551 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice 2023-03-18 23:50:14,494 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice 2023-03-18 23:50:25,703 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 15407ms Server configuration updated and persisted. 
Run the following command to review the configuration: kc.sh show-config Next time you run the server, just run: kc.sh start --optimized -cf=/opt/bitnami/keycloak/conf/keycloak.conf 2023-03-18 23:50:28,160 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true 2023-03-18 23:50:30,398 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly 2023-03-18 23:50:31,267 WARN [io.agroal.pool] (agroal-11) Datasource '<default>': Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; 2023-03-18 23:50:31,269 WARN [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator] (JPA Startup Thread: keycloak-default) HHH000342: Could not obtain connection to query metadata: java.sql.SQLNonTransientConnectionException: Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:136) at org.mariadb.jdbc.client.impl.StandardClient.<init>(StandardClient.java:103) at org.mariadb.jdbc.Driver.connect(Driver.java:70) at org.mariadb.jdbc.MariaDbDataSource.getXAConnection(MariaDbDataSource.java:225) at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:232) at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:535) at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:516) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: java.net.UnknownHostException: mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak; at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:567) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327) at java.base/java.net.Socket.connect(Socket.java:633) at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:130) ... 11 more I would like to get the correct connection. Perhaps this link will help (https://passe-de-mode.uedasoft.com/tips/software/server/keycloak/keycloak01.html#conclusion), but I can't figure it out.
The reason why it is recommended to attach the AmazonEKS_CNI_Policy to a separate role used specifically for the Amazon VPC CNI add-on is to follow the principle of least privilege. The Amazon VPC CNI (Container Network Interface) is a plugin for Kubernetes that enables networking between pods and the rest of the cluster in a VPC (Virtual Private Cloud) environment. This plugin needs certain permissions to function properly, such as creating and managing network interfaces and route tables. By creating a separate role for the Amazon VPC CNI add-on, you can ensure that this plugin has only the necessary permissions to perform its specific tasks, and not other permissions that may be included in the AmazonEKSWorkerNodePolicy. This helps to reduce the risk of accidental or intentional misuse of privileges, and makes it easier to audit and manage permissions for different components of your cluster. Additionally, separating the Amazon VPC CNI permissions from the worker node IAM role can also help with troubleshooting, as it allows you to isolate issues related to the network plugin from other potential problems that may affect the worker nodes or other components of your cluster.
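In practice this separation is usually implemented with IAM Roles for Service Accounts (IRSA), so that only the aws-node service account used by the VPC CNI pods gets the CNI policy. As a hedged sketch, one way to express it in an eksctl ClusterConfig (cluster name and region are placeholders):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster               # placeholder
  region: eu-central-1           # placeholder
iam:
  withOIDC: true                 # IRSA requires the cluster's OIDC provider
  serviceAccounts:
    - metadata:
        name: aws-node           # the service account the VPC CNI DaemonSet runs as
        namespace: kube-system
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy   # only the CNI permissions
The worker-node role then only carries AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly.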
###Instruction: The EKS docs in the page Amazon EKS node IAM role state that before you create worker nodes, you must create a role with the following policies: AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy Regarding the last one, the docs state that: Rather than attaching the policy to this role however, we recommend that you attach the policy to a separate role used specifically for the Amazon VPC CNI add-on Can someone explain why is this recommended? ### Response: The reason why it is recommended to attach the AmazonEKS_CNI_Policy to a separate role used specifically for the Amazon VPC CNI add-on is to follow the principle of least privilege. The Amazon VPC CNI (Container Network Interface) is a plugin for Kubernetes that enables networking between pods and the rest of the cluster in a VPC (Virtual Private Cloud) environment. This plugin needs certain permissions to function properly, such as creating and managing network interfaces and route tables. By creating a separate role for the Amazon VPC CNI add-on, you can ensure that this plugin has only the necessary permissions to perform its specific tasks, and not other permissions that may be included in the AmazonEKSWorkerNodePolicy. This helps to reduce the risk of accidental or intentional misuse of privileges, and makes it easier to audit and manage permissions for different components of your cluster. Additionally, separating the Amazon VPC CNI permissions from the worker node IAM role can also help with troubleshooting, as it allows you to isolate issues related to the network plugin from other potential problems that may affect the worker nodes or other components of your cluster.
The EKS docs in the page Amazon EKS node IAM role state that before you create worker nodes, you must create a role with the following policies: AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy Regarding the last one, the docs state that: Rather than attaching the policy to this role however, we recommend that you attach the policy to a separate role used specifically for the Amazon VPC CNI add-on Can someone explain why is this recommended?
I'm assuming that you are referring to the kube-linter GitHub action, because that's the one referenced by the kube-linter documentation. Yes, that action can be given individual files: the directory parameter can be a single file, even though the name doesn't suggest this. See the documented parameters, in particular: directory (required) - Path of file or directory to scan, absolute or relative to the root of the repo. (Bold italics emphasis mine). The parameter is simply given to the kube-linter command line; see the linting step in the action.yml definition file: ./kube-linter $CONFIG lint "${{ inputs.directory }}" --format "${{ inputs.format }}" $CONFIG is set to --config <filename> if you provided a config parameter. In short, it acts exactly like running the tool locally, which explicitly states that it can take either an individual file or a directory: The path to your Kubernetes yaml file: kube-linter lint /path/to/yaml-file.yaml The path to a directory containing your Kubernetes yaml files: kube-linter lint /path/to/directory/containing/yaml-files/
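As a sketch, a workflow step that lints just one manifest could look like this (the repository layout, file name and version tag are assumptions for the example, not taken from your setup):
- name: Lint a single manifest
  uses: stackrox/kube-linter-action@v1         # the kube-linter GitHub action
  with:
    directory: deploy/my-app/deployment.yaml   # a single file works here, despite the parameter name
    # config: .kube-linter.yaml                # optional kube-linter configuration
If you only want to lint files changed in a pull request, you would have to compute that file list yourself (e.g. with a changed-files action) and feed the paths to this parameter.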
###Instruction: Can I lint only changes or pull requests in the kubernetes file instead of linting whole kubernetes files every time I make any changes in the kubernetes folder? I was trying this https://docs.kubelinter.io/#/?id=using-docker ### Response: I'm assuming that you are referring to the kube-linter GitHub action, because that's the one referenced by the kube-linter documentation. Yes, that action can be given individual files: the directory parameter can be a single file, even though the name doesn't suggest this. See the documented parameters, in particular: directory (required) - Path of file or directory to scan, absolute or relative to the root of the repo. (Bold italics emphasis mine). The parameter is simply given to the kube-linter command line; see the linting step in the action.yml definition file: ./kube-linter $CONFIG lint "${{ inputs.directory }}" --format "${{ inputs.format }}" $CONFIG is set to --config <filename> if you provided a config parameter. In short, it acts exactly like running the tool locally, which explicitly states that it can take either an individual file or a directory: The path to your Kubernetes yaml file: kube-linter lint /path/to/yaml-file.yaml The path to a directory containing your Kubernetes yaml files: kube-linter lint /path/to/directory/containing/yaml-files/
Can I lint only changes or pull requests in the kubernetes file instead of linting whole kubernetes files every time I make any changes in the kubernetes folder? I was trying this https://docs.kubelinter.io/#/?id=using-docker
Yes, you are right: you first need to clone the Helm chart repo and then build and apply the files. You can review this file for reference; although I am not using Helm there, the steps are mostly similar, and you might need to tweak the last step: https://github.com/harsh4870/node-js-aws-codebuild-basic-ci-cd/blob/main/Jenkinsfile
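A rough sketch of how such a stage could look; the repository URL, branch, credential IDs, chart path and release name below are placeholders, so treat this as a starting point rather than a drop-in solution:
pipeline {
  agent any
  stages {
    stage('deploy') {
      steps {
        // clone the Bitbucket repo that contains the chart
        git url: 'https://bitbucket.org/myteam/helm-charts.git', branch: 'main', credentialsId: 'bitbucket-creds'
        script {
          withKubeConfig([credentialsId: 'my_kubeconfig']) {
            sh '''
              helm upgrade --install my-app ./charts/my-app --namespace my-namespace --create-namespace
            '''
          }
        }
      }
    }
  }
}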
###Instruction: Is it possible to run a deployment job that downloads helm chart from bitbucket repository and applies the helm chart into kubernetes: pipeline { agent any stages { stage('test') { steps { script { withKubeConfig([credentialsId: 'my_kubeconfig']) { sh ''' helm upgrade --install // download and install helm chart here ''' } } } } } // } If possible I want first to download the helm chart from bitbucket repository and then using kubernetes service account to apply the chart. ### Response: Yes, you are right: you first need to clone the Helm chart repo and then build and apply the files. You can review this file for reference; although I am not using Helm there, the steps are mostly similar, and you might need to tweak the last step: https://github.com/harsh4870/node-js-aws-codebuild-basic-ci-cd/blob/main/Jenkinsfile
Is it possible to run a deployment job that downloads helm chart from bitbucket repository and applies the helm chart into kubernetes: pipeline { agent any stages { stage('test') { steps { script { withKubeConfig([credentialsId: 'my_kubeconfig']) { sh ''' helm upgrade --install // download and install helm chart here ''' } } } } } // } If possible I want first to download the helm chart from bitbucket repository and then using kubernetes service account to apply the chart.
Unfortunately it often happens that the Helm Chart repositories of an application aren't suitable for the environment in which you are going to install them. I suggest modifying the templates following the patterns in which they were written and sending PRs to the application's GitHub repository. Unless your customizations are "reasonable" ones that upstream would accept (rather than things that only complicate the application for your particular case), there's no other way than to do a "git diff" against the upstream chart before pulling in the updated repo.
###Instruction: so I am creating a system composed of different components that are installed via helm charts. Since I needed different customizations for each of the charts, I created my own separated repositories, copied the charts there and added customizations. The question is: how do I conveniently upgrade the charts versions, while keeping the customizations? For some of the components it was an easy task, since custom templates were moved to different folder and when upgrade is needed I simply replace old files with the new one. The issue is that in some charts I needed to alter the original files directly, for example in the middle of a template adding necessary stuff. So, during the upgrade of such component I have to go through all files, compare them, remember what changes are mine and then update the whole repository. So yes, you guessed it - it's a manual, inconvenient and error-prone task. I tried looking for good practices how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice. ### Response: Unfortunately it often happens that the Helm Chart repositories of an application aren't suitable for the environment in which you are going to install them. I suggest modifying the templates following the patterns in which they were written and sending PRs to the application's GitHub repository. Unless your customizations are "reasonable" ones that upstream would accept (rather than things that only complicate the application for your particular case), there's no other way than to do a "git diff" against the upstream chart before pulling in the updated repo.
so I am creating a system composed of different components that are installed via helm charts. Since I needed different customizations for each of the charts, I created my own separated repositories, copied the charts there and added customizations. The question is: how do I conveniently upgrade the charts versions, while keeping the customizations? For some of the components it was an easy task, since custom templates were moved to different folder and when upgrade is needed I simply replace old files with the new one. The issue is that in some charts I needed to alter the original files directly, for example in the middle of a template adding necessary stuff. So, during the upgrade of such component I have to go through all files, compare them, remember what changes are mine and then update the whole repository. So yes, you guessed it - it's a manual, inconvenient and error-prone task. I tried looking for good practices how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice.
Looks like you are on K8s version 1.24 or above. As mentioned in the error, you can only update a few fields in a StatefulSet, unlike a Deployment or Pod, so deleting and re-applying the StatefulSet with the new changes is the only solution. You can also create a new StatefulSet with a different name and keep the labels the same as the existing running one. Once the new StatefulSet is up & running you can remove the older one, and the Service will forward traffic to the new replicas; I am guessing there won't be any issues with data, as you mentioned it is fine for the pods to run in parallel. In case you have a requirement to reuse the existing PVCs to store the data, deleting the StatefulSet is the only solution I am seeing.
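A sketch of the second option: same labels and serviceName so the existing headless Service keeps matching the pods, but a new object name so the otherwise immutable field can be set (the -parallel suffix is just an example):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx-parallel               # new name, so this is a fresh object
spec:
  podManagementPolicy: Parallel    # allowed here because the StatefulSet is being created, not updated
  serviceName: "xxx"               # same headless service as before
  replicas: 10
  minReadySeconds: 0
  selector:
    matchLabels:
      app: xxx                     # same labels, so the Service still selects these pods
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
        - name: xxx
          image: xxx.com/xxx:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: dockerregistrykey
Once its pods are Ready, delete the old StatefulSet; alternatively just delete the original and re-apply your manifest with podManagementPolicy: Parallel added.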
###Instruction: I am using a StatefulSet to deploy my application and I have 10 replicas. Because each pod waits for the previous pod to be ready, it takes a long time to spin up my cluster. I found the podManagementPolicy option on the Kubernetes documentation. I want to set podManagementPolicy to Parallel so my pods don't wait for each other (which I don't need) and my build time is shorter. However, I am getting the following error when I try to set podManagementPolicy to Parallel: The StatefulSet "xxx" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden Here is the yaml file for my StatefulSet: apiVersion: apps/v1 kind: StatefulSet metadata: name: xxx spec: podManagementPolicy: Parallel selector: matchLabels: app: xxx serviceName: "xxx" replicas: 10 minReadySeconds: 0 template: metadata: labels: app: xxx spec: containers: - name: xxx image: xxx.com/xxx:latest imagePullPolicy: Always ports: - containerPort: 8000 imagePullSecrets: - name: dockerregistrykey I am using an Linode LKE cluster and the Kubernetes version is 1.25. ### Response: Looks like you are on K8s version 1.24 or above. As mentioned in the error, you can only update a few fields in a StatefulSet, unlike a Deployment or Pod, so deleting and re-applying the StatefulSet with the new changes is the only solution. You can also create a new StatefulSet with a different name and keep the labels the same as the existing running one. Once the new StatefulSet is up & running you can remove the older one, and the Service will forward traffic to the new replicas; I am guessing there won't be any issues with data, as you mentioned it is fine for the pods to run in parallel. In case you have a requirement to reuse the existing PVCs to store the data, deleting the StatefulSet is the only solution I am seeing.
I am using a StatefulSet to deploy my application and I have 10 replicas. Because each pod waits for the previous pod to be ready, it takes a long time to spin up my cluster. I found the podManagementPolicy option on the Kubernetes documentation. I want to set podManagementPolicy to Parallel so my pods don't wait for each other (which I don't need) and my build time is shorter. However, I am getting the following error when I try to set podManagementPolicy to Parallel: The StatefulSet "xxx" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden Here is the yaml file for my StatefulSet: apiVersion: apps/v1 kind: StatefulSet metadata: name: xxx spec: podManagementPolicy: Parallel selector: matchLabels: app: xxx serviceName: "xxx" replicas: 10 minReadySeconds: 0 template: metadata: labels: app: xxx spec: containers: - name: xxx image: xxx.com/xxx:latest imagePullPolicy: Always ports: - containerPort: 8000 imagePullSecrets: - name: dockerregistrykey I am using an Linode LKE cluster and the Kubernetes version is 1.25.
The error speaks for itself: the ServiceAccount istiod-service-account has no privileges on the extensions.istio.io/v1alpha1 WasmPlugin custom resources. The solution to your problem is documented here: https://github.com/istio/istio/issues/36886#issue-1107794465
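Whatever the exact fix for your install turns out to be in that issue (typically upgrading or re-applying the istiod chart so its ClusterRole covers the newer API groups), the permission being denied is an RBAC read on wasmplugins. Purely for illustration, a standalone sketch of a rule granting it; the object names are hypothetical and this is not a substitute for fixing the istiod ClusterRole itself:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istiod-wasmplugin-read          # hypothetical name
rules:
  - apiGroups: ["extensions.istio.io"]
    resources: ["wasmplugins"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istiod-wasmplugin-read          # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istiod-wasmplugin-read
subjects:
  - kind: ServiceAccount
    name: istiod-service-account
    namespace: istio-system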
###Instruction: I am deploying to version 1.16 but the pods are getting crashed below are the pod's error. istiod pod: 2023-03-21T11:58:09.768255Z info kube controller "extensions.istio.io/v1alpha1/WasmPlugin" is syncing... controller=crd-controller 2023-03-21T11:58:09.868998Z info kube controller "extensions.istio.io/v1alpha1/WasmPlugin" is syncing... controller=crd-controller 2023-03-21T11:58:09.887383Z info klog k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User "system:serviceaccount:istio-system:istiod-service-account" cannot list resource "wasmplugins" in API group "extensions.istio.io" at the cluster scope 2023-03-21T11:58:09.887472Z error watch error in cluster Kubernetes: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User "system:serviceaccount:istio-system:istiod-service-account" cannot list resource "wasmplugins" in API group "extensions.istio.io" at the cluster scope external-dns: time="2023-03-21T12:17:22Z" level=fatal msg="failed to sync cache: timed out waiting for the condition" Version istioctl version: client version: 1.17.1 control plane version: 1.16.2 data plane version: none kubectl version --short: Client Version: v1.24.10 Kustomize Version: v4.5.4 Server Version: v1.24.10-eks-48e63af ### Response: The error is speaking; the Service Account istiod-service-account has no privileges on the CRDs extensions.istio.io/v1alpha1/WasmPlugin. The solution to your problem is documented here: https://github.com/istio/istio/issues/36886#issue-1107794465
I am deploying to version 1.16 but the pods are getting crashed below are the pod's error. istiod pod: 2023-03-21T11:58:09.768255Z info kube controller "extensions.istio.io/v1alpha1/WasmPlugin" is syncing... controller=crd-controller 2023-03-21T11:58:09.868998Z info kube controller "extensions.istio.io/v1alpha1/WasmPlugin" is syncing... controller=crd-controller 2023-03-21T11:58:09.887383Z info klog k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User "system:serviceaccount:istio-system:istiod-service-account" cannot list resource "wasmplugins" in API group "extensions.istio.io" at the cluster scope 2023-03-21T11:58:09.887472Z error watch error in cluster Kubernetes: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User "system:serviceaccount:istio-system:istiod-service-account" cannot list resource "wasmplugins" in API group "extensions.istio.io" at the cluster scope external-dns: time="2023-03-21T12:17:22Z" level=fatal msg="failed to sync cache: timed out waiting for the condition" Version istioctl version: client version: 1.17.1 control plane version: 1.16.2 data plane version: none kubectl version --short: Client Version: v1.24.10 Kustomize Version: v4.5.4 Server Version: v1.24.10-eks-48e63af
$request_uri will contain the path with its arguments, but not the host. To get your desired redirect you probably want to use $host$request_uri. For examples of how $request_uri is used, see https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/ ; https://www.webhosting24.com/understanding-nginx-request_uri/ also explains it quite well.
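With that change, the snippet from the question becomes the following (only the rewrite line differs; since $host is example.com inside the if block, the result is https://www.example.com/<path>):
nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'example.com') {
    rewrite ^ https://www.$host$request_uri permanent;
  }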
###Instruction: I'm using NGINX (v4.5.2) on a Kubernetes cluster, and I'm trying to configure it to redirect non-www URLs to the www equivalent. For example... https://example.com -> https://www.example.com https://example.com/about -> http://www.example.com/about So this is what I've added to the annotations (I'll carry on using example.com as the URL host)... annotations: ... nginx.ingress.kubernetes.io/from-to-www-redirect: 'true' nginx.ingress.kubernetes.io/configuration-snippet: | if ($host = 'example.com' ) { rewrite ^ https://www.$request_uri permanent; } However, if I now navigate to https://example.com I get redirected to https://www.. The docs say $request_uri contains ... full original request URI (with arguments) ... so why is $request_uri apparently empty in my case? ### Response: $request_uri will contain the path with arguments. To get your desired redirect you probably want to use $host$request_uri. For some examples of usage of $request_uri you can search it on https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/ to see examples or https://www.webhosting24.com/understanding-nginx-request_uri/ also explains it quite well.
I'm using NGINX (v4.5.2) on a Kubernetes cluster, and I'm trying to configure it to redirect non-www URLs to the www equivalent. For example... https://example.com -> https://www.example.com https://example.com/about -> http://www.example.com/about So this is what I've added to the annotations (I'll carry on using example.com as the URL host)... annotations: ... nginx.ingress.kubernetes.io/from-to-www-redirect: 'true' nginx.ingress.kubernetes.io/configuration-snippet: | if ($host = 'example.com' ) { rewrite ^ https://www.$request_uri permanent; } However, if I now navigate to https://example.com I get redirected to https://www.. The docs say $request_uri contains ... full original request URI (with arguments) ... so why is $request_uri apparently empty in my case?
You cannot expose port 38412 via NodePort. If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport Take a look at this link to understand how to translate the port: Unable to expose SCTP server running in a kubernetes pod using NodePort Also, make sure you are using Calico as a network plugin (minimum version 3.3). Kubernetes 1.12 includes alpha Stream Control Transmission Protocol (SCTP) support. Calico v3.3 has been updated to support SCTP if included in your network policy spec.
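Concretely, with the Service from the question the SCTP listener is reachable from outside the cluster at <NodeIP>:31000 (the nodePort), not at 38412. A sketch with the mapping spelled out:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: sctp
      protocol: SCTP
      port: 38412        # in-cluster port on the Service's ClusterIP
      targetPort: 38412  # port the container actually listens on
      nodePort: 31000    # what external clients must dial; must stay within 30000-32767
If the host application insists on dialling 38412, you would either have to widen the API server's --service-node-port-range so that 38412 becomes a legal nodePort, or translate the port in front of the node as the linked answer describes.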
###Instruction: I have an application in Docker Container which connects with the host application using SCTP protocol. When this container is deployed on Kubernetes Pods, connectivity with this pod from another pod inside a cluster is working fine. I have tried exposing this pod using a Load Balancer Service and NodePort service externally. When the host application tries to connect to this pod, I am getting an intermittent "Connection Reset By Peer" error. Sometimes after 1st request itself and sometimes after 3rd request. I have tried other SCTP based demo containers other than my application but having the same issue, where after certain no. of request getting connection reset by peer error. So it isn't problem of my appliation. My application is listening to the correct port. Below is the output of the command "netstat -anp" inside the pod. Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 10.244.0.27:80 0.0.0.0:* LISTEN 4579/./build/bin/AM sctp 10.244.0.27:38412 LISTEN 4579/./build/bin/AM My Service file is given below: apiVersion: v1 kind: Service metadata: name: my-service spec: clusterIP: 10.100.0.2 selector: app: my-app type: NodePort ports: - name: sctp protocol: SCTP port: 38412 targetPort: 38412 nodePort : 31000 - name: tcp protocol: TCP port: 80 targetPort: 80 I have this whole setup on Minikube.I haven't used any CNI. I am stuck due to this. Am I missing something ? Since I am working with K8s for the last 2 weeks only. Please help with this issue, and if possible mention any resource regarding SCTP on Kubernetes, since I could find very little. The following is the tcpdump collected from inside the pod running the sctp connection. tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 20:59:02.410219 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100) 10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134] 20:59:02.410260 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324) amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342] 20:59:02.410308 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296) 10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO] 20:59:02.410348 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [COOKIE ACK] 20:59:02.410552 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100) 10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664] 20:59:02.410590 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324) amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521] 20:59:02.410640 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296) 10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO] 20:59:02.410673 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [COOKIE ACK] 20:59:04.643163 IP (tos 0x2,ECT(0), ttl 64, id 58512, offset 0, 
flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > host.minikube.internal.5369: sctp (1) [HB REQ] 20:59:05.155162 IP (tos 0x2,ECT(0), ttl 64, id 58513, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ] 20:59:05.411135 IP (tos 0x2,ECT(0), ttl 64, id 60101, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.41024: sctp (1) [HB REQ] 20:59:05.411293 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.41024 > amf-6584c544-cvvrs.31000: sctp (1) [ABORT] 20:59:06.179159 IP (tos 0x2,ECT(0), ttl 64, id 58514, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ] 20:59:06.403172 IP (tos 0x2,ECT(0), ttl 64, id 58515, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > host.minikube.internal.5369: sctp (1) [HB REQ] 20:59:06.695155 IP (tos 0x2,ECT(0), ttl 64, id 58516, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ] 20:59:06.695270 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > amf-6584c544-cvvrs.31000: sctp (1) [ABORT] 20:59:09.584088 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 116) 10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c] 20:59:09.584112 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [ABORT] 20:59:10.530610 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 40) 10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [SHUTDOWN] 20:59:10.530644 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [ABORT] The following is the tcpdump collected from the host trying to connect. 
tcpdump: listening on br-c54f52300570, link-type EN10MB (Ethernet), capture size 262144 bytes 02:29:02.410177 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100) charles-02.58648 > 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134] 02:29:02.410282 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324) 192.168.49.2.31000 > charles-02.58648: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342] 02:29:02.410299 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296) charles-02.58648 > 192.168.49.2.31000: sctp (1) [COOKIE ECHO] 02:29:02.410360 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.58648: sctp (1) [COOKIE ACK] 02:29:02.410528 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100) charles-02.54336 > 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664] 02:29:02.410610 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324) 192.168.49.2.31000 > charles-02.54336: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521] 02:29:02.410630 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296) charles-02.54336 > 192.168.49.2.31000: sctp (1) [COOKIE ECHO] 02:29:02.410686 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.54336: sctp (1) [COOKIE ACK] 02:29:04.643276 IP (tos 0x2,ECT(0), ttl 63, id 58512, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:04.643303 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:05.155288 IP (tos 0x2,ECT(0), ttl 63, id 58513, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:05.155322 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:06.179324 IP (tos 0x2,ECT(0), ttl 63, id 58514, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:06.179376 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:06.403290 IP (tos 0x2,ECT(0), ttl 63, id 58515, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:06.403332 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:09.584056 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 116) charles-02.58648 > 192.168.49.2.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c] 02:29:09.584132 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.58648: sctp (1) [ABORT] 02:29:10.530566 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 40) charles-02.54336 > 192.168.49.2.31000: sctp (1) [SHUTDOWN] 02:29:10.530668 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, 
flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.54336: sctp (1) [ABORT] ### Response: You cannot expose port 38412 via NodePort. If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport Take a look at this link to understand how to translate the port: Unable to expose SCTP server running in a kubernetes pod using NodePort Also, make sure you are using Calico as a network plugin (minimum version 3.3). Kubernetes 1.12 includes alpha Stream Control Transmission Protocol (SCTP) support. Calico v3.3 has been updated to support SCTP if included in your network policy spec.
I have an application in Docker Container which connects with the host application using SCTP protocol. When this container is deployed on Kubernetes Pods, connectivity with this pod from another pod inside a cluster is working fine. I have tried exposing this pod using a Load Balancer Service and NodePort service externally. When the host application tries to connect to this pod, I am getting an intermittent "Connection Reset By Peer" error. Sometimes after 1st request itself and sometimes after 3rd request. I have tried other SCTP based demo containers other than my application but having the same issue, where after certain no. of request getting connection reset by peer error. So it isn't problem of my appliation. My application is listening to the correct port. Below is the output of the command "netstat -anp" inside the pod. Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 10.244.0.27:80 0.0.0.0:* LISTEN 4579/./build/bin/AM sctp 10.244.0.27:38412 LISTEN 4579/./build/bin/AM My Service file is given below: apiVersion: v1 kind: Service metadata: name: my-service spec: clusterIP: 10.100.0.2 selector: app: my-app type: NodePort ports: - name: sctp protocol: SCTP port: 38412 targetPort: 38412 nodePort : 31000 - name: tcp protocol: TCP port: 80 targetPort: 80 I have this whole setup on Minikube.I haven't used any CNI. I am stuck due to this. Am I missing something ? Since I am working with K8s for the last 2 weeks only. Please help with this issue, and if possible mention any resource regarding SCTP on Kubernetes, since I could find very little. The following is the tcpdump collected from inside the pod running the sctp connection. tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 20:59:02.410219 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100) 10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134] 20:59:02.410260 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324) amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342] 20:59:02.410308 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296) 10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO] 20:59:02.410348 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [COOKIE ACK] 20:59:02.410552 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100) 10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664] 20:59:02.410590 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324) amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521] 20:59:02.410640 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296) 10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO] 20:59:02.410673 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [COOKIE ACK] 20:59:04.643163 IP (tos 0x2,ECT(0), ttl 64, id 58512, offset 0, flags [DF], 
proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > host.minikube.internal.5369: sctp (1) [HB REQ] 20:59:05.155162 IP (tos 0x2,ECT(0), ttl 64, id 58513, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ] 20:59:05.411135 IP (tos 0x2,ECT(0), ttl 64, id 60101, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.41024: sctp (1) [HB REQ] 20:59:05.411293 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.41024 > amf-6584c544-cvvrs.31000: sctp (1) [ABORT] 20:59:06.179159 IP (tos 0x2,ECT(0), ttl 64, id 58514, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ] 20:59:06.403172 IP (tos 0x2,ECT(0), ttl 64, id 58515, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > host.minikube.internal.5369: sctp (1) [HB REQ] 20:59:06.695155 IP (tos 0x2,ECT(0), ttl 64, id 58516, offset 0, flags [DF], proto SCTP (132), length 92) amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ] 20:59:06.695270 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > amf-6584c544-cvvrs.31000: sctp (1) [ABORT] 20:59:09.584088 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 116) 10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c] 20:59:09.584112 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [ABORT] 20:59:10.530610 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 40) 10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [SHUTDOWN] 20:59:10.530644 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [ABORT] The following is the tcpdump collected from the host trying to connect. 
tcpdump: listening on br-c54f52300570, link-type EN10MB (Ethernet), capture size 262144 bytes 02:29:02.410177 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100) charles-02.58648 > 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134] 02:29:02.410282 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324) 192.168.49.2.31000 > charles-02.58648: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342] 02:29:02.410299 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296) charles-02.58648 > 192.168.49.2.31000: sctp (1) [COOKIE ECHO] 02:29:02.410360 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.58648: sctp (1) [COOKIE ACK] 02:29:02.410528 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100) charles-02.54336 > 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664] 02:29:02.410610 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324) 192.168.49.2.31000 > charles-02.54336: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521] 02:29:02.410630 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296) charles-02.54336 > 192.168.49.2.31000: sctp (1) [COOKIE ECHO] 02:29:02.410686 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.54336: sctp (1) [COOKIE ACK] 02:29:04.643276 IP (tos 0x2,ECT(0), ttl 63, id 58512, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:04.643303 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:05.155288 IP (tos 0x2,ECT(0), ttl 63, id 58513, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:05.155322 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:06.179324 IP (tos 0x2,ECT(0), ttl 63, id 58514, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:06.179376 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:06.403290 IP (tos 0x2,ECT(0), ttl 63, id 58515, offset 0, flags [DF], proto SCTP (132), length 92) 192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ] 02:29:06.403332 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36) charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT] 02:29:09.584056 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 116) charles-02.58648 > 192.168.49.2.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c] 02:29:09.584132 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.58648: sctp (1) [ABORT] 02:29:10.530566 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 40) charles-02.54336 > 192.168.49.2.31000: sctp (1) [SHUTDOWN] 02:29:10.530668 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, 
flags [DF], proto SCTP (132), length 36) 192.168.49.2.31000 > charles-02.54336: sctp (1) [ABORT]
I found out what I was doing to cause the issue in the first place, so I will document it just in case someone runs across the same scenario. Some more background on this, as it is very niche: due to some limitations we face, the worker nodes have 1 physical interface which is broken out into 2 additional sub-interfaces to allow for vlan-tagged traffic. This being the case, I wrote iproute policy rules to direct traffic between the logical interfaces. So in summation, eth2 (the one actually cabled up) has logical interfaces eth2, eth2.3 and eth2.4, all on different subnets. The issue I caused was writing rules for the primary interface, eth2. This was causing the kubelet traffic for liveness and readiness probes to be mis-routed and not actually follow the kube-proxy iptables rules and Calico's Felix routing rules. Once the policy rules for the primary interface were removed and the pods restarted (this last bit was more of my impatience), traffic flowed as it should and the pods came up and the probes finished satisfactorily.
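As a rough illustration of the fix described above (the subnet, table number and workload name here are placeholders, not values from the actual cluster), the idea is to list the policy rules, delete only the ones written for the primary interface eth2, and keep the rules for the tagged sub-interfaces:
# inspect the current policy rules and the custom routing tables
ip rule show
ip route show table all
# hypothetical example: drop the rule that forced eth2's own subnet into a custom table
ip rule del from 192.0.2.0/24 table 102
# restart the affected pods so the kubelet retries the probes right away (name may differ)
kubectl -n ingress-nginx rollout restart deployment ingress-nginx-controller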
###Instruction: I am building a Kubernetes cluster using kubeadm and have an issue with a single node. The worker nodes are running with sub-interfacing and policy based routing, which work as intended; however, out of the 4 worker nodes, if pods are moved to one of them, they fail liveness and readiness checks over http. I am using Kubernetes version 1.26.1, calico 3.25.0, metallb 0.13.9, and ingress-nginx 4.5.0. The cluster stood up with little issue; outside of getting the policy based routing on the nodes worked out. Calico and MetalLB stood up and work as well. The issue now is when I stand up the ingress-nginx controllers and force the pods on to a specific worker node. Standing them up and running on them on the other nodes works and I can curl the LoadBalancer IP; however, while testing, when the ingress-nginx pods are moved to a specific node, the liveness and readiness checks fail. Moving the pods back to any other worker node they come up and run just fine. I've been verifying the routes and iptables on all the nodes; as well as, watching the interfaces via tcpdump, but I've not narrowed down the issue. For the simple things: kernel parameters and loaded modules between the nodes are the same No logs in messages/crio is showing an issue with starting the pod the calico and metallb pods are working on the problem node I've rebuilt the cluster since noticing the issue, and prior builds cert-manager was having issues on the node, as well as a few other random test deployments I've tried From with the pods while they are running, I can hit external webs via curl (dns work and outbound traffic work) Using tcpdump on 'any' interface of the problem node, i can see the pod and the kubernetes internal api IP communicate I can't hit the pod's IP, service IP, or anything from the problem node or other member node the namespace events aren't showing any issues except for the liveness and readiness probes failing The endpoints for the services aren't being filled while on the problem node (although this isn't a surprise). Watching the traffic over the vxlan.calico interface isn't showing only one way traffic - there are responses to traffic that is making it through. Im at a lose on where to look for the root issue. This has been going on for over a week and I could use some help. ### Response: I found out what I was doing to cause the issue in the first place, so will document it just in case someone runs across the same scenario. Some more background on this as it is very niche. But to some limitations we face, the worker nodes have 1 physical interface which is broken out into 2 additional sub-interfaces to allow for vlan tagged traffic. This being the case, I wrote iproute policy rules to direct traffic between the logical interfaces. So in summation, eth2 (the one actually cabled up) has logical interfaces eth2, eth2.3 and eth2.4 all on different subnets. The issue I caused was writing rules for the primary interface, eth2. This was causing the kubelet traffic for liveness and readiness probes to be mis-routed and not actually follow the kube-proxy iptables rules and calico's felix routing rules. Once the policy rules for the primary interface were removed and the pods restarted (this last bit was more of my impatience) traffic flowed as it should and the pods came up and the probes finished satisfactorily.
I am building a Kubernetes cluster using kubeadm and have an issue with a single node. The worker nodes are running with sub-interfacing and policy based routing, which work as intended; however, out of the 4 worker nodes, if pods are moved to one of them, they fail liveness and readiness checks over http. I am using Kubernetes version 1.26.1, calico 3.25.0, metallb 0.13.9, and ingress-nginx 4.5.0. The cluster stood up with little issue; outside of getting the policy based routing on the nodes worked out. Calico and MetalLB stood up and work as well. The issue now is when I stand up the ingress-nginx controllers and force the pods on to a specific worker node. Standing them up and running on them on the other nodes works and I can curl the LoadBalancer IP; however, while testing, when the ingress-nginx pods are moved to a specific node, the liveness and readiness checks fail. Moving the pods back to any other worker node they come up and run just fine. I've been verifying the routes and iptables on all the nodes; as well as, watching the interfaces via tcpdump, but I've not narrowed down the issue. For the simple things: kernel parameters and loaded modules between the nodes are the same No logs in messages/crio is showing an issue with starting the pod the calico and metallb pods are working on the problem node I've rebuilt the cluster since noticing the issue, and prior builds cert-manager was having issues on the node, as well as a few other random test deployments I've tried From with the pods while they are running, I can hit external webs via curl (dns work and outbound traffic work) Using tcpdump on 'any' interface of the problem node, i can see the pod and the kubernetes internal api IP communicate I can't hit the pod's IP, service IP, or anything from the problem node or other member node the namespace events aren't showing any issues except for the liveness and readiness probes failing The endpoints for the services aren't being filled while on the problem node (although this isn't a surprise). Watching the traffic over the vxlan.calico interface isn't showing only one way traffic - there are responses to traffic that is making it through. Im at a lose on where to look for the root issue. This has been going on for over a week and I could use some help.
Please note that k8s pods are ephemeral by default, even if you deploy them as a StatefulSet. A StatefulSet gives you pods with defined names, e.g. consul-0 rather than the standard consul-<<random string>>. It also keeps track of where to deploy each pod in case you have different zones and need to deploy a pod in the same zone as its storage. What is missing in your manifest are the volumeMounts and volumeClaimTemplates sections. If you set your data directory to /home/consul/data, your manifest should look similar to this: apiVersion: apps/v1 kind: StatefulSet metadata: name: consul spec: serviceName: consul replicas: 1 selector: matchLabels: app: consul template: metadata: labels: app: consul spec: containers: - name: consul image: hashicorp/consul:latest imagePullPolicy: Always ports: - containerPort: 8500 args: - "agent" - "-server" - "-data-dir=/home/consul/data" volumeMounts: - name: consul-data mountPath: /home/consul/data volumeClaimTemplates: # volume claim template will create volume for you so you don't need to define PVC - metadata: name: consul-data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "my-storage-class" # you can get this with kubectl get sc resources: requests: storage: 1Gi Regarding your second problem with the Consul UI, I can't help much since I have never used Consul, but I can advise deploying the Helm chart once again and checking how the arguments are passed there.
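A quick way to verify that the data now survives pod restarts; this is only a sketch and assumes the StatefulSet above was applied in the default namespace (the PVC name follows the <claimTemplateName>-<podName> convention):
# the volumeClaimTemplate should have produced a bound PVC
kubectl get pvc consul-data-consul-0
# write a test key, delete the pod, then read the key back once consul-0 is recreated
kubectl exec consul-0 -- consul kv put test/key hello
kubectl delete pod consul-0
kubectl wait --for=condition=Ready pod/consul-0 --timeout=120s
kubectl exec consul-0 -- consul kv get test/key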
###Instruction: I am currently working on Spring micro-service(Eureka Implementation) project. To manage the distributed configuration we are using Consul KV. We are deploying services on Kubernetes cluster. The issue I am facing that, whenever I restart the cluster for Consul it deletes all the data of KV. I am creating Kubernetes cluster on local with docker image by having Deployment.yaml file. Please refer the below Deployment.yaml file for consul. apiVersion: v1 kind: Service metadata: name: consul labels: app: consul spec: clusterIP: None ports: - port: 8500 name: consul selector: app: consul --- apiVersion: apps/v1 kind: StatefulSet metadata: name: consul spec: serviceName: consul replicas: 1 selector: matchLabels: app: consul template: metadata: labels: app: consul spec: containers: - name: consul image: hashicorp/consul:latest imagePullPolicy: Always ports: - containerPort: 8500 --- apiVersion: v1 kind: Service metadata: name: consul-lb labels: app: consul spec: selector: app: consul type: NodePort ports: - port: 80 targetPort: 8500 After some research I found that we can specify the -data-dir location in config, so I have modified StatefulSet kind yaml as below: apiVersion: apps/v1 kind: StatefulSet metadata: name: consul spec: serviceName: consul replicas: 1 selector: matchLabels: app: consul template: metadata: labels: app: consul spec: containers: - name: consul image: hashicorp/consul:latest imagePullPolicy: Always ports: - containerPort: 8500 args: - "agent" - "-server" - "-data-dir=/home/consul/data" But after this Consul UI is not getting started, so wanted some help to resolve so it stores the data even after I delete Consul cluster. PS: I tried deploying cluster with helm, and it was persisting the data but I did not know how to make that cluster StatefulSet so I can refer it in other services with static url. Thanks! ### Response: Please note that k8s pods are by default ephemeral even if you deploy them as StatefulSet. StatefulSet is providing you option for pod with define name eg. consul-0 rather that standard consul-<<random string>>. It also keeps track of where to deploy pod in case you have different zones and you need to deploy pod in the same zone as storage. What is missing in your manifest is volumeMounts and volumeClaimTemplates sections . If you set your data directory to /home/consul/data your manifest should looks similar to this: apiVersion: apps/v1 kind: StatefulSet metadata: name: consul spec: serviceName: consul replicas: 1 selector: matchLabels: app: consul template: metadata: labels: app: consul spec: containers: - name: consul image: hashicorp/consul:latest imagePullPolicy: Always ports: - containerPort: 8500 args: - "agent" - "-server" - "-data-dir=/home/consul/data" volumeMounts: - name: consul-data mountPath: /home/consul/data volumeClaimTemplates: # volume claim template will create volume for you so you don't need to define PVC - metadata: name: consul-data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "my-storage-class" # you can get this with kubectl get sc resources: requests: storage: 1Gi Regarding you second problem with consul UI I would not help much since I never use consul but I can advise to deploy helm chart once again and check how arguments are passed there.
I am currently working on Spring micro-service(Eureka Implementation) project. To manage the distributed configuration we are using Consul KV. We are deploying services on Kubernetes cluster. The issue I am facing that, whenever I restart the cluster for Consul it deletes all the data of KV. I am creating Kubernetes cluster on local with docker image by having Deployment.yaml file. Please refer the below Deployment.yaml file for consul. apiVersion: v1 kind: Service metadata: name: consul labels: app: consul spec: clusterIP: None ports: - port: 8500 name: consul selector: app: consul --- apiVersion: apps/v1 kind: StatefulSet metadata: name: consul spec: serviceName: consul replicas: 1 selector: matchLabels: app: consul template: metadata: labels: app: consul spec: containers: - name: consul image: hashicorp/consul:latest imagePullPolicy: Always ports: - containerPort: 8500 --- apiVersion: v1 kind: Service metadata: name: consul-lb labels: app: consul spec: selector: app: consul type: NodePort ports: - port: 80 targetPort: 8500 After some research I found that we can specify the -data-dir location in config, so I have modified StatefulSet kind yaml as below: apiVersion: apps/v1 kind: StatefulSet metadata: name: consul spec: serviceName: consul replicas: 1 selector: matchLabels: app: consul template: metadata: labels: app: consul spec: containers: - name: consul image: hashicorp/consul:latest imagePullPolicy: Always ports: - containerPort: 8500 args: - "agent" - "-server" - "-data-dir=/home/consul/data" But after this Consul UI is not getting started, so wanted some help to resolve so it stores the data even after I delete Consul cluster. PS: I tried deploying cluster with helm, and it was persisting the data but I did not know how to make that cluster StatefulSet so I can refer it in other services with static url. Thanks!
My cloudflared pod is running under the hm-cloudflared namespace. So I can get the node name by: kubectl get pods -o wide -n hm-cloudflared NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cloudflared-7cdf78df46-x5fb7 0/1 CrashLoopBackOff 13 (93s ago) 26m 10.42.0.82 lima-rancher-desktop <none> <none> Once you have the node name that the pod is running on, you can SSH into the Kubernetes node with kubectl-plugins: # Install kubectl-plugins git clone https://github.com/luksa/kubectl-plugins $HOME/kubectl-plugins export PATH=$PATH:$HOME/kubectl-plugins # SSH into the Kubernetes node by kubectl-plugins kubectl ssh node lima-rancher-desktop Inside the Kubernetes node, based on https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size#non-bsd, I can increase the UDP receive buffer size by: sysctl -w net.core.rmem_max=2500000 This command increases the maximum receive buffer size to roughly 2.5 MB. Now just restart the cloudflared pod, and the issue should be gone! Hopefully this saves some people time in the future!
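Note that sysctl -w only lasts until the node/VM reboots. A minimal sketch for making the setting persistent follows; the file name is an assumption, and on Rancher Desktop the Lima VM can be recreated, in which case a provisioning script would be needed instead:
# run inside the node / Lima VM
echo 'net.core.rmem_max=2500000' | sudo tee /etc/sysctl.d/98-udp-buffer.conf
sudo sysctl --system
# verify the value took effect
sysctl net.core.rmem_max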
###Instruction: I have a local Kubernetes created by Rancher Desktop. I have deployed a named Cloudflared Tunnel based on this tutorial. I recently started to get error: failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details. Note this does not affect the actual function of Cloudflared Tunnel, which is more like a warning. However, I do hope to fix it. I have read the content in the link. However, this is running in a pod, so I am not sure how to fix it. Below is full log: 2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF Starting tunnel tunnelID=c9aa4140-fee8-4862-a479-3c1faacbd816 2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF Version 2023.3.1 2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF GOOS: linux, GOVersion: go1.19.7, GoArch: arm64 2023-03-18 00:27:51.451Z 2023-03-18T00:27:51Z INF Settings: map[config:/etc/cloudflared/config/config.yaml cred-file:/etc/cloudflared/creds/credentials.json credentials-file:/etc/cloudflared/creds/credentials.json metrics:0.0.0.0:2000 no-autoupdate:true] 2023-03-18 00:27:51.453Z 2023-03-18T00:27:51Z INF Generated Connector ID: a2d07b8a-3343-4b28-bbb5-a0cc951d5093 2023-03-18 00:27:51.453Z 2023-03-18T00:27:51Z INF Initial protocol quic 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z INF ICMP proxy will use 10.42.0.32 as source for IPv4 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z INF ICMP proxy will use fe80::3c91:31ff:fe74:68ee in zone eth0 as source for IPv6 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z WRN The user running cloudflared process has a GID (group ID) that is not within ping_group_range. You might need to add that user to a group within that range, or instead update the range to encompass a group the user is already in by modifying /proc/sys/net/ipv4/ping_group_range. Otherwise cloudflared will not be able to ping this network error="Group ID 65532 is not between ping group 1 to 0" 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z WRN ICMP proxy feature is disabled error="cannot create ICMPv4 proxy: Group ID 65532 is not between ping group 1 to 0 nor ICMPv6 proxy: socket: permission denied" 2023-03-18 00:27:51.460Z 2023-03-18T00:27:51Z INF Starting Hello World server at 127.0.0.1:34545 2023-03-18 00:27:51.460Z 2023-03-18T00:27:51Z INF Starting metrics server on [::]:2000/metrics 2023-03-18 00:27:51.462Z 2023/03/18 00:27:51 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details. 2023-03-18 00:27:51.592Z 2023-03-18T00:27:51Z INF Connection ca329025-1f06-4f36-a8b2-27eda979345d registered with protocol: quic connIndex=0 ip=198.41.192.107 location=LAX 2023-03-18 00:27:51.760Z 2023-03-18T00:27:51Z INF Connection a25fdab3-adff-4be5-8eb3-c22d593dfbc5 registered with protocol: quic connIndex=1 ip=198.41.200.193 location=SJC 2023-03-18 00:27:52.670Z 2023-03-18T00:27:52Z INF Connection ef583d03-d123-4e8e-b8ad-37eed817d2da registered with protocol: quic connIndex=2 ip=198.41.200.113 location=SJC 2023-03-18 00:27:53.684Z 2023-03-18T00:27:53Z INF Connection 25609514-8c37-451e-b4ac-1fb9fba2b9b8 registered with protocol: quic connIndex=3 ip=198.41.192.37 location=LAX ### Response: My cloudflared pod is running under hm-cloudflared namesapce. 
So I can get the node name by: kubectl get pods -o wide -n hm-cloudflared NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cloudflared-7cdf78df46-x5fb7 0/1 CrashLoopBackOff 13 (93s ago) 26m 10.42.0.82 lima-rancher-desktop <none> <none> Once you have the node name that the pod is running on, you can SSH into the Kubernetes node with kubectl-plugins: # Install kubectl-plugins git clone https://github.com/luksa/kubectl-plugins $HOME/kubectl-plugins export PATH=$PATH:$HOME/kubectl-plugins # SSH into the Kubernetes node by kubectl-plugins kubectl ssh node lima-rancher-desktop Inside the Kubernetes node, based on https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size#non-bsd, I can increase the UDP receive buffer size by: sysctl -w net.core.rmem_max=2500000 This command increases the maximum receive buffer size to roughly 2.5 MB. Now just restart the cloudflared pod, and the issue should be gone! Hopefully this saves some people time in the future!
I have a local Kubernetes created by Rancher Desktop. I have deployed a named Cloudflared Tunnel based on this tutorial. I recently started to get error: failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details. Note this does not affect the actual function of Cloudflared Tunnel, which is more like a warning. However, I do hope to fix it. I have read the content in the link. However, this is running in a pod, so I am not sure how to fix it. Below is full log: 2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF Starting tunnel tunnelID=c9aa4140-fee8-4862-a479-3c1faacbd816 2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF Version 2023.3.1 2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF GOOS: linux, GOVersion: go1.19.7, GoArch: arm64 2023-03-18 00:27:51.451Z 2023-03-18T00:27:51Z INF Settings: map[config:/etc/cloudflared/config/config.yaml cred-file:/etc/cloudflared/creds/credentials.json credentials-file:/etc/cloudflared/creds/credentials.json metrics:0.0.0.0:2000 no-autoupdate:true] 2023-03-18 00:27:51.453Z 2023-03-18T00:27:51Z INF Generated Connector ID: a2d07b8a-3343-4b28-bbb5-a0cc951d5093 2023-03-18 00:27:51.453Z 2023-03-18T00:27:51Z INF Initial protocol quic 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z INF ICMP proxy will use 10.42.0.32 as source for IPv4 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z INF ICMP proxy will use fe80::3c91:31ff:fe74:68ee in zone eth0 as source for IPv6 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z WRN The user running cloudflared process has a GID (group ID) that is not within ping_group_range. You might need to add that user to a group within that range, or instead update the range to encompass a group the user is already in by modifying /proc/sys/net/ipv4/ping_group_range. Otherwise cloudflared will not be able to ping this network error="Group ID 65532 is not between ping group 1 to 0" 2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z WRN ICMP proxy feature is disabled error="cannot create ICMPv4 proxy: Group ID 65532 is not between ping group 1 to 0 nor ICMPv6 proxy: socket: permission denied" 2023-03-18 00:27:51.460Z 2023-03-18T00:27:51Z INF Starting Hello World server at 127.0.0.1:34545 2023-03-18 00:27:51.460Z 2023-03-18T00:27:51Z INF Starting metrics server on [::]:2000/metrics 2023-03-18 00:27:51.462Z 2023/03/18 00:27:51 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details. 2023-03-18 00:27:51.592Z 2023-03-18T00:27:51Z INF Connection ca329025-1f06-4f36-a8b2-27eda979345d registered with protocol: quic connIndex=0 ip=198.41.192.107 location=LAX 2023-03-18 00:27:51.760Z 2023-03-18T00:27:51Z INF Connection a25fdab3-adff-4be5-8eb3-c22d593dfbc5 registered with protocol: quic connIndex=1 ip=198.41.200.193 location=SJC 2023-03-18 00:27:52.670Z 2023-03-18T00:27:52Z INF Connection ef583d03-d123-4e8e-b8ad-37eed817d2da registered with protocol: quic connIndex=2 ip=198.41.200.113 location=SJC 2023-03-18 00:27:53.684Z 2023-03-18T00:27:53Z INF Connection 25609514-8c37-451e-b4ac-1fb9fba2b9b8 registered with protocol: quic connIndex=3 ip=198.41.192.37 location=LAX
I had exactly the same problem using Microk8s. I had the following plugins enabled: hostpath-storage and dns. I don't deploy Kong, but RabbitMQ (here's my example project). I got the following error: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 53s default-scheduler Successfully assigned default/hello-rabbit-server-0 to microk8s-vm Normal Pulled 52s kubelet Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine Normal Created 52s kubelet Created container setup-container Normal Started 52s kubelet Started container setup-container Normal Pulled 21s kubelet Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine Normal Created 21s kubelet Created container rabbitmq Normal Started 21s kubelet Started container rabbitmq Warning Unhealthy 3s kubelet Readiness probe failed: dial tcp 10.1.254.78:5672: connect: connection refused What fixed the issue for me was to enable the host-access addon in Microk8s: microk8s enable host-access Now the readiness probes are working fine.
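A small sketch for double-checking the fix; the namespace and deployment names are taken from the Kong question here and may differ in your setup:
# host-access should now show up among the enabled addons
microk8s status --wait-ready | grep -i host-access
# restart the workload so the probes are retried with the new networking in place
microk8s kubectl -n local-test-kong rollout restart deployment local-test-kong
microk8s kubectl -n local-test-kong get pods -w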
###Instruction: I have a Kong deployment. apiVersion: apps/v1 kind: Deployment metadata: name: local-test-kong labels: app: local-test-kong spec: replicas: 1 selector: matchLabels: app: local-test-kong strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 0 type: RollingUpdate template: metadata: labels: app: local-test-kong spec: automountServiceAccountToken: false containers: - envFrom: - configMapRef: name: kong-env-vars image: kong:2.6 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /bin/sh - -c - /bin/sleep 15 && kong quit livenessProbe: failureThreshold: 3 httpGet: path: /status port: status scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: proxy ports: - containerPort: 8000 name: proxy protocol: TCP - containerPort: 8100 name: status protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /status port: status scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: # ToDo limits: cpu: 256m memory: 256Mi requests: cpu: 256m memory: 256Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kong_prefix/ name: kong-prefix-dir - mountPath: /tmp name: tmp-dir - mountPath: /kong_dbless/ name: kong-custom-dbless-config-volume terminationGracePeriodSeconds: 30 volumes: - name: kong-prefix-dir - name: tmp-dir - configMap: defaultMode: 0555 name: kong-declarative name: kong-custom-dbless-config-volume I applied this YAML in GKE. Then i ran kubectl describe on its pod. ➜ kubectl get pods NAME READY STATUS RESTARTS AGE local-test-kong-678598ffc6-ll9s8 1/1 Running 0 25m ➜ kubectl describe pod/local-test-kong-678598ffc6-ll9s8 Name: local-test-kong-678598ffc6-ll9s8 Namespace: local-test-kong Priority: 0 Node: gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl/10.128.64.95 Start Time: Wed, 23 Nov 2022 00:12:56 +0800 Labels: app=local-test-kong pod-template-hash=678598ffc6 Annotations: kubectl.kubernetes.io/restartedAt: 2022-11-23T00:12:56+08:00 Status: Running IP: 10.128.96.104 IPs: IP: 10.128.96.104 Controlled By: ReplicaSet/local-test-kong-678598ffc6 Containers: proxy: Container ID: containerd://1bd392488cfe33dcc62f717b3b8831349e8cf573326add846c9c843c7bf15e2a Image: kong:2.6 Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70 Ports: 8000/TCP, 8100/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Wed, 23 Nov 2022 00:12:58 +0800 Ready: True Restart Count: 0 Limits: cpu: 256m memory: 256Mi Requests: cpu: 256m memory: 256Mi Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Environment Variables from: kong-env-vars ConfigMap Optional: false Environment: <none> Mounts: /kong_dbless/ from kong-custom-dbless-config-volume (rw) /kong_prefix/ from kong-prefix-dir (rw) /tmp from tmp-dir (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kong-prefix-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kong-custom-dbless-config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: kong-declarative Optional: false QoS Class: Guaranteed Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s 
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 25m default-scheduler Successfully assigned local-test-kong/local-test-kong-678598ffc6-ll9s8 to gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl Normal Pulled 25m kubelet Container image "kong:2.6" already present on machine Normal Created 25m kubelet Created container proxy Normal Started 25m kubelet Started container proxy ➜ I applied the same YAML in my localhost's MicroK8S (on MacOS) and then I ran kubectl describe on its pod. ➜ kubectl get pods NAME READY STATUS RESTARTS AGE local-test-kong-54cfc585cb-7grj8 1/1 Running 0 86s ➜ kubectl describe pod/local-test-kong-54cfc585cb-7grj8 Name: local-test-kong-54cfc585cb-7grj8 Namespace: local-test-kong Priority: 0 Node: microk8s-vm/192.168.64.5 Start Time: Wed, 23 Nov 2022 00:39:33 +0800 Labels: app=local-test-kong pod-template-hash=54cfc585cb Annotations: cni.projectcalico.org/podIP: 10.1.254.79/32 cni.projectcalico.org/podIPs: 10.1.254.79/32 kubectl.kubernetes.io/restartedAt: 2022-11-23T00:39:33+08:00 Status: Running IP: 10.1.254.79 IPs: IP: 10.1.254.79 Controlled By: ReplicaSet/local-test-kong-54cfc585cb Containers: proxy: Container ID: containerd://d60d09ca8b77ee59c80ea060dcb651c3e346c3a5f0147b0d061790c52193d93d Image: kong:2.6 Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70 Ports: 8000/TCP, 8100/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Wed, 23 Nov 2022 00:39:37 +0800 Ready: True Restart Count: 0 Limits: cpu: 256m memory: 256Mi Requests: cpu: 256m memory: 256Mi Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Environment Variables from: kong-env-vars ConfigMap Optional: false Environment: <none> Mounts: /kong_dbless/ from kong-custom-dbless-config-volume (rw) /kong_prefix/ from kong-prefix-dir (rw) /tmp from tmp-dir (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kong-prefix-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kong-custom-dbless-config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: kong-declarative Optional: false QoS Class: Guaranteed Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 92s default-scheduler Successfully assigned local-test-kong/local-test-kong-54cfc585cb-7grj8 to microk8s-vm Normal Pulled 90s kubelet Container image "kong:2.6" already present on machine Normal Created 90s kubelet Created container proxy Normal Started 89s kubelet Started container proxy Warning Unhealthy 68s kubelet Readiness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Warning Unhealthy 68s kubelet Liveness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ➜ It's the exact same deployment YAML. However, the deployment created inside GKE cluster are running all fine with no complaints. 
But, the deployment created inside my localhost microk8s (on MacOS) is showing probe failures. What could i be missing here while deploying to microk8s (on MacOS)? ### Response: I had exactly the same problem using Microk8s. I had the following plugins enabled: hostpath-storage and dns. I don't deploy Kong, but RabbitMQ (here's my example project). I got the following error: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 53s default-scheduler Successfully assigned default/hello-rabbit-server-0 to microk8s-vm Normal Pulled 52s kubelet Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine Normal Created 52s kubelet Created container setup-container Normal Started 52s kubelet Started container setup-container Normal Pulled 21s kubelet Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine Normal Created 21s kubelet Created container rabbitmq Normal Started 21s kubelet Started container rabbitmq Warning Unhealthy 3s kubelet Readiness probe failed: dial tcp 10.1.254.78:5672: connect: connection refused What fixed the issue for me was to enable the host-access addon in Microk8s: microk8s enable host-access Now the readiness probes are working fine.
I have a Kong deployment. apiVersion: apps/v1 kind: Deployment metadata: name: local-test-kong labels: app: local-test-kong spec: replicas: 1 selector: matchLabels: app: local-test-kong strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 0 type: RollingUpdate template: metadata: labels: app: local-test-kong spec: automountServiceAccountToken: false containers: - envFrom: - configMapRef: name: kong-env-vars image: kong:2.6 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /bin/sh - -c - /bin/sleep 15 && kong quit livenessProbe: failureThreshold: 3 httpGet: path: /status port: status scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: proxy ports: - containerPort: 8000 name: proxy protocol: TCP - containerPort: 8100 name: status protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /status port: status scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: # ToDo limits: cpu: 256m memory: 256Mi requests: cpu: 256m memory: 256Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kong_prefix/ name: kong-prefix-dir - mountPath: /tmp name: tmp-dir - mountPath: /kong_dbless/ name: kong-custom-dbless-config-volume terminationGracePeriodSeconds: 30 volumes: - name: kong-prefix-dir - name: tmp-dir - configMap: defaultMode: 0555 name: kong-declarative name: kong-custom-dbless-config-volume I applied this YAML in GKE. Then i ran kubectl describe on its pod. ➜ kubectl get pods NAME READY STATUS RESTARTS AGE local-test-kong-678598ffc6-ll9s8 1/1 Running 0 25m ➜ kubectl describe pod/local-test-kong-678598ffc6-ll9s8 Name: local-test-kong-678598ffc6-ll9s8 Namespace: local-test-kong Priority: 0 Node: gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl/10.128.64.95 Start Time: Wed, 23 Nov 2022 00:12:56 +0800 Labels: app=local-test-kong pod-template-hash=678598ffc6 Annotations: kubectl.kubernetes.io/restartedAt: 2022-11-23T00:12:56+08:00 Status: Running IP: 10.128.96.104 IPs: IP: 10.128.96.104 Controlled By: ReplicaSet/local-test-kong-678598ffc6 Containers: proxy: Container ID: containerd://1bd392488cfe33dcc62f717b3b8831349e8cf573326add846c9c843c7bf15e2a Image: kong:2.6 Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70 Ports: 8000/TCP, 8100/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Wed, 23 Nov 2022 00:12:58 +0800 Ready: True Restart Count: 0 Limits: cpu: 256m memory: 256Mi Requests: cpu: 256m memory: 256Mi Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Environment Variables from: kong-env-vars ConfigMap Optional: false Environment: <none> Mounts: /kong_dbless/ from kong-custom-dbless-config-volume (rw) /kong_prefix/ from kong-prefix-dir (rw) /tmp from tmp-dir (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kong-prefix-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kong-custom-dbless-config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: kong-declarative Optional: false QoS Class: Guaranteed Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s 
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 25m default-scheduler Successfully assigned local-test-kong/local-test-kong-678598ffc6-ll9s8 to gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl Normal Pulled 25m kubelet Container image "kong:2.6" already present on machine Normal Created 25m kubelet Created container proxy Normal Started 25m kubelet Started container proxy ➜ I applied the same YAML in my localhost's MicroK8S (on MacOS) and then I ran kubectl describe on its pod. ➜ kubectl get pods NAME READY STATUS RESTARTS AGE local-test-kong-54cfc585cb-7grj8 1/1 Running 0 86s ➜ kubectl describe pod/local-test-kong-54cfc585cb-7grj8 Name: local-test-kong-54cfc585cb-7grj8 Namespace: local-test-kong Priority: 0 Node: microk8s-vm/192.168.64.5 Start Time: Wed, 23 Nov 2022 00:39:33 +0800 Labels: app=local-test-kong pod-template-hash=54cfc585cb Annotations: cni.projectcalico.org/podIP: 10.1.254.79/32 cni.projectcalico.org/podIPs: 10.1.254.79/32 kubectl.kubernetes.io/restartedAt: 2022-11-23T00:39:33+08:00 Status: Running IP: 10.1.254.79 IPs: IP: 10.1.254.79 Controlled By: ReplicaSet/local-test-kong-54cfc585cb Containers: proxy: Container ID: containerd://d60d09ca8b77ee59c80ea060dcb651c3e346c3a5f0147b0d061790c52193d93d Image: kong:2.6 Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70 Ports: 8000/TCP, 8100/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Wed, 23 Nov 2022 00:39:37 +0800 Ready: True Restart Count: 0 Limits: cpu: 256m memory: 256Mi Requests: cpu: 256m memory: 256Mi Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3 Environment Variables from: kong-env-vars ConfigMap Optional: false Environment: <none> Mounts: /kong_dbless/ from kong-custom-dbless-config-volume (rw) /kong_prefix/ from kong-prefix-dir (rw) /tmp from tmp-dir (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kong-prefix-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kong-custom-dbless-config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: kong-declarative Optional: false QoS Class: Guaranteed Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 92s default-scheduler Successfully assigned local-test-kong/local-test-kong-54cfc585cb-7grj8 to microk8s-vm Normal Pulled 90s kubelet Container image "kong:2.6" already present on machine Normal Created 90s kubelet Created container proxy Normal Started 89s kubelet Started container proxy Warning Unhealthy 68s kubelet Readiness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Warning Unhealthy 68s kubelet Liveness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ➜ It's the exact same deployment YAML. However, the deployment created inside GKE cluster are running all fine with no complaints. 
But, the deployment created inside my localhost microk8s (on MacOS) is showing probe failures. What could i be missing here while deploying to microk8s (on MacOS)?
Yes it's possible. You can create a REST client for your resource in main.go as below: gvkExternal := schema.GroupVersionKind{ Group: "some.group.io", Version: "v1", Kind: "External", } restClient, err := apiutil.RESTClientForGVK(gvkExternal, false, mgr.GetConfig(), serializer.NewCodecFactory(mgr.GetScheme())) if err != nil { setupLog.Error(err, "unable to create REST client") } Then add a field for this REST client (rest.Interface) to your reconciler (yournativeresource_controller.go) struct such as: type YourNativeResourceReconciler struct { client.Client Scheme *runtime.Scheme // add this RESTClient rest.Interface } Last, initialize your reconciler with this REST client (main.go): if err = (&controllers.YourNativeResourceReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), RESTClient: restClient, }).SetupWithManager(mgr); err != nil { setupLog.Error(err, "unable to create controller", "controller", "YourNativeResource") os.Exit(1) } Do not forget to add RBAC marker to your project (reconciler preferably) that will generate RBAC rules allowing you to manipulate External resource: //+kubebuilder:rbac:groups=some.group.io,resources=externals,verbs=get;list;watch;create;update;patch;delete After these steps, you can use REST client for manipulating External resource over YourNativeResource reconciler using r.RESTClient. EDIT: If you want to watch resources, dynamic clients may help. Create a dynamic client in main.go: dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig()) if err != nil { setupLog.Error(err, "unable to create dynamic client") } Apply above steps, add it to your reconciler etc. Then you will be able to watch External resource as below: resourceInterface := r.DynamicClient.Resource(schema.GroupVersionResource{ Group: "some.group.io", Version: "", Resource: "externals", }) externalWatcher, err := resourceInterface.Watch(ctx, metav1.ListOptions{}) if err != nil { return err } defer externalWatcher.Stop() select { case event := <-externalWatcher.ResultChan(): if event.Type == watch.Deleted { logger.Info("FINALIZER: An external resource is deleted.") } }
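If all the reconciler needs is to read the external object, here is a hedged sketch of fetching it through the dynamic client set up above; the version v1 and the name my-external are placeholders, not values from the original project:
// inside Reconcile; returns an *unstructured.Unstructured you can inspect field by field
gvr := schema.GroupVersionResource{
	Group:    "some.group.io",
	Version:  "v1",
	Resource: "externals",
}
obj, err := r.DynamicClient.Resource(gvr).
	Namespace(req.Namespace).
	Get(ctx, "my-external", metav1.GetOptions{})
if err != nil {
	return ctrl.Result{}, client.IgnoreNotFound(err)
}
logger.Info("fetched external resource", "labels", obj.GetLabels())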
###Instruction: Suppose I have bellow code snippet which setups a reconciler that watches external resource "External": // SetupWithManager sets up the controller with the Manager. func (r *SomethingReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&api.Something{}). WithOptions(controller.Options{ MaxConcurrentReconciles: stdruntime.NumCPU(), RecoverPanic: true, }). Watches( &source.Kind{Type: &somev1.External{}}, handler.EnqueueRequestsFromMapFunc(r.findInternalObjectsForExternal), builder.WithPredicates(predicate.Funcs{ UpdateFunc: func(ue event.UpdateEvent) bool { return true }, DeleteFunc: func(de event.DeleteEvent) bool { return true }, }), ). Complete(r) } My problem is that I can not import somev1.External type into my project because importing the go module containing this type would break my current project's dependencies. Is there a way in kubebuilder to watch for external resources without having to explicitly importing their types? like GVK or something? ### Response: Yes it's possible. You can create a REST client for your resource in main.go as below: gvkExternal := schema.GroupVersionKind{ Group: "some.group.io", Version: "v1", Kind: "External", } restClient, err := apiutil.RESTClientForGVK(gvkExternal, false, mgr.GetConfig(), serializer.NewCodecFactory(mgr.GetScheme())) if err != nil { setupLog.Error(err, "unable to create REST client") } Then add a field for this REST client (rest.Interface) to your reconciler (yournativeresource_controller.go) struct such as: type YourNativeResourceReconciler struct { client.Client Scheme *runtime.Scheme // add this RESTClient rest.Interface } Last, initialize your reconciler with this REST client (main.go): if err = (&controllers.YourNativeResourceReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), RESTClient: restClient, }).SetupWithManager(mgr); err != nil { setupLog.Error(err, "unable to create controller", "controller", "YourNativeResource") os.Exit(1) } Do not forget to add RBAC marker to your project (reconciler preferably) that will generate RBAC rules allowing you to manipulate External resource: //+kubebuilder:rbac:groups=some.group.io,resources=externals,verbs=get;list;watch;create;update;patch;delete After these steps, you can use REST client for manipulating External resource over YourNativeResource reconciler using r.RESTClient. EDIT: If you want to watch resources, dynamic clients may help. Create a dynamic client in main.go: dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig()) if err != nil { setupLog.Error(err, "unable to create dynamic client") } Apply above steps, add it to your reconciler etc. Then you will be able to watch External resource as below: resourceInterface := r.DynamicClient.Resource(schema.GroupVersionResource{ Group: "some.group.io", Version: "", Resource: "externals", }) externalWatcher, err := resourceInterface.Watch(ctx, metav1.ListOptions{}) if err != nil { return err } defer externalWatcher.Stop() select { case event := <-externalWatcher.ResultChan(): if event.Type == watch.Deleted { logger.Info("FINALIZER: An external resource is deleted.") } }
Suppose I have bellow code snippet which setups a reconciler that watches external resource "External": // SetupWithManager sets up the controller with the Manager. func (r *SomethingReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&api.Something{}). WithOptions(controller.Options{ MaxConcurrentReconciles: stdruntime.NumCPU(), RecoverPanic: true, }). Watches( &source.Kind{Type: &somev1.External{}}, handler.EnqueueRequestsFromMapFunc(r.findInternalObjectsForExternal), builder.WithPredicates(predicate.Funcs{ UpdateFunc: func(ue event.UpdateEvent) bool { return true }, DeleteFunc: func(de event.DeleteEvent) bool { return true }, }), ). Complete(r) } My problem is that I can not import somev1.External type into my project because importing the go module containing this type would break my current project's dependencies. Is there a way in kubebuilder to watch for external resources without having to explicitly importing their types? like GVK or something?
The following two queries will do the trick for you: prober_probe_total{probe_type="Readiness",result="failed"} prober_probe_total{probe_type="Liveness",result="failed"}
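Since prober_probe_total is a cumulative counter (it comes from the kubelet's /metrics/probes endpoint, so it is only available if your Prometheus/Thanos setup scrapes that endpoint), a dashboard panel usually wants the number of failures over a time window rather than the raw running total. A sketch, with the 1h window being an assumption:
sum by (pod) (increase(prober_probe_total{probe_type="Readiness", result="failed"}[1h]))
sum by (pod) (increase(prober_probe_total{probe_type="Liveness", result="failed"}[1h]))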
###Instruction: I am trying to migrate a dashboard which shows the count of Readiness and Liveness Probe Failures, from Kibana(ElasticSearch) to a Grafana Dashboard(Sauron). In kibana the we can get both the probe failures separately using kubernetes.event.message : Liveness probe failed for Liveness failure and similar event message for Readiness, but in Sauron or Thanos (which acts as the datasource for Grafana) k8's event messages are not picked up. So I am unable to find a suitable promQL which will give me the count of both the probe failures individually. The closest promQL I have found is kube_event_count{reason="Unhealthy"} which is giving me the sum of the count of both the probe failures. I need the count of the probe failures individually. Another promQL that I have tried is kube_pod_container_status_ready which probably gives the readiness status of the containers but I am not sure about it. ### Response: The following two queries will do the trick for you: prober_probe_total{probe_type="Readiness",result="failed"} prober_probe_total{probe_type="Liveness",result="failed"}
I am trying to migrate a dashboard which shows the count of Readiness and Liveness Probe Failures, from Kibana(ElasticSearch) to a Grafana Dashboard(Sauron). In kibana the we can get both the probe failures separately using kubernetes.event.message : Liveness probe failed for Liveness failure and similar event message for Readiness, but in Sauron or Thanos (which acts as the datasource for Grafana) k8's event messages are not picked up. So I am unable to find a suitable promQL which will give me the count of both the probe failures individually. The closest promQL I have found is kube_event_count{reason="Unhealthy"} which is giving me the sum of the count of both the probe failures. I need the count of the probe failures individually. Another promQL that I have tried is kube_pod_container_status_ready which probably gives the readiness status of the containers but I am not sure about it.
Of course the best practice is using an operator like cloudnative-pg or postgres-operator, but they are pretty big and probably have way more features than a simple workload needs. Here is a simple solution for your problem. Solution Add the below to your pod spec (note that pg_ctl has to be invoked through a shell; a single-element command array is treated as one literal executable path): preStop: exec: command: ["/bin/sh", "-c", "pg_ctl stop -D /var/lib/postgresql/data -w -t 60 -m fast"] Explanation Basically, when you kill a pod, Kubernetes sends SIGTERM and gives your pod 30 seconds; after that time it sends SIGKILL. When Postgres receives SIGTERM it won't accept new connections, but it won't terminate existing connections either, so any connected client will block the db's shutdown, and after 30 seconds the pod will receive SIGKILL, which is very bad for Postgres (see the docs). So you need to shut down Postgres safely somehow, and with a preStop hook you can. Kubernetes This is the exact chronological order for your pod: the state is set to Terminating by the Pod controller; the terminationGracePeriodSeconds timer starts (default is 30 seconds); the preStop hook runs (pg_ctl ...); SIGTERM is sent: Postgres won't accept new connections; k8s waits until terminationGracePeriodSeconds expires (configurable from yaml); if the app is still alive, SIGKILL is sent. Also you need to set .spec.strategy.type=Recreate in the Deployment. Postgres For the pg_ctl shutdown modes you can refer to this summary; the most useful one for you looks like -m fast. SIGTERM: "Smart Shutdown Mode" Disallows new connections, lets existing connections continue. SIGINT: "Fast Shutdown Mode" Disallows new connections, sends SIGTERM to existing server processes so they exit promptly. SIGQUIT: "Immediate Shutdown Mode" Sends SIGQUIT to all child processes; if they don't terminate in 5 secs, sends SIGKILL.
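The answer above says to set .spec.strategy.type=Recreate but does not show where it goes, so here is a minimal sketch of how the pieces fit together in one Deployment. The object names, image tag and PVC name are assumptions; only the strategy, terminationGracePeriodSeconds and preStop fields are the point. Recreate guarantees the old Postgres pod is fully stopped before the new one starts, so two instances never open the same data directory (with the StatefulSet from your Update 3, the one-pod-at-a-time behaviour gives the same guarantee, and only the lifecycle and grace-period parts carry over):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres                  # name is an assumption
spec:
  replicas: 1
  strategy:
    type: Recreate                # stop the old pod before starting the new one
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: postgres
          image: postgres:15.2
          env:
            - name: POSTGRES_PASSWORD   # required by the image; placeholder value
              value: changeme
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          lifecycle:
            preStop:
              exec:
                # depending on the image/securityContext this may need to run as the
                # postgres user, e.g. "gosu postgres pg_ctl ..."
                command: ["/bin/sh", "-c", "pg_ctl stop -D /var/lib/postgresql/data -w -t 60 -m fast"]
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-data    # assumed to exist already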
###Instruction: I have a small hobby website I want to release on my server. I chose to use kubernetes for that, as I partly use it at work as well, so I'd like to get familiar. I bought hetzner debian server and installed k3s. Now are deploying PostgreSQL container (version 15.2, not cluster) as per this tutorial. (I did a minor changes I saw from other tutorials, should not be relevant). It runs fine, I was happy with it. But I tried to restart deployment to make sure data are not lost if server goes down for some reason. After a few restarts, database is corrupted. Once I saw: PANIC: invalid magic number 0000 in log segment 000000010000000000000000, offset 0 another time: invalid contrecord length 1174 (expected 48430224) at 0/195BC90 another time: PANIC: could not locate a valid checkpoint record When I tried to google how to recover from this, I did not find any safe options and mostly the suggestions were to restore backup. So my question is, how do I safely restart/shutdown PostgreSQL container? Am I missing some shutdown config for PostgreSQL pod in k8s? Update 1: I was restarting deployment from k9s with r command. I think UI made it look like it was rotated right away, but it probably takes some time. So I think I triggered multiple restarts every 10 seconds and that might have corrupted the DB. Anyway I added terminationGracePeriodSeconds: 60 and used preStop hook from answer. Thanks Update 2: I imported DB, did a restart and again same issue: could not locate a valid checkpoint record Update 3: I replaced Deployment with StatefulSet and it seems to be handling restarts better. Tried over 10 restarts and no issues. Whereas before it crashed around 4th restart. ### Response: Of course the best practice is using an operator like cloudnative-pg or postgres-operator but they are pretty big and probably have way more feature for a simple workload. Here is a simple solution for your problem. Solution Add below to your pod spec preStop: exec: command: ["/usr/local/bin/pg_ctl stop -D /var/lib/postgresql/data -w -t 60 -m fast"] Explanation Basically when you kill a pod, Kubernetes signals SIGTERM and gives 30 seconds for your pod, after that time it sends SIGKILL. When postgres receive SIGTERM it won't accept net connections but it won't terminate existing terminations, so any client will block db's termination, and after 30 seconds pod will receive SIGKILL which is very bad for postgres doc. So you need to safely shutdown postgres somehow, with preStop hook you can. Kubernetes This is the exact chronological order of your pod: Set state=Terminating from Pod controller terminationGracePeriodSeconds timer starts (default is 30 seconds) preStop hook: pg_cli ... SIGTERM is sent: Postgres won't accept new connections k8s waits until terminationGracePeriods (configurable from yaml) If app is still alive SIGKILL is sent Also you need to set .spec.strategy.type==Recreate in Deployment. Postgres For the pg_cli commands you can refer this summary, most useful one for you looks like -m fast. SIGTERM: "Smart Shutdown Mode" Disallows new connections Let's existing connections continue SIGINT: "Fast Shutdown Mode" Disallow new connections Send SIGTERM to existing servers processes (?) they'll exit promptly SIGQUIT: "Immediate Shutdown Mode" Send SIGQUIT to all child processes, if they don't terminate in 5 secs sends SIGKILL
I have a small hobby website I want to release on my server. I chose to use kubernetes for that, as I partly use it at work as well, so I'd like to get familiar. I bought hetzner debian server and installed k3s. Now are deploying PostgreSQL container (version 15.2, not cluster) as per this tutorial. (I did a minor changes I saw from other tutorials, should not be relevant). It runs fine, I was happy with it. But I tried to restart deployment to make sure data are not lost if server goes down for some reason. After a few restarts, database is corrupted. Once I saw: PANIC: invalid magic number 0000 in log segment 000000010000000000000000, offset 0 another time: invalid contrecord length 1174 (expected 48430224) at 0/195BC90 another time: PANIC: could not locate a valid checkpoint record When I tried to google how to recover from this, I did not find any safe options and mostly the suggestions were to restore backup. So my question is, how do I safely restart/shutdown PostgreSQL container? Am I missing some shutdown config for PostgreSQL pod in k8s? Update 1: I was restarting deployment from k9s with r command. I think UI made it look like it was rotated right away, but it probably takes some time. So I think I triggered multiple restarts every 10 seconds and that might have corrupted the DB. Anyway I added terminationGracePeriodSeconds: 60 and used preStop hook from answer. Thanks Update 2: I imported DB, did a restart and again same issue: could not locate a valid checkpoint record Update 3: I replaced Deployment with StatefulSet and it seems to be handling restarts better. Tried over 10 restarts and no issues. Whereas before it crashed around 4th restart.
Is it possible to make the k3s ingress route a certain path to a certain IP or port of a service which is not running inside Kubernetes, but on the same physical machine? Yes, you can do it with a Service that points outside the cluster. If the external endpoint has a DNS name, use an ExternalName Service: apiVersion: v1 kind: Service metadata: name: my-service spec: type: ExternalName externalName: my.database.example.com Note that externalName must be a hostname, not an IP address; if you only have an IP (like 192.168.1.7 in your example), create a Service without a selector plus a matching Endpoints object instead. Either way you can then use this Service like a normal Kubernetes Service (including as an Ingress backend) and the request will get forwarded to the external service.
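For the IP-only case (e.g. 192.168.1.7:99 from the question), a possible sketch is a selector-less Service plus a manually maintained Endpoints object, which the Ingress can then use as a backend. Names and the hostname are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-external-app
spec:
  ports:
  - port: 99
    targetPort: 99
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-app            # must match the Service name exactly
subsets:
- addresses:
  - ip: 192.168.1.7                # the process running outside the cluster
  ports:
  - port: 99
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-external-app
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-external-app
            port:
              number: 99

One caveat: loopback addresses (127.0.0.1) are not allowed as endpoint IPs, so use the machine's LAN IP.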
###Instruction: Is it possible to make the k3s ingress route a certain path to a certain IP or port of a service which is not running inside the Kubernetes, but on same physical machine? My use-case Using single node k3s setup. I have a special server running on the same host that the k3s is running on. I'd like to expose it as an HTTP endpoint in the ingress. e.g: foo.example.com --> k3s ingress --> 127.0.0.1:99 (port on k3s machine) or foo.example.com --> k3s ingress --> 192.168.1.7:99 (something in the local network) Is something like this possible or should there be some reverse proxy before the k3s server? ### Response: Is it possible to make the k3s ingress route a certain path to a certain IP or port of a service which is not running inside the Kubernetes, but on same physical machine? Yes you can do it with the ExternalName service. You define one service and it points to specific physical IP outside of the cluster. apiVersion: v1 kind: Service metadata: name: my-service spec: type: ExternalName externalName: my.database.example.com//192.168.4.5 Further you can use this service as normal service of Kubernetes but the request will get forwarded to external service.
Is it possible to make the k3s ingress route a certain path to a certain IP or port of a service which is not running inside the Kubernetes, but on same physical machine? My use-case Using single node k3s setup. I have a special server running on the same host that the k3s is running on. I'd like to expose it as an HTTP endpoint in the ingress. e.g: foo.example.com --> k3s ingress --> 127.0.0.1:99 (port on k3s machine) or foo.example.com --> k3s ingress --> 192.168.1.7:99 (something in the local network) Is something like this possible or should there be some reverse proxy before the k3s server?
As described in this doc by phoenixnap, there are several ways to fix the “helm has no deployed releases” error. One way (for releases whose history is stored in ConfigMaps, as in Helm 2) is by running the following command: kubectl -n kube-system patch configmap [release name].[release version] --type=merge -p '{"metadata":{"labels":{"STATUS":"DEPLOYED"}}}' [release name] is the name of the release you want to update. [release version] is the current version of your release. Helm 3, however, stores the deployment history as Kubernetes secrets in the release's namespace. Check the deployment secrets: kubectl get secrets Find the secret referring to the failed deployment, then use the following command to change the deployment status: kubectl patch secret [name-of-secret-related-to-deployment] --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}' You can also refer to this blog by Jacky Jiang for more information about how to upgrade helm
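A hedged shell sketch of the Helm 3 route, assuming a release called my-release in the default namespace (Helm 3 names its release secrets sh.helm.release.v1.<release>.v<revision>):

# list the release secrets Helm keeps for this release
kubectl get secrets -n default -l owner=helm,name=my-release

# see which revision is in which state
helm history my-release -n default

# patch the status label of the stuck revision, as described above
kubectl patch secret sh.helm.release.v1.my-release.v1 -n default \
  --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}'

# alternatively, if every remaining revision is "uninstalled", deleting the
# leftover secrets and doing a fresh "helm install" is often the simpler fix
kubectl delete secret -n default -l owner=helm,name=my-release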
###Instruction: After I uninstalled a release(with --keep-history), there will remain a release history with "uninstalled status". Then if I want to install this release again, install and upgrade --install are both failed. install failed because of "cannot re-use a name that is still in use" but upgrade --install failed because of "xxx has no deployed releases" Is the only way that I have to remove the history or uninstall without history? I tried to use install and upgrade --install command, both failed ### Response: As described in this doc by phoenixnap. There are several ways to fix the “helm has no deployed releases” error, one ways is by running the following command: kubectl -n kube-system patch configmap [release name].[release version] --type=merge -p '{"metadata":{"labels":{"STATUS":"DEPLOYED"}}}' [release name] is the name of the release you want to update. [release version] is the current version of your release. Since Helm 3 stores the deployment history as Kubernetes secrets. Check the deployment secrets: kubectl get secrets Find the secret referring to the failed deployment, then use the following command to change the deployment status: kubectl patch secret [name-of-secret-related-to-deployment] --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}' You can also refer this blog by Jacky Jiang for more information about how to upgrade helm
After I uninstalled a release(with --keep-history), there will remain a release history with "uninstalled status". Then if I want to install this release again, install and upgrade --install are both failed. install failed because of "cannot re-use a name that is still in use" but upgrade --install failed because of "xxx has no deployed releases" Is the only way that I have to remove the history or uninstall without history? I tried to use install and upgrade --install command, both failed
It's certainly good practice to pre-assess the resources available in your node-pool and know in advance how "big" your Pods will be. Once you know how many resources you have available, you can do 2 things: 1. Set up resource requests/limits for all your Pods, making sure you never reach the maximum available in your node-pool: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2. Work out, in case of malfunction or deployment of other non-estimated resources in your K8s cluster, which Pods you would be willing to sacrifice and in which order to do it. This will allow critical services to be shut down ONLY after all other services have already been shut down: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ Obviously there are a 3rd and a 4th point in case you want to scale horizontally and vertically, with autoscaling set up for the node-pool and for your Deployments, but they are not strictly related to your question.
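As a generic illustration of points 1 and 2 (Composer manages its Airflow workers for you, so treat the numbers and names below as placeholders rather than Composer-specific settings):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: business-critical
value: 100000
globalDefault: false
description: "Workloads that should be evicted last"
---
apiVersion: v1
kind: Pod
metadata:
  name: heavy-task
spec:
  priorityClassName: business-critical
  containers:
  - name: heavy-task
    image: python:3.11-slim                              # stand-in image
    command: ["python", "-c", "print('placeholder workload')"]
    resources:
      requests:
        memory: "8Gi"            # ask for what the task really needs
        cpu: "1"
      limits:
        memory: "10Gi"
        cpu: "2"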
###Instruction: I have a Google Cloud Composer environment set up that has 3 nodes in the worker pool. Each node has memory of 16GB (using n1-standard-4) instance. I have tasks inside a DAG that takes around 7-8GB of memory. The allocable memory for the worker nodes is roughly 12GB and hence, these tasks should run without encountering a OOM error. B My intuition is that each worker node has some (variable)number of pods and when airflow scheduler queues tasks, the tasks run inside a worker pod and not the node itself. And there might be memory limits set for the pods that you can have inside worker nodes. Note that the worker node are in kubernetes node pool exclusively used by my composer instance. How can I make sure that all my tasks run without encountering OOM given that my nodes have enough memory? Is there a way to set the pod memory limits to be higher? I looked at the workloads for the worker inside Kubernetes workloads and I can see that memory limit is 3.7GB which I guess is the limit for the pods. What should I do? Thanks in advance. ### Response: It's certainly good practice to pre-assess the resources available in your node-pool and know in advance how "big" your Pods will be. Once you know how many resources you have available, you can do 2 things: 1.set up resources/limits for all your Pods, making sure you never reach the maximum available in your node-pool; https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.try to understand, in case of malfunction or deployment of other NOT estimated resources in your K8s cluster, which are the Pods you would be willing to sacrifice and in which order to do it. This will allow you to shutdown critical services ONLY after all other services have already been shut down. https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ Obviously there is a 3 and 4 point in case you want to scale horizontally and vertically, with autoscaling setup for the node-pool and for your Deployments, but they are not strictly related to your question.
I have a Google Cloud Composer environment set up that has 3 nodes in the worker pool. Each node has memory of 16GB (using n1-standard-4) instance. I have tasks inside a DAG that takes around 7-8GB of memory. The allocable memory for the worker nodes is roughly 12GB and hence, these tasks should run without encountering a OOM error. B My intuition is that each worker node has some (variable)number of pods and when airflow scheduler queues tasks, the tasks run inside a worker pod and not the node itself. And there might be memory limits set for the pods that you can have inside worker nodes. Note that the worker node are in kubernetes node pool exclusively used by my composer instance. How can I make sure that all my tasks run without encountering OOM given that my nodes have enough memory? Is there a way to set the pod memory limits to be higher? I looked at the workloads for the worker inside Kubernetes workloads and I can see that memory limit is 3.7GB which I guess is the limit for the pods. What should I do? Thanks in advance.
I took a quick look at the Chart and I don't think there is the possibility of doing what is requested. What you can do is modify the templates yourself, adding the override parameter for names/namespaces and try doing a PR against the Hashicorp repo; maybe they didn't think this feature could be useful. Otherwise, you simply create a custom CHANGELOG where you track all these changes that remind you that you have to make changes to the repository every time you download a new version. It's a maintenance job you won't ignore, but it often happens in Enterprise and highly customized environments that you have to modify the Vendor templates. This is where the definition of the name happens: https://raw.githubusercontent.com/hashicorp/vault-helm/main/templates/_helpers.tpl (First block)
###Instruction: When releasing a Helm chart, Kubernetes resources have Helm's .Release.Name prepended to their names. In the Chart I'm writing separate namespaces with default resource naming is enough, and the additional prefix is verbose and unwanted. Vault is a dependency of my Chart. Is there a way I can achieve this using Helm? Ideally, I should only be able to deploy this chart once per namespace, supplying a namespace and release name also feel redundant. # Specify namespace and release name helm install --create-namespace -n my-namespace my-release hashicorp/vault # Vault is named "my-release-vault-..." kubectl get svc -n my-namespace NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-release-vault ClusterIP 10.98.169.234 <none> 8200/TCP,8201/TCP 8s ### Response: I took a quick look at the Chart and I don't think there is the possibility of doing what is requested. What you can do is modify the templates yourself, adding the override parameter for names/namespaces and try doing a PR against the Hashicorp repo; maybe they didn't think this feature could be useful. Otherwise, you simply create a custom CHANGELOG where you track all these changes that remind you that you have to make changes to the repository every time you download a new version. It's a maintenance job you won't ignore, but it often happens in Enterprise and highly customized environments that you have to modify the Vendor templates. This is where the definition of the name happens: https://raw.githubusercontent.com/hashicorp/vault-helm/main/templates/_helpers.tpl (First block)
When releasing a Helm chart, Kubernetes resources have Helm's .Release.Name prepended to their names. In the Chart I'm writing separate namespaces with default resource naming is enough, and the additional prefix is verbose and unwanted. Vault is a dependency of my Chart. Is there a way I can achieve this using Helm? Ideally, I should only be able to deploy this chart once per namespace, supplying a namespace and release name also feel redundant. # Specify namespace and release name helm install --create-namespace -n my-namespace my-release hashicorp/vault # Vault is named "my-release-vault-..." kubectl get svc -n my-namespace NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-release-vault ClusterIP 10.98.169.234 <none> 8200/TCP,8201/TCP 8s
Your targetPort in the Service and the containerPort in the Deployment do not match. You can fix it by changing the targetPort in the Service apiVersion: v1 kind: Service metadata: namespace: my-namespace name: backend-service spec: type: NodePort selector: app: backend-java ports: - port: 81 targetPort: 81 nodePort: 30019 Read more about the difference between port and targetPort here.
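Alternatively, if the Spring Boot app actually listens on its default port 8080 inside the container, the other consistent fix is to keep targetPort: 8080 in the Service and change the Deployment to containerPort: 8080; whichever port the app really binds to is the one targetPort and containerPort must agree on. A sketch of that variant:

apiVersion: v1
kind: Service
metadata:
  namespace: my-namespace
  name: backend-service
spec:
  type: NodePort
  selector:
    app: backend-java
  ports:
  - port: 81
    targetPort: 8080        # the port the Spring Boot app listens on
    nodePort: 30019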
###Instruction: I deployed a Spring Boot app on AWS Elastic Kubernetes Service. I am facing a 502 Bad Gateway error. I cannot find anything useful from the logs, there is no event to check, it works fine locally and the docker image is also running without any issue. Right now its just a simple hello world app, Here are the yaml files files or reference. deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment namespace: my-namespace labels: app: backend-java spec: replicas: 1 selector: matchLabels: app: backend-java template: metadata: labels: app: backend-java spec: containers: - name: backend-java image: <docker-image-location> ports: - containerPort: 81 resources: limits: cpu: "4000m" memory: "2048Mi" requests: cpu: "100m" memory: "1024Mi" service.yaml apiVersion: v1 kind: Service metadata: namespace: my-namespace name: backend-service spec: type: NodePort selector: app: backend-java ports: - port: 81 targetPort: 8080 nodePort: 30019 ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: "my-app-ingress" namespace: "my-namespace" annotations: alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/backend-protocol: HTTP alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]' spec: ingressClassName: alb rules: - host: myapp.aws.com http: paths: - path: / pathType: Prefix backend: service: name: "backend-service" port: number: 81 Similar configuration has worked for deploying a react app, which works as expected. Only while deploying backend it give '502 Bad Gateway' ### Response: Your targetPort in the Service and the containerPort in the Deployment do not match. You can fix it by changing the targetPort in the Service apiVersion: v1 kind: Service metadata: namespace: my-namespace name: backend-service spec: type: NodePort selector: app: backend-java ports: - port: 81 targetPort: 81 nodePort: 30019 Read more about the difference between port and targetPort here.
I deployed a Spring Boot app on AWS Elastic Kubernetes Service. I am facing a 502 Bad Gateway error. I cannot find anything useful from the logs, there is no event to check, it works fine locally and the docker image is also running without any issue. Right now its just a simple hello world app, Here are the yaml files files or reference. deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment namespace: my-namespace labels: app: backend-java spec: replicas: 1 selector: matchLabels: app: backend-java template: metadata: labels: app: backend-java spec: containers: - name: backend-java image: <docker-image-location> ports: - containerPort: 81 resources: limits: cpu: "4000m" memory: "2048Mi" requests: cpu: "100m" memory: "1024Mi" service.yaml apiVersion: v1 kind: Service metadata: namespace: my-namespace name: backend-service spec: type: NodePort selector: app: backend-java ports: - port: 81 targetPort: 8080 nodePort: 30019 ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: "my-app-ingress" namespace: "my-namespace" annotations: alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/backend-protocol: HTTP alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]' spec: ingressClassName: alb rules: - host: myapp.aws.com http: paths: - path: / pathType: Prefix backend: service: name: "backend-service" port: number: 81 Similar configuration has worked for deploying a react app, which works as expected. Only while deploying backend it give '502 Bad Gateway'
After investigating, checked logs, you can get logs location by running $ k9s info       ____  __.________        |    |/ _/   __   \______ |      < \____    /  ___/ |    |  \   /    /\___ \  |____|__ \ /____//____  >         \/            \/  Configuration:   /Users/xyx/Library/Applications/k9s/config.yml Logs:            /var/folders/8r/t5bx6ckdchjdacj3nz7qyq0b4ys7mwh0000gp/T/k9s-shubcbsj.log Screen Dumps:    /var/folders/8r/t5bx6ckdchjdacj3nz7qyq0b4ys7mwh0000gp/T/k9s-screens-chakhcahkcha Logs showed me this errors. 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (1/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (2/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (3/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (4/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR Conn check failed (5/5). Bailing out! I realized it is because of my kubectl client was recently updated and k9s stopped connecting to k8s because of that. Followed the link as there has been some changes in kubectl authentication for gke in newer kubectl versions. https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke Did auth with my clusters again gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION_NAME --project $PROJECT_NAME It worked again.
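For completeness, a sketch of the commands involved (adjust cluster, region and project to your environment; the plugin can also come from your distro's google-cloud-sdk package):

# install the new auth plugin
gcloud components install gke-gcloud-auth-plugin

# for kubectl/client-go versions that still need the opt-in
export USE_GKE_GCLOUD_AUTH_PLUGIN=True

# regenerate the kubeconfig entry so it references the plugin
gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION_NAME --project $PROJECT_NAME

# verify
gke-gcloud-auth-plugin --version
kubectl get nodes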
###Instruction: I am using k9s tool for managing kubernetes cluster(GKE on Google Cloud Platform). Recently k9s has stopped working and stopped connecting to k8s cluster. When I launch k9s(i.e. k9s -n default) it shows me all clusters's context and when I click on one of the contexts then Error is Unable to connect to context "XXXXXXXXXX" it shows me below error as displayed in screenshot. However with kubectl client, it is working fine and showing all the data of cluster. I tried reinstalling k9s and update its version but problem persists. How can I debug the issue and fix the problem ? ### Response: After investigating, checked logs, you can get logs location by running $ k9s info       ____  __.________        |    |/ _/   __   \______ |      < \____    /  ___/ |    |  \   /    /\___ \  |____|__ \ /____//____  >         \/            \/  Configuration:   /Users/xyx/Library/Applications/k9s/config.yml Logs:            /var/folders/8r/t5bx6ckdchjdacj3nz7qyq0b4ys7mwh0000gp/T/k9s-shubcbsj.log Screen Dumps:    /var/folders/8r/t5bx6ckdchjdacj3nz7qyq0b4ys7mwh0000gp/T/k9s-screens-chakhcahkcha Logs showed me this errors. 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (1/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (2/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (3/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR ClusterUpdater failed error="Conn check failed (4/5)" 9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details" 9:08PM ERR Conn check failed (5/5). Bailing out! I realized it is because of my kubectl client was recently updated and k9s stopped connecting to k8s because of that. Followed the link as there has been some changes in kubectl authentication for gke in newer kubectl versions. https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke Did auth with my clusters again gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION_NAME --project $PROJECT_NAME It worked again.
I am using k9s tool for managing kubernetes cluster(GKE on Google Cloud Platform). Recently k9s has stopped working and stopped connecting to k8s cluster. When I launch k9s(i.e. k9s -n default) it shows me all clusters's context and when I click on one of the contexts then Error is Unable to connect to context "XXXXXXXXXX" it shows me below error as displayed in screenshot. However with kubectl client, it is working fine and showing all the data of cluster. I tried reinstalling k9s and update its version but problem persists. How can I debug the issue and fix the problem ?
Shouldn't this annotation be added to the Ingress? cert-manager.io/cluster-issuer: letsencrypt-prod (the value has to match the name of your ClusterIssuer, which is letsencrypt-prod in your manifests)
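A minimal sketch of what that could look like on a plain Ingress, using the host and secret names from the question. Treat it as an assumption that you expose the site through an Ingress that Traefik picks up, since cert-manager's ingress-shim only watches Ingress resources and their annotations, not Traefik IngressRoutes:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: w271a19
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - test1.mydomain.com
    secretName: test1.mydomain.com-tls
  rules:
  - host: test1.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: w271a19
            port:
              number: 80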
###Instruction: I have installed microk8s, traefik and cert-manager. When I try to receive a letsencrypt certificate, a new pod for answering the challenge is created, but the request from the letsencryt server does not reach this pod. Instead, the request is forwarded to the pod that serves the website. It looks like the ingressroute routing the traffic to the web pod has higher priority then the ingress that routes the /.well-known/acme-challenge/... requests to the correct pod. What am I missing? kubectl edit clusterissuer letsencrypt-prod: kind: ClusterIssuer metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"cert-manager.io/v1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"office@mydomain.com","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"traefik"}}}]}}} creationTimestamp: "2022-07-11T14:32:15Z" generation: 11 name: letsencrypt-prod resourceVersion: "49979842" uid: 40c4e26d-9c94-4cda-aa3a-357491bdb25a spec: acme: email: office@mydomain.com preferredChain: "" privateKeySecretRef: name: letsencrypt-prod server: https://acme-v02.api.letsencrypt.org/directory solvers: - http01: ingress: {} status: acme: lastRegisteredEmail: office@mydomain.com uri: https://acme-v02.api.letsencrypt.org/acme/acct/627190636 conditions: - lastTransitionTime: "2022-07-11T14:32:17Z" message: The ACME account was registered with the ACME server observedGeneration: 11 reason: ACMEAccountRegistered status: "True" type: Ready kubectl edit ingressroute webspace1-tls: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"traefik.containo.us/v1alpha1","kind":"IngressRoute","metadata":{"annotations":{},"name":"w271a19-tls","namespace":"default"},"spec":{"entryPoints":["websecure"],"routes":[{"kind":"Rule","match":"Host(`test1.mydomain.com`)","middlewares":[{"name":"test-compress"}],"priority":10,"services":[{"name":"w271a19","port":80}]}],"tls":{"secretName":"test1.mydomain.com-tls"}}} creationTimestamp: "2022-10-05T20:01:38Z" generation: 7 name: w271a19-tls namespace: default resourceVersion: "45151920" uid: 77e9b7ac-33e7-4810-9baf-579f00e2db6b spec: entryPoints: - websecure routes: - kind: Rule match: Host(`test1.mydomain.com`) middlewares: - name: test-compress priority: 10 services: - name: w271a19 port: 80 tls: secretName: test1.mydomain.com-tls kubectl edit ingress cm-acme-http-solver-rz9mm: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0 creationTimestamp: "2023-03-22T13:00:18Z" generateName: cm-acme-http-solver- generation: 1 labels: acme.cert-manager.io/http-domain: "2306410973" acme.cert-manager.io/http-token: "1038683769" acme.cert-manager.io/http01-solver: "true" name: cm-acme-http-solver-rz9mm namespace: default ownerReferences: - apiVersion: acme.cert-manager.io/v1 blockOwnerDeletion: true controller: true kind: Challenge name: test1.mydomain.com-glnrn-2096762198-4162956557 uid: db8b5c78-8549-4f13-b43d-c6c7bba7468d resourceVersion: "52806119" uid: 6b27e02a-ee65-4809-b391-95c03f9ebb36 spec: ingressClassName: traefik rules: - host: test1.mydomain.com http: paths: - backend: service: name: cm-acme-http-solver-ll2zr port: number: 8089 path: /.well-known/acme-challenge/9qtVY8FjfMIWd_wBNhP3PEPJZo4lFTw8WfWLMucRqAQ pathType: 
ImplementationSpecific status: loadBalancer: {} get_cert.yaml: apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: test1.mydomain.com namespace: default spec: secretName: test1.mydomain.com-tls issuerRef: name: letsencrypt-prod kind: ClusterIssuer commonName: test1.mydomain.com dnsNames: - test1.mydomain.com In the webserver log of the web pod I see the reqests to /.well-known... coming in. ### Response: Shouldn't this annotation be added to ingress? cert-manager.io/cluster-issuer=letsencrypt-production
I have installed microk8s, traefik and cert-manager. When I try to receive a letsencrypt certificate, a new pod for answering the challenge is created, but the request from the letsencryt server does not reach this pod. Instead, the request is forwarded to the pod that serves the website. It looks like the ingressroute routing the traffic to the web pod has higher priority then the ingress that routes the /.well-known/acme-challenge/... requests to the correct pod. What am I missing? kubectl edit clusterissuer letsencrypt-prod: kind: ClusterIssuer metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"cert-manager.io/v1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"office@mydomain.com","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"traefik"}}}]}}} creationTimestamp: "2022-07-11T14:32:15Z" generation: 11 name: letsencrypt-prod resourceVersion: "49979842" uid: 40c4e26d-9c94-4cda-aa3a-357491bdb25a spec: acme: email: office@mydomain.com preferredChain: "" privateKeySecretRef: name: letsencrypt-prod server: https://acme-v02.api.letsencrypt.org/directory solvers: - http01: ingress: {} status: acme: lastRegisteredEmail: office@mydomain.com uri: https://acme-v02.api.letsencrypt.org/acme/acct/627190636 conditions: - lastTransitionTime: "2022-07-11T14:32:17Z" message: The ACME account was registered with the ACME server observedGeneration: 11 reason: ACMEAccountRegistered status: "True" type: Ready kubectl edit ingressroute webspace1-tls: apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"traefik.containo.us/v1alpha1","kind":"IngressRoute","metadata":{"annotations":{},"name":"w271a19-tls","namespace":"default"},"spec":{"entryPoints":["websecure"],"routes":[{"kind":"Rule","match":"Host(`test1.mydomain.com`)","middlewares":[{"name":"test-compress"}],"priority":10,"services":[{"name":"w271a19","port":80}]}],"tls":{"secretName":"test1.mydomain.com-tls"}}} creationTimestamp: "2022-10-05T20:01:38Z" generation: 7 name: w271a19-tls namespace: default resourceVersion: "45151920" uid: 77e9b7ac-33e7-4810-9baf-579f00e2db6b spec: entryPoints: - websecure routes: - kind: Rule match: Host(`test1.mydomain.com`) middlewares: - name: test-compress priority: 10 services: - name: w271a19 port: 80 tls: secretName: test1.mydomain.com-tls kubectl edit ingress cm-acme-http-solver-rz9mm: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0 creationTimestamp: "2023-03-22T13:00:18Z" generateName: cm-acme-http-solver- generation: 1 labels: acme.cert-manager.io/http-domain: "2306410973" acme.cert-manager.io/http-token: "1038683769" acme.cert-manager.io/http01-solver: "true" name: cm-acme-http-solver-rz9mm namespace: default ownerReferences: - apiVersion: acme.cert-manager.io/v1 blockOwnerDeletion: true controller: true kind: Challenge name: test1.mydomain.com-glnrn-2096762198-4162956557 uid: db8b5c78-8549-4f13-b43d-c6c7bba7468d resourceVersion: "52806119" uid: 6b27e02a-ee65-4809-b391-95c03f9ebb36 spec: ingressClassName: traefik rules: - host: test1.mydomain.com http: paths: - backend: service: name: cm-acme-http-solver-ll2zr port: number: 8089 path: /.well-known/acme-challenge/9qtVY8FjfMIWd_wBNhP3PEPJZo4lFTw8WfWLMucRqAQ pathType: ImplementationSpecific 
status: loadBalancer: {} get_cert.yaml: apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: test1.mydomain.com namespace: default spec: secretName: test1.mydomain.com-tls issuerRef: name: letsencrypt-prod kind: ClusterIssuer commonName: test1.mydomain.com dnsNames: - test1.mydomain.com In the webserver log of the web pod I see the reqests to /.well-known... coming in.
The easiest solution would be to build the file into the docker image.
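A minimal Dockerfile sketch of that idea, assuming the file lives at data/file.csv in the repo and the service is Python-based (paths, base image and entrypoint are placeholders):

FROM python:3.11-slim
WORKDIR /app
# bake the shared CSV into the image so every service built from this Dockerfile has it
COPY data/file.csv /app/data/file.csv
COPY . /app
CMD ["python", "main.py"]

The trade-off is that updating the CSV means rebuilding and redeploying the image, which is fine for a file that changes rarely.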
###Instruction: I have scenario when my cluster consist of two microservices. In service A i have .CSV(15MB) file, which also is needed in service B. I don't want to place this file two times in each repo. During deployment of service A I want to place this .csv file in some kind of shared volume, that pod containing service B can consume and process it. Any ideas and best practices how to do it? Best regards ### Response: The easiest solution would be to build the file into the docker image.
I have scenario when my cluster consist of two microservices. In service A i have .CSV(15MB) file, which also is needed in service B. I don't want to place this file two times in each repo. During deployment of service A I want to place this .csv file in some kind of shared volume, that pod containing service B can consume and process it. Any ideas and best practices how to do it? Best regards
It's actually very simple; here is the relevant part of the OpenShift documentation: healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). https://docs.openshift.com/container-platform/4.10/rest_api/network_apis/service-v1.html In other words, you don't have to implement anything yourself: it is kube-proxy on each node that answers requests sent to the health check node port, reporting whether that node holds local endpoints for the Service. I suggest you take a look at this example: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
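To make it concrete, a sketch of such a Service; the explicit port number is arbitrary (it is auto-allocated when omitted), and /healthz is the path the source-ip tutorial above probes on the kube-proxy health check server:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  healthCheckNodePort: 32000     # optional; auto-allocated when omitted
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

# a node answers healthy only if it hosts a ready endpoint for this Service
curl http://<node-ip>:32000/healthz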
###Instruction: So, according to Kubernetes documentation, when you have an external LoadBalancer service set with externalTrafficPolicy=Local, you can explicitly define a healthCheckNodePort. If I understood correctly, since LB services with this specific externalTrafficPolicy are unable to determine if a pod is running inside of a node, this healthCheckNodePort should be used to evaluate just that. As a result, a properly set healthCheckNodePort should avoid situations in which our request is routed to improper nodes. An example would be the image below, extracted from https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies. If I properly understood this concept, the healthCheckNodePort should let the LB know that all traffic should be directed to the left node, as the right one doesn't have a pod to handle it. My question here is: how does the proxy attached to healthCheckNodePort checks for the pod existence? Is this something I should implement from my side, or is Kubernetes auto-assigned port able to handle this health-check on its own? It seems weirdly convenient that any port number would do, so I would like some more insight on how this works. ### Response: It's actually very simple; I report a part of OpenShift documentation: healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). https://docs.openshift.com/container-platform/4.10/rest_api/network_apis/service-v1.html From the point of view of the kube-proxy, this always responds to packets sent to the health check node port. I suggest you take a look at this example: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
So, according to Kubernetes documentation, when you have an external LoadBalancer service set with externalTrafficPolicy=Local, you can explicitly define a healthCheckNodePort. If I understood correctly, since LB services with this specific externalTrafficPolicy are unable to determine if a pod is running inside of a node, this healthCheckNodePort should be used to evaluate just that. As a result, a properly set healthCheckNodePort should avoid situations in which our request is routed to improper nodes. An example would be the image below, extracted from https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies. If I properly understood this concept, the healthCheckNodePort should let the LB know that all traffic should be directed to the left node, as the right one doesn't have a pod to handle it. My question here is: how does the proxy attached to healthCheckNodePort checks for the pod existence? Is this something I should implement from my side, or is Kubernetes auto-assigned port able to handle this health-check on its own? It seems weirdly convenient that any port number would do, so I would like some more insight on how this works.
Store the env vars in a ConfigMap and mount it on your Deployment. Just roll out a restart of the Deployment whenever you want to update the env vars in the pods (containers).
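A minimal sketch, assuming a variable called API_TOKEN and a Deployment called web (all names and the image are placeholders; if the value is sensitive, a Secret is the better fit):

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  API_TOKEN: "current-value"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myrepo/web:1.0          # your existing image, unchanged
        envFrom:
        - configMapRef:
            name: web-config           # every key becomes an env var

When the value changes every couple of days, you only touch the ConfigMap and restart, no new image needed:

kubectl create configmap web-config --from-literal=API_TOKEN=new-value -o yaml --dry-run=client | kubectl apply -f -
kubectl rollout restart deployment/web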
###Instruction: I have a Kubernetes cluster running a Docker image. My web app is containerized and uses an environment variable that changes every couple of days. I am wondering if there is a way to externally store the variables or if I just have to enter the new variables and push a new image every couple of days? ### Response: Store the Env in a configmap and mount it on your deployment. Just rollout your deployment if you want to update the Envs in the pods(containers).
I have a Kubernetes cluster running a Docker image. My web app is containerized and uses an environment variable that changes every couple of days. I am wondering if there is a way to externally store the variables or if I just have to enter the new variables and push a new image every couple of days?
A row level security policy allows a role to do something, so dropping the policy will reduce what the user can do, not widen it. If you are worried that this can cause errors or bad results for concurrent queries, wrap the DROP POLICY and CREATE POLICY statements in a single transaction; then all concurrent queries get blocked until the transaction commits. That shouldn't be a problem, because both statements are fast.
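A sketch of what that migration step could look like, with placeholder policy, table and column names:

BEGIN;

DROP POLICY IF EXISTS tenant_isolation ON documents;

CREATE POLICY tenant_isolation ON documents
    FOR ALL                                              -- the command list you actually want
    USING (tenant_id = current_setting('app.tenant_id')::int);

COMMIT;

Because DDL is transactional in PostgreSQL and DROP POLICY takes an exclusive lock on the table, concurrent queries against that table simply wait until COMMIT and never observe an intermediate state with the policy missing.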
###Instruction: Say I have an RLS policy on a table, and I realize I goofed and forgot a command in the policy, so can't ALTER it; it must be DROP/CREATEed. I'm using rolling updates in k8s, and my DB migrations happen in an init container. I deploy my new application version that DROPs the policy and CREATEs the new one with the correct set of commands. Is there a window, however brief between CREATE and DROP, where the still running old pod now has access to everything? Intuition says "definitely", but testing a fraction of a second race condition is hard. Can I prevent this window by wrapping the DROP/CREATE in a transaction? Are there other methods to make this safe? Real world, the correct answer is "your deployed version has a security vulnerability. Scale to 0 and deploy the correct version", but I'm trying to think through the vulnerabilities this set up brings, and others doing the deploy in the future may not be so careful. My current setup is running 9.6, but there's a plan in the medium-term to work it up to 15.1, so answers for either version are welcome. ### Response: A row level security policy allows a role to do something, so dropping the policy will reduce what the user can do. If you are worried that this can cause errors or bad results for concurrent queries, wrap the DROP POLICY and CREATE POLICY` statements in a single transaction, then all concurrent queries get blocked. That shouldn't be a problem, because both statements are fast.
Say I have an RLS policy on a table, and I realize I goofed and forgot a command in the policy, so can't ALTER it; it must be DROP/CREATEed. I'm using rolling updates in k8s, and my DB migrations happen in an init container. I deploy my new application version that DROPs the policy and CREATEs the new one with the correct set of commands. Is there a window, however brief between CREATE and DROP, where the still running old pod now has access to everything? Intuition says "definitely", but testing a fraction of a second race condition is hard. Can I prevent this window by wrapping the DROP/CREATE in a transaction? Are there other methods to make this safe? Real world, the correct answer is "your deployed version has a security vulnerability. Scale to 0 and deploy the correct version", but I'm trying to think through the vulnerabilities this set up brings, and others doing the deploy in the future may not be so careful. My current setup is running 9.6, but there's a plan in the medium-term to work it up to 15.1, so answers for either version are welcome.
The idea of containerizing your scripts is that you get a highly customized "environment" that doesn't change wherever you deploy it. How you manage these containers is then up to your needs. If they are management scripts, you can think of creating a management Pod that always stays up and running (see for example: How can I keep a container running on Kubernetes?). Otherwise, it may be a good idea to prepare a Job (or a CronJob for scheduled runs) and run it as needed. For PROD, remember to have at least 3 nodes (HA), do a pre-assessment to understand how many resources you can assign to your Pods (assuming the resource consumption of these scripts has already been tested), and think about the roles assigned to Worker nodes, so as to avoid Pods being scheduled randomly everywhere (perhaps where there is a business-critical workload that risks saturating resources), autoscaling, etc.
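As a sketch of the "one container per script, triggered on a schedule" idea, here is a CronJob; the image name, schedule and script path are placeholders for whatever you build your script image as:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-script
spec:
  schedule: "0 2 * * *"              # every night at 02:00
  concurrencyPolicy: Forbid          # don't start a new run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: cleanup-script
            image: registry.example.com/scripts/cleanup:1.0   # your containerized python/perl script
            command: ["python", "/scripts/cleanup.py"]

Because the interpreter and its dependencies live inside the image, the script no longer cares which Python version is installed on the underlying RHEL host.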
###Instruction: I am just learning containers and kubernetes and everything around it. There has been a use case to build a reliable setup, where we can store all our python scripts(small, usecase defined scripts that do one only job each). There are some scripts in other languages like perl too. Not sure if this is the correct place to ask, but I will ask anyway. The requirement is to build a solution that will have less to no dependency on the underlying operating system so even if we were to switch operating systems/servers in the future, the scripts can remain/run as it is. Was thinking if I can build a 2 node kubernetes cluster and run each script in a container and trigger them using a cron job. Not sure if this is an optimal and efficient approach. The python virtual environments is not our way to go given the python version is symmlinked back to the python version on the server, causing a server/os dependency. Appreciate any ideas and advice if someone else has done something similar. I've google enough for such usecases. Didn't find solutions that match specifically to my need. But please feel free to share, ideas, thoughts any good reads too. Thanks! Note: The server operating system is RHEL 8 and above ### Response: The idea of ​​containerizing your scripts allows you to have a highly customized "environment" that doesn't change wherever you deploy it. For the management of these containers then, you decide according to your needs... If they are management scripts, you can think of creating a management Pod that always stays up&running (I'll link you below a couple of examples on how Do). How can I keep a container running on Kubernetes? Otherwise, it may be a good idea to prepare a Job and run it as needed. In case of PROD, remember to have at least 3 nodes (HA), do a pre-assessment to understand how many resources you can assign to your Pods (assuming that the resource consumption of these scripts has already been tested), think about the roles assigned to Worker nodes, so as to avoid Pods being randomly scheduled everywhere (perhaps where there is a business critical workload that risks saturating resources), autoscaling, etc.
I am just learning containers and kubernetes and everything around it. There has been a use case to build a reliable setup, where we can store all our python scripts(small, usecase defined scripts that do one only job each). There are some scripts in other languages like perl too. Not sure if this is the correct place to ask, but I will ask anyway. The requirement is to build a solution that will have less to no dependency on the underlying operating system so even if we were to switch operating systems/servers in the future, the scripts can remain/run as it is. Was thinking if I can build a 2 node kubernetes cluster and run each script in a container and trigger them using a cron job. Not sure if this is an optimal and efficient approach. The python virtual environments is not our way to go given the python version is symmlinked back to the python version on the server, causing a server/os dependency. Appreciate any ideas and advice if someone else has done something similar. I've google enough for such usecases. Didn't find solutions that match specifically to my need. But please feel free to share, ideas, thoughts any good reads too. Thanks! Note: The server operating system is RHEL 8 and above
@glv's answer is also correct, but let me explain why your version is not working and what the other way is. The reason is that wget uses the DNS resolver configured in the pod, which is set to the Kubernetes DNS service by default. The httpGet probe, on the other hand, is executed by the kubelet on the node hosting the pod, and the kubelet does not use the DNS resolver configured in the pod, but rather the host's DNS resolver configuration. You can modify your readiness probe to use the IP address of the pod providing the service instead of the hostname. readinessProbe: httpGet: host: (POD_IP) path: / port: 80 scheme: HTTP You can also assign the IP dynamically this way: apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: ready-if-service-ready name: ready-if-service-ready spec: containers: - image: nginx:1.16.1-alpine name: ready-if-service-ready resources: {} env: - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP livenessProbe: exec: command: - 'true' readinessProbe: httpGet: host: $(MY_POD_IP) path: / port: 80 scheme: HTTP dnsPolicy: ClusterFirst restartPolicy: Always status: {}
###Instruction: The following pod definition successfully executes a readinessProbe, which makes a request to the service service-am-i-ready that connects to pods on the same cluster. apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: ready-if-service-ready name: ready-if-service-ready spec: containers: - image: nginx:1.16.1-alpine name: ready-if-service-ready resources: {} livenessProbe: exec: command: - 'true' readinessProbe: exec: command: - sh - -c - 'wget -T2 -O- http://service-am-i-ready:80' dnsPolicy: ClusterFirst restartPolicy: Always status: {} However, if I change the readinessProbe.exec.command to readinessProbe.httpGet it doesn't work anymore: apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: ready-if-service-ready name: ready-if-service-ready spec: containers: - image: nginx:1.16.1-alpine name: ready-if-service-ready resources: {} livenessProbe: exec: command: - 'true' readinessProbe: httpGet: # Only changed this method host: service-am-i-ready path: / port: 80 scheme: HTTP dnsPolicy: ClusterFirst restartPolicy: Always status: {} This is the error message I get running kubectl po describe ready-if-service-ready: Warning Unhealty 3m10s (x139 over 23m) kubelet Readiness probe failed: Get "http://service-am-i-ready:80/": dial tcp: lookup service-am-i-ready: no such host Running kubectl get po ready-if-service-ready gives: NAME READY STATUS RESTARTS AGE ready-if-service-ready 0/1 Running 0 27m Why is the first readinessProbe working, but not the second one? It looks like the second readinessProbe makes a request to the same endpoint as the wget -T2 -O- http://service-am-i-ready:80 command. ### Response: @glv's answer is also correct, but let me explain you why it is not working and what is the otherway The reason is wget uses the DNS resolver configured in the pod, which is set to the Kubernetes DNS service by default. httpGet probe, which is executed by the kubelet on the node hosting the pod. The kubelet does not use the DNS resolver configured in the pod, but rather the host's DNS resolver configuration You can modify your readiness probe to use the IP address of the pod providing the service instead of the hostname. readinessProbe: httpGet: host: (POD_IP) path: / port: 80 scheme: HTTP You can dyamically assign IP by this way as well apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: ready-if-service-ready name: ready-if-service-ready spec: containers: - image: nginx:1.16.1-alpine name: ready-if-service-ready resources: {} env: - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP livenessProbe: exec: command: - 'true' readinessProbe: httpGet: host: $(MY_POD_IP) path: / port: 80 scheme: HTTP dnsPolicy: ClusterFirst restartPolicy: Always status: {}
The following pod definition successfully executes a readinessProbe, which makes a request to the service service-am-i-ready that connects to pods on the same cluster. apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: ready-if-service-ready name: ready-if-service-ready spec: containers: - image: nginx:1.16.1-alpine name: ready-if-service-ready resources: {} livenessProbe: exec: command: - 'true' readinessProbe: exec: command: - sh - -c - 'wget -T2 -O- http://service-am-i-ready:80' dnsPolicy: ClusterFirst restartPolicy: Always status: {} However, if I change the readinessProbe.exec.command to readinessProbe.httpGet it doesn't work anymore: apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: ready-if-service-ready name: ready-if-service-ready spec: containers: - image: nginx:1.16.1-alpine name: ready-if-service-ready resources: {} livenessProbe: exec: command: - 'true' readinessProbe: httpGet: # Only changed this method host: service-am-i-ready path: / port: 80 scheme: HTTP dnsPolicy: ClusterFirst restartPolicy: Always status: {} This is the error message I get running kubectl po describe ready-if-service-ready: Warning Unhealty 3m10s (x139 over 23m) kubelet Readiness probe failed: Get "http://service-am-i-ready:80/": dial tcp: lookup service-am-i-ready: no such host Running kubectl get po ready-if-service-ready gives: NAME READY STATUS RESTARTS AGE ready-if-service-ready 0/1 Running 0 27m Why is the first readinessProbe working, but not the second one? It looks like the second readinessProbe makes a request to the same endpoint as the wget -T2 -O- http://service-am-i-ready:80 command.
Your question requires a few reflections that you haven't worked through yet: for example, which Cloud provider are you going to? Why do you want to run your own monitoring (Prometheus/Alertmanager/Grafana) and logging stack (Elasticsearch/fluentd)? Is there a particular reason why you want to remain "untied" from the Provider's products? On most Cloud Service Providers you already have these tools as a service. Anyway, for the "monitoring" stack you can use the Prometheus operator; this provides all the tools you need in one solution. https://prometheus-operator.dev/docs/prologue/introduction/ On the application side you will have to export the metrics you want to monitor and add the corresponding "scrape jobs" to your Prometheus. After that you can have fun creating dashboards of all kinds (you will find a lot of docs online). For the logging stack, you'll need a tool like fluentd to collect logs from your Kubernetes cluster, and a tool that allows you to view and process this information intelligently, like Elasticsearch. These tools are not as closely integrated as the monitoring ones, so it's up to you to decide how to install them. I would certainly create a single namespace for logging and consider using the Helm Charts provided by the vendors.
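With the Prometheus operator and Spring Boot Actuator (plus the micrometer-registry-prometheus dependency, which exposes /actuator/prometheus), the "scrape job" for all your microservices typically becomes a single ServiceMonitor. A sketch with placeholder names and labels:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: spring-apps
  labels:
    release: prometheus            # assumption: must match your Prometheus' serviceMonitorSelector
spec:
  selector:
    matchLabels:
      monitoring: spring-boot      # put this label on each microservice's Service
  endpoints:
  - port: http                     # named port on those Services
    path: /actuator/prometheus
    interval: 30s

Each Spring Boot service then exports JVM, heap and thread metrics automatically via Micrometer, and you build Grafana dashboards on top of the resulting Prometheus data.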
###Instruction: We currently have several microservices running on-premises, and we use shell scripts to monitor their performance, including JVM, heap size, and number of running threads. However, we are planning to move to the cloud using Kubernetes. We want to monitor service level stats like the number of threads open, JVM, heap size, etc., in the cloud. We are planning to use Prometheus, Grafana, and Elasticsearch for this purpose. Can anyone suggest a way to monitor microservice performance in a Kubernetes cluster using Prometheus, Grafana, and Elasticsearch? We believe this can be done using Spring Boot Actuator , but don't have clear idea on how it works on kubernetes for multiple services. ### Response: Your question requires a series of reflections that you don't do.. for example, which Cloud provider are you going to? Why do you want to create a monitoring (Prometheus/Alertmanager/Grafana) and a logging stack (Elasticsearch/fluentd)? Is there a particular reason why you want to remain "untied" from the Provider's products? On most Cloud Service Providers, you already have these tools as a service. Anyway, for the "monitoring" stack you can use the Prometheus operator; this provides all the tools you need in one solution. https://prometheus-operator.dev/docs/prologue/introduction/ On your applications side you will have to export the metrics you want to monitor and add the various "scrape-jobs" to your Prometheus. After that you can have fun creating dashboards of all kinds (you will find a lot of docs online). For the logging stack, you'll need a tool like fluentd to "fetch" and collect logs from your Kubernetes cluster, and a tool that allows you to intelligently view and process this information like Elasticsearch. The tools in question are not as closely related as the monitoring ones, so it's up to you to decide how to install them. Surely I would create a single namespace for Logging and consider using the Helm Charts provided by the Vendors.
We currently have several microservices running on-premises, and we use shell scripts to monitor their performance, including JVM, heap size, and number of running threads. However, we are planning to move to the cloud using Kubernetes. We want to monitor service level stats like the number of threads open, JVM, heap size, etc., in the cloud. We are planning to use Prometheus, Grafana, and Elasticsearch for this purpose. Can anyone suggest a way to monitor microservice performance in a Kubernetes cluster using Prometheus, Grafana, and Elasticsearch? We believe this can be done using Spring Boot Actuator , but don't have clear idea on how it works on kubernetes for multiple services.
On the worker nodes, running sudo kubeadm reset and then rejoining the cluster (with a freshly generated kubeadm join command, since bootstrap tokens expire after 24 hours by default) will solve this issue.
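A sketch of the sequence; run the reset on the worker and the token command on a control-plane node, then paste the printed join command back on the worker:

# on the worker node: wipe the half-finished join state
sudo kubeadm reset -f

# on the control plane: mint a fresh token and print the full join command
kubeadm token create --print-join-command

# back on the worker: run the printed command, e.g.
sudo kubeadm join 172.16.7.101:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>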
###Instruction: When join node : sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2 The outputs: [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Activating the kubelet service [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed. error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition And systemctl status kubelet: node@node:~$ sudo systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago Docs: https://kubernetes.io/docs/home/ Main PID: 26716 (kubelet) Tasks: 16 (limit: 1111) CGroup: /system.slice/kubelet.service └─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml - Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found To Unauthorized I checked at master with kubeadm token list, token is valid. So what's the problem? Thanks a lot. ### Response: on worker nodes doing sudo kubeadm reset and then rejoining will solve this issue
When join node : sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2 The outputs: [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Activating the kubelet service [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed. error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition And systemctl status kubelet: node@node:~$ sudo systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago Docs: https://kubernetes.io/docs/home/ Main PID: 26716 (kubelet) Tasks: 16 (limit: 1111) CGroup: /system.slice/kubelet.service └─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml - Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found To Unauthorized I checked at master with kubeadm token list, token is valid. So what's the problem? Thanks a lot.
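For concreteness, a minimal sketch of the reset-and-rejoin sequence suggested in the answer above; the API server address comes from the question, while the token and hash are placeholders to be taken from the freshly printed join command:

# On the control-plane node: print a fresh join command (tokens expire after 24h by default)
kubeadm token create --print-join-command

# On the worker node: clean up the half-finished join, then join again
sudo kubeadm reset -f
sudo kubeadm join 172.16.7.101:6443 --token <token-from-above> \
    --discovery-token-ca-cert-hash sha256:<hash-from-above>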
If you need both microservices to read and write the same file, then you need shared storage that supports the ReadWriteMany access mode. There are a couple of options in Kubernetes, such as: NFS CephFS Glusterfs You can find more on this topic here. Another solution would be to use an object storage like S3.
###Instruction: I have scenario when my cluster consist of two microservices. In service A i have .CSV(15MB) file, which also is needed in service B. I don't want to place this file two times in each repo. During deployment of service A I want to place this .csv file in some kind of shared volume, that pod containing service B can consume and process it. Any ideas and best practices how to do it? Best regards ### Response: If you need both microservice that can read and write to a file, then you need a sharedStorage that supports the ReadWriteMany accessMode. There are a couple of options in Kubernetes like: NFS CephFS Glusterfs You could find more on this topic here. Another solution would be using object storages like s3.
I have a scenario where my cluster consists of two microservices. In service A I have a .CSV (15MB) file, which is also needed in service B. I don't want to place this file in both repos. During the deployment of service A I want to place this .csv file in some kind of shared volume that the pod containing service B can consume and process. Any ideas and best practices on how to do it? Best regards
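To make the ReadWriteMany suggestion above concrete, here is a minimal sketch of a claim both services could mount; the StorageClass name is a hypothetical example and depends on which NFS/CephFS/Glusterfs provisioner you install:

# ReadWriteMany only works with a backend that supports it (NFS, CephFS, ...);
# "nfs-client" is a hypothetical StorageClass name from an NFS provisioner.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-csv
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
EOF

Service A and service B would then both reference the shared-csv claim under volumes: in their pod specs.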
You need to deploy the frontend service. More details on using Grafana with Managed Prometheus can be found here, but here's the manifest: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 2 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: automountServiceAccountToken: true nodeSelector: kubernetes.io/os: linux kubernetes.io/arch: amd64 containers: - name: frontend image: "gke.gcr.io/prometheus-engine/frontend:v0.5.0-gke.0" args: - "--web.listen-address=:9090" - "--query.project-id=$PROJECT_ID" ports: - name: web containerPort: 9090 readinessProbe: httpGet: path: /-/ready port: web livenessProbe: httpGet: path: /-/healthy port: web --- apiVersion: v1 kind: Service metadata: name: frontend spec: clusterIP: None selector: app: frontend ports: - name: web port: 9090
###Instruction: I'm trying to integrate Grafana with managed prometheus service provided by Kubernetes Engine in GCP. I configured Managed Prometheus service and I'm able see the metrics well, but I'm not able to integrate the managed prometheus service with Grafana on the same kubernetes cluster. Below one is the managed prometheus metrics that are available. I believe without endpoint URL, we cannot create Grafana dashboard. The issue is with creating the endpoint for managed prometheus on GKE. I crawled all the web but couldn't find a way to create endpoint for managed prometheus for GKE. Can you please confirm me that is it actually possible to create an endpoint for Managed Prometheus for GKE cluster? if yes, can you please guide me how to grab that endpoint URL. Thanks in advance. ### Response: You need to deploy the frontend service. More details on using Grafana with Managed Prometheus can be found here, but here's the manifest: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 2 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: automountServiceAccountToken: true nodeSelector: kubernetes.io/os: linux kubernetes.io/arch: amd64 containers: - name: frontend image: "gke.gcr.io/prometheus-engine/frontend:v0.5.0-gke.0" args: - "--web.listen-address=:9090" - "--query.project-id=$PROJECT_ID" ports: - name: web containerPort: 9090 readinessProbe: httpGet: path: /-/ready port: web livenessProbe: httpGet: path: /-/healthy port: web --- apiVersion: v1 kind: Service metadata: name: frontend spec: clusterIP: None selector: app: frontend ports: - name: web port: 9090
I'm trying to integrate Grafana with managed prometheus service provided by Kubernetes Engine in GCP. I configured Managed Prometheus service and I'm able see the metrics well, but I'm not able to integrate the managed prometheus service with Grafana on the same kubernetes cluster. Below one is the managed prometheus metrics that are available. I believe without endpoint URL, we cannot create Grafana dashboard. The issue is with creating the endpoint for managed prometheus on GKE. I crawled all the web but couldn't find a way to create endpoint for managed prometheus for GKE. Can you please confirm me that is it actually possible to create an endpoint for Managed Prometheus for GKE cluster? if yes, can you please guide me how to grab that endpoint URL. Thanks in advance.
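Once the frontend above is running, Grafana inside the cluster can use its service DNS name as a regular Prometheus data source, and you can sanity-check it from a workstation with a port-forward. A quick sketch, assuming everything was deployed to the default namespace:

# Quick check from a workstation
kubectl -n default port-forward svc/frontend 9090:9090
# (in a second terminal)
curl 'http://localhost:9090/api/v1/query?query=up'

# Inside the cluster, point Grafana's Prometheus data source at:
#   http://frontend.default.svc:9090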
If you don't want to set it in the Docker image, you can use an initContainer to download the file when the pod starts and place it on a hostPath that the service can read. When you say service A has the file, it is not clear whether it is in the repo or part of the Docker image build. If it is not part of the repo and is stored in some central place like a bucket, you can download it when service B starts with an initContainer. Example of downloading a file and sharing it via hostPath (the download URL is a placeholder, and a curl image is used because the plain ubuntu image does not ship curl):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: save-data
      mountPath: /data-dir
  initContainers:
  - name: install
    image: curlimages/curl
    command: ['sh', '-c', 'curl -o /data-dir/file.csv <download-link>']
    volumeMounts:
    - name: save-data
      mountPath: "/data-dir"
  dnsPolicy: Default
  volumes:
  - name: save-data
    hostPath:
      path: /data-dir
      type: Directory

The other option is to use a shared file system that multiple pods can attach to at the same time, using the ReadWriteMany or ReadOnlyMany access mode. MinIO is a very good option for that scenario.
###Instruction: I have scenario when my cluster consist of two microservices. In service A i have .CSV(15MB) file, which also is needed in service B. I don't want to place this file two times in each repo. During deployment of service A I want to place this .csv file in some kind of shared volume, that pod containing service B can consume and process it. Any ideas and best practices how to do it? Best regards ### Response: If you dont want to set it in the docker image, you can use the initcontainer to download when POD starts. You can set the file into the hostPath when service A starting. When you say service A has file not sure if it's in the repo or it's part of Docker image build. If not part of the repo and it's stored at some central place like Bucket you can download it when service B starts with initcontainer. Example downloading file and set & share to hostpath: apiVersion: v1 kind: Pod metadata: name: init-demo spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: save-data mountPath: /data-dir initContainers: - name: install image: ubuntu command: ['curl', 'download-link-path >', '/tmp/file.csv'] volumeMounts: - name: save-data mountPath: "/data-dir" dnsPolicy: Default volumes: - name: save-data hostPath: path: /data-dir type: Directory other option is to set the shared file system which multiple PODs can attach and use same time. ReadWriteMany or ReadOnlyMany option. Minio is very good option to use for above scenario.
I have a scenario where my cluster consists of two microservices. In service A I have a .CSV (15MB) file, which is also needed in service B. I don't want to place this file in both repos. During the deployment of service A I want to place this .csv file in some kind of shared volume that the pod containing service B can consume and process. Any ideas and best practices on how to do it? Best regards
It seems to me that what you describe matches the YAML files you shared: nothing ever changes the name of the snapshot, so a new one will never be created. If you want to copy one PVC to another in a "raw" way with scripts, consider a Job that mounts 2 volumes (source and destination) and executes a simple cp -Rp /volume-source/* /volume-destination/ Otherwise, if you want to get the job done right, consider using a tool like Velero. https://velero.io/
###Instruction: I wrote a CronJob that periodically takes volume snapshot of my PVC. Below is the CronJob file: kind: CronJob metadata: name: mycronjob spec: schedule: "*/2 * * * *" # Run the job every 2 minutes jobTemplate: spec: template: spec: containers: - name: webserver-container image: kubectl:latest command: - /bin/sh - -c - /app/script.sh volumeMounts: - name: script-volume mountPath: /app restartPolicy: OnFailure volumes: - name: script-volume configMap: name: script-configmap The Volume Snapshot file looks like this: kind: VolumeSnapshot metadata: name: snap-shot-test spec: volumeSnapshotClassName: csi-snapclass source: persistentVolumeClaimName: my-test-apps And here is the script file: #!/bin/bash kubectl apply -f volumesnapshot.yml First time the cronjob is successfully executed, but after that it says volumesnapshot.snapshot.storage.k8s.io/snap-shot-test unchanged How can I periodically take volumesnapshot of a PVC with having latest 2 copies of the snapshot? ### Response: It seems to me that what you describe mirrors the YAML files you shared; there is nothing that changes the name of the backup and therefore it will never create a new one. If you want to make a backup of one PVC to another in a "raw" way with scripts, maybe think of a job that mounts 2 Volumes (source and destination) and executes a simple cp -Rp /volume-source/* /volume-destination/ Otherwise, if you want to get the job done right, consider using a tool like Velero. https://velero.io/
I wrote a CronJob that periodically takes volume snapshot of my PVC. Below is the CronJob file: kind: CronJob metadata: name: mycronjob spec: schedule: "*/2 * * * *" # Run the job every 2 minutes jobTemplate: spec: template: spec: containers: - name: webserver-container image: kubectl:latest command: - /bin/sh - -c - /app/script.sh volumeMounts: - name: script-volume mountPath: /app restartPolicy: OnFailure volumes: - name: script-volume configMap: name: script-configmap The Volume Snapshot file looks like this: kind: VolumeSnapshot metadata: name: snap-shot-test spec: volumeSnapshotClassName: csi-snapclass source: persistentVolumeClaimName: my-test-apps And here is the script file: #!/bin/bash kubectl apply -f volumesnapshot.yml First time the cronjob is successfully executed, but after that it says volumesnapshot.snapshot.storage.k8s.io/snap-shot-test unchanged How can I periodically take volumesnapshot of a PVC with having latest 2 copies of the snapshot?
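If you keep the script approach instead of Velero, the key change is to give each snapshot a unique name and prune the old ones. A rough sketch of what script.sh could look like; it assumes the CronJob's service account is allowed to create, list and delete VolumeSnapshots, and keeps only the two newest:

#!/bin/sh
# Create a snapshot with a unique, timestamped name on every run
NAME="snap-shot-test-$(date +%Y%m%d%H%M%S)"
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ${NAME}
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-test-apps
EOF

# Prune everything except the two newest snapshots of this PVC
# (head -n -2 is GNU coreutils; adjust if your image ships busybox)
kubectl get volumesnapshot -o name --sort-by=.metadata.creationTimestamp \
  | grep '/snap-shot-test-' \
  | head -n -2 \
  | xargs -r kubectl delete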
This is due to the Token System powering CDKTF. Instead of real run-time values accessing ssdEksNodeGroupPublicLargeSubnet.taint gives you a token that is a reference to the taints. This reference is later used by Terraform for e.g. ordering the deployments of the individual resources. If your input is static you can use the input directly by accessing ssdEksNodeGroupPublicLargeSubnet.taintInput. If you want to loop over the taints on the terraform side instead of during synthetisation you can use dynamic blocks, e.g. const iterator = TerraformIterator.fromList(ssdEksNodeGroupPublicLargeSubnet.taint); const tolerations = iterator.dynamic({ key: cdktf.propertyAccess(iterator.value, "key"), value: cdktf.propertyAccess(iterator.value, "value"), effect: cdktf.propertyAccess(iterator.value, "effect"), operator: "Equal" })
###Instruction: I have an EksNodeGroup with the following taints: const ssdEksNodeGroupPublicLargeSubnet = new aws.eks.EksNodeGroup( this, "ssdEksNodeGroupPublicLargeSubnet", { // ... other stuff... taint: [ { key: "app", value: "strick", effect: "NO_SCHEDULE", }, ], } ); Elsewhere in my code, I'm trying to iterate over my nodeGroup taints to dynamically create kubernetes pod tolerations. const nodeGrouop = ssdEksNodeGroupPublicLargeSubnet const tolerations: k8s.DeploymentSpecTemplateSpecToleration[] = []; for (let i = 0; i < Fn.lengthOf(nodeGroup.taint); i++) { const taint = nodeGroup.taint.get(i); tolerations.push({ key: taint.key, value: taint.value, effect: taint.effect, operator: "Equal" }); } console.log("##################", tolerations) However, when I try to run this, I see the log statement prints an empty array and when my pod/deployment is created it is created with no tolerations. Here's full declaration of my kubernetes deployment const pausePodDeployment = new k8s.Deployment( this, pausePodDeploymentName, { metadata: { name: pausePodDeploymentName, namespace: namespace.metadata.name, }, spec: { replicas: "1", selector: { matchLabels: { app: pausePodDeploymentName, }, }, template: { metadata: { labels: { app: pausePodDeploymentName, }, }, spec: { priorityClassName: priorityClass.metadata.name, terminationGracePeriodSeconds: 0, container: [ { name: "reserve-resources", image: "k8s.gcr.io/pause", resources: { requests: { cpu: "1", }, }, }, ], toleration: tolerations, nodeSelector: { ...nodeGroupLabels, }, }, }, }, } ); and here's the full output from CDK (note that there aren't any tolerations): # kubernetes_deployment.overprovisioner_strick-overprovisioner-pause-pods_B5F26972 (overprovisioner/strick-overprovisioner-pause-pods) will be created + resource "kubernetes_deployment" "overprovisioner_strick-overprovisioner-pause-pods_B5F26972" { + id = (known after apply) + wait_for_rollout = true + metadata { + generation = (known after apply) + name = "strick-overprovisioner-pause-pods" + namespace = "overprovisioner" + resource_version = (known after apply) + uid = (known after apply) } + spec { + min_ready_seconds = 0 + paused = false + progress_deadline_seconds = 600 + replicas = "1" + revision_history_limit = 10 + selector { + match_labels = { + "app" = "strick-overprovisioner-pause-pods" } } + strategy { + type = (known after apply) + rolling_update { + max_surge = (known after apply) + max_unavailable = (known after apply) } } + template { + metadata { + generation = (known after apply) + labels = { + "app" = "strick-overprovisioner-pause-pods" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + automount_service_account_token = true + dns_policy = "ClusterFirst" + enable_service_links = true + host_ipc = false + host_network = false + host_pid = false + hostname = (known after apply) + node_name = (known after apply) + node_selector = { + "diskType" = "ssd" } + priority_class_name = "overprovisioner" + restart_policy = "Always" + service_account_name = (known after apply) + share_process_namespace = false + termination_grace_period_seconds = 0 + container { + image = "k8s.gcr.io/pause" + image_pull_policy = (known after apply) + name = "reserve-resources" + stdin = false + stdin_once = false + termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + resources { + limits = (known after apply) + requests = { + "cpu" = "1" } } } + image_pull_secrets { + name 
= (known after apply) } + readiness_gate { + condition_type = (known after apply) } + volume { + name = (known after apply) + aws_elastic_block_store { + fs_type = (known after apply) + partition = (known after apply) + read_only = (known after apply) + volume_id = (known after apply) } + azure_disk { + caching_mode = (known after apply) + data_disk_uri = (known after apply) + disk_name = (known after apply) + fs_type = (known after apply) + kind = (known after apply) + read_only = (known after apply) } + azure_file { + read_only = (known after apply) + secret_name = (known after apply) + secret_namespace = (known after apply) + share_name = (known after apply) } + ceph_fs { + monitors = (known after apply) + path = (known after apply) + read_only = (known after apply) + secret_file = (known after apply) + user = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + cinder { + fs_type = (known after apply) + read_only = (known after apply) + volume_id = (known after apply) } + config_map { + default_mode = (known after apply) + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + csi { + driver = (known after apply) + fs_type = (known after apply) + read_only = (known after apply) + volume_attributes = (known after apply) + node_publish_secret_ref { + name = (known after apply) } } + downward_api { + default_mode = (known after apply) + items { + mode = (known after apply) + path = (known after apply) + field_ref { + api_version = (known after apply) + field_path = (known after apply) } + resource_field_ref { + container_name = (known after apply) + divisor = (known after apply) + resource = (known after apply) } } } + empty_dir { + medium = (known after apply) + size_limit = (known after apply) } + fc { + fs_type = (known after apply) + lun = (known after apply) + read_only = (known after apply) + target_ww_ns = (known after apply) } + flex_volume { + driver = (known after apply) + fs_type = (known after apply) + options = (known after apply) + read_only = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + flocker { + dataset_name = (known after apply) + dataset_uuid = (known after apply) } + gce_persistent_disk { + fs_type = (known after apply) + partition = (known after apply) + pd_name = (known after apply) + read_only = (known after apply) } + git_repo { + directory = (known after apply) + repository = (known after apply) + revision = (known after apply) } + glusterfs { + endpoints_name = (known after apply) + path = (known after apply) + read_only = (known after apply) } + host_path { + path = (known after apply) + type = (known after apply) } + iscsi { + fs_type = (known after apply) + iqn = (known after apply) + iscsi_interface = (known after apply) + lun = (known after apply) + read_only = (known after apply) + target_portal = (known after apply) } + local { + path = (known after apply) } + nfs { + path = (known after apply) + read_only = (known after apply) + server = (known after apply) } + persistent_volume_claim { + claim_name = (known after apply) + read_only = (known after apply) } + photon_persistent_disk { + fs_type = (known after apply) + pd_id = (known after apply) } + projected { + default_mode = (known after apply) + sources { + config_map { + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after 
apply) + path = (known after apply) } } + downward_api { + items { + mode = (known after apply) + path = (known after apply) + field_ref { + api_version = (known after apply) + field_path = (known after apply) } + resource_field_ref { + container_name = (known after apply) + divisor = (known after apply) + resource = (known after apply) } } } + secret { + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + service_account_token { + audience = (known after apply) + expiration_seconds = (known after apply) + path = (known after apply) } } } + quobyte { + group = (known after apply) + read_only = (known after apply) + registry = (known after apply) + user = (known after apply) + volume = (known after apply) } + rbd { + ceph_monitors = (known after apply) + fs_type = (known after apply) + keyring = (known after apply) + rados_user = (known after apply) + rbd_image = (known after apply) + rbd_pool = (known after apply) + read_only = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + secret { + default_mode = (known after apply) + optional = (known after apply) + secret_name = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + vsphere_volume { + fs_type = (known after apply) + volume_path = (known after apply) } } } } } How can I dynamically iterate over the taints of a nodegroup to construct the correct pod tolerations for that node group? ### Response: This is due to the Token System powering CDKTF. Instead of real run-time values accessing ssdEksNodeGroupPublicLargeSubnet.taint gives you a token that is a reference to the taints. This reference is later used by Terraform for e.g. ordering the deployments of the individual resources. If your input is static you can use the input directly by accessing ssdEksNodeGroupPublicLargeSubnet.taintInput. If you want to loop over the taints on the terraform side instead of during synthetisation you can use dynamic blocks, e.g. const iterator = TerraformIterator.fromList(ssdEksNodeGroupPublicLargeSubnet.taint); const tolerations = iterator.dynamic({ key: cdktf.propertyAccess(iterator.value, "key"), value: cdktf.propertyAccess(iterator.value, "value"), effect: cdktf.propertyAccess(iterator.value, "effect"), operator: "Equal" })
I have an EksNodeGroup with the following taints: const ssdEksNodeGroupPublicLargeSubnet = new aws.eks.EksNodeGroup( this, "ssdEksNodeGroupPublicLargeSubnet", { // ... other stuff... taint: [ { key: "app", value: "strick", effect: "NO_SCHEDULE", }, ], } ); Elsewhere in my code, I'm trying to iterate over my nodeGroup taints to dynamically create kubernetes pod tolerations. const nodeGrouop = ssdEksNodeGroupPublicLargeSubnet const tolerations: k8s.DeploymentSpecTemplateSpecToleration[] = []; for (let i = 0; i < Fn.lengthOf(nodeGroup.taint); i++) { const taint = nodeGroup.taint.get(i); tolerations.push({ key: taint.key, value: taint.value, effect: taint.effect, operator: "Equal" }); } console.log("##################", tolerations) However, when I try to run this, I see the log statement prints an empty array and when my pod/deployment is created it is created with no tolerations. Here's full declaration of my kubernetes deployment const pausePodDeployment = new k8s.Deployment( this, pausePodDeploymentName, { metadata: { name: pausePodDeploymentName, namespace: namespace.metadata.name, }, spec: { replicas: "1", selector: { matchLabels: { app: pausePodDeploymentName, }, }, template: { metadata: { labels: { app: pausePodDeploymentName, }, }, spec: { priorityClassName: priorityClass.metadata.name, terminationGracePeriodSeconds: 0, container: [ { name: "reserve-resources", image: "k8s.gcr.io/pause", resources: { requests: { cpu: "1", }, }, }, ], toleration: tolerations, nodeSelector: { ...nodeGroupLabels, }, }, }, }, } ); and here's the full output from CDK (note that there aren't any tolerations): # kubernetes_deployment.overprovisioner_strick-overprovisioner-pause-pods_B5F26972 (overprovisioner/strick-overprovisioner-pause-pods) will be created + resource "kubernetes_deployment" "overprovisioner_strick-overprovisioner-pause-pods_B5F26972" { + id = (known after apply) + wait_for_rollout = true + metadata { + generation = (known after apply) + name = "strick-overprovisioner-pause-pods" + namespace = "overprovisioner" + resource_version = (known after apply) + uid = (known after apply) } + spec { + min_ready_seconds = 0 + paused = false + progress_deadline_seconds = 600 + replicas = "1" + revision_history_limit = 10 + selector { + match_labels = { + "app" = "strick-overprovisioner-pause-pods" } } + strategy { + type = (known after apply) + rolling_update { + max_surge = (known after apply) + max_unavailable = (known after apply) } } + template { + metadata { + generation = (known after apply) + labels = { + "app" = "strick-overprovisioner-pause-pods" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + automount_service_account_token = true + dns_policy = "ClusterFirst" + enable_service_links = true + host_ipc = false + host_network = false + host_pid = false + hostname = (known after apply) + node_name = (known after apply) + node_selector = { + "diskType" = "ssd" } + priority_class_name = "overprovisioner" + restart_policy = "Always" + service_account_name = (known after apply) + share_process_namespace = false + termination_grace_period_seconds = 0 + container { + image = "k8s.gcr.io/pause" + image_pull_policy = (known after apply) + name = "reserve-resources" + stdin = false + stdin_once = false + termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + resources { + limits = (known after apply) + requests = { + "cpu" = "1" } } } + image_pull_secrets { + name = (known after 
apply) } + readiness_gate { + condition_type = (known after apply) } + volume { + name = (known after apply) + aws_elastic_block_store { + fs_type = (known after apply) + partition = (known after apply) + read_only = (known after apply) + volume_id = (known after apply) } + azure_disk { + caching_mode = (known after apply) + data_disk_uri = (known after apply) + disk_name = (known after apply) + fs_type = (known after apply) + kind = (known after apply) + read_only = (known after apply) } + azure_file { + read_only = (known after apply) + secret_name = (known after apply) + secret_namespace = (known after apply) + share_name = (known after apply) } + ceph_fs { + monitors = (known after apply) + path = (known after apply) + read_only = (known after apply) + secret_file = (known after apply) + user = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + cinder { + fs_type = (known after apply) + read_only = (known after apply) + volume_id = (known after apply) } + config_map { + default_mode = (known after apply) + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + csi { + driver = (known after apply) + fs_type = (known after apply) + read_only = (known after apply) + volume_attributes = (known after apply) + node_publish_secret_ref { + name = (known after apply) } } + downward_api { + default_mode = (known after apply) + items { + mode = (known after apply) + path = (known after apply) + field_ref { + api_version = (known after apply) + field_path = (known after apply) } + resource_field_ref { + container_name = (known after apply) + divisor = (known after apply) + resource = (known after apply) } } } + empty_dir { + medium = (known after apply) + size_limit = (known after apply) } + fc { + fs_type = (known after apply) + lun = (known after apply) + read_only = (known after apply) + target_ww_ns = (known after apply) } + flex_volume { + driver = (known after apply) + fs_type = (known after apply) + options = (known after apply) + read_only = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + flocker { + dataset_name = (known after apply) + dataset_uuid = (known after apply) } + gce_persistent_disk { + fs_type = (known after apply) + partition = (known after apply) + pd_name = (known after apply) + read_only = (known after apply) } + git_repo { + directory = (known after apply) + repository = (known after apply) + revision = (known after apply) } + glusterfs { + endpoints_name = (known after apply) + path = (known after apply) + read_only = (known after apply) } + host_path { + path = (known after apply) + type = (known after apply) } + iscsi { + fs_type = (known after apply) + iqn = (known after apply) + iscsi_interface = (known after apply) + lun = (known after apply) + read_only = (known after apply) + target_portal = (known after apply) } + local { + path = (known after apply) } + nfs { + path = (known after apply) + read_only = (known after apply) + server = (known after apply) } + persistent_volume_claim { + claim_name = (known after apply) + read_only = (known after apply) } + photon_persistent_disk { + fs_type = (known after apply) + pd_id = (known after apply) } + projected { + default_mode = (known after apply) + sources { + config_map { + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path 
= (known after apply) } } + downward_api { + items { + mode = (known after apply) + path = (known after apply) + field_ref { + api_version = (known after apply) + field_path = (known after apply) } + resource_field_ref { + container_name = (known after apply) + divisor = (known after apply) + resource = (known after apply) } } } + secret { + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + service_account_token { + audience = (known after apply) + expiration_seconds = (known after apply) + path = (known after apply) } } } + quobyte { + group = (known after apply) + read_only = (known after apply) + registry = (known after apply) + user = (known after apply) + volume = (known after apply) } + rbd { + ceph_monitors = (known after apply) + fs_type = (known after apply) + keyring = (known after apply) + rados_user = (known after apply) + rbd_image = (known after apply) + rbd_pool = (known after apply) + read_only = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + secret { + default_mode = (known after apply) + optional = (known after apply) + secret_name = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + vsphere_volume { + fs_type = (known after apply) + volume_path = (known after apply) } } } } } How can I dynamically iterate over the taints of a nodegroup to construct the correct pod tolerations for that node group?
It is not guaranteed that you can actually reach the IP of your node from the browser (you should provide some more information about the environment if necessary). But you can port-forward the service and reach it easily. Take a look here: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ Some other doc: How kubectl port-forward works?
###Instruction: I set up a local kubernetes cluster with minikube. On my cluster I have only one deployment runnning and one service attached to it. I used a NodePort on port 30100 to expose the service, so I can access it from my browser or via curl. here is the python-server.yml file I use to setup the cluster: apiVersion: apps/v1 kind: Deployment metadata: name: python-server-deployment namespace: kubernetes-hello-world labels: app: python-server spec: replicas: 1 selector: matchLabels: app: python-server template: metadata: labels: app: python-server spec: containers: - name: python-hello-world image: hello-world-python:latest imagePullPolicy: Never ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: python-server-internal-service namespace: kubernetes-hello-world spec: type: NodePort selector: app: python-server ports: - protocol: TCP port: 80 targetPort: 5000 nodePort: 30100 my python-hello-world image is based on this python file: from http.server import BaseHTTPRequestHandler, HTTPServer class MyServer(BaseHTTPRequestHandler): def do_GET(self): html = """ <!DOCTYPE html> <html> <head> <title>Hello World</title> <meta charset="utf-8"> </head> <body> <h1>Hello World</h1> </body> </html> """ self.send_response(200) self.send_header('Access-Control-Allow-Origin', '*') self.send_header('Content-type', 'text/html') self.end_headers() self.wfile.write(bytes(html, "utf-8")) def run(): addr = ('', 5000) httpd = HTTPServer(addr, MyServer) httpd.serve_forever() if __name__ == '__main__': run() When I run the cluster I can as expected receive the hello world html with curl {node_ip}:30100. But when I try to access my service via my browser with the same ip:port I get a time out. I read that that can be caused by missing headers but I think I have all necessary ones covered in my python file, so what else could cause this? ### Response: It is not said that you reach the IP of your node (you should provide some more information about the environment if necessary). But you could port forward the service and reach it easily. Take a look here: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ Some other doc: How kubectl port-forward works?
I set up a local kubernetes cluster with minikube. On my cluster I have only one deployment runnning and one service attached to it. I used a NodePort on port 30100 to expose the service, so I can access it from my browser or via curl. here is the python-server.yml file I use to setup the cluster: apiVersion: apps/v1 kind: Deployment metadata: name: python-server-deployment namespace: kubernetes-hello-world labels: app: python-server spec: replicas: 1 selector: matchLabels: app: python-server template: metadata: labels: app: python-server spec: containers: - name: python-hello-world image: hello-world-python:latest imagePullPolicy: Never ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: python-server-internal-service namespace: kubernetes-hello-world spec: type: NodePort selector: app: python-server ports: - protocol: TCP port: 80 targetPort: 5000 nodePort: 30100 my python-hello-world image is based on this python file: from http.server import BaseHTTPRequestHandler, HTTPServer class MyServer(BaseHTTPRequestHandler): def do_GET(self): html = """ <!DOCTYPE html> <html> <head> <title>Hello World</title> <meta charset="utf-8"> </head> <body> <h1>Hello World</h1> </body> </html> """ self.send_response(200) self.send_header('Access-Control-Allow-Origin', '*') self.send_header('Content-type', 'text/html') self.end_headers() self.wfile.write(bytes(html, "utf-8")) def run(): addr = ('', 5000) httpd = HTTPServer(addr, MyServer) httpd.serve_forever() if __name__ == '__main__': run() When I run the cluster I can as expected receive the hello world html with curl {node_ip}:30100. But when I try to access my service via my browser with the same ip:port I get a time out. I read that that can be caused by missing headers but I think I have all necessary ones covered in my python file, so what else could cause this?
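To make the port-forward suggestion above concrete, two common ways to reach the service from the host when the node IP is not routable (typical with minikube's docker driver), using the names from the manifests in the question:

# Option 1: port-forward the Service to localhost, then browse http://localhost:8080
kubectl -n kubernetes-hello-world port-forward service/python-server-internal-service 8080:80

# Option 2: let minikube open a tunnel/URL for the NodePort service
minikube service python-server-internal-service -n kubernetes-hello-world --url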
Finally I found that nginx was buffering the SSE messages; configuring nginx like this fixed the issue:

location ^~ /ai/stream {
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8000;
    proxy_cache_bypass $http_pragma;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;
    proxy_cache off;
    gzip off;
    chunked_transfer_encoding off;
}
###Instruction: I have write a server side sse test api like this: public SseEmitter testStream(String question) { SseEmitter emitter = new SseEmitter(); // Start asynchronous processing new Thread(() -> { try { for (int i = 0; i < 10; i++) { // Generate some event data String eventData = "Event data " + i; // Create Server-Sent Event object ServerSentEvent event = ServerSentEvent.builder() .event("message") .data(eventData) .build(); // Serialize event to string and send to client String serializedEvent = JSON.toJSONString(event); emitter.send(serializedEvent); // Wait for one second before sending the next event Thread.sleep(1000); } // Complete the SSE stream emitter.complete(); } catch (Exception e) { emitter.completeWithError(e); } }).start(); return emitter; } then expose the api as a rest api: @GetMapping(path="/test",produces = MediaType.TEXT_EVENT_STREAM_VALUE) SseEmitter testStream(@RequestParam(required = true) String question); when I invoke this api in local machine, the sse message returned one by one every seconds, that's what I want. But after I deploy this api to remote cloud server, the api wait a long time and return the result onece. Am I missing something? what should I do to make the remote server return the message one by one not onece and waiting for a long time. This is my test command: # localhost test curl -X GET -H 'Content-Type: application/json' -N http://localhost:11014/ai/stream/chat/test\?question\=1 # remote server test curl -X GET -H 'Content-Type: application/json' -H 'x-request-id:1' -H 'x-access-token: eyJhbGciOiJIUzUx.balabala.sDM53g' -N https://ai.example.top/ai/stream/chat/test\?question\=1 the route to remote service like this: client user>dns>nginx>kubernetes traefik>spring gateway in kubernetes>kubernetes service>kubernetes pod I have tried to add cache control for request header in curl command like this -H 'Cache-Control:no-cache' and response header: response.setContentType("text/event-stream"); response.setCharacterEncoding("UTF-8"); response.setHeader("Cache-Control", "no-cache"); response.setHeader("Connection", "keep-alive"); seems did not work. 
I also tried to let nginx no cache the content like this: server { listen 443 ssl; server_name ai.example.top; ssl_certificate /etc/letsencrypt/live/poemhub/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/poemhub/privkey.pem; if ($allowed = 0) { return 403; } location / { client_max_body_size 1m; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 3600; proxy_pass http://127.0.0.1:8000; } location ^~ /post/websocket { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location ^~ /ai/stream { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8000; proxy_cache_bypass $http_pragma; } } ### Response: Finally I found the nginx cached the sse message, configure the nginx like this to fixed this issue: location ^~ /ai/stream { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8000; proxy_cache_bypass $http_pragma; proxy_http_version 1.1; proxy_set_header Connection ''; proxy_buffering off; proxy_cache off; gzip off; chunked_transfer_encoding off; }
I have write a server side sse test api like this: public SseEmitter testStream(String question) { SseEmitter emitter = new SseEmitter(); // Start asynchronous processing new Thread(() -> { try { for (int i = 0; i < 10; i++) { // Generate some event data String eventData = "Event data " + i; // Create Server-Sent Event object ServerSentEvent event = ServerSentEvent.builder() .event("message") .data(eventData) .build(); // Serialize event to string and send to client String serializedEvent = JSON.toJSONString(event); emitter.send(serializedEvent); // Wait for one second before sending the next event Thread.sleep(1000); } // Complete the SSE stream emitter.complete(); } catch (Exception e) { emitter.completeWithError(e); } }).start(); return emitter; } then expose the api as a rest api: @GetMapping(path="/test",produces = MediaType.TEXT_EVENT_STREAM_VALUE) SseEmitter testStream(@RequestParam(required = true) String question); when I invoke this api in local machine, the sse message returned one by one every seconds, that's what I want. But after I deploy this api to remote cloud server, the api wait a long time and return the result onece. Am I missing something? what should I do to make the remote server return the message one by one not onece and waiting for a long time. This is my test command: # localhost test curl -X GET -H 'Content-Type: application/json' -N http://localhost:11014/ai/stream/chat/test\?question\=1 # remote server test curl -X GET -H 'Content-Type: application/json' -H 'x-request-id:1' -H 'x-access-token: eyJhbGciOiJIUzUx.balabala.sDM53g' -N https://ai.example.top/ai/stream/chat/test\?question\=1 the route to remote service like this: client user>dns>nginx>kubernetes traefik>spring gateway in kubernetes>kubernetes service>kubernetes pod I have tried to add cache control for request header in curl command like this -H 'Cache-Control:no-cache' and response header: response.setContentType("text/event-stream"); response.setCharacterEncoding("UTF-8"); response.setHeader("Cache-Control", "no-cache"); response.setHeader("Connection", "keep-alive"); seems did not work. I also tried to let nginx no cache the content like this: server { listen 443 ssl; server_name ai.example.top; ssl_certificate /etc/letsencrypt/live/poemhub/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/poemhub/privkey.pem; if ($allowed = 0) { return 403; } location / { client_max_body_size 1m; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_read_timeout 3600; proxy_pass http://127.0.0.1:8000; } location ^~ /post/websocket { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location ^~ /ai/stream { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8000; proxy_cache_bypass $http_pragma; } }
It's definitely a permissions issue. Bind the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole; otherwise it doesn't have the privileges to collect data from the cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: NAMESPACE-WHERE-DASHBOARD-IS
###Instruction: When running the Kubernetes Dashboard in a Windows Docker Desktop when I click on "pods" either nothing is shown There is nothing to display here No resources found. or I get this error: deployments.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "deployments" in API group "apps" in the namespace "default" Was there anything running? Yes. How can I get an overview of my pods? What's the config? In the Windows Docker Desktop environment, I stared with a fresh Kubernetes. I removed any old user "./kube/config" file. To get the Kubernetes dashboard runnig, I did the procedure: Get the dashboard: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml Because generating tokens via a standard procedure (as found on many places) did not work, I took the alternative short-cut: kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]' After typing "kubectl proxy" the result is: Starting to serve on 127.0.0.1:8001 In a browser I started the dashboard: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default After clicking the "Skip" button, the dashboard opened. Clicking on "Pods" (and nearly all other items) gave this error: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "pods" in API group "" in the namespace "kubernetes-dashboard" (could be "default" as well) It did not matter whether I chose the default namespace. ALTERNATIVE: As an alternative I tried to bind the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole. Preparations: create this file: apiVersion: v1 kind: ServiceAccount metadata: name: admin-user namespace: kubernetes-dashboard $ kubectl apply -f s.yml Create this file: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: admin-user namespace: kubernetes-dashboard $ kubectl apply -f r.yml Then run this command: $ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}" This (or similar alternative) command gives a lot of errors. Breaking this command down in parts: kubectl -n kubernetes-dashboard get sa/admin-user ... gives: This command: kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}" gives no result. ### Response: It's definitely a Permissions issue. Binds the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole. Otherwise it doesn't have the privileges to be able to collect data from the cluster. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: dashboard-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: NAMESPACE-WHERE-DASHBOARD-IS
When running the Kubernetes Dashboard in a Windows Docker Desktop when I click on "pods" either nothing is shown There is nothing to display here No resources found. or I get this error: deployments.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "deployments" in API group "apps" in the namespace "default" Was there anything running? Yes. How can I get an overview of my pods? What's the config? In the Windows Docker Desktop environment, I stared with a fresh Kubernetes. I removed any old user "./kube/config" file. To get the Kubernetes dashboard runnig, I did the procedure: Get the dashboard: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml Because generating tokens via a standard procedure (as found on many places) did not work, I took the alternative short-cut: kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]' After typing "kubectl proxy" the result is: Starting to serve on 127.0.0.1:8001 In a browser I started the dashboard: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default After clicking the "Skip" button, the dashboard opened. Clicking on "Pods" (and nearly all other items) gave this error: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "pods" in API group "" in the namespace "kubernetes-dashboard" (could be "default" as well) It did not matter whether I chose the default namespace. ALTERNATIVE: As an alternative I tried to bind the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole. Preparations: create this file: apiVersion: v1 kind: ServiceAccount metadata: name: admin-user namespace: kubernetes-dashboard $ kubectl apply -f s.yml Create this file: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: admin-user namespace: kubernetes-dashboard $ kubectl apply -f r.yml Then run this command: $ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}" This (or similar alternative) command gives a lot of errors. Breaking this command down in parts: kubectl -n kubernetes-dashboard get sa/admin-user ... gives: This command: kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}" gives no result.
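A short sketch of how the binding above is typically applied and how to obtain a login token afterwards; the filename is arbitrary:

# Save the ClusterRoleBinding above to a file and apply it
kubectl apply -f dashboard-admin.yaml

# On Kubernetes >= 1.24, ServiceAccount token Secrets are no longer auto-created,
# which is why the jsonpath lookup in the question came back empty.
# Request a short-lived token directly instead:
kubectl -n kubernetes-dashboard create token kubernetes-dashboard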
You could use hostPath. More on this here. It is also recommended not to use it: HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
###Instruction: I have a 3 node test cluster: 1 head and 2 worker nodes (all raspberry pies). I would like to expose a specific folder on the head for read/write on each pod on the cluster (irregardless of which node they are running on). E.g. my head node has this folder: /media/usb/test. I want to mount this to my pods' filesystem to the mnt/test directory. Is it possible? My pods are deployed with this config: apiVersion: apps/v1 kind: Deployment metadata: name: test-deploy spec: replicas: 2 selector: matchLabels: app: test template: metadata: labels: app: test spec: terminationGracePeriodSeconds: 5 containers: - name: test image: rpi.home:5000/test ### Response: You could use hostPath. More in here. It is also recommended not to use it. HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
I have a 3-node test cluster: 1 head and 2 worker nodes (all Raspberry Pis). I would like to expose a specific folder on the head for read/write to each pod in the cluster (regardless of which node they are running on). E.g. my head node has this folder: /media/usb/test. I want to mount this into my pods' filesystem at the /mnt/test directory. Is it possible? My pods are deployed with this config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      terminationGracePeriodSeconds: 5
      containers:
      - name: test
        image: rpi.home:5000/test
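To illustrate the hostPath approach on the deployment from the question: the directory only exists on the head node, so the pods have to be pinned there (the node name below is a hypothetical example), or you would switch to something like an NFS export to serve all nodes:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeName: head-node              # hypothetical: the folder only exists on the head
      terminationGracePeriodSeconds: 5
      containers:
        - name: test
          image: rpi.home:5000/test
          volumeMounts:
            - name: usb-test
              mountPath: /mnt/test
      volumes:
        - name: usb-test
          hostPath:
            path: /media/usb/test
            type: Directory
EOF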
You will need an Ingress controller installed in your cluster to handle incoming traffic and route it to the appropriate service. Examples of Ingress controllers include NGINX Ingress, Traefik, and Istio. Another approach we use in Azure and Google Cloud is to expose the services via an Application Gateway (Azure) or a global HTTPS load balancer (GCP), so that all services sit behind the load balancer's single anycast IP. In the GCP case the workflow is: create a Kubernetes service > create a backend service that references each Kubernetes service > create a URL map that maps incoming requests to the appropriate backend service based on the requested URL or hostname > create a target HTTP proxy that references the URL map > create a Google Cloud HTTPS load balancer and configure it to use the target HTTP proxy. Each frontend then uses the SAME anycast IP with different ports. For your private-cloud case I would suggest Traefik; you can follow their documentation here: https://doc.traefik.io/traefik/providers/kubernetes-ingress/
###Instruction: I have several image do different thing. Now, I expose them like these commands: kubectl create deployment work_deployment_1 --image=username/work_image_1:0.0.1-SNAPSHOT kubectl expose deployment work_deployment_1 --type=LoadBalancer --port=8000 and then kubectl create deployment work_deployment_2 --image=username/work_image_2:0.0.1-SNAPSHOT kubectl expose deployment work_deployment_2 --type=LoadBalancer --port=9000 After deployment creating and exposing, I check them by kubectl get service, the result of it will like: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE work_deployment_1 LoadBalancer 10.245.197.226 159.65.210.104 8000:30798/TCP 30m work_deployment_2 LoadBalancer 10.245.168.156 159.65.129.201 9000:32105/TCP 51s Can I make deployment (or deployments), expose same_external-ip:8000 and same_external-ip:9000, instead of address above (159.65.210.104:8000 and 159.65.129.201:9000) ? ### Response: You will need to have an Ingress controller installed in your cluster to handle incoming traffic and route it to the appropriate service. Examples of Ingress controllers include Nginx Ingress, Traefik, and Istio. Another way around that we use in Azure and Google Cloud is that we Expose the services via App Gateway in Azure & HTTPS Global LB In GCP case and the services are exposed on the LB single Anycast IP. In GCP Case workflow is: Create a Kubernetes service> Create a backend service that references each Kubernetes service > Create a URL map that maps the incoming requests to the appropriate backend service based on the requested URL or hostname > Create a target HTTP proxy that references the URL map > Create a Google Cloud HTTPS load balancer and configure it to use the target HTTP proxy Each time the Front End will be using the SAME Anycast IP & Different ports in front end... In your private cloud case I will refer using Traefik you can follow their documentation on this:https://doc.traefik.io/traefik/providers/kubernetes-ingress/
I have several images that do different things. Now, I expose them with these commands: kubectl create deployment work_deployment_1 --image=username/work_image_1:0.0.1-SNAPSHOT kubectl expose deployment work_deployment_1 --type=LoadBalancer --port=8000 and then kubectl create deployment work_deployment_2 --image=username/work_image_2:0.0.1-SNAPSHOT kubectl expose deployment work_deployment_2 --type=LoadBalancer --port=9000 After creating and exposing the deployments, I check them with kubectl get service; the result looks like: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE work_deployment_1 LoadBalancer 10.245.197.226 159.65.210.104 8000:30798/TCP 30m work_deployment_2 LoadBalancer 10.245.168.156 159.65.129.201 9000:32105/TCP 51s Can I make the deployments expose same_external-ip:8000 and same_external-ip:9000, instead of the addresses above (159.65.210.104:8000 and 159.65.129.201:9000)?
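To make the Ingress suggestion above concrete, a minimal sketch that puts both services behind a single external IP, routed by path; it assumes an ingress-nginx controller is installed and uses dashed service names, since Kubernetes object names cannot contain underscores:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: work-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # specific to ingress-nginx
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: work-deployment-1   # a ClusterIP/NodePort Service for the first deployment
                port:
                  number: 8000
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: work-deployment-2
                port:
                  number: 9000
EOF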
From outside k8s, your app cannot resolve cluster DNS names. You'll need to use a ClusterIP / NodePort address (but you've set that to None). This will also require you to advertise that address. See https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/ Never use IPs for services. This includes the Zookeeper connect property for Kafka. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ I highly recommend not writing your own Kafka/Zookeeper specs. Use a Helm chart or an Operator such as https://strimzi.io - mentioned in the blog above
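As a hedged sketch of the "advertise that address" part: standard Kafka broker settings (which the wurstmeister image picks up from KAFKA_* environment variables) let you declare separate internal and external listeners, with the external one advertised on a node address. The EXTERNAL port 30092 and the <node-ip> placeholder below are assumptions to adapt to your cluster:

        - name: KAFKA_LISTENERS
          value: INTERNAL://:9092,EXTERNAL://:30092
        - name: KAFKA_ADVERTISED_LISTENERS
          value: INTERNAL://kafka-broker:9092,EXTERNAL://<node-ip>:30092
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: INTERNAL

A NodePort Service mapping nodePort 30092 to containerPort 30092 would then carry traffic from outside the cluster, while in-cluster clients keep using kafka-broker:9092.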
###Instruction: I have a Nodejs microservice and a Kafka broker running in the same cluster. The kafka broker and zookeeper are running without errors, but I am not sure how to connect to them. kafka.yaml # create namespace apiVersion: v1 kind: Namespace metadata: name: "kafka" labels: name: "kafka" --- # create zookeeper service apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: kafka spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: kafka spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-broker name: kafka-service namespace: kafka spec: ports: - port: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT # value: 10.244.0.35:2181 value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 # - name: KAFKA_ADVERTISED_HOST_NAME # value: kafka-broker # - name: KAFKA_ADVERTISED_PORT # value: "9092" - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 source Connecting using kafka-service:9092 or kafka-broker:9092 doesn't work and leads to a timeout. kafka.js const { Kafka } = require('kafkajs') const kafka = new Kafka({ clientId: 'my-app', brokers: ['PLAINTEXT://kafka-broker:9092'], // !!! connection string }) async function createProducer() { const producer = kafka.producer() await producer.connect() await producer.send({ topic: 'test-topic', messages: [{ value: 'Hello KafkaJS user!' }], }) await producer.disconnect() } createProducer() [auth-pod] {"level":"WARN","timestamp":"2023-03-24T15:35:41.511Z","logger":"kafkajs","message":"KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option \"createPartitioner: Partitioners.LegacyPartitioner\". See the migration guide at https://kafka.js.org/docs/migration-guide-v2.0.0#producer-new-default-partitioner for details. Silence this warning by setting the environment variable \"KAFKAJS_NO_PARTITIONER_WARNING=1\""} [auth-pod] Listening on port 3000... [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:41.586Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":0,"retryTime":292} [auth-pod] Connected to: mongodb://auth-mongo-srv:27017/auth [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:41.881Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. 
Received type number (NaN).","retryCount":1,"retryTime":596} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:42.479Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":2,"retryTime":1184} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:43.665Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":3,"retryTime":2782} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:46.449Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":4,"retryTime":5562} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:52.015Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":5,"retryTime":12506} [auth-pod] node:internal/process/promises:288 [auth-pod] triggerUncaughtException(err, true /* fromPromise */); [auth-pod] ^ [auth-pod] [auth-pod] KafkaJSNonRetriableError [auth-pod] Caused by: KafkaJSConnectionError: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN). [auth-pod] at /app/node_modules/kafkajs/src/network/connection.js:254:11 [auth-pod] ... 8 lines matching cause stack trace ... [auth-pod] at async createProducer (/app/src/kakfka/connect.js:11:3) { [auth-pod] name: 'KafkaJSNumberOfRetriesExceeded', [auth-pod] retriable: false, [auth-pod] helpUrl: undefined, [auth-pod] retryCount: 5, [auth-pod] retryTime: 12506, [auth-pod] [cause]: KafkaJSConnectionError: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN). [auth-pod] at /app/node_modules/kafkajs/src/network/connection.js:254:11 [auth-pod] at new Promise (<anonymous>) [auth-pod] at Connection.connect (/app/node_modules/kafkajs/src/network/connection.js:167:12) [auth-pod] at ConnectionPool.getConnection (/app/node_modules/kafkajs/src/network/connectionPool.js:56:24) [auth-pod] at Broker.connect (/app/node_modules/kafkajs/src/broker/index.js:86:52) [auth-pod] at async /app/node_modules/kafkajs/src/cluster/brokerPool.js:93:9 [auth-pod] at async /app/node_modules/kafkajs/src/cluster/index.js:107:14 [auth-pod] at async Cluster.connect (/app/node_modules/kafkajs/src/cluster/index.js:146:5) [auth-pod] at async Object.connect (/app/node_modules/kafkajs/src/producer/index.js:219:7) [auth-pod] at async createProducer (/app/src/kakfka/connect.js:11:3) { [auth-pod] retriable: true, [auth-pod] helpUrl: undefined, [auth-pod] broker: 'PLAINTEXT:NaN', [auth-pod] code: undefined, [auth-pod] [cause]: undefined [auth-pod] } [auth-pod] } [auth-pod] [auth-pod] Node.js v18.15.0 If I use the IP of the pod kafka-broker-5c7f7d4f77-nxlwm directly brokers: ['10.244.0.94:9092'], I also get an error. Using the default namespace instead of a separate namespace didn't make a difference. After switching to a StatefulSet based on this answer, I can connect using the IP of kafka-broker-0 '10.244.0.110:9092', but I get another error: KafkaJSProtocolError: Replication-factor is invalid. 
I don't know why the dns resolution would fail, but using the name 'kafka-broker-0:9092', leads to the same error as before "[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout". Based on If you have multiple REST Proxy pods running, Kubernetes will route the traffic to one of them. source I should be able to use the Kubernetes service kafka-service to load balance requests without hard coding an IP address. (There wasn't a targetPort, but it still doesn't work after adding targetPort: 9092, although I am not sure which protocol to use) I looked at the logs of the kafka-broker pod and noticed an exception. [2023-03-24 18:01:25,123] WARN [Controller id=1, targetBrokerId=1] Error connecting to node kafka-broker:9092 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient) java.net.UnknownHostException: kafka-broker at java.base/java.net.InetAddress$CachedAddresses.get(Unknown Source) at java.base/java.net.InetAddress.getAllByName0(Unknown Source) at java.base/java.net.InetAddress.getAllByName(Unknown Source) at java.base/java.net.InetAddress.getAllByName(Unknown Source) at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27) at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:111) at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:513) at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:467) at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:172) at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:985) at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:311) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:65) at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:292) at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:246) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) I think that specifying KAFKA_ADVERTISED_LISTENERS should be sufficient (answer), so I am guessing there is a problem with dns resolution. Using a headless service by adding clusterIP: "None" and changing the name to kafka-broker in case that PLAINTEXT://kafka-broker:9092 uses the service and not the deployment didn't help. 
# create namespace apiVersion: v1 kind: Namespace metadata: name: "kafka" labels: name: "kafka" --- # create zookeeper service apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: kafka spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: kafka spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: clusterIP: "None" # ports: # - protocol: TCP # port: 9092 # targetPort: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: # replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT # value: 10.244.0.35:2181 value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 full code Edit: Not sure why I had a KafkaJSProtocolError: Replication-factor is invalid error, but changing the service as follows prevents it. (It might be because I was using the same name for the service and deployment. I don't fully understand headless services, but I also added a port.) 
# create namespace apiVersion: v1 kind: Namespace metadata: name: "kafka" labels: name: "kafka" --- # create zookeeper service apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: kafka spec: # type: NodePort ports: - name: zookeeper-port port: 2181 # nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: kafka spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-srv name: kafka-srv namespace: kafka spec: # headless service clusterIP: "None" ports: - name: foo port: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: # replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 const { Kafka } = require('kafkajs') const kafka = new Kafka({ clientId: 'my-app', brokers: ['10.244.0.64:9092'], }) async function createProducer() { const producer = kafka.producer() try { await producer.connect() console.log('connected', producer) // await producer.send({ // topic: 'test-topic', // messages: [{ value: 'Hello KafkaJS user!' }], // }) // await producer.disconnect() } catch (err) { console.log("Couldn' connect to broker") console.error(err) } } [auth-pod] connected { [auth-pod] connect: [AsyncFunction: connect], [auth-pod] disconnect: [AsyncFunction: disconnect], [auth-pod] isIdempotent: [Function: isIdempotent], [auth-pod] events: { [auth-pod] CONNECT: 'producer.connect', [auth-pod] DISCONNECT: 'producer.disconnect', [auth-pod] REQUEST: 'producer.network.request', [auth-pod] REQUEST_TIMEOUT: 'producer.network.request_timeout', [auth-pod] REQUEST_QUEUE_SIZE: 'producer.network.request_queue_size' [auth-pod] }, [auth-pod] on: [Function: on], [auth-pod] send: [AsyncFunction: send], [auth-pod] sendBatch: [AsyncFunction: sendBatch], [auth-pod] transaction: [AsyncFunction: transaction], [auth-pod] logger: [Function: getLogger] [auth-pod] } Edit 2: When connecting successfully using the ip address, I also get java.net.UnknownHostException: kafka-broker in the kafka-broker-0 pod. (The error keeps repeating as well) I thought that the pod was being reached by kafkajs and then the pod threw an error, but the error happens regardless. Matching the service name with the advertised host name prevents it. --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-broker name: kafka-broker # namespace: kafka spec: # headless service clusterIP: "None" ports: - name: foo port: 9092 selector: app: kafka-broker I can't connect to the pod kafka-broker-0:9092 directly, but now using the service name kafka-broker:9092 works. ### Response: From outside k8s, your app cannot resolve cluster DNS names. You'll need to use an ClusterIP / NodePort address (but you've set that to None). 
This will also require you to advertise that address. See https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/ Never use IPs for services. This includes the Zookeeper connect property for Kafka. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ I highly recommend not writing your own Kafka/Zookeeper specs. Use a Helm chart or an Operator such as https://strimzi.io - mentioned in the blog above
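To illustrate the Strimzi recommendation, here is a rough sketch of a single-broker Kafka custom resource, loosely based on Strimzi's ephemeral examples; the cluster name and the replication-factor settings are assumptions, so check the current schema in the Strimzi documentation:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}

Once the operator reconciles this, in-cluster clients bootstrap against the my-cluster-kafka-bootstrap Service it creates, instead of hand-written broker Services.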
I have a Nodejs microservice and a Kafka broker running in the same cluster. The kafka broker and zookeeper are running without errors, but I am not sure how to connect to them. kafka.yaml # create namespace apiVersion: v1 kind: Namespace metadata: name: "kafka" labels: name: "kafka" --- # create zookeeper service apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: kafka spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: kafka spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-broker name: kafka-service namespace: kafka spec: ports: - port: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT # value: 10.244.0.35:2181 value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 # - name: KAFKA_ADVERTISED_HOST_NAME # value: kafka-broker # - name: KAFKA_ADVERTISED_PORT # value: "9092" - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 source Connecting using kafka-service:9092 or kafka-broker:9092 doesn't work and leads to a timeout. kafka.js const { Kafka } = require('kafkajs') const kafka = new Kafka({ clientId: 'my-app', brokers: ['PLAINTEXT://kafka-broker:9092'], // !!! connection string }) async function createProducer() { const producer = kafka.producer() await producer.connect() await producer.send({ topic: 'test-topic', messages: [{ value: 'Hello KafkaJS user!' }], }) await producer.disconnect() } createProducer() [auth-pod] {"level":"WARN","timestamp":"2023-03-24T15:35:41.511Z","logger":"kafkajs","message":"KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option \"createPartitioner: Partitioners.LegacyPartitioner\". See the migration guide at https://kafka.js.org/docs/migration-guide-v2.0.0#producer-new-default-partitioner for details. Silence this warning by setting the environment variable \"KAFKAJS_NO_PARTITIONER_WARNING=1\""} [auth-pod] Listening on port 3000... [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:41.586Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":0,"retryTime":292} [auth-pod] Connected to: mongodb://auth-mongo-srv:27017/auth [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:41.881Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. 
Received type number (NaN).","retryCount":1,"retryTime":596} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:42.479Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":2,"retryTime":1184} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:43.665Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":3,"retryTime":2782} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:46.449Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":4,"retryTime":5562} [auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:52.015Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":5,"retryTime":12506} [auth-pod] node:internal/process/promises:288 [auth-pod] triggerUncaughtException(err, true /* fromPromise */); [auth-pod] ^ [auth-pod] [auth-pod] KafkaJSNonRetriableError [auth-pod] Caused by: KafkaJSConnectionError: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN). [auth-pod] at /app/node_modules/kafkajs/src/network/connection.js:254:11 [auth-pod] ... 8 lines matching cause stack trace ... [auth-pod] at async createProducer (/app/src/kakfka/connect.js:11:3) { [auth-pod] name: 'KafkaJSNumberOfRetriesExceeded', [auth-pod] retriable: false, [auth-pod] helpUrl: undefined, [auth-pod] retryCount: 5, [auth-pod] retryTime: 12506, [auth-pod] [cause]: KafkaJSConnectionError: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN). [auth-pod] at /app/node_modules/kafkajs/src/network/connection.js:254:11 [auth-pod] at new Promise (<anonymous>) [auth-pod] at Connection.connect (/app/node_modules/kafkajs/src/network/connection.js:167:12) [auth-pod] at ConnectionPool.getConnection (/app/node_modules/kafkajs/src/network/connectionPool.js:56:24) [auth-pod] at Broker.connect (/app/node_modules/kafkajs/src/broker/index.js:86:52) [auth-pod] at async /app/node_modules/kafkajs/src/cluster/brokerPool.js:93:9 [auth-pod] at async /app/node_modules/kafkajs/src/cluster/index.js:107:14 [auth-pod] at async Cluster.connect (/app/node_modules/kafkajs/src/cluster/index.js:146:5) [auth-pod] at async Object.connect (/app/node_modules/kafkajs/src/producer/index.js:219:7) [auth-pod] at async createProducer (/app/src/kakfka/connect.js:11:3) { [auth-pod] retriable: true, [auth-pod] helpUrl: undefined, [auth-pod] broker: 'PLAINTEXT:NaN', [auth-pod] code: undefined, [auth-pod] [cause]: undefined [auth-pod] } [auth-pod] } [auth-pod] [auth-pod] Node.js v18.15.0 If I use the IP of the pod kafka-broker-5c7f7d4f77-nxlwm directly brokers: ['10.244.0.94:9092'], I also get an error. Using the default namespace instead of a separate namespace didn't make a difference. After switching to a StatefulSet based on this answer, I can connect using the IP of kafka-broker-0 '10.244.0.110:9092', but I get another error: KafkaJSProtocolError: Replication-factor is invalid. 
I don't know why the dns resolution would fail, but using the name 'kafka-broker-0:9092', leads to the same error as before "[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout". Based on If you have multiple REST Proxy pods running, Kubernetes will route the traffic to one of them. source I should be able to use the Kubernetes service kafka-service to load balance requests without hard coding an IP address. (There wasn't a targetPort, but it still doesn't work after adding targetPort: 9092, although I am not sure which protocol to use) I looked at the logs of the kafka-broker pod and noticed an exception. [2023-03-24 18:01:25,123] WARN [Controller id=1, targetBrokerId=1] Error connecting to node kafka-broker:9092 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient) java.net.UnknownHostException: kafka-broker at java.base/java.net.InetAddress$CachedAddresses.get(Unknown Source) at java.base/java.net.InetAddress.getAllByName0(Unknown Source) at java.base/java.net.InetAddress.getAllByName(Unknown Source) at java.base/java.net.InetAddress.getAllByName(Unknown Source) at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27) at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:111) at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:513) at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:467) at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:172) at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:985) at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:311) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:65) at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:292) at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:246) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) I think that specifying KAFKA_ADVERTISED_LISTENERS should be sufficient (answer), so I am guessing there is a problem with dns resolution. Using a headless service by adding clusterIP: "None" and changing the name to kafka-broker in case that PLAINTEXT://kafka-broker:9092 uses the service and not the deployment didn't help. 
# create namespace apiVersion: v1 kind: Namespace metadata: name: "kafka" labels: name: "kafka" --- # create zookeeper service apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: kafka spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: kafka spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: clusterIP: "None" # ports: # - protocol: TCP # port: 9092 # targetPort: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: # replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT # value: 10.244.0.35:2181 value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 full code Edit: Not sure why I had a KafkaJSProtocolError: Replication-factor is invalid error, but changing the service as follows prevents it. (It might be because I was using the same name for the service and deployment. I don't fully understand headless services, but I also added a port.) 
# create namespace apiVersion: v1 kind: Namespace metadata: name: "kafka" labels: name: "kafka" --- # create zookeeper service apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: kafka spec: # type: NodePort ports: - name: zookeeper-port port: 2181 # nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: kafka spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-srv name: kafka-srv namespace: kafka spec: # headless service clusterIP: "None" ports: - name: foo port: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: kafka-broker name: kafka-broker namespace: kafka spec: # replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper-service:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 const { Kafka } = require('kafkajs') const kafka = new Kafka({ clientId: 'my-app', brokers: ['10.244.0.64:9092'], }) async function createProducer() { const producer = kafka.producer() try { await producer.connect() console.log('connected', producer) // await producer.send({ // topic: 'test-topic', // messages: [{ value: 'Hello KafkaJS user!' }], // }) // await producer.disconnect() } catch (err) { console.log("Couldn' connect to broker") console.error(err) } } [auth-pod] connected { [auth-pod] connect: [AsyncFunction: connect], [auth-pod] disconnect: [AsyncFunction: disconnect], [auth-pod] isIdempotent: [Function: isIdempotent], [auth-pod] events: { [auth-pod] CONNECT: 'producer.connect', [auth-pod] DISCONNECT: 'producer.disconnect', [auth-pod] REQUEST: 'producer.network.request', [auth-pod] REQUEST_TIMEOUT: 'producer.network.request_timeout', [auth-pod] REQUEST_QUEUE_SIZE: 'producer.network.request_queue_size' [auth-pod] }, [auth-pod] on: [Function: on], [auth-pod] send: [AsyncFunction: send], [auth-pod] sendBatch: [AsyncFunction: sendBatch], [auth-pod] transaction: [AsyncFunction: transaction], [auth-pod] logger: [Function: getLogger] [auth-pod] } Edit 2: When connecting successfully using the ip address, I also get java.net.UnknownHostException: kafka-broker in the kafka-broker-0 pod. (The error keeps repeating as well) I thought that the pod was being reached by kafkajs and then the pod threw an error, but the error happens regardless. Matching the service name with the advertised host name prevents it. --- # deploy kafka broker apiVersion: v1 kind: Service metadata: labels: app: kafka-broker name: kafka-broker # namespace: kafka spec: # headless service clusterIP: "None" ports: - name: foo port: 9092 selector: app: kafka-broker I can't connect to the pod kafka-broker-0:9092 directly, but now using the service name kafka-broker:9092 works.
Try a connection string like this: jdbc:postgresql://<database_host>:<port>/<database_name> The credentials will need to be managed via Kubernetes Secrets.
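As a rough sketch of the Secrets part (the Secret name, key, and the container wiring below are placeholders, and the URL is simply the multi-host string from the question), the connection string can be stored in a Secret and injected into the application container as an environment variable:

apiVersion: v1
kind: Secret
metadata:
  name: bigdata-db
type: Opaque
stringData:
  DATABASE_URL: postgresql://bigdata:bigdata@dbhost1:5432,dbhost2:5432/bigdata?target_session_attrs=primary
---
# in the Deployment's container spec:
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: bigdata-db
          key: DATABASE_URL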
###Instruction: I am deploying a PostgreSQL cluster on Kubernetes running 3 instances. How do you add a PostgreSQL connection string URL using a Kubernetes YAML file? postgresql://bigdata:bigdata@dbhost1:5432,dbhost2:5432/bigdata?target_session_attrs=primary ### Response: Try a connection string like this: jdbc:postgresql://<database_host>:<port>/<database_name> The credentials will need to be managed via Kubernetes Secrets.
I am deploying a PostgreSQL cluster on Kubernetes running 3 instances. How do you add a PostgreSQL connection string URL using a Kubernetes YAML file? postgresql://bigdata:bigdata@dbhost1:5432,dbhost2:5432/bigdata?target_session_attrs=primary
Have you added the option --default-volumes-to-restic (for older versions like 1.7) or --default-volumes-to-fs-backup (newer versions) when running the velero backup? It could be the same problem as: Velero - Volume Snapshots contain no data when restoring a backup
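For reference, a sketch of what that looks like with Velero 1.10+ and the node agent you already installed (the backup name and namespace are placeholders):

velero backup create acme-backup \
  --include-namespaces acme \
  --default-volumes-to-fs-backup

The same --default-volumes-to-fs-backup flag can also be passed to velero install so that file-system backup becomes the default for every backup instead of being set per command.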
###Instruction: We need to backup PV on an on-premise Kubernetes cluster, so we installed Velero, linked to MinIO, with Velero's File System Backup. No PV are backed up and no error is shown, only this mention appears in the logs "Persistent volume is not a supported volume type for snapshots, skipping". Does someone have a clue to be able to backup PV on an on-premise cluster without having to use external Cloud providers ? Details Velero was installed using the following command (credentials-minio containing the MinIO's bucket access keys): velero install \ --provider aws \ --plugins velero/velero-plugin-for-aws:v1.2.1 \ --bucket ka-backup \ --secret-file ./credentials-minio \ --use-node-agent \ --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio-1679064025.minio.svc:9000 \ --snapshot-location-config region=minio The result of a backup of namespaces shows no error, nor warnings as seen below and the status of the phase is completed. velero backup describe acme -n velero --kubeconfig ../VKATEST.config Name: acme Namespace: velero Labels: velero.io/storage-location=default Annotations: velero.io/source-cluster-k8s-gitversion=v1.25.6 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=25 Phase: Completed Errors: 0 Warnings: 0 Namespaces: Included: acme Excluded: <none> Resources: Included: * Excluded: <none> Cluster-scoped: auto Label selector: <none> Storage Location: default Velero-Native Snapshot PVs: auto TTL: 720h0m0s CSISnapshotTimeout: 10m0s Hooks: <none> Backup Format Version: 1.1.0 Started: 2023-03-20 14:40:18 +0100 CET Completed: 2023-03-20 14:40:29 +0100 CET Expiration: 2023-04-19 15:40:18 +0200 CEST Total items to be backed up: 437 Items backed up: 437 Velero-Native Snapshots: <none included> In the logs we can read at the end of the following extract, that: "Persistent volume is not a supported volume type for snapshots, skipping". level=info msg="Backing up item" backup=velero/acme logSource="pkg/backup/item_backupper.go:132" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= resource=persistentvolumes level=info msg="Executing takePVSnapshot" backup=velero/acme logSource="pkg/backup/item_backupper.go:412" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= resource=persistentvolumes level=info msg="label \"topology.kubernetes.io/zone\" is not present on PersistentVolume, checking deprecated label..." 
backup=velero/acme logSource="pkg/backup/item_backupper.go:445" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes level=info msg="label \"failure-domain.beta.kubernetes.io/zone\" is not present on PersistentVolume" backup=velero/acme logSource="pkg/backup/item_backupper.go:449" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes level=info msg="zone info not available in nodeAffinity requirements" backup=velero/acme logSource="pkg/backup/item_backupper.go:454" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes level=info msg="No volume ID returned by volume snapshotter for persistent volume" backup=velero/acme logSource="pkg/backup/item_backupper.go:478" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes volumeSnapshotLocation=default level=info msg="Persistent volume is not a supported volume type for snapshots, skipping." backup=velero/acme logSource="pkg/backup/item_backupper.go:489" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes Please let us know if you have a clue to be able to backup PV on an on-premise cluster without having to use external Cloud providers. ### Response: have you add option --default-volumes-to-restic(for older versions like 1.7) or --default-volumes-to-fs-backup(new versions) when doing velero backup? could be same problem like: Velero - Volume Snapshots contain no data when restoring a backup
We need to backup PV on an on-premise Kubernetes cluster, so we installed Velero, linked to MinIO, with Velero's File System Backup. No PV are backed up and no error is shown, only this mention appears in the logs "Persistent volume is not a supported volume type for snapshots, skipping". Does someone have a clue to be able to backup PV on an on-premise cluster without having to use external Cloud providers ? Details Velero was installed using the following command (credentials-minio containing the MinIO's bucket access keys): velero install \ --provider aws \ --plugins velero/velero-plugin-for-aws:v1.2.1 \ --bucket ka-backup \ --secret-file ./credentials-minio \ --use-node-agent \ --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio-1679064025.minio.svc:9000 \ --snapshot-location-config region=minio The result of a backup of namespaces shows no error, nor warnings as seen below and the status of the phase is completed. velero backup describe acme -n velero --kubeconfig ../VKATEST.config Name: acme Namespace: velero Labels: velero.io/storage-location=default Annotations: velero.io/source-cluster-k8s-gitversion=v1.25.6 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=25 Phase: Completed Errors: 0 Warnings: 0 Namespaces: Included: acme Excluded: <none> Resources: Included: * Excluded: <none> Cluster-scoped: auto Label selector: <none> Storage Location: default Velero-Native Snapshot PVs: auto TTL: 720h0m0s CSISnapshotTimeout: 10m0s Hooks: <none> Backup Format Version: 1.1.0 Started: 2023-03-20 14:40:18 +0100 CET Completed: 2023-03-20 14:40:29 +0100 CET Expiration: 2023-04-19 15:40:18 +0200 CEST Total items to be backed up: 437 Items backed up: 437 Velero-Native Snapshots: <none included> In the logs we can read at the end of the following extract, that: "Persistent volume is not a supported volume type for snapshots, skipping". level=info msg="Backing up item" backup=velero/acme logSource="pkg/backup/item_backupper.go:132" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= resource=persistentvolumes level=info msg="Executing takePVSnapshot" backup=velero/acme logSource="pkg/backup/item_backupper.go:412" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= resource=persistentvolumes level=info msg="label \"topology.kubernetes.io/zone\" is not present on PersistentVolume, checking deprecated label..." 
backup=velero/acme logSource="pkg/backup/item_backupper.go:445" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes level=info msg="label \"failure-domain.beta.kubernetes.io/zone\" is not present on PersistentVolume" backup=velero/acme logSource="pkg/backup/item_backupper.go:449" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes level=info msg="zone info not available in nodeAffinity requirements" backup=velero/acme logSource="pkg/backup/item_backupper.go:454" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes level=info msg="No volume ID returned by volume snapshotter for persistent volume" backup=velero/acme logSource="pkg/backup/item_backupper.go:478" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes volumeSnapshotLocation=default level=info msg="Persistent volume is not a supported volume type for snapshots, skipping." backup=velero/acme logSource="pkg/backup/item_backupper.go:489" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes Please let us know if you have a clue to be able to backup PV on an on-premise cluster without having to use external Cloud providers.
There is a metric from kube-state-metrics, kube_pod_status_scheduled_time, you could use. If the start time is greater than the scheduled time, the pod has been scheduled: kube_pod_start_time{namespace="my-ns"} - kube_pod_status_scheduled_time{namespace="my-ns"} > 0 You can also find the status of pods (Evicted, NodeAffinity, Shutdown, ...) with the metric kube_pod_status_reason.
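Building on that, a couple of hedged PromQL sketches (the my-ns namespace label is a placeholder, and both metrics come from kube-state-metrics):

# scheduling latency per pod, in seconds
kube_pod_start_time{namespace="my-ns"} - kube_pod_status_scheduled_time{namespace="my-ns"}

# pods currently reporting a given status reason (Evicted, NodeAffinity, Shutdown, ...)
sum by (reason) (kube_pod_status_reason{namespace="my-ns"} == 1)

Graphing or alerting on these over time gives a rough picture of how often pods are being moved around by the deprovisioner.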
###Instruction: We have recently introduced the Karpenter autoscaler in our Kubernetes infrastructure. One of its strengths is the deprovisioning mechanism. However, given that we have a lot of stateful workloads, this can cause some serious issues at times. We are taking several measures to somewhat control it. Meanwhile, we realize that we have no way to observe how the measures we take mitigate the issue. Hence the question: Is there a way to monitor Kubernetes pod movements, i.e. tracking how many times a pod (not the containers) may have been re-scheduled / restarted to move from one node to another, etc.? ### Response: There is a metric from kube-state-metrics, kube_pod_status_scheduled_time, you could use. If the start time is greater than the scheduled time, the pod has been scheduled: kube_pod_start_time{namespace="my-ns"} - kube_pod_status_scheduled_time{namespace="my-ns"} > 0 You can also find the status of pods (Evicted, NodeAffinity, Shutdown, ...) with the metric kube_pod_status_reason.
We have recently introduced the Karpenter autoscaler in our Kubernetes infrastructure. One of its strengths is the deprovisioning mechanism. However, given that we have a lot of stateful workloads, this can cause some serious issues at times. We are taking several measures to somewhat control it. Meanwhile, we realize that we have no way to observe how the measures we take mitigate the issue. Hence the question: Is there a way to monitor Kubernetes pod movements, i.e. tracking how many times a pod (not the containers) may have been re-scheduled / restarted to move from one node to another, etc.?
It's a very general question, which in my opinion leaves too much room for interpretation. In general, however, you should think about configuring a monitoring stack, perhaps based on Prometheus, Grafana and Alertmanager, that allows you to collect the metrics exposed by the DB. Otherwise you can do something very rough with the following curl commands: curl http://127.0.0.1:21001/metrics curl http://127.0.0.1:21002/metrics curl http://127.0.0.1:21003/metrics https://docs.cnosdb.com/en/cluster/cluster.html#meta-custer-startup-process
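If you want Kubernetes itself to do the health checking, a rough sketch would be HTTP probes against one of those metrics endpoints. The path and port below simply mirror the curl examples above, and the timings are assumptions, so verify both against your cnosdb deployment:

        livenessProbe:
          httpGet:
            path: /metrics
            port: 21001
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /metrics
            port: 21001
          periodSeconds: 10

These go under the cnosdb container in the pod spec of your Deployment or StatefulSet.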
###Instruction: How can I do health checks for cnosdb in Kubernetes? As we know, health checks are important in Kubernetes; commonly, a binary should provide a RESTful API that Kubernetes can call periodically for health checking. Does cnosdb provide such an API? What is the best practice for health checking cnosdb in Kubernetes? ### Response: It's a very general question, which in my opinion leaves too much room for interpretation. In general, however, you should think about configuring a monitoring stack, perhaps based on Prometheus, Grafana and Alertmanager, that allows you to collect the metrics exposed by the DB. Otherwise you can do something very rough with the following curl commands: curl http://127.0.0.1:21001/metrics curl http://127.0.0.1:21002/metrics curl http://127.0.0.1:21003/metrics https://docs.cnosdb.com/en/cluster/cluster.html#meta-custer-startup-process
How can I do health checks for cnosdb in Kubernetes? As we know, health checks are important in Kubernetes; commonly, a binary should provide a RESTful API that Kubernetes can call periodically for health checking. Does cnosdb provide such an API? What is the best practice for health checking cnosdb in Kubernetes?
This issue should be fixed in flink 1.17.1 or higher: https://issues.apache.org/jira/browse/FLINK-31612
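For reference, once a fixed release is available to you, a FlinkDeployment managed by the flink-kubernetes-operator could pin the version and carry the Azure settings roughly as sketched below. The image tag, storage account, key, jar path and resources are placeholders, fs.azure.account.key.<account>.blob.core.windows.net is the Hadoop-style property for supplying the account key, and flinkVersion v1_17 may require a newer operator release than 1.3.1 - treat this as an assumption-laden sketch, not a verified manifest:

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: azure-checkpointing-job
spec:
  image: flink:1.17
  flinkVersion: v1_17
  serviceAccount: flink
  flinkConfiguration:
    state.checkpoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/checkpoints
    state.savepoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/savepoints
    fs.azure.account.key.<account>.blob.core.windows.net: <storage-account-key>
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/usrlib/my-job.jar
    parallelism: 2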
###Instruction: I have apache flink deployed to AWS EKS (1.21) with version 1.17-SNAPSHOT and state storage in AWS S3. This setup works great. I am now trying to deploy the same version to Azure AKS (1.22 - minimum available version for AKS) and store the state in Azure Blob. In both cases I use the apache flink kubernetes operator, version 1.3.1. If I disable checkpoints and savepoints my application works great. Once enabled, I get the following error at job manager startup: Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not found at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2592) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2712) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.<init>(Groups.java:107) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.<init>(Groups.java:102) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:338) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:300) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:575) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1425) ~[?:?] at org.apache.flink.fs.azurefs.AbstractAzureFSFactory.create(AbstractAzureFSFactory.java:78) ~[?:?] 
at org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:62) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:508) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:67) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage.createCheckpointStorage(FileSystemCheckpointStorage.java:324) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:333) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:248) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:524) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:321) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:163) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:365) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:210) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:136) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:371) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:348) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) ~[?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?] at java.lang.Thread.run(Unknown Source) ~[?:?] 
I followed this guide: https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/ , the azure-fs-hadoop is properly configured. I tried settings hadoop.flink.flink.hadoop.hadoop.security.group.mapping: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback in flink_conf.yaml The checkpoint and savepoint configuration: state.checkpoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/<cp_path> state.savepoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/<sp_path> If I try to set the properties this way, I get messages about anonymous access: state.checkpoints.dir: wasb://checkpoints@$<account>.blob.core.windows.net/<cp_path> state.savepoints.dir: wasb://checkpoints@$<account>.blob.core.windows.net/<sp_path> Message: The account access key is not configured for wasb://checkpoints@$<account>.blob.core.windows.net/<cp_path>. Now try anonymous access. any suggestions will be appreciated. ### Response: This issue should be fixed in flink 1.17.1 or higher: https://issues.apache.org/jira/browse/FLINK-31612
I have apache flink deployed to AWS EKS (1.21) with version 1.17-SNAPSHOT and state storage in AWS S3. This setup works great. I am now trying to deploy the same version to Azure AKS (1.22 - minimum available version for AKS) and store the state in Azure Blob. In both cases I use the apache flink kubernetes operator, version 1.3.1. If I disable checkpoints and savepoints my application works great. Once enabled, I get the following error at job manager startup: Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not found at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2592) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2712) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.<init>(Groups.java:107) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.<init>(Groups.java:102) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:338) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:300) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:575) ~[?:?] at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1425) ~[?:?] at org.apache.flink.fs.azurefs.AbstractAzureFSFactory.create(AbstractAzureFSFactory.java:78) ~[?:?] 
at org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:62) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:508) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:67) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage.createCheckpointStorage(FileSystemCheckpointStorage.java:324) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:333) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:248) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:524) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:321) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:163) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:365) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:210) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:136) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:371) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:348) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT] at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) ~[?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?] at java.lang.Thread.run(Unknown Source) ~[?:?] 
I followed this guide: https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/ , the azure-fs-hadoop is properly configured. I tried settings hadoop.flink.flink.hadoop.hadoop.security.group.mapping: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback in flink_conf.yaml The checkpoint and savepoint configuration: state.checkpoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/<cp_path> state.savepoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/<sp_path> If I try to set the properties this way, I get messages about anonymous access: state.checkpoints.dir: wasb://checkpoints@$<account>.blob.core.windows.net/<cp_path> state.savepoints.dir: wasb://checkpoints@$<account>.blob.core.windows.net/<sp_path> Message: The account access key is not configured for wasb://checkpoints@$<account>.blob.core.windows.net/<cp_path>. Now try anonymous access. any suggestions will be appreciated.
The need is very clear. You can add this piece of code to your Deployment (.spec.template.spec.affinity.nodeAffinity): nodeAffinity: # Soft preference for nodepool A preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 preference: matchExpressions: - key: nodepool-a operator: In values: - "true" Assuming that nodepool-a has the label "nodepool-a=true" (it could be any other), with this configuration the Pods will preferably be assigned to nodepool-a; if nodepool-a is not available, they are scheduled on other nodepools that don't match the label. With this configuration, you just need to schedule a rollout restart of your Deployment to move the Pods back to nodepool-a when it is available. Unfortunately there is no other way to achieve your goal; maybe you could automate the Deployment rollout restart, triggered by a monitoring event which after some time "notifies" that nodepool-a is available. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ Taking a look at the nodeaffinity code, I understand that the requiredDuringSchedulingRequiredDuringExecution mode is yet to be implemented, which should even evict Pods that are on nodes that no longer match the label. There are no details on when it will be implemented though. https://github.com/kubernetes/kubernetes/blob/v1.26.3/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go#L36 (line 258)
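To make the rollout-restart part concrete, a minimal sketch (the deployment name and namespace are placeholders) that you could run manually or from a CronJob with suitable RBAC:

kubectl rollout restart deployment/my-app -n my-namespace

Each restart re-runs scheduling for the replacement Pods, so they land back on nodepool-a whenever it has capacity again.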
###Instruction: Background I have two Kubernetes nodepools - A and B. A nodepool is the right hardware and the preferred one. B be nodepool is less preferred since it using different hardware. My pods allocation preferring nodepool A. But, A is not always available. Its available 70% of the time. Once a pod spawned in nodepool B, it will stay there forever. This is inefficient for me. Question: I want to schedule auto termination of a pod, if its living inside of nodepool B. The auto termination can be done after a fix time (or even better random time between X to Y minutes). How to implement this? ### Response: The need is very clear. You can add this piece of code to your Deployment (.spec.affinity.nodeAffinity): nodeAffinity: # Soft anti affinity between nodes A and B preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: nodepool-a operator: In values: - "true" topologyKey: "kubernetes.io/hostname" Assuming that the nodepool-a has a label "nodepool-a=true" (it could be any other), adding this configuration the Pods will be assigned to the nodepool-a; if nodepool-a is not available, they are scheduled on other nodepools that don't match the label. With this configuration, you just need to schedule a rollout restart of your Deployment to move the Pods to nodepool-a when it is available. Unfortunately there is no other way to achieve your goal; maybe you could automate the Deployment rollout restart triggered by a monitoring event, which after some time "notifies" that the nodepool-a is available. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ Taking a look in the nodeaffinity code, I understand that the requiredDuringSchedulingRequiredDuringExecution function will be implemented which should even eviction Pods that are on nodes that do not match the label. There are no details on when it will be implemented though. https://github.com/kubernetes/kubernetes/blob/v1.26.3/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go#L36 (line 258)
Background I have two Kubernetes nodepools - A and B. A nodepool is the right hardware and the preferred one. B be nodepool is less preferred since it using different hardware. My pods allocation preferring nodepool A. But, A is not always available. Its available 70% of the time. Once a pod spawned in nodepool B, it will stay there forever. This is inefficient for me. Question: I want to schedule auto termination of a pod, if its living inside of nodepool B. The auto termination can be done after a fix time (or even better random time between X to Y minutes). How to implement this?
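As a possible way to automate the rollout restart suggested above, here is a minimal sketch of a CronJob that periodically restarts the Deployment so its Pods get rescheduled and, thanks to the soft node affinity, land back on nodepool A whenever it has capacity. The Deployment name, namespace, ServiceAccount and schedule are assumptions for illustration, not values from the question, and the ServiceAccount needs RBAC permission to patch deployments.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: prefer-nodepool-a-restart   # hypothetical name
spec:
  schedule: "0 * * * *"             # every hour; adjust or randomize as needed
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deploy-restarter   # hypothetical SA allowed to patch deployments
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            # Restarting the Deployment recreates its Pods, letting the scheduler
            # prefer nodepool A again.
            - kubectl rollout restart deployment/my-app -n my-namespace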
You can get close by doing something like:

yq '(.. | key | select(test("\."))) |= ("[\"" + . + "\"]")' file.yaml -op

apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.["beta.kubernetes.io/arch"] = amd64

Or you could do:

yq '(.. | key | select(test("\."))) |= sub("\.", "\.")' file.yaml -op

apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.beta\\.kubernetes\\.io/arch = amd64

BTW - I'm not sure how it's supposed to be escaped in property files; I'd be willing to update yq to do it natively if someone raises a bug with details on GitHub... Disclaimer: I wrote yq
###Instruction: echo "apiVersion: v1 kind: Node metadata: name: host-cluster-control-plane-64j47 labels: beta.kubernetes.io/arch: amd64 " | yq -o p Result: apiVersion = v1 kind = Node metadata.name = host-cluster-control-plane-64j47 metadata.labels.beta.kubernetes.io/arch = amd64 That's almost what I want. I am looking for the key to get values. I could use metadata.name like this: echo "apiVersion: v1 kind: Node metadata: name: host-cluster-control-plane-64j47 labels: beta.kubernetes.io/arch: amd64 " | yq '.metadata.name' But the -o p option of yq does not quote the key, if needed. I can't use metadata.labels.beta.kubernetes.io/arch as key, since the correct syntax is metadata.labels["beta.kubernetes.io/arch"]. Is there an automated way to get the keys of a yaml file so that I can use the keys in yq (or jq)? The desired output would be something like this: apiVersion = v1 kind = Node metadata.name = host-cluster-control-plane-64j47 metadata.labels["beta.kubernetes.io/arch"] = amd64 I am looking for the valid key, because I want to create a second command line to select these values. For example: ❯ k get nodes -o yaml | yq '.items[].metadata.labels["beta.kubernetes.io/arch"]' amd64 amd64 amd64 ### Response: You can get close by doing something like: yq '(.. | key | select(test("\."))) |= ("[\"" + . + "\"]")' file.yaml -op apiVersion = v1 kind = Node metadata.name = host-cluster-control-plane-64j47 metadata.labels.["beta.kubernetes.io/arch"] = amd64 Or you could do: yq '(.. | key | select(test("\."))) |= sub("\.", "\.")' file.yaml -op apiVersion = v1 kind = Node metadata.name = host-cluster-control-plane-64j47 metadata.labels.beta\\.kubernetes\\.io/arch = amd64 BTW - I'm not sure how it's supposed be escaped in property files, I'd be willing to update yq to do it natively someone raises a bug with details on github... Disclaimer: I wrote yq
echo "apiVersion: v1 kind: Node metadata: name: host-cluster-control-plane-64j47 labels: beta.kubernetes.io/arch: amd64 " | yq -o p Result: apiVersion = v1 kind = Node metadata.name = host-cluster-control-plane-64j47 metadata.labels.beta.kubernetes.io/arch = amd64 That's almost what I want. I am looking for the key to get values. I could use metadata.name like this: echo "apiVersion: v1 kind: Node metadata: name: host-cluster-control-plane-64j47 labels: beta.kubernetes.io/arch: amd64 " | yq '.metadata.name' But the -o p option of yq does not quote the key, if needed. I can't use metadata.labels.beta.kubernetes.io/arch as key, since the correct syntax is metadata.labels["beta.kubernetes.io/arch"]. Is there an automated way to get the keys of a yaml file so that I can use the keys in yq (or jq)? The desired output would be something like this: apiVersion = v1 kind = Node metadata.name = host-cluster-control-plane-64j47 metadata.labels["beta.kubernetes.io/arch"] = amd64 I am looking for the valid key, because I want to create a second command line to select these values. For example: ❯ k get nodes -o yaml | yq '.items[].metadata.labels["beta.kubernetes.io/arch"]' amd64 amd64 amd64
I found the solution: the problem was that my reserved IP address was of the Premium tier. Now that I have changed it to the Standard tier, everything seems to be in working order.
###Instruction: I'd like to downgrade the load balancer of my GKE Service from Premium tier, to Standard tier. To do that, I added cloud.google.com/network-tier: Standard to the annotations of my service. The problem now is that no load balancer is getting created in the Load Balancer section, and I can't connect to my kubernetes service anymore. The service itself was installed by helm, but here's the resulting YAML from GKE: apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/neg: '{"ingress":true}' cloud.google.com/network-tier: Standard meta.helm.sh/release-name: ingress-nginx meta.helm.sh/release-namespace: ingress-nginx creationTimestamp: "2023-03-29T22:15:04Z" finalizers: - service.kubernetes.io/load-balancer-cleanup labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx app.kubernetes.io/version: 1.5.1 helm.sh/chart: ingress-nginx-4.4.2 name: ingress-nginx-controller namespace: ingress-nginx spec: allocateLoadBalancerNodePorts: true clusterIP: 10.70.128.216 clusterIPs: - 10.70.128.216 externalTrafficPolicy: Cluster internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack loadBalancerIP: <<REDACTED>> ports: - name: http nodePort: 31109 port: 80 protocol: TCP targetPort: http - name: https nodePort: 31245 port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/name: ingress-nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {} ### Response: I found the solution, the problem was that my reserved IP address was of premium tier. Now that I changed it, everything seems to be in working order
I'd like to downgrade the load balancer of my GKE Service from Premium tier, to Standard tier. To do that, I added cloud.google.com/network-tier: Standard to the annotations of my service. The problem now is that no load balancer is getting created in the Load Balancer section, and I can't connect to my kubernetes service anymore. The service itself was installed by helm, but here's the resulting YAML from GKE: apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/neg: '{"ingress":true}' cloud.google.com/network-tier: Standard meta.helm.sh/release-name: ingress-nginx meta.helm.sh/release-namespace: ingress-nginx creationTimestamp: "2023-03-29T22:15:04Z" finalizers: - service.kubernetes.io/load-balancer-cleanup labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx app.kubernetes.io/version: 1.5.1 helm.sh/chart: ingress-nginx-4.4.2 name: ingress-nginx-controller namespace: ingress-nginx spec: allocateLoadBalancerNodePorts: true clusterIP: 10.70.128.216 clusterIPs: - 10.70.128.216 externalTrafficPolicy: Cluster internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack loadBalancerIP: <<REDACTED>> ports: - name: http nodePort: 31109 port: 80 protocol: TCP targetPort: http - name: https nodePort: 31245 port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/component: controller app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/name: ingress-nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {}
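For reference, a hedged sketch of how a matching Standard-tier address can be reserved and wired in; the address name and region below are placeholders, and Standard-tier addresses are regional, so the region must match the cluster's.

# Reserve a regional Standard-tier static IP
gcloud compute addresses create ingress-nginx-standard-ip \
    --region=us-central1 \
    --network-tier=STANDARD

# Then keep the Service consistent with it:
#   metadata.annotations: cloud.google.com/network-tier: Standard
#   spec.loadBalancerIP: <the reserved address>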
Had you already seen this section from your link? "How to securely reference secrets in grafana.ini" – jordanm. Thank you so much @jordanm :)
###Instruction: I'm using Grafana with Helm https://github.com/grafana/helm-charts/tree/main/charts/grafana. I would like to switch from SQLite 3 to PostgreSQL as my backend database. However, I'm concerned about the security of my database credentials, which are currently stored in the values.yaml file as plain text. What is the recommended way to switch to PostgreSQL and hide the database credentials in a secure way? Can I use Kubernetes secrets or some other mechanism to achieve this? (Please I need to know where, in the values.yaml file, I have to do the configuration) I'm connecting Grafana with the PostgreSQL database inside the grafana.ini section in the values.yaml, E.g.: grafana.ini: database: type: "postgres" host: "db.postgres.database.azure.com" name: "grafana-db" user: "grafana-db-user" password: ""grafana-db-pass" ssl_mode: "require" Thanks in advance for your help! I've tried to include use the env section but it's not working. ### Response: Had you already seen this section from your link? How to securely reference secrets in grafana.ini – jordanm Thank you so much @jordanm :)
I'm using Grafana with Helm https://github.com/grafana/helm-charts/tree/main/charts/grafana. I would like to switch from SQLite 3 to PostgreSQL as my backend database. However, I'm concerned about the security of my database credentials, which are currently stored in the values.yaml file as plain text. What is the recommended way to switch to PostgreSQL and hide the database credentials in a secure way? Can I use Kubernetes secrets or some other mechanism to achieve this? (Please I need to know where, in the values.yaml file, I have to do the configuration) I'm connecting Grafana with the PostgreSQL database inside the grafana.ini section in the values.yaml, E.g.: grafana.ini: database: type: "postgres" host: "db.postgres.database.azure.com" name: "grafana-db" user: "grafana-db-user" password: ""grafana-db-pass" ssl_mode: "require" Thanks in advance for your help! I've tried to include use the env section but it's not working.
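Expanding on the section referenced above, here is a hedged sketch of what that approach can look like in values.yaml: the database credentials live in a Kubernetes Secret that is mounted into the pod, and grafana.ini reads them through Grafana's file provider. The secret name, key names and mount path are assumptions for illustration.

grafana.ini:
  database:
    type: postgres
    host: db.postgres.database.azure.com
    name: grafana-db
    user: $__file{/etc/secrets/grafana-db/user}
    password: $__file{/etc/secrets/grafana-db/password}
    ssl_mode: require

# Mount the secret created separately, e.g.
# kubectl create secret generic grafana-db-credentials --from-literal=user=... --from-literal=password=...
extraSecretMounts:
  - name: grafana-db-credentials-mount
    secretName: grafana-db-credentials
    defaultMode: 0440
    mountPath: /etc/secrets/grafana-db
    readOnly: true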
This behaviour is not supported (yet). There is an open issue for it: https://github.com/tilt-dev/tilt/issues/5268
###Instruction: All my kustomize declarations for my kubernetes cluster are stored in a folder with this structure: . ├── apps │ └── nginx-test │ ├── base │ │ ├── deployment.yaml │ │ ├── ingress.yaml │ │ ├── kustomization.yaml │ │ ├── namespace.yaml │ │ └── service.yaml │ ├── dev │ │ ├── ingress.yaml │ │ └── kustomization.yaml │ └── tollana │ ├── ingress.yaml │ └── kustomization.yaml ├── infra │ ├── cert-manager │ │ ├── base │ │ │ ├── helmChart.yaml │ │ │ ├── kustomization.yaml │ │ │ ├── namespace.yaml │ │ │ └── values.yaml │ │ ├── dev │ │ │ └── kustomization.yaml │ │ └── tollana │ │ └── kustomization.yaml │ ├── enroute │ │ ├── base │ │ │ ├── helmChart.yaml │ │ │ ├── kustomization.yaml │ │ │ ├── namespace.yaml │ │ │ └── values.yaml │ │ ├── dev │ │ │ ├── add-namespace.yaml │ │ │ ├── externalIP.yaml │ │ │ └── kustomization.yaml │ │ └── tollana │ │ ├── add-namespace.yaml │ │ ├── externalIP.yaml │ │ └── kustomization.yaml I want to have tilt to sync all kustomizations in the dev directories. So far I got this: baseFolders = ["./apps", "./infra"] for folder in baseFolders: appFolders = listdir(folder) for appFolder in appFolders: if os.path.exists("kubectl kustomize --enable-helm {}/dev/kustomization.yaml").format(os.path.join(folder, appFolder)): k8s_yaml(local("kubectl kustomize --enable-helm {}/dev").format(os.path.join(folder, appFolder))) But listdir("directory") only lists the files in directory. Is there a way to get all folders in a specific directory? os.listdir() doesn't work: Error: module has no .listdir field or method ### Response: This behaviour is not supported (yet). There is an open issue for it: https://github.com/tilt-dev/tilt/issues/5268
All my kustomize declarations for my kubernetes cluster are stored in a folder with this structure: . ├── apps │ └── nginx-test │ ├── base │ │ ├── deployment.yaml │ │ ├── ingress.yaml │ │ ├── kustomization.yaml │ │ ├── namespace.yaml │ │ └── service.yaml │ ├── dev │ │ ├── ingress.yaml │ │ └── kustomization.yaml │ └── tollana │ ├── ingress.yaml │ └── kustomization.yaml ├── infra │ ├── cert-manager │ │ ├── base │ │ │ ├── helmChart.yaml │ │ │ ├── kustomization.yaml │ │ │ ├── namespace.yaml │ │ │ └── values.yaml │ │ ├── dev │ │ │ └── kustomization.yaml │ │ └── tollana │ │ └── kustomization.yaml │ ├── enroute │ │ ├── base │ │ │ ├── helmChart.yaml │ │ │ ├── kustomization.yaml │ │ │ ├── namespace.yaml │ │ │ └── values.yaml │ │ ├── dev │ │ │ ├── add-namespace.yaml │ │ │ ├── externalIP.yaml │ │ │ └── kustomization.yaml │ │ └── tollana │ │ ├── add-namespace.yaml │ │ ├── externalIP.yaml │ │ └── kustomization.yaml I want to have tilt to sync all kustomizations in the dev directories. So far I got this: baseFolders = ["./apps", "./infra"] for folder in baseFolders: appFolders = listdir(folder) for appFolder in appFolders: if os.path.exists("kubectl kustomize --enable-helm {}/dev/kustomization.yaml").format(os.path.join(folder, appFolder)): k8s_yaml(local("kubectl kustomize --enable-helm {}/dev").format(os.path.join(folder, appFolder))) But listdir("directory") only lists the files in directory. Is there a way to get all folders in a specific directory? os.listdir() doesn't work: Error: module has no .listdir field or method
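Until that issue is resolved, one possible workaround is to shell out from the Tiltfile and let find list the directories; this is only a sketch, assuming a POSIX shell with find available where Tilt runs.

baseFolders = ["./apps", "./infra"]
for folder in baseFolders:
    # find prints one directory per line; splitlines() turns that output into a list
    appFolders = str(local("find {} -mindepth 1 -maxdepth 1 -type d".format(folder))).splitlines()
    for appFolder in appFolders:
        if os.path.exists(os.path.join(appFolder, "dev", "kustomization.yaml")):
            k8s_yaml(local("kubectl kustomize --enable-helm {}".format(os.path.join(appFolder, "dev"))))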
Unfortunately it isn't possible to do what you say. Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it's not possible to add an ephemeral container using kubectl edit. Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod. https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#understanding-ephemeral-containers
###Instruction: Since Kubernetes 1.25, the ephemeral containers are stable. I can inject ephemeral container into running pod but when the purpose of debug container ends I'd like to remove the container from the pod but I still see it with Terminated state. The docs currently say to delete the container I must delete whole pod similar to copied pod but I don't think that is right. How can I delete ephemeral container from running pod without destroying it? ### Response: Unfortunately it isn't possible to do what you say. Ephemeral containers are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it's not possible to add an ephemeral container using kubectl edit. Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod. https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#understanding-ephemeral-containers
Since Kubernetes 1.25, the ephemeral containers are stable. I can inject ephemeral container into running pod but when the purpose of debug container ends I'd like to remove the container from the pod but I still see it with Terminated state. The docs currently say to delete the container I must delete whole pod similar to copied pod but I don't think that is right. How can I delete ephemeral container from running pod without destroying it?
Below are my test results:
Scenario 1: If there are multiple nodepools and all of those nodepools have corresponding taints applied, then the cluster autoscaler will scale only the particular nodepool whose taint matches the tolerations specified on the pending pods, i.e. it will scale only the nodepool that matches the corresponding taints/tolerations.
Scenario 2: If you have 3 nodepools and a taint is applied to only one nodepool, then once that nodepool is full the pending pods can go to the other nodepools (on which no taints were applied), and there is a high possibility of the autoscaler randomly scaling those other nodepools as well!
Please note: taints and tolerations alone will not guarantee that the pods stick to the corresponding nodepools. But if you apply taints/tolerations along with node affinity, that will make sure the pods are deployed only on the corresponding nodepools!
All of these conclusions are based on tests I ran locally in my AKS cluster!
###Instruction: I would like to know how does the AKS cluster autoscaler choses in which node pool to add a node in a multiple node pool environment For instance, if I have a node pool tainted for a specific applications, will the autoscaler automatically detect the taint and only scale the node pool up if there are some pending pods which can be scheduled on the nodes ? Or will it scale a random node pool in the cluster ? There is nothing about it on https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler. ### Response: Below are my test results: Scenario1: If there are multiple nodepools and if all those nodepools have got corresponding taints applied , then cluster autoscaler will scale only that particular nodepool for which the corresponding pods are in pending state which are specified with tolerations i.e. it will scale only that nodepool which matches with corresponding taints/tolerations Scenario2: If you have 3 nodepools , taint is applied only on one nodepool - once the corresponding nodepool is full , then the pending pods can go to other nodepools (on which taints were not applied) and there is a high possibility of auto-scaling randomly the other nodepools also! Please Note: Only Taints & Tolerations will not guarantee of sticking the pods to corresponding nodepools. But If you apply Taints/Tolerations along with NodeAffinity that will make sure the pods will go and deploy only on corresponding nodepools ! All those conclusions are based upon the tests which I did locally in my AKS cluster!
I would like to know how does the AKS cluster autoscaler choses in which node pool to add a node in a multiple node pool environment For instance, if I have a node pool tainted for a specific applications, will the autoscaler automatically detect the taint and only scale the node pool up if there are some pending pods which can be scheduled on the nodes ? Or will it scale a random node pool in the cluster ? There is nothing about it on https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler.
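To make the scheduling fully deterministic, the pod spec can combine a toleration for the nodepool's taint with a required node affinity on the nodepool label. The taint key/value and the nodepool name below are assumptions for illustration; AKS exposes the pool name through the agentpool node label.

spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: agentpool
            operator: In
            values:
            - batchpool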
I usually recommend the following, which gives you control over the IP address etc.:
Deploy a Public IP address using Terraform (or ARM) alongside your AKS cluster.
Give the AKS Kubelet identity "Network Contributor" permissions on that PIP.
When deploying your Ingress, reference that existing PIP (and its resource group); AKS will then use that IP for the deployed service.
This way you can, for example, control whether the PIP is static or dynamic, whether it comes out of a given prefix range, etc. Full example here: https://learn.microsoft.com/en-us/azure/aks/static-ip
###Instruction: I have deployed Azure AKS using the below official terraform docs https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster I see bunch of resources created as well as Loadbalancer called Kubernetes automatically After this I have deployed deployed the Run demo app with ingress as mentioned in docs below for hello world one & two https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli When I check the ingress resource I don't see any EXTERNAL IP address allocated & the columns are blank. I want to set up a sample end to end AKS cluster with load balancing & a DNS record. Can some one let me know what am I doing wrong or is there any other repo with end to end examples? ### Response: I usually recommend the following, which gives you control over the IP address etc. Deploy an Public IP address using Terraform (or ARM) alongside your AKS cluster Give the AKS Kubelet identity "Network Contributor" permissions on that PIP When deploying your Ingress, reference that existing PIP (and its resource group). AKS will then use that IP for the deployed service. This way you can, for example, control whether the PIP is static or dynamic, if it's coming out of a given prefix range, etc. Full example here: https://learn.microsoft.com/en-us/azure/aks/static-ip
I have deployed Azure AKS using the below official terraform docs https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster I see bunch of resources created as well as Loadbalancer called Kubernetes automatically After this I have deployed deployed the Run demo app with ingress as mentioned in docs below for hello world one & two https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli When I check the ingress resource I don't see any EXTERNAL IP address allocated & the columns are blank. I want to set up a sample end to end AKS cluster with load balancing & a DNS record. Can some one let me know what am I doing wrong or is there any other repo with end to end examples?
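A hedged sketch of the same flow using the Azure CLI and Helm, in case Terraform is not at hand; the resource group, names and IP are placeholders, and the role assignment step is only summarized.

# 1. Reserve a static public IP
az network public-ip create \
    --resource-group myResourceGroup \
    --name myAKSPublicIP \
    --sku Standard \
    --allocation-method static

# 2. Grant the cluster's kubelet/managed identity "Network Contributor" on that resource group
#    (see the static-ip doc linked above for the exact role assignment command)

# 3. Reference the reserved IP when installing the ingress controller
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
    --set controller.service.loadBalancerIP=<reserved IP> \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"=myResourceGroup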
According to your shell-scripted workaround, you want to store each document into its own file named after .metadata.name in a subdirectory named after .kind. However, yq's split option -s cannot create subdirectories, and would fail when provided with naming schemes such as -s '.kind + "/" + .metadata.name'. Alternatives: You can try submitting a feature request. The author is quite open for adaptations towards new use-cases. In fact, the -s option by itself also came to life this way. You could (temporarily) replace the / character from above for something else (valid but not contained in the names otherwise), and then with a shell script just iterate over the files moving and renaming them into the right places (altogether resulting in just one call to yq). Example using _ instead: yq -s '.kind + "_" + .metadata.name' . testfile.yaml && for f in *_* do mkdir -p "${f%%_*}" && mv "$f" "${f%%_*}/${f#*_}"; done Depending on your overall goal with the files/documents, you could consider working with copies of the original file, each reduced to just those documents "belonging together", i.e. replacing the subdirectories with files, and keeping a grouped set of documents within those files. Example using the eval-all (or ea) command in combination with [.] to generate an array of documents, then group_by on .kind to collect into sub-arrays those that share the criteria given, and finally .[] | split_doc to turn those groups into documents again, in order to be captured by -s. yq ea -s '.[0].kind' '[.] | group_by(.kind) | .[] | split_doc' testfile.yaml Given your example, here you'd intentionally get only three files, but with one containing two documents, and two containing one. Note that technically the outputs are still just arrays of (former) documents, not "real" multi-document files, They'd need another .[] | split_doc applied to each one of them in order to properly re-convert them as such (not covered here).
###Instruction: Below is my testfile.yaml: --- kind: Pod metadata: name: amazing-application --- kind: Deployment metadata: name: amazing-deployment --- kind: Service metadata: name: amazing-deployment --- kind: Service metadata: name: tea-service My goal is to split this into 4 files where the filename is .metadata.name and the dir that file goes into is .kind. I have achieved what I want with this: for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq); do mkdir "$kind" cd "$kind" yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name' cd ..; done What I want to know is how to get a unique together mapping, or somehow using multple criteria to split the testfile rather than through the loop. Is there a way to use yq and -s or select to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')? Because if you do yq -s '.kind' testfile.yaml it will yield three yaml files, not four. Same for yq -s '.metadata.name' testfile.yaml; we get three files as not all name are unique - one gets lost. ### Response: According to your shell-scripted workaround, you want to store each document into its own file named after .metadata.name in a subdirectory named after .kind. However, yq's split option -s cannot create subdirectories, and would fail when provided with naming schemes such as -s '.kind + "/" + .metadata.name'. Alternatives: You can try submitting a feature request. The author is quite open for adaptations towards new use-cases. In fact, the -s option by itself also came to life this way. You could (temporarily) replace the / character from above for something else (valid but not contained in the names otherwise), and then with a shell script just iterate over the files moving and renaming them into the right places (altogether resulting in just one call to yq). Example using _ instead: yq -s '.kind + "_" + .metadata.name' . testfile.yaml && for f in *_* do mkdir -p "${f%%_*}" && mv "$f" "${f%%_*}/${f#*_}"; done Depending on your overall goal with the files/documents, you could consider working with copies of the original file, each reduced to just those documents "belonging together", i.e. replacing the subdirectories with files, and keeping a grouped set of documents within those files. Example using the eval-all (or ea) command in combination with [.] to generate an array of documents, then group_by on .kind to collect into sub-arrays those that share the criteria given, and finally .[] | split_doc to turn those groups into documents again, in order to be captured by -s. yq ea -s '.[0].kind' '[.] | group_by(.kind) | .[] | split_doc' testfile.yaml Given your example, here you'd intentionally get only three files, but with one containing two documents, and two containing one. Note that technically the outputs are still just arrays of (former) documents, not "real" multi-document files, They'd need another .[] | split_doc applied to each one of them in order to properly re-convert them as such (not covered here).
Below is my testfile.yaml: --- kind: Pod metadata: name: amazing-application --- kind: Deployment metadata: name: amazing-deployment --- kind: Service metadata: name: amazing-deployment --- kind: Service metadata: name: tea-service My goal is to split this into 4 files where the filename is .metadata.name and the dir that file goes into is .kind. I have achieved what I want with this: for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq); do mkdir "$kind" cd "$kind" yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name' cd ..; done What I want to know is how to get a unique together mapping, or somehow using multple criteria to split the testfile rather than through the loop. Is there a way to use yq and -s or select to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')? Because if you do yq -s '.kind' testfile.yaml it will yield three yaml files, not four. Same for yq -s '.metadata.name' testfile.yaml; we get three files as not all name are unique - one gets lost.
when the pod is restarted, first send TermSignal to the application and execute preStop and terminate and start pod again. Am I wrong? As per the official documentation PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires. This grace period applies to the total time it takes for both the PreStop hook to execute and for the Container to stop normally. Hope the above information is useful to you.
###Instruction: I have a problem. There is preStop option in manifest file and OOMKilled was happened. pod was restarted but no heapdump is created. lifecycle: preStop: exec: command: ["/tmp/preStop.sh"] heapdump works when I manually terminate the pod. so I wonder if pod is restarted, preStop is not supposed to be executed? I thought when pod is restarted, first send TermSignal to application and execute preStop and terminate and start pod again. Am I wrong? Thanks Best Regards. ### Response: when the pod is restarted, first send TermSignal to the application and execute preStop and terminate and start pod again. Am I wrong? As per the official documentation PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires. This grace period applies to the total time it takes for both the PreStop hook to execute and for the Container to stop normally. Hope the above information is useful to you.
I have a problem. There is preStop option in manifest file and OOMKilled was happened. pod was restarted but no heapdump is created. lifecycle: preStop: exec: command: ["/tmp/preStop.sh"] heapdump works when I manually terminate the pod. so I wonder if pod is restarted, preStop is not supposed to be executed? I thought when pod is restarted, first send TermSignal to application and execute preStop and terminate and start pod again. Am I wrong? Thanks Best Regards.
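A minimal sketch of how these pieces usually fit together (the image name is a placeholder). Note that the hook runs just before Kubernetes terminates the container; a container that was already killed by the kernel's OOM killer is in a terminated state, which would explain why the heap dump only shows up when the pod is deleted manually.

spec:
  terminationGracePeriodSeconds: 120   # budget for preStop plus normal shutdown
  containers:
  - name: app
    image: my-app:latest               # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/tmp/preStop.sh"] # runs on graceful, API-initiated termination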
I don't know the code of the tool you mention in the question, but requiring a static IP instead of allowing a DNS name to be configured is generally an anti-pattern, especially on Kubernetes. However, network plug-ins like Calico allow you to reserve a static IP address for your Pod. Take a look here: https://docs.tigera.io/calico/latest/networking/ipam/use-specific-ip
###Instruction: In k8s, dns name can be unchangable, the ip is unflexable. the cnosdb github repo provide ip to config the clusters, but in kubernetes the cluster should use dns name, please provide a workaround method to config. know the best practise to deploy cnosdb in k8s ### Response: I don't know the code of the tool you indicate in the question, but not giving the possibility to configure a DNS name in favor of a static IP is generally an anti-pattern, especially on Kubernetes. However, Network plug-ins like Calico allow you to reserve a static IP address for your Pod. Take a look here: https://docs.tigera.io/calico/latest/networking/ipam/use-specific-ip
In k8s, dns name can be unchangable, the ip is unflexable. the cnosdb github repo provide ip to config the clusters, but in kubernetes the cluster should use dns name, please provide a workaround method to config. know the best practise to deploy cnosdb in k8s
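If pinning Pod IPs is really required, a minimal sketch of the Calico approach from the link looks like this; the Pod name, image and IP address are placeholders, and the cluster must be using Calico IPAM for the annotation to take effect.

apiVersion: v1
kind: Pod
metadata:
  name: cnosdb-meta-0
  annotations:
    # Calico-specific: request a fixed IP from the IP pool for this Pod
    cni.projectcalico.org/ipAddrs: '["10.244.100.10"]'
spec:
  containers:
  - name: cnosdb
    image: cnosdb/cnosdb:latest   # placeholder image tag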
The version of Spring Cloud Kubernetes that you are using (1.1.10.RELEASE) requires Spring Boot 2.2.x. You are using 2.0.x. This older version of Spring Boot uses an older version of Spring Framework that does not contain the org.springframework.core.env.Profiles class. It was introduced in Spring Framework 5.1 and Spring Boot 2.0.x uses Spring Framework 5.0.x. You should update your dependency versions to ensure that they're compatible. To make it easier to do so, I would recommend using the spring-cloud-dependencies bom as shown on its project page.
###Instruction: When I run the application locally the application is up and running but when i deploy the same application in the Kubernetes cluster i am getting the error Error java.lang.NoClassDefFoundError: org/springframework/core/env/Profiles at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$null$3(PropertySourceUtils.java:69) at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:239) at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:167) at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:139) at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.createProperties(YamlPropertiesFactoryBean.java:135) at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.getObject(YamlPropertiesFactoryBean.java:115) at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$yamlParserGenerator$4(PropertySourceUtils.java:77) at java.util.function.Function.lambda$andThen$1(Function.java:88) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.processAllEntries(ConfigMapPropertySource.java:149) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.getData(ConfigMapPropertySource.java:100) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.<init>(ConfigMapPropertySource.java:78) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.getMapPropertySourceForSingleConfigMap(ConfigMapPropertySourceLocator.java:96) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.lambda$locate$0(ConfigMapPropertySourceLocator.java:79) at java.util.ArrayList.forEach(ArrayList.java:1259) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.locate(ConfigMapPropertySourceLocator.java:78) at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:94) at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:628) at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:364) at org.springframework.boot.SpringApplication.run(SpringApplication.java:305) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) at com.daimler.daivb.msl.MbappsSnapLocalSearchServiceApplication.main(MbappsSnapLocalSearchServiceApplication.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) Caused by: java.lang.ClassNotFoundException: org.springframework.core.env.Profiles at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:419) at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:93) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) Dependencies I am using in the application are 
spring-boot-starter-web - 2.0.8.RELEASE gson - 2.3.1 json-lib - 2.3 spring-cloud-starter-kubernetes-config -1.1.10.RELEASE json - 20230227 xmlrpc-client - 3.1.3 spring-security-oauth2-autoconfigure - 2.0.8.RELEASE spring-security-config spring-security-web spring-cloud-starter-openfeign - 2.0.0.RELEASE spring-cloud-starter-netflix-ribbon - 2.0.0.RELEASE spring-boot-starter-actuator commons-lang3 - 3.8.1 lombok spring-cloud-starter-config - 2.0.3.RELEASE micrometer-registry-prometheus - 1.2.2 micrometer-core - 1.2.2 spring-boot-starter-test spring-cloud-dependencies - Finchley.SR3 ### Response: The version of Spring Cloud Kubernetes that you are using (1.1.10.RELEASE) requires Spring Boot 2.2.x. You are using 2.0.x. This older version of Spring Boot uses an older version of Spring Framework that does not contain the org.springframework.core.env.Profiles class. It was introduced in Spring Framework 5.1 and Spring Boot 2.0.x uses Spring Framework 5.0.x. You should update your dependency versions to ensure that they're compatible. To make it easier to do so, I would recommend using the spring-cloud-dependencies bom as shown on its project page.
When I run the application locally the application is up and running but when i deploy the same application in the Kubernetes cluster i am getting the error Error java.lang.NoClassDefFoundError: org/springframework/core/env/Profiles at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$null$3(PropertySourceUtils.java:69) at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:239) at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:167) at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:139) at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.createProperties(YamlPropertiesFactoryBean.java:135) at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.getObject(YamlPropertiesFactoryBean.java:115) at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$yamlParserGenerator$4(PropertySourceUtils.java:77) at java.util.function.Function.lambda$andThen$1(Function.java:88) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.processAllEntries(ConfigMapPropertySource.java:149) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.getData(ConfigMapPropertySource.java:100) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.<init>(ConfigMapPropertySource.java:78) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.getMapPropertySourceForSingleConfigMap(ConfigMapPropertySourceLocator.java:96) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.lambda$locate$0(ConfigMapPropertySourceLocator.java:79) at java.util.ArrayList.forEach(ArrayList.java:1259) at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.locate(ConfigMapPropertySourceLocator.java:78) at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:94) at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:628) at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:364) at org.springframework.boot.SpringApplication.run(SpringApplication.java:305) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) at com.daimler.daivb.msl.MbappsSnapLocalSearchServiceApplication.main(MbappsSnapLocalSearchServiceApplication.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) Caused by: java.lang.ClassNotFoundException: org.springframework.core.env.Profiles at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:419) at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:93) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) Dependencies I am using in the application are spring-boot-starter-web - 
2.0.8.RELEASE gson - 2.3.1 json-lib - 2.3 spring-cloud-starter-kubernetes-config -1.1.10.RELEASE json - 20230227 xmlrpc-client - 3.1.3 spring-security-oauth2-autoconfigure - 2.0.8.RELEASE spring-security-config spring-security-web spring-cloud-starter-openfeign - 2.0.0.RELEASE spring-cloud-starter-netflix-ribbon - 2.0.0.RELEASE spring-boot-starter-actuator commons-lang3 - 3.8.1 lombok spring-cloud-starter-config - 2.0.3.RELEASE micrometer-registry-prometheus - 1.2.2 micrometer-core - 1.2.2 spring-boot-starter-test spring-cloud-dependencies - Finchley.SR3
Workaround: when deploying this add-on in the AWS console, click on "advanced options" in the first panel after specifying the version and the IAM role. At the very bottom there is an option to override conflicts; selecting it allows the installation to succeed.
###Instruction: I am trying to update the eks add-on named "vpc-cni". This plugin, does the following: "The CNI plugin allows Kubernetes Pods to have the same IP address as they do on the VPC network. More specifically, all containers inside the Pod share a network namespace, and they can communicate with each-other using local ports." I am however getting the following "Conflict" when updating: Conflicts: ClusterRole.rbac.authorization.k8s.io aws-node - .rules DaemonSet.apps aws-node - .spec.template.spec.containers[name="aws-node"].image DaemonSet.apps aws-node - .spec.template.spec.initContainers[name="aws-vpc-cni-init"].image I don't really know where to begin in remediating this, or even what this error conflict is saying is conflicting. Any help appreciated. ### Response: Work around: when deploying this addon in the AWS console, click on the "advanced options" in the first panel after specifying the version and the IAM role. At the very bottom is a button that can be selected to override conflicts and this allows the installation to succeed.
I am trying to update the eks add-on named "vpc-cni". This plugin, does the following: "The CNI plugin allows Kubernetes Pods to have the same IP address as they do on the VPC network. More specifically, all containers inside the Pod share a network namespace, and they can communicate with each-other using local ports." I am however getting the following "Conflict" when updating: Conflicts: ClusterRole.rbac.authorization.k8s.io aws-node - .rules DaemonSet.apps aws-node - .spec.template.spec.containers[name="aws-node"].image DaemonSet.apps aws-node - .spec.template.spec.initContainers[name="aws-vpc-cni-init"].image I don't really know where to begin in remediating this, or even what this error conflict is saying is conflicting. Any help appreciated.
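The same override is available outside the console; a hedged sketch with the AWS CLI, where the cluster name and add-on version are placeholders:

aws eks update-addon \
    --cluster-name my-cluster \
    --addon-name vpc-cni \
    --addon-version v1.12.6-eksbuild.2 \
    --resolve-conflicts OVERWRITE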
You can define a Service without specifying a selector to match Pods. Because this Service has no selector, the corresponding EndpointSlice (and legacy Endpoints) objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually.
###Instruction: Can I target a K8s service to a Pod without labels? That is, I have a K8s Pod created with the following configuration. apiVersion: v1 kind: Pod metadata: name: nofrills-first-pod spec: containers: - name: nofrills-container image: nofrills/to-deploy:0.0.1 ports: - containerPort: 3000 I would like to expose this pod as a K8s service. Normally, I would do this by creating a Service configuration that looked something like this apiVersion: v1 kind: Service metadata: name: test-nofrills-service spec: type: NodePort selector: ## ?????? no labels to target? ports: - protocol: TCP port: 3000 targetPort: 3000 nodePort: 32525 However, since the pod doesn't have any labels I don't know how to tell the Service which pod to use. I suppose another way of asking this questions is "Can a K8s selector target an object without any labels?" I realize I could (in many scenarios) easily add labels to the Pod -- but I'm specifically interested in the abilities of K8s selectors here. ### Response: You can define a Service without specifying a selector to match Pods. Because this Service has no selector, the corresponding EndpointSlice (and legacy Endpoints) objects are not created automatically. You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually.
Can I target a K8s service to a Pod without labels? That is, I have a K8s Pod created with the following configuration. apiVersion: v1 kind: Pod metadata: name: nofrills-first-pod spec: containers: - name: nofrills-container image: nofrills/to-deploy:0.0.1 ports: - containerPort: 3000 I would like to expose this pod as a K8s service. Normally, I would do this by creating a Service configuration that looked something like this apiVersion: v1 kind: Service metadata: name: test-nofrills-service spec: type: NodePort selector: ## ?????? no labels to target? ports: - protocol: TCP port: 3000 targetPort: 3000 nodePort: 32525 However, since the pod doesn't have any labels I don't know how to tell the Service which pod to use. I suppose another way of asking this questions is "Can a K8s selector target an object without any labels?" I realize I could (in many scenarios) easily add labels to the Pod -- but I'm specifically interested in the abilities of K8s selectors here.
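A minimal sketch of that manual mapping for the Service from the question; the Pod IP is a placeholder that has to be replaced with the real one (kubectl get pod nofrills-first-pod -o wide).

apiVersion: v1
kind: Service
metadata:
  name: test-nofrills-service
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 32525
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: test-nofrills-service-1
  labels:
    # this label ties the slice to the Service above
    kubernetes.io/service-name: test-nofrills-service
addressType: IPv4
ports:
- name: ""          # empty name matches the Service's single unnamed port
  protocol: TCP
  port: 3000
endpoints:
- addresses:
  - "10.1.2.3"      # placeholder: the Pod's actual IP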
for opensearch sso(keycloak) please use below steps: opensearch: Make a custom iamge for opensearch, for this make 2 files as below. i.config.yml(for opensearch security plugin) --- _meta: type: "config" config_version: 2 config: dynamic: http: anonymous_auth_enabled: false authc: internal_auth: order: 0 description: "HTTP basic authentication using the internal user database" http_enabled: true transport_enabled: true http_authenticator: type: basic challenge: false authentication_backend: type: internal openid_auth_domain: http_enabled: true transport_enabled: true order: 1 http_authenticator: type: openid challenge: false config: subject_key: preferred_username roles_key: roles openid_connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration" authentication_backend: type: noop --- ii. log4j2.properties(this file will start logs in opensearch so we can see logs which are otherwise turned-off) --- logger.securityjwt.name = com.amazon.dlic.auth.http.jwt logger.securityjwt.level = trace --- iii. Dockerfile --- FROM opensearchproject/opensearch:2.5.0 RUN mkdir /usr/share/opensearch/plugins/opensearch-security/securityconfig COPY config.yaml /usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml COPY config.yaml /usr/share/opensearch/config/opensearch-security/config.yml COPY log4j2.properties /usr/share/opensearch/config/log4j2.properties --- Deploy opensearch with opensearch helm chart(change image with your customimage built using above configs). opensearch will deploy 3 pods.now go in each pod and fire belo command to start security plugin(do this only once for each pod of opensearch). /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cacert /usr/share/opensearch/config/root-ca.pem -cert /usr/share/opensearch/config/kirk.pem -key /usr/share/opensearch/config/kirk-key.pem -cd /usr/share/opensearch/config/opensearch-security -h localhost make sure all 3 pods are up and in ready state. opensearch-dashboard 3.Now we will configure opensearch-dashboard i. In values.yml of helm chart of opensearch-dashboard search for config --- config: opensearch_dashboards.yml: | opensearch.hosts: [https://localhost:9200] opensearch.ssl.verificationMode: none opensearch.username: admin opensearch.password: admin opensearch.requestHeadersWhitelist: [authorization, securitytenant] opensearch_security.multitenancy.enabled: true opensearch_security.multitenancy.tenants.preferred: [Private, Global] opensearch_security.readonly_mode.roles: [kibana_read_only] opensearch_security.cookie.secure: false server.host: '0.0.0.0' opensearch_security.auth.type: "openid" opensearch_security.openid.connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration" opensearch_security.openid.client_id: "admin" opensearch_security.openid.client_secret: "asgduasdjsadk" opensearch_security.openid.scope: "email openid" opensearch_security.openid.base_redirect_url: "https://opensearch_dashboards-url.io" --- ii. deploy opensearch_dashboards. Now once opensearch_dashboards is deployed and pod is in up and ready state you can go to https://opensearch_dashboards-url.io (your opensearch_dashboards url ) and you will see keycloak login form.
###Instruction: I have a kubernetes cluster on which I have deployed a opensearch cluster and opensearch dashboard using Helm, I am also able to deploy logstash using helm successfully but I am confused on how to integrate those, I want to feed data to my Opensearch using logstash as my OBJECTIVE as I am not able to find much documentation on it as well. Any help is appreciated....Thanks in advance! Deployed opensearch using Helm and logstash as well but unable to integrate them Update here!!! Have made a few changes to simplify the deployment and more control over the function, I am testing deployment and service files this time, I will add the files below Opensearch deployment file --- apiVersion: apps/v1 kind: StatefulSet metadata: namespace: logging name: opensearch labels: component: opensearch spec: selector: matchLabels: component: opensearch replicas: 1 serviceName: opensearch template: metadata: labels: component: opensearch spec: initContainers: - name: init-sysctl image: busybox imagePullPolicy: IfNotPresent command: - sysctl - -w - vm.max_map_count=262144 securityContext: privileged: true containers: - name: opensearch securityContext: capabilities: add: - IPC_LOCK image: opensearchproject/opensearch env: - name: KUBERNETES_CA_CERTIFICATE_FILE value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: "cluster.name" value: "opensearch-cluster" - name: "network.host" value: "0.0.0.0" - name: "discovery.seed_hosts" value: "[]" - name: discovery.type value: single-node - name: OPENSEARCH_JAVA_OPTS value: -Xmx512M -Xms512M - name: "plugins.security.disabled" value: "false" ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: os-mount mountPath: /data volumes: - name: os-mount persistentVolumeClaim: claimName: nfs-pvc-os-logging Opensearch svc file --- apiVersion: v1 kind: Service metadata: name: opensearch namespace: logging labels: service: opensearch spec: type: ClusterIP selector: component: opensearch ports: - port: 9200 targetPort: 9200 Opensearch dashboard deployment --- apiVersion: apps/v1 kind: Deployment metadata: name: open-dash namespace: logging spec: replicas: 1 selector: matchLabels: app: open-dash template: metadata: labels: app: open-dash spec: # securityContext: # runAsUser: 0 containers: - name: opensearch-dashboard image: opensearchproject/opensearch-dashboards:latest ports: - containerPort: 80 env: # - name: ELASTICSEARCH_URL # value: https://opensearch.logging:9200 # - name: "SERVER_HOST" # value: "localhost" # - name: "opensearch.hosts" # value: https://opensearch.logging:9200 - name: OPENSEARCH_HOSTS value: '["https://opensearch.logging:9200"]' Opensearch Dashboard svc --- apiVersion: v1 kind: Service metadata: name: opensearch namespace: logging labels: service: opensearch spec: type: ClusterIP selector: component: opensearch ports: - port: 9200 targetPort: 9200 with the above configuration I am able to get the Dashboard UI open but in Dashboard pod logs I can see a 400 code logs can anyone please try to reproduce this issue, Also I need to integrate the logstash with this stack. 
{"type":"response","@timestamp":"2023-02-20T05:05:34Z","tags":[],"pid":1,"method":"head","statusCode":400,"req":{"url":"/app/home","method":"head","headers":{"connection":"Keep-Alive","content-type":"application/json","host":"3.108.199.0:30406","user-agent":"Manticore 0.9.1","accept-encoding":"gzip,deflate","securitytenant":"user"},"remoteAddress":"10.244.1.1","userAgent":"Manticore 0.9.1"},"res":{"statusCode":400,"responseTime":2,"contentLength":9},"message":"HEAD /app/home 400 2ms - 9.0B When deploying a logstash pod I get an error that [WARN ] 2023-02-20 05:13:52.212 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:xxxxxx@opensearch.logging:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:xxxxxx@opensearch.logging:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"} Can somebody please try to help me with this puzzle @Benla have made changes as per your recommendation to image now I am getting the following logs in logstash [2023-02-20T05:18:43,028][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false} [2023-02-20T05:18:43,147][INFO ][org.reflections.Reflections] Reflections took 70 ms to scan 1 urls, producing 127 keys and 444 values [2023-02-20T05:18:43,351][INFO ][logstash.javapipeline ] Pipeline main is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise. 
[2023-02-20T05:18:43,370][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3bf49916@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"} [2023-02-20T05:18:43,811][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.44} [2023-02-20T05:18:43,816][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"} [2023-02-20T05:18:43,821][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"} [2023-02-20T05:18:43,835][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} [2023-02-20T05:18:43,869][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044 I started getting these endless loop of logs [WARN ] 2023-02-20 05:13:37.191 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:xxxxxx@opensearch.logging:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:xxxxxx@opensearch.logging:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"} ### Response: for opensearch sso(keycloak) please use below steps: opensearch: Make a custom iamge for opensearch, for this make 2 files as below. i.config.yml(for opensearch security plugin) --- _meta: type: "config" config_version: 2 config: dynamic: http: anonymous_auth_enabled: false authc: internal_auth: order: 0 description: "HTTP basic authentication using the internal user database" http_enabled: true transport_enabled: true http_authenticator: type: basic challenge: false authentication_backend: type: internal openid_auth_domain: http_enabled: true transport_enabled: true order: 1 http_authenticator: type: openid challenge: false config: subject_key: preferred_username roles_key: roles openid_connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration" authentication_backend: type: noop --- ii. log4j2.properties(this file will start logs in opensearch so we can see logs which are otherwise turned-off) --- logger.securityjwt.name = com.amazon.dlic.auth.http.jwt logger.securityjwt.level = trace --- iii. Dockerfile --- FROM opensearchproject/opensearch:2.5.0 RUN mkdir /usr/share/opensearch/plugins/opensearch-security/securityconfig COPY config.yaml /usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml COPY config.yaml /usr/share/opensearch/config/opensearch-security/config.yml COPY log4j2.properties /usr/share/opensearch/config/log4j2.properties --- Deploy opensearch with opensearch helm chart(change image with your customimage built using above configs). opensearch will deploy 3 pods.now go in each pod and fire belo command to start security plugin(do this only once for each pod of opensearch). 
/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cacert /usr/share/opensearch/config/root-ca.pem -cert /usr/share/opensearch/config/kirk.pem -key /usr/share/opensearch/config/kirk-key.pem -cd /usr/share/opensearch/config/opensearch-security -h localhost make sure all 3 pods are up and in ready state. opensearch-dashboard 3.Now we will configure opensearch-dashboard i. In values.yml of helm chart of opensearch-dashboard search for config --- config: opensearch_dashboards.yml: | opensearch.hosts: [https://localhost:9200] opensearch.ssl.verificationMode: none opensearch.username: admin opensearch.password: admin opensearch.requestHeadersWhitelist: [authorization, securitytenant] opensearch_security.multitenancy.enabled: true opensearch_security.multitenancy.tenants.preferred: [Private, Global] opensearch_security.readonly_mode.roles: [kibana_read_only] opensearch_security.cookie.secure: false server.host: '0.0.0.0' opensearch_security.auth.type: "openid" opensearch_security.openid.connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration" opensearch_security.openid.client_id: "admin" opensearch_security.openid.client_secret: "asgduasdjsadk" opensearch_security.openid.scope: "email openid" opensearch_security.openid.base_redirect_url: "https://opensearch_dashboards-url.io" --- ii. deploy opensearch_dashboards. Now once opensearch_dashboards is deployed and pod is in up and ready state you can go to https://opensearch_dashboards-url.io (your opensearch_dashboards url ) and you will see keycloak login form.
I have a kubernetes cluster on which I have deployed a opensearch cluster and opensearch dashboard using Helm, I am also able to deploy logstash using helm successfully but I am confused on how to integrate those, I want to feed data to my Opensearch using logstash as my OBJECTIVE as I am not able to find much documentation on it as well. Any help is appreciated....Thanks in advance! Deployed opensearch using Helm and logstash as well but unable to integrate them Update here!!! Have made a few changes to simplify the deployment and more control over the function, I am testing deployment and service files this time, I will add the files below Opensearch deployment file --- apiVersion: apps/v1 kind: StatefulSet metadata: namespace: logging name: opensearch labels: component: opensearch spec: selector: matchLabels: component: opensearch replicas: 1 serviceName: opensearch template: metadata: labels: component: opensearch spec: initContainers: - name: init-sysctl image: busybox imagePullPolicy: IfNotPresent command: - sysctl - -w - vm.max_map_count=262144 securityContext: privileged: true containers: - name: opensearch securityContext: capabilities: add: - IPC_LOCK image: opensearchproject/opensearch env: - name: KUBERNETES_CA_CERTIFICATE_FILE value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: "cluster.name" value: "opensearch-cluster" - name: "network.host" value: "0.0.0.0" - name: "discovery.seed_hosts" value: "[]" - name: discovery.type value: single-node - name: OPENSEARCH_JAVA_OPTS value: -Xmx512M -Xms512M - name: "plugins.security.disabled" value: "false" ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: os-mount mountPath: /data volumes: - name: os-mount persistentVolumeClaim: claimName: nfs-pvc-os-logging Opensearch svc file --- apiVersion: v1 kind: Service metadata: name: opensearch namespace: logging labels: service: opensearch spec: type: ClusterIP selector: component: opensearch ports: - port: 9200 targetPort: 9200 Opensearch dashboard deployment --- apiVersion: apps/v1 kind: Deployment metadata: name: open-dash namespace: logging spec: replicas: 1 selector: matchLabels: app: open-dash template: metadata: labels: app: open-dash spec: # securityContext: # runAsUser: 0 containers: - name: opensearch-dashboard image: opensearchproject/opensearch-dashboards:latest ports: - containerPort: 80 env: # - name: ELASTICSEARCH_URL # value: https://opensearch.logging:9200 # - name: "SERVER_HOST" # value: "localhost" # - name: "opensearch.hosts" # value: https://opensearch.logging:9200 - name: OPENSEARCH_HOSTS value: '["https://opensearch.logging:9200"]' Opensearch Dashboard svc --- apiVersion: v1 kind: Service metadata: name: opensearch namespace: logging labels: service: opensearch spec: type: ClusterIP selector: component: opensearch ports: - port: 9200 targetPort: 9200 with the above configuration I am able to get the Dashboard UI open but in Dashboard pod logs I can see a 400 code logs can anyone please try to reproduce this issue, Also I need to integrate the logstash with this stack. 
{"type":"response","@timestamp":"2023-02-20T05:05:34Z","tags":[],"pid":1,"method":"head","statusCode":400,"req":{"url":"/app/home","method":"head","headers":{"connection":"Keep-Alive","content-type":"application/json","host":"3.108.199.0:30406","user-agent":"Manticore 0.9.1","accept-encoding":"gzip,deflate","securitytenant":"user"},"remoteAddress":"10.244.1.1","userAgent":"Manticore 0.9.1"},"res":{"statusCode":400,"responseTime":2,"contentLength":9},"message":"HEAD /app/home 400 2ms - 9.0B When deploying a logstash pod I get an error that [WARN ] 2023-02-20 05:13:52.212 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:xxxxxx@opensearch.logging:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:xxxxxx@opensearch.logging:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"} Can somebody please try to help me with this puzzle @Benla have made changes as per your recommendation to image now I am getting the following logs in logstash [2023-02-20T05:18:43,028][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false} [2023-02-20T05:18:43,147][INFO ][org.reflections.Reflections] Reflections took 70 ms to scan 1 urls, producing 127 keys and 444 values [2023-02-20T05:18:43,351][INFO ][logstash.javapipeline ] Pipeline main is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise. [2023-02-20T05:18:43,370][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3bf49916@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"} [2023-02-20T05:18:43,811][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.44} [2023-02-20T05:18:43,816][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"} [2023-02-20T05:18:43,821][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"} [2023-02-20T05:18:43,835][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} [2023-02-20T05:18:43,869][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044 I started getting these endless loop of logs [WARN ] 2023-02-20 05:13:37.191 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:xxxxxx@opensearch.logging:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:xxxxxx@opensearch.logging:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"}
Use the List method to get a list of all resources in all namespaces and then loop through the list and delete each resource using the Delete method. Note that List must be given the list type of your custom resource (for example CustomResourceList), not the single-object type:

// Get a list of all instances of your custom resource in all namespaces
crList := &v1alpha1.CustomResourceList{}
listOpts := []client.ListOption{
    client.InNamespace(v1.NamespaceAll),
}
err := k8sClient.List(context.Background(), crList, listOpts...)
if err != nil {
    return err
}

// Loop through the list and delete each instance of your custom resource
for i := range crList.Items {
    err = k8sClient.Delete(context.Background(), &crList.Items[i])
    if err != nil {
        return err
    }
}
###Instruction: I'm trying to delete resources of a particular kind in a k8s cluster using client-go. I'm using this code but it requires a specific namespace to be declared, but i want to delete this resource in all namespaces. u.SetName("test") u.SetNamespace(v1.NamespaceAll) u.SetGroupVersionKind(schema.GroupVersionKind{ Group: "group", Kind: "kind", Version: "v1", }) err := k8sClient.Delete(context.TODO(), u) if err != nil { fmt.Println(err.Error()) return err } Found the example here - https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client but it doesn't mention anything about all namespaces. Could someone plz provide a way to figure this out. NOTE: This is custom resource. not default kind such as pod or deployment etc ### Response: Use the List method to get a list of all resources in all namespaces and then loop through the list and delete each resource using the Delete method. cr := &v1alpha1.CustomResource{} // Get a list of all instances of your custom resource in all namespaces listOpts := []client.ListOption{ client.InNamespace(v1.NamespaceAll), } err := k8sClient.List(context.Background(), cr, listOpts...) if err != nil { return err } // Loop through the list and delete each instance of your custom resource for _, item := range cr.Items { err = k8sClient.Delete(context.Background(), &item) if err != nil { return err } }
I'm trying to delete resources of a particular kind in a k8s cluster using client-go. I'm using this code but it requires a specific namespace to be declared, but i want to delete this resource in all namespaces. u.SetName("test") u.SetNamespace(v1.NamespaceAll) u.SetGroupVersionKind(schema.GroupVersionKind{ Group: "group", Kind: "kind", Version: "v1", }) err := k8sClient.Delete(context.TODO(), u) if err != nil { fmt.Println(err.Error()) return err } Found the example here - https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client but it doesn't mention anything about all namespaces. Could someone plz provide a way to figure this out. NOTE: This is custom resource. not default kind such as pod or deployment etc
You can hit a Pod directly if you use the StatefulSet with a headless service. So if you have three replicas running (web-0, web-1, web-2), you can use

curl web-0.<service-name>.<namespace-name>.svc.cluster.local

Each Pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace-name>.svc.cluster.local. The important part is that your service must be headless (clusterIP: None).

Example:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: registry.k8s.io/nginx
        ports:
        - containerPort: 80
          name: web

Official doc ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
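To sanity-check the per-Pod DNS records, you can resolve them from a throwaway pod. This is just a quick verification sketch assuming the nginx/web example above lives in the default namespace:

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup web-0.nginx.default.svc.cluster.local

Each replica (web-0, web-1, web-2) should resolve to its own Pod IP rather than a single service IP, which is exactly what the headless service provides.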
###Instruction: I wanted to know if there is any way that I can set a DNS for the Pod in the StatefulSet, so I can call them directly. I have read the Kubernetes documentation, but I haven't seen anything useful. ### Response: You can directly hit the POD if using the statefulset with headless service So if you have three replicas running web-0, web-1, web-2 you can use curl web-0.<service-name>.<namespace-name>.svc.cluster.local POD name <pod-name>.<service-name>.<namespace-name>.svc.cluster.local But the important part is your service should be headless Example apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: ports: - port: 80 name: web clusterIP: None selector: app: nginx --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: selector: matchLabels: app: nginx serviceName: "nginx" replicas: 3 minReadySeconds: 10 template: metadata: labels: app: nginx spec: terminationGracePeriodSeconds: 10 containers: - name: nginx image: registry.k8s.io/nginx ports: - containerPort: 80 name: web Official doc ref : https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
I wanted to know if there is any way that I can set a DNS for the Pod in the StatefulSet, so I can call them directly. I have read the Kubernetes documentation, but I haven't seen anything useful.
As seen from your get pods, the Deployment in CrashLoopBackOff state has a different hash from the other 2; it would appear that it is being handled by a different ReplicaSet than the other 2. The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts. This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label Try running kubectl -n YOUR-NAMESPACE get replicasets; if you find 2, delete the one that corresponds to the Pod with the error.
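A quick way to see which ReplicaSet each pod-template-hash belongs to, and to remove the stale one, is sketched below; the namespace placeholder and ReplicaSet name are taken from the output in the question and may differ in your cluster:

# list the ReplicaSets owned by the Deployment, with their pod-template-hash
kubectl -n YOUR-NAMESPACE get rs -o wide

# the hash in the Pod name (5bd9ff7f9d vs ffbf89fcd) tells you which ReplicaSet owns it;
# if an old ReplicaSet is still holding the crashing Pod, delete it
kubectl -n YOUR-NAMESPACE delete rs deployment-5bd9ff7f9d

Deleting a ReplicaSet whose hash matches the Deployment's current pod template will just cause it to be recreated, so make sure you remove the one whose hash does not match the Pods that are Running.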
###Instruction: I am facing an issue in kubernetes. I have a deployment and in replicaset we have given value as 2. After updating my release it is showing 3 replicas. 2 of them are running properly but one is in CrashLoopBackOff. I tried deleting it but it again comes up with same error. There are 2 containers running in the po. In one container I am able to login but not able to login into nginx-cache container deployment-5bd9ff7f9d 1/2 CrashLoopBackOff 297 (2m19s ago) 24h (this is the error) deployment-ffbf89fcd 2/2 Running 0 36d deployment-ffbf89fcd 2/2 Running 0 36d Kubectl describe pod Warning Failed 44m (x4 over 44m) kubelet Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "107374182400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podc22d1a88-befe-4680-8eec-2ad69a4cc890/nginx-cache/cpu.cfs_quota_us: invalid argument: unknown Normal Pulled 43m (x5 over 44m) kubelet Container image "abcd2.azurecr.io/ab_cde/nginx-cache:0.2-ROOT" already present on machine How to remove that error ### Response: As seen from your get pods, the Deployment in CrashLoopBackOff state has a different hash from the other 2; it would appear that it is being handled by a different ReplicaSet than the other 2. The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts. This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label Try running kubectl -n YOUR-NAMESPACE get replicasets; if you find 2, delete the one that corresponds to the Pod with the error.
I am facing an issue in kubernetes. I have a deployment and in replicaset we have given value as 2. After updating my release it is showing 3 replicas. 2 of them are running properly but one is in CrashLoopBackOff. I tried deleting it but it again comes up with same error. There are 2 containers running in the po. In one container I am able to login but not able to login into nginx-cache container deployment-5bd9ff7f9d 1/2 CrashLoopBackOff 297 (2m19s ago) 24h (this is the error) deployment-ffbf89fcd 2/2 Running 0 36d deployment-ffbf89fcd 2/2 Running 0 36d Kubectl describe pod Warning Failed 44m (x4 over 44m) kubelet Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "107374182400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podc22d1a88-befe-4680-8eec-2ad69a4cc890/nginx-cache/cpu.cfs_quota_us: invalid argument: unknown Normal Pulled 43m (x5 over 44m) kubelet Container image "abcd2.azurecr.io/ab_cde/nginx-cache:0.2-ROOT" already present on machine How to remove that error
I had the same problem a while ago. It seems to be a mismatch between what fluentd expects and what the container runtime actually writes to the log file: every line is prefixed with the timestamp, the stream name (stdout/stderr) and a one-character tag before the actual message, so the json parser no longer matches. Ref. https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-747173567

Try following the discussion in the link I pasted above; I solved it like this:

<parse>
  @type regexp
  expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
</parse>
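For context, here is roughly where that snippet lands in the tail source block of your container.conf: only the <parse> section changes, the rest of the source stays as it is. This is a sketch, so double-check the time handling against your own log lines:

<source>
  @type tail
  @id in_tail_container_logs
  @label @containers
  path /var/log/containers/*.log
  exclude_path ["/var/log/containers/fluentd*"]
  pos_file /var/log/fluentd-containers.log.pos
  tag *
  read_from_head true
  <parse>
    @type regexp
    expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
  </parse>
</source>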
###Instruction: The below container.conf works fine in Kubernetes 1.23 but fails after migrating to 1.25. I have also specified the deamonset that I have used to push the logs to cloudwatch. When I look into the logs of the fluentd deamonset I could see a lot of below errors 2023-04-03 01:32:06 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-04-03T01:32:02.9256618Z stdout F [2023-04-03T01:32:02.925Z] DEBUG transaction-677fffdfc4-tc4rx-18/TRANSPORTER: NATS client pingTimer: 1" container.conf ============== <source> @type tail @id in_tail_container_logs @label @containers path /var/log/containers/*.log exclude_path ["/var/log/containers/fluentd*"] pos_file /var/log/fluentd-containers.log.pos tag * read_from_head true <parse> @type json time_format %Y-%m-%dT%H:%M:%S.%NZ </parse> </source> <label @containers> <filter **> @type kubernetes_metadata @id filter_kube_metadata </filter> <filter **> @type record_transformer @id filter_containers_stream_transformer <record> stream_name ${tag_parts[3]} </record> </filter> <match **> @type cloudwatch_logs @id out_cloudwatch_logs_containers region "#{ENV.fetch('AWS_REGION')}" log_group_name "/k8s-nest/#{ENV.fetch('AWS_EKS_CLUSTER_NAME')}/containers" log_stream_name_key stream_name remove_log_stream_name_key true auto_create_stream true <buffer> flush_interval 5 chunk_limit_size 2m queued_chunks_limit_size 32 retry_forever true </buffer> </match> </label> Deamonset ========== apiVersion: apps/v1 kind: DaemonSet metadata: labels: k8s-app: fluentd-cloudwatch name: fluentd-cloudwatch namespace: kube-system spec: selector: matchLabels: k8s-app: fluentd-cloudwatch template: metadata: labels: k8s-app: fluentd-cloudwatch annotations: iam.amazonaws.com/role: fluentd spec: serviceAccount: fluentd serviceAccountName: fluentd containers: - env: - name: AWS_REGION value: us-west-1 - name: AWS_EKS_CLUSTER_NAME value: dex-eks-west #image: 'fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch' image: 'fluent/fluentd-kubernetes-daemonset:v1.15.3-debian-cloudwatch-1.1' imagePullPolicy: IfNotPresent name: fluentd-cloudwatch resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf - mountPath: /var/log name: varlog - mountPath: /var/lib/docker/containers name: varlibdockercontainers readOnly: true - mountPath: /run/log/journal name: runlogjournal readOnly: true dnsPolicy: ClusterFirst initContainers: - command: - sh - '-c' - cp /config-volume/..data/* /fluentd/etc image: busybox imagePullPolicy: Always name: copy-fluentd-config resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: fluentd-config name: config-volume - emptyDir: {} name: fluentdconf - hostPath: path: /var/log type: '' name: varlog - hostPath: path: /var/lib/docker/containers type: '' name: varlibdockercontainers - hostPath: path: /run/log/journal type: '' name: runlogjournal ### Response: I had the same problem a while ago. It seems to be an issue between the logs being emitted from the container and what is being written to the log file. Something is prefixing all logs with the <stdout/stderr> <?> Ref. 
https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-747173567 Try following the discussion in the link I pasted you above; I solved it like this: <parse> @type regexp expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/ </parse>
The below container.conf works fine in Kubernetes 1.23 but fails after migrating to 1.25. I have also specified the deamonset that I have used to push the logs to cloudwatch. When I look into the logs of the fluentd deamonset I could see a lot of below errors 2023-04-03 01:32:06 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-04-03T01:32:02.9256618Z stdout F [2023-04-03T01:32:02.925Z] DEBUG transaction-677fffdfc4-tc4rx-18/TRANSPORTER: NATS client pingTimer: 1" container.conf ============== <source> @type tail @id in_tail_container_logs @label @containers path /var/log/containers/*.log exclude_path ["/var/log/containers/fluentd*"] pos_file /var/log/fluentd-containers.log.pos tag * read_from_head true <parse> @type json time_format %Y-%m-%dT%H:%M:%S.%NZ </parse> </source> <label @containers> <filter **> @type kubernetes_metadata @id filter_kube_metadata </filter> <filter **> @type record_transformer @id filter_containers_stream_transformer <record> stream_name ${tag_parts[3]} </record> </filter> <match **> @type cloudwatch_logs @id out_cloudwatch_logs_containers region "#{ENV.fetch('AWS_REGION')}" log_group_name "/k8s-nest/#{ENV.fetch('AWS_EKS_CLUSTER_NAME')}/containers" log_stream_name_key stream_name remove_log_stream_name_key true auto_create_stream true <buffer> flush_interval 5 chunk_limit_size 2m queued_chunks_limit_size 32 retry_forever true </buffer> </match> </label> Deamonset ========== apiVersion: apps/v1 kind: DaemonSet metadata: labels: k8s-app: fluentd-cloudwatch name: fluentd-cloudwatch namespace: kube-system spec: selector: matchLabels: k8s-app: fluentd-cloudwatch template: metadata: labels: k8s-app: fluentd-cloudwatch annotations: iam.amazonaws.com/role: fluentd spec: serviceAccount: fluentd serviceAccountName: fluentd containers: - env: - name: AWS_REGION value: us-west-1 - name: AWS_EKS_CLUSTER_NAME value: dex-eks-west #image: 'fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch' image: 'fluent/fluentd-kubernetes-daemonset:v1.15.3-debian-cloudwatch-1.1' imagePullPolicy: IfNotPresent name: fluentd-cloudwatch resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf - mountPath: /var/log name: varlog - mountPath: /var/lib/docker/containers name: varlibdockercontainers readOnly: true - mountPath: /run/log/journal name: runlogjournal readOnly: true dnsPolicy: ClusterFirst initContainers: - command: - sh - '-c' - cp /config-volume/..data/* /fluentd/etc image: busybox imagePullPolicy: Always name: copy-fluentd-config resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: fluentd-config name: config-volume - emptyDir: {} name: fluentdconf - hostPath: path: /var/log type: '' name: varlog - hostPath: path: /var/lib/docker/containers type: '' name: varlibdockercontainers - hostPath: path: /run/log/journal type: '' name: runlogjournal
In general (although there are workarounds), if you want to trap an error in order to return a more useful error, you want to ensure the following conditions are met: The error you're trapping has a meaningful type You're using go version >= 1.13 which ships with useful helper functions In the following example I'm trying to read a config file that doesn't exist. My code checks that the error returned is a fs.PathError and then throws it's own more useful error. You can extend this general idea to your use case. package main import ( "errors" "fmt" "io/fs" "k8s.io/client-go/tools/clientcmd" ) func main() { var myError error config, originalError := clientcmd.BuildConfigFromFlags("", "/some/path/that/doesnt/exist") if originalError != nil { var pathError *fs.PathError switch { case errors.As(originalError, &pathError): myError = fmt.Errorf("there is no config file at %s", originalError.(*fs.PathError).Path) default: myError = fmt.Errorf("there was an error and it's type was %T", originalError) } fmt.Printf("%#v", myError) } else { fmt.Println("There was no error") fmt.Println(config) } } In your debugging, you will find the %T formatter useful. For your specific use-case, you can use a Regex to parse out the desired text. The regex below says: ^\W* start with any non-alhpanumeric characters (\w+) capture the alphanumeric string following \W*\s? match non-alphanumeric characters (is\sinvalid) capture "is invalid" func MyError(inError error) error { pattern, _ := regexp.Compile(`^\W*(\w+)\W*\s?(is\sinvalid)(.*)$`) myErrorString := pattern.ReplaceAll([]byte(inError.Error()), []byte("$1 $2")) return errors.New(string(myErrorString)) } As seen on this playground: https://goplay.tools/snippet/bcZO7wa8Vnl
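Since the errors here come from the Kubernetes API server, it is usually cleaner to check them with the helpers in k8s.io/apimachinery/pkg/api/errors instead of (or in addition to) matching generic error types. A rough sketch follows; the package name, the FriendlyError helper and the hotel wording are illustrative, not part of any library:

package hotels

import (
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// FriendlyError maps a Kubernetes API error to a user-facing message.
// It checks the status reason returned by the API server rather than
// the message text, so it works for any resource kind.
func FriendlyError(name string, err error) error {
    switch {
    case err == nil:
        return nil
    case apierrors.IsAlreadyExists(err):
        return fmt.Errorf("%s already exists", name)
    case apierrors.IsInvalid(err):
        return fmt.Errorf("%s is invalid", name)
    default:
        // fall back to the original message
        return err
    }
}

Because the check is reason-based, the same helper can be reused across all of your endpoint controllers, which matches the "generic, not hardcoded" requirement.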
###Instruction: I have a backend with golang that talks to k8s. I want to reformulate the error response that i get from k8s and send it to the frontend. I want to return a meaningful validation error messages for the user, when he add a non valid name, something already exist ... And i want something generic not hardcoded in each endpoint's controller. I am using kubernetes/client-go. First error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel's name: hotel123, that's already exist. I get this error message: \"hotel123\" already exists. What i want : hotel123 already exists. second error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel name: hotel_123, that's alerady exist. I get this error message: \"hotel_123\" is invalid, Invalid value: \"hotel_123\"... What i want: hotel_123 is invalid How to return a custom user friendly error message ? PS: i have multiple functions, so the validation should be generic. ### Response: In general (although there are workarounds), if you want to trap an error in order to return a more useful error, you want to ensure the following conditions are met: The error you're trapping has a meaningful type You're using go version >= 1.13 which ships with useful helper functions In the following example I'm trying to read a config file that doesn't exist. My code checks that the error returned is a fs.PathError and then throws it's own more useful error. You can extend this general idea to your use case. package main import ( "errors" "fmt" "io/fs" "k8s.io/client-go/tools/clientcmd" ) func main() { var myError error config, originalError := clientcmd.BuildConfigFromFlags("", "/some/path/that/doesnt/exist") if originalError != nil { var pathError *fs.PathError switch { case errors.As(originalError, &pathError): myError = fmt.Errorf("there is no config file at %s", originalError.(*fs.PathError).Path) default: myError = fmt.Errorf("there was an error and it's type was %T", originalError) } fmt.Printf("%#v", myError) } else { fmt.Println("There was no error") fmt.Println(config) } } In your debugging, you will find the %T formatter useful. For your specific use-case, you can use a Regex to parse out the desired text. The regex below says: ^\W* start with any non-alhpanumeric characters (\w+) capture the alphanumeric string following \W*\s? match non-alphanumeric characters (is\sinvalid) capture "is invalid" func MyError(inError error) error { pattern, _ := regexp.Compile(`^\W*(\w+)\W*\s?(is\sinvalid)(.*)$`) myErrorString := pattern.ReplaceAll([]byte(inError.Error()), []byte("$1 $2")) return errors.New(string(myErrorString)) } As seen on this playground: https://goplay.tools/snippet/bcZO7wa8Vnl
I have a backend with golang that talks to k8s. I want to reformulate the error response that i get from k8s and send it to the frontend. I want to return a meaningful validation error messages for the user, when he add a non valid name, something already exist ... And i want something generic not hardcoded in each endpoint's controller. I am using kubernetes/client-go. First error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel's name: hotel123, that's already exist. I get this error message: \"hotel123\" already exists. What i want : hotel123 already exists. second error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel name: hotel_123, that's alerady exist. I get this error message: \"hotel_123\" is invalid, Invalid value: \"hotel_123\"... What i want: hotel_123 is invalid How to return a custom user friendly error message ? PS: i have multiple functions, so the validation should be generic.
The easiest way is to configure Elasticsearch manually with simple curl commands. Ref doc: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html You can also use Postman to manage these calls, or you may find a ready-made collection of the necessary Elasticsearch APIs online.
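As a concrete sketch for the ECK setup in the question (cluster name es-qs): ECK normally stores the elastic superuser password in a secret called es-qs-es-elastic-user and exposes the cluster through the es-qs-es-http service with a self-signed certificate, so something along these lines should let you index a test document. Adjust the names if your operator version uses different conventions:

# grab the elastic user's password generated by the operator
PASSWORD=$(kubectl get secret es-qs-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

# forward the HTTP service locally
kubectl port-forward service/es-qs-es-http 9200 &

# create an index and add a sample document (-k because the certificate is self-signed)
curl -k -u "elastic:$PASSWORD" -X PUT "https://localhost:9200/test-index"
curl -k -u "elastic:$PASSWORD" -X POST "https://localhost:9200/test-index/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"message": "hello from ECK", "@timestamp": "2023-01-01T00:00:00Z"}'

For larger test datasets the _bulk API works the same way with a newline-delimited JSON file.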
###Instruction: I have a local cluster running using minikube. I installed elastic operator with official helm chart and have a elastic cluster running with this yaml: apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: es-qs spec: version: 8.6.2 nodeSets: - name: default count: 1 config: node.store.allow_mmap: false I have never worked with elasticsearch on ECK so my question is: What is the easiest way to manually add datasets to my elastic search cluster for testing purposes? ### Response: You can use the simple Curl command manually to configure the Elasticsearch that will be an easy way. Ref doc : https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html You can use the postman also to manage these curl or might found a collection of necessary API online for ES.
I have a local cluster running using minikube. I installed elastic operator with official helm chart and have a elastic cluster running with this yaml: apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: es-qs spec: version: 8.6.2 nodeSets: - name: default count: 1 config: node.store.allow_mmap: false I have never worked with elasticsearch on ECK so my question is: What is the easiest way to manually add datasets to my elastic search cluster for testing purposes?
String err.Error() is the original, meaningful and best error message you can get from Kubernetes server for the user (Or you have to translate it by yourself). Explains: You need to look beyond the surface of kubernetes/client-go client library. Each client talks to k8s server through HTTP REST APIs, which sends back response in json. It's the client-go library that decodes the response body and stores the result into object, if possible. As for your case, let me give you some examples through the Namespace resource: First error: POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create Response Status: 409 Conflict { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "namespaces \"hotel123\" already exists", "reason": "AlreadyExists", "details": { "name": "hotel123", "kind": "namespaces" }, "code": 409 } second error: POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create Response Status: 422 Unprocessable Entity { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Namespace \"hotel_123\" is invalid: metadata.name: Invalid value: \"hotel_123\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]\r\n([-a-z0-9]*[a-z0-9])?')", "reason": "Invalid", "details": { "name": "hotel_123", "kind": "Namespace", "causes": [ { "reason": "FieldValueInvalid", "message": "Invalid value: \"hotel_123\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')", "field": "metadata.name" } ] }, "code": 422 } normal return: POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create Response Status: 201 Created { "kind": "Namespace", "apiVersion": "v1", "metadata": { "name": "hotel12345", "uid": "7a301d8b-37cd-45a5-8345-82wsufy88223456", "resourceVersion": "12233445566", "creationTimestamp": "2023-04-03T15:35:59Z", "managedFields": [ { "manager": "kubectl-create", "operation": "Update", "apiVersion": "v1", "time": "2023-04-03T15:35:59Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:status": { "f:phase": {} } } } ] }, "spec": { "finalizers": [ "kubernetes" ] }, "status": { "phase": "Active" } } In a word, if the HTTP Status is not 2xx, the returned object is of type Status and has .Status != StatusSuccess, the additional information(message in this case) in Status will be used to enrich the error, just as the code snippets below: createdNamespace, err := clientset.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{}) if err != nil { // print "namespaces \"hotel123\" already exists" or so fmt.Println(err.Error()) return err.Error() } fmt.Printf("Created Namespace %+v in the cluster\n", createdNamespace) return ""
###Instruction: I have a backend with golang that talks to k8s. I want to reformulate the error response that i get from k8s and send it to the frontend. I want to return a meaningful validation error messages for the user, when he add a non valid name, something already exist ... And i want something generic not hardcoded in each endpoint's controller. I am using kubernetes/client-go. First error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel's name: hotel123, that's already exist. I get this error message: \"hotel123\" already exists. What i want : hotel123 already exists. second error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel name: hotel_123, that's alerady exist. I get this error message: \"hotel_123\" is invalid, Invalid value: \"hotel_123\"... What i want: hotel_123 is invalid How to return a custom user friendly error message ? PS: i have multiple functions, so the validation should be generic. ### Response: String err.Error() is the original, meaningful and best error message you can get from Kubernetes server for the user (Or you have to translate it by yourself). Explains: You need to look beyond the surface of kubernetes/client-go client library. Each client talks to k8s server through HTTP REST APIs, which sends back response in json. It's the client-go library that decodes the response body and stores the result into object, if possible. As for your case, let me give you some examples through the Namespace resource: First error: POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create Response Status: 409 Conflict { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "namespaces \"hotel123\" already exists", "reason": "AlreadyExists", "details": { "name": "hotel123", "kind": "namespaces" }, "code": 409 } second error: POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create Response Status: 422 Unprocessable Entity { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Namespace \"hotel_123\" is invalid: metadata.name: Invalid value: \"hotel_123\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]\r\n([-a-z0-9]*[a-z0-9])?')", "reason": "Invalid", "details": { "name": "hotel_123", "kind": "Namespace", "causes": [ { "reason": "FieldValueInvalid", "message": "Invalid value: \"hotel_123\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 
'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')", "field": "metadata.name" } ] }, "code": 422 } normal return: POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create Response Status: 201 Created { "kind": "Namespace", "apiVersion": "v1", "metadata": { "name": "hotel12345", "uid": "7a301d8b-37cd-45a5-8345-82wsufy88223456", "resourceVersion": "12233445566", "creationTimestamp": "2023-04-03T15:35:59Z", "managedFields": [ { "manager": "kubectl-create", "operation": "Update", "apiVersion": "v1", "time": "2023-04-03T15:35:59Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:status": { "f:phase": {} } } } ] }, "spec": { "finalizers": [ "kubernetes" ] }, "status": { "phase": "Active" } } In a word, if the HTTP Status is not 2xx, the returned object is of type Status and has .Status != StatusSuccess, the additional information(message in this case) in Status will be used to enrich the error, just as the code snippets below: createdNamespace, err := clientset.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{}) if err != nil { // print "namespaces \"hotel123\" already exists" or so fmt.Println(err.Error()) return err.Error() } fmt.Printf("Created Namespace %+v in the cluster\n", createdNamespace) return ""
I have a backend with golang that talks to k8s. I want to reformulate the error response that i get from k8s and send it to the frontend. I want to return a meaningful validation error messages for the user, when he add a non valid name, something already exist ... And i want something generic not hardcoded in each endpoint's controller. I am using kubernetes/client-go. First error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel's name: hotel123, that's already exist. I get this error message: \"hotel123\" already exists. What i want : hotel123 already exists. second error: For example lets say i want to add a hotel to the etcd, when i try to add the hotel name: hotel_123, that's alerady exist. I get this error message: \"hotel_123\" is invalid, Invalid value: \"hotel_123\"... What i want: hotel_123 is invalid How to return a custom user friendly error message ? PS: i have multiple functions, so the validation should be generic.
Kubernetes Ingress does NOT allow you to do this. You must distinguish the backends by host or by path. The closest you can get is a simple fanout structure like this: https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout Take a look here: How do I map multiple services to one Kubernetes Ingress path? Otherwise, consider replacing the Kubernetes Ingress with a Layer 7 load balancer.
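Before reaching for a separate load balancer: if you can live with two different paths, a fanout version of the Ingress from the question might look roughly like this (the /primary and /secondary paths are invented here; pick whatever suits your clients):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appcj-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: "{{ .Values.ingress.appcjudomain }}"
    http:
      paths:
      - path: /primary
        pathType: Prefix
        backend:
          service:
            name: {{ .Values.Name }}-appcjuengineprihttp
            port:
              number: 81
      - path: /secondary
        pathType: Prefix
        backend:
          service:
            name: {{ .Values.Name }}-appcjuenginesechttp
            port:
              number: 81

For a true active/passive setup behind one path you would instead need a single Service whose selector matches whichever Deployment should be active, switching the selector (or relying on readiness probes) when you fail over.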
###Instruction: I have a service for which i dont want to use autoscaling, so i created a copy of it and now i want to have it behind an internal LB using nginx ingress, one of them will be always active and other one will be passive which can be get through probes status, ingress is as below apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: appcj-ingress spec: ingressClassName: nginx rules: - host: "{{ .Values.ingress.appcjudomain }}" http: paths: - pathType: Prefix path: "/" backend: service: name: {{ .Values.Name }}-appcjuengineprihttp port: number: 81 service: name: {{ .Values.Name }}-appcjuenginesechttp port: number: 81 This is more like a traditional load balancer, like having multiple servers behind a loadbalancer, having multiple backend services for same host, is it possible? ### Response: Kubernetes Ingress does NOT allow you to do this. You must necessarily distinguish by host or by path. The only thing you could do is implement a structure like this: https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout Take a look here: How do I map multiple services to one Kubernetes Ingress path? Otherwise consider replacing the Ingress Kubernetes with a level 7 Load Balancer.
I have a service for which i dont want to use autoscaling, so i created a copy of it and now i want to have it behind an internal LB using nginx ingress, one of them will be always active and other one will be passive which can be get through probes status, ingress is as below apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: appcj-ingress spec: ingressClassName: nginx rules: - host: "{{ .Values.ingress.appcjudomain }}" http: paths: - pathType: Prefix path: "/" backend: service: name: {{ .Values.Name }}-appcjuengineprihttp port: number: 81 service: name: {{ .Values.Name }}-appcjuenginesechttp port: number: 81 This is more like a traditional load balancer, like having multiple servers behind a loadbalancer, having multiple backend services for same host, is it possible?
Finally I solved like following: 1: Create a secret with a token and service account. Then associate them. 2: Create a clusterrole (give necessary permissions) and clusterrolebinding. Associate them. 3: Associate the service account and clusterrolebinding. Now you can use the token to connect gke cluster without using gke-gcloud-auth-plugin. If I dont use the token argument, the output is: But , if I use the token, it works successfully Also the go code that uses token and prints the pods is like following: package main import ( "context" "fmt" "os" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" ) func main() { // Path to the kubeconfig file kubeconfigPath := "/home/latif/.kube/config" // Build the configuration from the kubeconfig file config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath) if err != nil { fmt.Printf("Error building config from kubeconfig: %v", err) os.Exit(1) } // Set the authentication token config.BearerToken = "my_token" // Create a new Kubernetes client clientset, err := kubernetes.NewForConfig(config) if err != nil { fmt.Printf("Error creating Kubernetes client: %v", err) os.Exit(1) } // Get the list of pods in the default namespace pods, err := clientset.CoreV1().Pods("default").List(context.Background(), v1.ListOptions{}) if err != nil { fmt.Printf("Error getting pods: %v", err) os.Exit(1) } // Print the name of each pod for _, pod := range pods.Items { fmt.Println(pod.Name) } }
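For reference, the token secret, service account and RBAC from steps 1-3 above can be written as manifests roughly like this; all names are placeholders, and the pod permissions are just one example of the "necessary permissions" mentioned in step 2:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: remote-client
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: remote-client-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: remote-client
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: remote-pod-manager
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: remote-pod-manager
subjects:
- kind: ServiceAccount
  name: remote-client
  namespace: default
roleRef:
  kind: ClusterRole
  name: remote-pod-manager
  apiGroup: rbac.authorization.k8s.io

The long-lived token that goes into config.BearerToken can then be read with:

kubectl get secret remote-client-token -n default -o jsonpath='{.data.token}' | base64 -d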
###Instruction: I have a cluster on Digital Ocean. The API of my project is in DO. My image is from google/cloud-sdk:alpine and also I installed gke-gcloud-auth-plugin with the following command gcloud components install gke-gcloud-auth-plugin I have another cluster on Google Kubernetes Engine. I want to create, list, delete pods on gke cluster from my api which is in DO. I use kubernetes go client. I give the kubeconfig file of gke cluster to go client. But when I try to do an operation (like listing pods) I get the following error: cred.go:145] print credential failed with error: Failed to retrieve access token:: failure while executing gcloud, with args [config config-helper --format=json]: exit status 1 2023/04/04 07:28:21 code: get_pod_error, error: Get "https://..../api/v1/namespaces/default/pods/cloud-flow-80117068-9715-4374-b91b-799472d647be": getting credentials: exec: executable gke-gcloud-auth-plugin failed with exit code 1 I used the same method to create or delete pods on another cluster in DO and worked successfuly. Briefly I want to connect GKE cluster from DO cluster by using kubeconfig file of GKE Is there anyone who encounter with the same problem? Thanks.. ### Response: Finally I solved like following: 1: Create a secret with a token and service account. Then associate them. 2: Create a clusterrole (give necessary permissions) and clusterrolebinding. Associate them. 3: Associate the service account and clusterrolebinding. Now you can use the token to connect gke cluster without using gke-gcloud-auth-plugin. If I dont use the token argument, the output is: But , if I use the token, it works successfully Also the go code that uses token and prints the pods is like following: package main import ( "context" "fmt" "os" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" ) func main() { // Path to the kubeconfig file kubeconfigPath := "/home/latif/.kube/config" // Build the configuration from the kubeconfig file config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath) if err != nil { fmt.Printf("Error building config from kubeconfig: %v", err) os.Exit(1) } // Set the authentication token config.BearerToken = "my_token" // Create a new Kubernetes client clientset, err := kubernetes.NewForConfig(config) if err != nil { fmt.Printf("Error creating Kubernetes client: %v", err) os.Exit(1) } // Get the list of pods in the default namespace pods, err := clientset.CoreV1().Pods("default").List(context.Background(), v1.ListOptions{}) if err != nil { fmt.Printf("Error getting pods: %v", err) os.Exit(1) } // Print the name of each pod for _, pod := range pods.Items { fmt.Println(pod.Name) } }
I have a cluster on Digital Ocean. The API of my project is in DO. My image is from google/cloud-sdk:alpine and also I installed gke-gcloud-auth-plugin with the following command gcloud components install gke-gcloud-auth-plugin I have another cluster on Google Kubernetes Engine. I want to create, list, delete pods on gke cluster from my api which is in DO. I use kubernetes go client. I give the kubeconfig file of gke cluster to go client. But when I try to do an operation (like listing pods) I get the following error: cred.go:145] print credential failed with error: Failed to retrieve access token:: failure while executing gcloud, with args [config config-helper --format=json]: exit status 1 2023/04/04 07:28:21 code: get_pod_error, error: Get "https://..../api/v1/namespaces/default/pods/cloud-flow-80117068-9715-4374-b91b-799472d647be": getting credentials: exec: executable gke-gcloud-auth-plugin failed with exit code 1 I used the same method to create or delete pods on another cluster in DO and worked successfuly. Briefly I want to connect GKE cluster from DO cluster by using kubeconfig file of GKE Is there anyone who encounter with the same problem? Thanks..
A cron expression allows certain values per field. For the hour field the values 0-23 are allowed. (See https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax ) If you want to achieve something that does not run in an interval that can be expressed using the cron expressions you might want to have a cronjob that runs once per hour and uses a configmap to store the execution state and schedule a regular job programmatically.
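A rough sketch of that pattern, assuming a backup-state ConfigMap and a backup-runner service account that is allowed to get and apply that ConfigMap (both of which you would have to create yourself), could look like this:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloud-backup
  namespace: test-backup
spec:
  schedule: "0 * * * *"        # wake up every hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: backup-runner   # needs get/apply on the ConfigMap
          restartPolicy: Never
          containers:
          - name: backup
            image: bitnami/kubectl:latest
            command: ["/bin/sh", "-c"]
            args:
            - |
              last=$(kubectl get configmap backup-state -o jsonpath='{.data.lastRun}' 2>/dev/null || echo 0)
              last=${last:-0}
              now=$(date +%s)
              # 50 hours = 180000 seconds
              if [ $((now - last)) -lt 180000 ]; then
                echo "Only $((now - last))s since last run, skipping"; exit 0
              fi
              echo "Running backup..."        # your real backup command goes here
              kubectl create configmap backup-state --from-literal=lastRun=$now \
                -o yaml --dry-run=client | kubectl apply -f -

Only the hourly schedule and the elapsed-time check matter here; the image and the rest of the plumbing are placeholders.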
###Instruction: I have a kubernetes cronjob object defined as follows: apiVersion: batch/v1 kind: CronJob metadata: name: cloud-backup namespace: test-backup spec: schedule: "0 */50 * * *" ..... However, it's running every single day at 8pm which makes no sense to me. Isn't this cron format configured to run once every 50 hours? I'm using this exact cron schedule in a concourse pipeline at it runs when expected there so not sure what I'm missing. Thanks! ### Response: A cron expression allows certain values per field. For the hour field the values 0-23 are allowed. (See https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax ) If you want to achieve something that does not run in an interval that can be expressed using the cron expressions you might want to have a cronjob that runs once per hour and uses a configmap to store the execution state and schedule a regular job programmatically.
I have a kubernetes cronjob object defined as follows: apiVersion: batch/v1 kind: CronJob metadata: name: cloud-backup namespace: test-backup spec: schedule: "0 */50 * * *" ..... However, it's running every single day at 8pm which makes no sense to me. Isn't this cron format configured to run once every 50 hours? I'm using this exact cron schedule in a concourse pipeline at it runs when expected there so not sure what I'm missing. Thanks!
First you need to enable spring.cloud.kubernetes.secrets.enabled, then you can use Kubernetes Secrets via SecretsPropertySource. You can point it at a specific secret via: -Dspring.cloud.kubernetes.secrets.name=my-secret where my-secret is the name of a Secret defined in Kubernetes, or you can access it through several other methods described in the documentation.
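A minimal sketch of what that can look like in the application configuration, assuming the spring-cloud-starter-kubernetes-client-config dependency (the exact artifact and property layout depend on your Spring Cloud Kubernetes version) and a Secret called my-secret whose keys are DB_USER and DB_PASSWORD:

spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        name: my-secret
  datasource:
    username: ${DB_USER}
    password: ${DB_PASSWORD}

Here the placeholder names are assumed to match the keys inside my-secret; Spring resolves them through the SecretsPropertySource at startup.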
###Instruction: I have to inject DB URL, DB Username and Password variables into a Java Spring Boot Application. I know that these properties reside in the application.properties file under res/ folder. Now as a design change I want to adopt K8s ConfigMaps and Secrets to be used in place of hardcoding in the the application.properties file. How can I link the CMs and Secrets from K8s to the application. How should I set placeholders in application.properties file? Does defining the CM and Secret dependency in application deployment.yaml does everything? ### Response: First you need to enable spring.cloud.kubernetes.secrets.enabled then you can use Kubernetes Secrets via SecretsPropertySource. You can use secret name via: -Dspring.cloud.kubernetes.secrets.name=my-secret where my-sercret is name of a secret which is defined in kubernetes. or you can access it in multiple other methods described in the Documentation.
I have to inject DB URL, DB Username and Password variables into a Java Spring Boot Application. I know that these properties reside in the application.properties file under res/ folder. Now as a design change I want to adopt K8s ConfigMaps and Secrets to be used in place of hardcoding in the the application.properties file. How can I link the CMs and Secrets from K8s to the application. How should I set placeholders in application.properties file? Does defining the CM and Secret dependency in application deployment.yaml does everything?
You have two options: one can be achieved without extra dependencies and one with Spring Boot Cloud. Plain way You define enviroment variable placeholders in your application.yml: spring: datasource: driver-class-name: org.postgresql.Driver url: jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME} username: ${DB_USER} password: ${DB_PASSWORD} You then define the environment variables in your k8s deployment: env: - name: DB_HOST valueFrom: configMapKeyRef: name: your-config-map key: dbHost - name: DB_PORT valueFrom: configMapKeyRef: name: your-config-map key: dbPort - name: DB_NAME valueFrom: configMapKeyRef: name: your-config-map key: dbName - name: DB_USER valueFrom: secretKeyRef: name: your-secret key: dbUser - name: DB_PASSWORD valueFrom: secretKeyRef: name: your-secret key: dbPassword More on defining environment variables for containers can be found in the k8s documentation. Spring Boot Cloud Kubernetes There is a whole section in the reference called Using a ConfigMap PropertySource and Secrets PropertySource. I suggest you go and look it up there.
###Instruction: I have to inject DB URL, DB Username and Password variables into a Java Spring Boot Application. I know that these properties reside in the application.properties file under res/ folder. Now as a design change I want to adopt K8s ConfigMaps and Secrets to be used in place of hardcoding in the the application.properties file. How can I link the CMs and Secrets from K8s to the application. How should I set placeholders in application.properties file? Does defining the CM and Secret dependency in application deployment.yaml does everything? ### Response: You have two options: one can be achieved without extra dependencies and one with Spring Boot Cloud. Plain way You define enviroment variable placeholders in your application.yml: spring: datasource: driver-class-name: org.postgresql.Driver url: jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME} username: ${DB_USER} password: ${DB_PASSWORD} You then define the environment variables in your k8s deployment: env: - name: DB_HOST valueFrom: configMapKeyRef: name: your-config-map key: dbHost - name: DB_PORT valueFrom: configMapKeyRef: name: your-config-map key: dbPort - name: DB_NAME valueFrom: configMapKeyRef: name: your-config-map key: dbName - name: DB_USER valueFrom: secretKeyRef: name: your-secret key: dbUser - name: DB_PASSWORD valueFrom: secretKeyRef: name: your-secret key: dbPassword More on defining environment variables for containers can be found in the k8s documentation. Spring Boot Cloud Kubernetes There is a whole section in the reference called Using a ConfigMap PropertySource and Secrets PropertySource. I suggest you go and look it up there.
I have to inject DB URL, DB Username and Password variables into a Java Spring Boot Application. I know that these properties reside in the application.properties file under res/ folder. Now as a design change I want to adopt K8s ConfigMaps and Secrets to be used in place of hardcoding in the the application.properties file. How can I link the CMs and Secrets from K8s to the application. How should I set placeholders in application.properties file? Does defining the CM and Secret dependency in application deployment.yaml does everything?
Run the command below to check whether your service is mapped to your podIP:port properly:

kubectl get ep -n <namespace>

If the ENDPOINTS column in the output does not show podIP:port, then your service is most likely misconfigured: the labels on the deployment's pod template are not matching the selector in the service. Fix the selectors/labels to solve the issue.
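In the deployment from the question the pod template labels are elided, so as an illustrative sketch, the label/selector pair that has to line up looks like this (everything else kept as in the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-service
spec:
  selector:
    matchLabels:
      app: some-service
  template:
    metadata:
      labels:
        app: some-service        # must match the Service selector below
    spec:
      containers:
      - name: some-name
        image: [image]           # unchanged from the question
        ports:
        - containerPort: 8800
          name: some-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  selector:
    app: some-service            # must match the pod template labels above
  ports:
  - name: some-service-port
    port: 8800
    targetPort: some-web-svc

Once kubectl get ep shows an endpoint for the service, the Ingress should stop returning 502.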
###Instruction: I have several PHP applications which I am trying to configure inside K8s. Currently I have issue with one custom service - it is using own server (listening 8800 port) and nginx server which is redirecting traffic from 80/443 (we can use 80 this time to simplify example) to 8800. Here is part of nginx conf: location / { add_header Access-Control-Allow-Origin $http_origin always; add_header Access-Control-Allow-Credentials true always; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_cache off; proxy_pass http://localhost:8800; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_redirect off; proxy_hide_header Access-Control-Allow-Origin; proxy_hide_header Access-Control-Allow-Credentials; } I already configured deployments + services for my pods but I am new in NGINX Ingres. Currently I am working with this ingress but I am getting 500 error (502 Bad Gateway) and looks like my traffic does not reach my service. --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-[service] annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: [my domain name] http: paths: - path: /api/ pathType: Prefix backend: service: name: [service] port: number: 8800 ingressClassName: nginx Did I convert nginx to ingres config correctly? UPDATE: deployment + service (my last variant) --- apiVersion: apps/v1 kind: Deployment metadata: name: some-service spec: replicas: 1 ... spec: containers: - name: [some-name] image: [image] imagePullPolicy: Always env: ... ports: - containerPort: 8800 name: some-web-svc --- apiVersion: v1 kind: Service metadata: name: some-service spec: type: ClusterIP ports: - name: some-service-port protocol: TCP port: 8800 targetPort: some-web-svc # - name: tcp # port: 8800 # targetPort: 8800 selector: app: some-service ### Response: Run the command below to check whether your service is mapped to your podIP:port properly. If the ENDPOINTS column in the output of the command below does not show podIP:port, then your service is most likely misconfigured - Label in the deployment is not matching with the selector in the service Fix the selectors to solve the issue -- kubectl get ep -n <namespace>
I have several PHP applications which I am trying to configure inside K8s. Currently I have issue with one custom service - it is using own server (listening 8800 port) and nginx server which is redirecting traffic from 80/443 (we can use 80 this time to simplify example) to 8800. Here is part of nginx conf: location / { add_header Access-Control-Allow-Origin $http_origin always; add_header Access-Control-Allow-Credentials true always; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_cache off; proxy_pass http://localhost:8800; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_redirect off; proxy_hide_header Access-Control-Allow-Origin; proxy_hide_header Access-Control-Allow-Credentials; } I already configured deployments + services for my pods but I am new in NGINX Ingres. Currently I am working with this ingress but I am getting 500 error (502 Bad Gateway) and looks like my traffic does not reach my service. --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-[service] annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: [my domain name] http: paths: - path: /api/ pathType: Prefix backend: service: name: [service] port: number: 8800 ingressClassName: nginx Did I convert nginx to ingres config correctly? UPDATE: deployment + service (my last variant) --- apiVersion: apps/v1 kind: Deployment metadata: name: some-service spec: replicas: 1 ... spec: containers: - name: [some-name] image: [image] imagePullPolicy: Always env: ... ports: - containerPort: 8800 name: some-web-svc --- apiVersion: v1 kind: Service metadata: name: some-service spec: type: ClusterIP ports: - name: some-service-port protocol: TCP port: 8800 targetPort: some-web-svc # - name: tcp # port: 8800 # targetPort: 8800 selector: app: some-service
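A hedged debugging sketch for the 502 in the question, following the response's advice; the namespace (php-apps) is an assumed placeholder, while the service name and port come from the question's manifests.

# Compare the Service selector with the labels actually present on the pods
kubectl get svc some-service -n php-apps -o jsonpath='{.spec.selector}{"\n"}'
kubectl get pods -n php-apps --show-labels

# With matching selectors, the endpoints should list podIP:8800 entries
kubectl get ep some-service -n php-apps

# Quick in-cluster check against the Service port from the manifest
kubectl run curl-test -n php-apps --rm -it --restart=Never --image=curlimages/curl \
  -- curl -sv http://some-service:8800/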
I would suggest looking at the node-problem-detector or this blog by Cloudflare. There is an issue on the EKS roadmap for automated node health checking. I would upvote the issue if it's important to you.
###Instruction: I am using AWS EKS with a managed node group. Twice in the past couple of weeks I had a case where the kubelet on one of the nodes crashed or stopped reporting back to the control plane. In this case I would expect the Auto Scaling group to identify the node as unhealthy and replace it. However, this is not what happens. I have recreated the issue by creating a node and manually stopping the kubelet, see image below: My first thought was to create an Event Bus alert that would trigger a lambda to take care of this, but I couldn't find the EKS service in the list of services in Event Bus, so … Does anyone know of a tool or configuration that would help with this? To be clear, I am looking for something that would:
Detect that the kubelet isn't connecting to the control plane
Delete the node in the cluster
Terminate the EC2 instance
THANKS!! ### Response: I would suggest looking at the node-problem-detector or this blog by Cloudflare. There is an issue on the EKS roadmap for automated node health checking. I would upvote the issue if it's important to you.
I am using AWS EKS with a managed node group. Twice in the past couple of weeks I had a case where the kubelet on one of the nodes crashed or stopped reporting back to the control plane. In this case I would expect the Auto Scaling group to identify the node as unhealthy and replace it. However, this is not what happens. I have recreated the issue by creating a node and manually stopping the kubelet, see image below: My first thought was to create an Event Bus alert that would trigger a lambda to take care of this, but I couldn't find the EKS service in the list of services in Event Bus, so … Does anyone know of a tool or configuration that would help with this? To be clear, I am looking for something that would:
Detect that the kubelet isn't connecting to the control plane
Delete the node in the cluster
Terminate the EC2 instance
THANKS!!
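For reference, a manual version of the three steps the question lists, shown with kubectl and the AWS CLI; this is what a watchdog built on node-problem-detector would automate. The node name and instance ID below are hypothetical placeholders.

# Nodes whose kubelet stopped posting status show up as NotReady
kubectl get nodes

# Inspect the conditions reported for a suspect node
kubectl describe node ip-10-0-1-23.ec2.internal

# Remediation steps a tool could drive automatically
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-emptydir-data --force
kubectl delete node ip-10-0-1-23.ec2.internal
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0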
Probably some Pod mounts the volume you are trying to remove. Check out this old answer of mine: https://stackoverflow.com/a/75768413/21404450
###Instruction: My team is experiencing an issue with longhorn where sometimes our RWX PVCs are indefinitely terminating after running kubectl delete. A symptom of this is that the finalizers never get removed. It was explained to me that the longhorn-csi-plugin containers should execute ControllerUnpublishVolume when no workload is using the volume and then execute DeleteVolume to remove the finalizer. Upon inspection of the logs when this issue occurs, the ControllerUnpublishVolume event looks unsuccessful and DeleteVolume is never called. It looks like the response to ControllerUnpublishVolume is {} which does not seem right to me. The following logs are abridged and only include lines relevant to the volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1: 2023-04-04T19:28:52.993226550Z time="2023-04-04T19:28:52Z" level=info msg="CreateVolume: creating a volume by API client, name: pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1, size: 21474836480 accessMode: rwx" ... 2023-04-04T19:29:01.119651932Z time="2023-04-04T19:29:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume created at 2023-04-04 19:29:01.119514295 +0000 UTC m=+2789775.717296902" 2023-04-04T19:29:01.123721718Z time="2023-04-04T19:29:01Z" level=info msg="CreateVolume: rsp: {\"volume\":{\"capacity_bytes\":21474836480,\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}}" ... 2023-04-04T19:29:01.355417228Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" ... 2023-04-04T19:29:01.362958346Z time="2023-04-04T19:29:01Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com" 2023-04-04T19:29:01.363013363Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com" ... 2023-04-04T19:29:13.477036437Z time="2023-04-04T19:29:13Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:29:13.476922567 +0000 UTC m=+2789788.074705223" 2023-04-04T19:29:13.479320941Z time="2023-04-04T19:29:13Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com" ... 2023-04-04T19:31:59.230234638Z time="2023-04-04T19:31:59Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T19:31:59.233597451Z time="2023-04-04T19:31:59Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com" ... 
2023-04-04T19:32:01.242531135Z time="2023-04-04T19:32:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:01.242373423 +0000 UTC m=+2789955.840156051" 2023-04-04T19:32:01.245744768Z time="2023-04-04T19:32:01Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com" ... 2023-04-04T19:32:01.268399507Z time="2023-04-04T19:32:01Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T19:32:01.270584270Z time="2023-04-04T19:32:01Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com" ... 2023-04-04T19:32:02.512117513Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" ... 2023-04-04T19:32:02.528810094Z time="2023-04-04T19:32:02Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node2.example.com" 2023-04-04T19:32:02.528829340Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node2.example.com" ... 2023-04-04T19:32:03.273890290Z time="2023-04-04T19:32:03Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:03.272811565 +0000 UTC m=+2789957.870594214" 2023-04-04T19:32:03.289152604Z time="2023-04-04T19:32:03Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com" ... 2023-04-04T19:32:03.760644399Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T19:32:03.770050254Z time="2023-04-04T19:32:03Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com" 2023-04-04T19:32:03.770093689Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com" ... 
2023-04-04T19:32:04.654700819Z time="2023-04-04T19:32:04Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:04.654500435 +0000 UTC m=+2789959.252283106" 2023-04-04T19:32:04.657991819Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node2.example.com" 2023-04-04T19:32:04.658583043Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: rsp: {}" ... 2023-04-04T19:32:05.822264526Z time="2023-04-04T19:32:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:05.82208573 +0000 UTC m=+2789960.419868382" 2023-04-04T19:32:05.826506892Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com" 2023-04-04T19:32:05.827051042Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: rsp: {}" ... 2023-04-04T20:07:03.798730851Z time="2023-04-04T20:07:03Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T20:07:03.802360032Z time="2023-04-04T20:07:03Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com" 2023-04-04T20:07:05.808796454Z time="2023-04-04T20:07:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:05.808607472 +0000 UTC m=+2792060.406390073" 2023-04-04T20:07:05.811653301Z time="2023-04-04T20:07:05Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com" ... 2023-04-04T20:07:11.017524059Z time="2023-04-04T20:07:11Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T20:07:11.024127188Z time="2023-04-04T20:07:11Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com" ... 2023-04-04T20:07:13.047834933Z time="2023-04-04T20:07:13Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com" 2023-04-04T20:07:13.047839690Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}" 2023-04-04T20:07:13.378731066Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T20:07:13.384575838Z time="2023-04-04T20:07:13Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com" ... 2023-04-04T20:07:13.385792532Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}" 2023-04-04T20:07:15.386784410Z time="2023-04-04T20:07:15Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:15.386596264 +0000 UTC m=+2792069.984378910" 2023-04-04T20:07:15.391059508Z time="2023-04-04T20:07:15Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com" We are using Longhorn v1.2.2 on Rancher RKE v2.6.5. We would expect that DeleteVolume would be called, the finalizers would be removed, and the PVC would be deleted, but none of those events occur. 
As a workaround we tried forcefully removing the finalizer using the command kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}' --type=merge. This worked, but is not ideal to do every time. Any ideas about what is wrong? If not, what should be my next steps in investigating this issue? ### Response: Probably some Pod mounts the volume you are trying to remove. Check out this old answer of mine: https://stackoverflow.com/a/75768413/21404450
My team is experiencing an issue with longhorn where sometimes our RWX PVCs are indefinitely terminating after running kubectl delete. A symptom of this is that the finalizers never get removed. It was explained to me that the longhorn-csi-plugin containers should execute ControllerUnpublishVolume when no workload is using the volume and then execute DeleteVolume to remove the finalizer. Upon inspection of the logs when this issue occurs, the ControllerUnpublishVolume event looks unsuccessful and DeleteVolume is never called. It looks like the response to ControllerUnpublishVolume is {} which does not seem right to me. The following logs are abridged and only include lines relevant to the volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1: 2023-04-04T19:28:52.993226550Z time="2023-04-04T19:28:52Z" level=info msg="CreateVolume: creating a volume by API client, name: pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1, size: 21474836480 accessMode: rwx" ... 2023-04-04T19:29:01.119651932Z time="2023-04-04T19:29:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume created at 2023-04-04 19:29:01.119514295 +0000 UTC m=+2789775.717296902" 2023-04-04T19:29:01.123721718Z time="2023-04-04T19:29:01Z" level=info msg="CreateVolume: rsp: {\"volume\":{\"capacity_bytes\":21474836480,\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}}" ... 2023-04-04T19:29:01.355417228Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" ... 2023-04-04T19:29:01.362958346Z time="2023-04-04T19:29:01Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com" 2023-04-04T19:29:01.363013363Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com" ... 2023-04-04T19:29:13.477036437Z time="2023-04-04T19:29:13Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:29:13.476922567 +0000 UTC m=+2789788.074705223" 2023-04-04T19:29:13.479320941Z time="2023-04-04T19:29:13Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com" ... 2023-04-04T19:31:59.230234638Z time="2023-04-04T19:31:59Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T19:31:59.233597451Z time="2023-04-04T19:31:59Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com" ... 
2023-04-04T19:32:01.242531135Z time="2023-04-04T19:32:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:01.242373423 +0000 UTC m=+2789955.840156051" 2023-04-04T19:32:01.245744768Z time="2023-04-04T19:32:01Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com" ... 2023-04-04T19:32:01.268399507Z time="2023-04-04T19:32:01Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T19:32:01.270584270Z time="2023-04-04T19:32:01Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com" ... 2023-04-04T19:32:02.512117513Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" ... 2023-04-04T19:32:02.528810094Z time="2023-04-04T19:32:02Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node2.example.com" 2023-04-04T19:32:02.528829340Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node2.example.com" ... 2023-04-04T19:32:03.273890290Z time="2023-04-04T19:32:03Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:03.272811565 +0000 UTC m=+2789957.870594214" 2023-04-04T19:32:03.289152604Z time="2023-04-04T19:32:03Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com" ... 2023-04-04T19:32:03.760644399Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T19:32:03.770050254Z time="2023-04-04T19:32:03Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com" 2023-04-04T19:32:03.770093689Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com" ... 
2023-04-04T19:32:04.654700819Z time="2023-04-04T19:32:04Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:04.654500435 +0000 UTC m=+2789959.252283106" 2023-04-04T19:32:04.657991819Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node2.example.com" 2023-04-04T19:32:04.658583043Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: rsp: {}" ... 2023-04-04T19:32:05.822264526Z time="2023-04-04T19:32:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:05.82208573 +0000 UTC m=+2789960.419868382" 2023-04-04T19:32:05.826506892Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com" 2023-04-04T19:32:05.827051042Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: rsp: {}" ... 2023-04-04T20:07:03.798730851Z time="2023-04-04T20:07:03Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T20:07:03.802360032Z time="2023-04-04T20:07:03Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com" 2023-04-04T20:07:05.808796454Z time="2023-04-04T20:07:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:05.808607472 +0000 UTC m=+2792060.406390073" 2023-04-04T20:07:05.811653301Z time="2023-04-04T20:07:05Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com" ... 2023-04-04T20:07:11.017524059Z time="2023-04-04T20:07:11Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T20:07:11.024127188Z time="2023-04-04T20:07:11Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com" ... 2023-04-04T20:07:13.047834933Z time="2023-04-04T20:07:13Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com" 2023-04-04T20:07:13.047839690Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}" 2023-04-04T20:07:13.378731066Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}" 2023-04-04T20:07:13.384575838Z time="2023-04-04T20:07:13Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com" ... 2023-04-04T20:07:13.385792532Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}" 2023-04-04T20:07:15.386784410Z time="2023-04-04T20:07:15Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:15.386596264 +0000 UTC m=+2792069.984378910" 2023-04-04T20:07:15.391059508Z time="2023-04-04T20:07:15Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com" We are using Longhorn v1.2.2 on Rancher RKE v2.6.5. We would expect that DeleteVolume would be called, the finalizers would be removed, and the PVC would be deleted, but none of those events occur. 
As a workaround we tried forcefully removing the finalizer using the command kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}' --type=merge. This worked, but is not ideal to do every time. Any ideas about what is wrong? If not, what should be my next steps in investigating this issue?
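Before patching finalizers away, a sketch of how to confirm nothing still mounts the volume, as the response suggests; the PVC name my-pvc is a placeholder and the Longhorn namespace is assumed to be longhorn-system.

# List every pod that still mounts the PVC
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.volumes[]?.persistentVolumeClaim.claimName == "my-pvc")
           | .metadata.namespace + "/" + .metadata.name'

# Longhorn's and Kubernetes' view of the volume and its attachments
kubectl -n longhorn-system get volumes.longhorn.io
kubectl get volumeattachments

# Only once nothing mounts it should the finalizer patch from the question be needed
kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}' --type=merge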
It can be done with a simple if statement. Your template file:
{{- if .Values.serviceAccount.enabled -}}
...
{{- end }}
Your values file:
serviceAccount:
  enabled: true/false
You can also do the same for nested resources and conditionally add volumes to your deployment.
###Instruction: I'm reading helm documentation, The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes. I have lots of different templates in my template folder, I'm looking for a way to skip those templates that start with "y" and "z" and don't send it to Kubernetes, is there any way I can achieve that? I want to be flexible let's say if statementProvider is x, skip all manifests starting with y and z and do not send it to Kubernetes. I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it: {{- define "statement.resource"}} {{- $statementProvider := lower ( $.Values.statementProvider ) -}} {{- $statementFiles := list -}} {{- range $path, $bytes := .Files.Glob "templates/**" }} {{- if eq $statementProvider "x" -}} {{- if not (or (hasPrefix $path "y") (hasPrefix $path "z")) -}} {{- $statementFiles = append $statementFiles $path -}} {{- end }} {{- $statementFiles -}} {{- end }} {{- end }} {{- end }} ### Response: It can be done with simple if statement. Your template file {{- if .Values.serviceAccount.enabled -}} ... {{- end }} Your values file serviceAccount: enabled: true/false You also can do the same for resources that are nested and conditional add volumet to your deployment.
I'm reading helm documentation, The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes. I have lots of different templates in my template folder, I'm looking for a way to skip those templates that start with "y" and "z" and don't send it to Kubernetes, is there any way I can achieve that? I want to be flexible let's say if statementProvider is x, skip all manifests starting with y and z and do not send it to Kubernetes. I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it: {{- define "statement.resource"}} {{- $statementProvider := lower ( $.Values.statementProvider ) -}} {{- $statementFiles := list -}} {{- range $path, $bytes := .Files.Glob "templates/**" }} {{- if eq $statementProvider "x" -}} {{- if not (or (hasPrefix $path "y") (hasPrefix $path "z")) -}} {{- $statementFiles = append $statementFiles $path -}} {{- end }} {{- $statementFiles -}} {{- end }} {{- end }} {{- end }}
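A sketch of how the same idea extends to the question's statementProvider case: guard the whole manifest body inside the template file so it renders to nothing when the provider is "x". The ConfigMap content and file name are hypothetical; the lower function and the statementProvider value come from the question.

# templates/y-statement.yaml renders to nothing when statementProvider is "x"
cat > templates/y-statement.yaml <<'EOF'
{{- if ne (lower .Values.statementProvider) "x" }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: y-statement
data:
  provider: {{ .Values.statementProvider | quote }}
{{- end }}
EOF

# Render from the chart directory with different providers to confirm the skip
helm template my-release . --set statementProvider=x
helm template my-release . --set statementProvider=other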
Even though Autopilot handles node management for you, behind the scenes it is still creating nodes which count against your CPU quota. While you only pay for the CPU/Memory requested by your pods, the nodes which are spun up behind the scenes actually use more CPU/Memory than that as they also run system pods which you don't pay for. Autopilot tends to provision smaller nodes to optimize for scale down without disrupting workloads. So in your case what is happening is that Autopilot is provisioning nodes which each use 2 vCPUs which results in 32 nodes being provisioned. You can see the nodes / number of nodes using kubectl get nodes|wc -l (well technically this command will be # of nodes + 1). This results in 64 vCPUs being used, which is why you are hitting the CPU quota. Additionally, seems like your Autopilot cluster is actually a public cluster, which results in a public IP being assigned to each of the 32 nodes and that's how you hit the in-use IP address quota. To avoid the in-use IP address quota, you should create a private Autopilot cluster. Unfortunately, the best way to do this would be to create a brand new Autopilot cluster. If you are unable to create a new cluster, then you'll need to request a quota increase for in-use IP addresses (64 should probably be enough). But I'd highly recommend creating a new private cluster if at all possible. To resolve issues with CPU quota, I'd recommend requesting double what you expect your total request/limits to be and rounding to the nearest power of 2, which in your case I'd suggest something like 128 vCPUs. You'll need to make sure that your total CPU quota (the one in your image) and your E2 CPU quota are both set (your E2 default quota is probably fine).
###Instruction: I am currently facing an issue with a Kubernetes configuration on my cluster running in Google Kubernetes Engine in Autopilot mode in the us-west1 region. The configuration requires 40 replicas, each with a CPU limit of 1000m. I have an Nginx load balancer with an external IP that distributes load to these pods, and its CPU limit is 250m. However, when I attempt to deploy this configuration, only 26 pods are created, and the remaining 14 remain in Unschedulable status. On the cluster page, I see two warnings: "Can't scale up nodes" and "Pods unschedulable." Upon checking the quota page, I discovered that Google is calculating my current usage incorrectly. Although I am using 26.25 CPUs, Google shows the current usage as 64. Additionally, while there are 27 pods in total, Google calculates it as 32. Here is the screenshot from quotas page: This miscalculation by Google is preventing my pods from scaling, and I am unsure how to resolve this issue. Can anyone offer guidance on how to avoid this situation? ### Response: Even though Autopilot handles node management for you, behind the scenes it is still creating nodes which count against your CPU quota. While you only pay for the CPU/Memory requested by your pods, the nodes which are spun up behind the scenes actually use more CPU/Memory than that as they also run system pods which you don't pay for. Autopilot tends to provision smaller nodes to optimize for scale down without disrupting workloads. So in your case what is happening is that Autopilot is provisioning nodes which each use 2 vCPUs which results in 32 nodes being provisioned. You can see the nodes / number of nodes using kubectl get nodes|wc -l (well technically this command will be # of nodes + 1). This results in 64 vCPUs being used, which is why you are hitting the CPU quota. Additionally, seems like your Autopilot cluster is actually a public cluster, which results in a public IP being assigned to each of the 32 nodes and that's how you hit the in-use IP address quota. To avoid the in-use IP address quota, you should create a private Autopilot cluster. Unfortunately, the best way to do this would be to create a brand new Autopilot cluster. If you are unable to create a new cluster, then you'll need to request a quota increase for in-use IP addresses (64 should probably be enough). But I'd highly recommend creating a new private cluster if at all possible. To resolve issues with CPU quota, I'd recommend requesting double what you expect your total request/limits to be and rounding to the nearest power of 2, which in your case I'd suggest something like 128 vCPUs. You'll need to make sure that your total CPU quota (the one in your image) and your E2 CPU quota are both set (your E2 default quota is probably fine).
I am currently facing an issue with a Kubernetes configuration on my cluster running in Google Kubernetes Engine in Autopilot mode in the us-west1 region. The configuration requires 40 replicas, each with a CPU limit of 1000m. I have an Nginx load balancer with an external IP that distributes load to these pods, and its CPU limit is 250m. However, when I attempt to deploy this configuration, only 26 pods are created, and the remaining 14 remain in Unschedulable status. On the cluster page, I see two warnings: "Can't scale up nodes" and "Pods unschedulable." Upon checking the quota page, I discovered that Google is calculating my current usage incorrectly. Although I am using 26.25 CPUs, Google shows the current usage as 64. Additionally, while there are 27 pods in total, Google calculates it as 32. Here is the screenshot from quotas page: This miscalculation by Google is preventing my pods from scaling, and I am unsure how to resolve this issue. Can anyone offer guidance on how to avoid this situation?
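A small sketch of how to see the numbers the response describes; the project ID is a placeholder and the gcloud output format is only one way to print the quota fields.

# How many nodes Autopilot provisioned behind the scenes
kubectl get nodes --no-headers | wc -l

# Total node CPU capacity, which is what counts against the CPUS quota
kubectl get nodes -o jsonpath='{range .items[*]}{.status.capacity.cpu}{"\n"}{end}' \
  | awk '{sum+=$1} END {print sum " vCPUs"}'

# Regional quota usage as Google sees it
gcloud compute regions describe us-west1 --project my-project --format="yaml(quotas)"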
Please use below command: kubectl get pods --all-namespaces -o json | jq '.items | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})' Output format: [ { "namespace": "keda", "NoOfPods": 3 }, { "namespace": "kube-system", "NoOfPods": 12 }, { "namespace": "prod", "NoOfPods": 1 }, { "namespace": "stage", "NoOfPods": 1 } ]
###Instruction: Im looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command? Looking to exclude certain namespaces as well kubectl get pods gets me a list of every pod. ### Response: Please use below command: kubectl get pods --all-namespaces -o json | jq '.items | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})' Output format: [ { "namespace": "keda", "NoOfPods": 3 }, { "namespace": "kube-system", "NoOfPods": 12 }, { "namespace": "prod", "NoOfPods": 1 }, { "namespace": "stage", "NoOfPods": 1 } ]
Im looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command? Looking to exclude certain namespaces as well kubectl get pods gets me a list of every pod.
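A jq-free variant of the same per-namespace count, in case jq is not available; the excluded namespaces in the grep pattern are examples.

kubectl get pods --all-namespaces --no-headers \
  | grep -vE '^(kube-system|logging)[[:space:]]' \
  | awk '{count[$1]++} END {for (ns in count) print ns, count[ns]}'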
There are a few ways you can do this directly in yq. First off, you can use string concatenation with another property to come up with a unique filename:
yq -s '(.kind | downcase) + "_" + .metadata.name' testfile.yaml
That will create files like:
deployment_amazing-deployment.yml
pod_amazing-application.yml
service_amazing-deployment.yml
service_tea-service.yml
Or you can use the built-in $index to make the filenames unique:
yq -s '.metadata.name + "_" + $index'
Which will create:
amazing-application_0.yml
amazing-deployment_1.yml
amazing-deployment_2.yml
tea-service_3.yml
Disclaimer: I wrote yq
###Instruction: Below is my testfile.yaml: --- kind: Pod metadata: name: amazing-application --- kind: Deployment metadata: name: amazing-deployment --- kind: Service metadata: name: amazing-deployment --- kind: Service metadata: name: tea-service My goal is to split this into 4 files where the filename is .metadata.name and the dir that file goes into is .kind. I have achieved what I want with this: for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq); do mkdir "$kind" cd "$kind" yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name' cd ..; done What I want to know is how to get a unique together mapping, or somehow using multple criteria to split the testfile rather than through the loop. Is there a way to use yq and -s or select to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')? Because if you do yq -s '.kind' testfile.yaml it will yield three yaml files, not four. Same for yq -s '.metadata.name' testfile.yaml; we get three files as not all name are unique - one gets lost. ### Response: There are a few ways you can do this direct in yq. First of, you can use string concatenation with another property to comeup with a unique filename: yq -s '(.kind | downcase) + "_" + .metadata.name' testfile.yaml That will create files like: deployment_amazing-deployment.yml pod_amazing-application.yml service_amazing-deployment.yml service_tea-service.yml Or you can use the built in $index to make the filenames unique: yq -s '.metadata.name + "_" + $index' Which will create: amazing-application_0.yml amazing-deployment_1.yml amazing-deployment_2.yml tea-service_3.yml Disclaimer: I wrote yq
Below is my testfile.yaml: --- kind: Pod metadata: name: amazing-application --- kind: Deployment metadata: name: amazing-deployment --- kind: Service metadata: name: amazing-deployment --- kind: Service metadata: name: tea-service My goal is to split this into 4 files where the filename is .metadata.name and the dir that file goes into is .kind. I have achieved what I want with this: for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq); do mkdir "$kind" cd "$kind" yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name' cd ..; done What I want to know is how to get a unique together mapping, or somehow using multple criteria to split the testfile rather than through the loop. Is there a way to use yq and -s or select to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')? Because if you do yq -s '.kind' testfile.yaml it will yield three yaml files, not four. Same for yq -s '.metadata.name' testfile.yaml; we get three files as not all name are unique - one gets lost.
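A sketch that combines the answer's unique filenames with the per-kind directories the question builds, using plain shell to sort the split files afterwards; whether this matches the desired layout exactly is an assumption.

# Split with unique names first (kind prefix plus metadata.name)
yq -s '(.kind | downcase) + "_" + .metadata.name' testfile.yaml

# Then move each file into a directory named after its kind prefix
for f in *_*.yml; do
  kind="${f%%_*}"
  mkdir -p "$kind"
  mv "$f" "$kind/${f#*_}"
done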
As you suggested, Istio derives protocol information from the port name and will try to use HTTPS on your port 80/8080. You should name the ports http-web instead. Additionally, your VirtualService wants to match traffic via the TLS SNI header on port 80. Don't use a tls matcher but an http matcher, and let it use the Host header to capture your traffic and then distribute it to both versions. Istio will take care of adding mTLS between two Istio-enabled pods (given that your trafficPolicy is set to ISTIO_MUTUAL, which seems to be true in your istioctl output).
###Instruction: In our kubernetes cluster we are using istio, with mutual tls for the communication between the pods inside the mesh. Everything is working fine, but now we would like to introduce a VirtualService to able to do traffic shifting for canary deployments. We configured everything according to the istio documentation, but for some reason, the VirtualService seems just to be ignored, our canary version does not receive any traffic, even with a 50/50 traffic split. Note, we are only talking about traffic inside the mesh, there is no external traffic, it's exclusively between pods in the same namespace. Our setup: Service of our application 'parser-service' # service parser-service spec: clusterIP: 172.20.181.129 ports: - name: https-web port: 80 protocol: TCP targetPort: 8080 selector: service: parser-service type: ClusterIP Service of the canary version # service parser-service-canary spec: clusterIP: 172.20.30.101 ports: - name: https-web port: 80 protocol: TCP targetPort: 8080 selector: service: parser-service-canary type: ClusterIP This is what we tried, a VirtualService that should split traffic 50/50 spec: gateways: - mesh hosts: - parser-service tls: - match: - port: 80 sniHosts: - parser-service route: - destination: host: parser-service port: number: 80 weight: 50 - destination: host: parser-service-canary port: number: 80 weight: 50 I think we misunderstood something, but we can't figure out what it is. The traffic is still routed 100% to parser-service and istioctl x describe pod parser-service-xxx-xxx also shows no VirtualService, which suggests to me that the VirtualService is just ignored. Pod: parser-service-7cfd596dbb-hjqd9 Pod Revision: 1-14-6 Pod Ports: 8080 (parser-service), 15090 (istio-proxy) Suggestion: add 'version' label to pod for Istio telemetry. -------------------- Service: parser-service Port: https-web 80/HTTPS targets pod port 8080 DestinationRule: istio-mutual for "*.mynamespace.svc.cluster.local" Traffic Policy TLS Mode: ISTIO_MUTUAL -------------------- Effective PeerAuthentication: Workload mTLS mode: PERMISSIVE I think it has something to do with the fact that we named our ports https-web so that the traffic between the istio-sidecars is encrypted. When we use 'http-web' as port names and HTTP Match instead of tls in the VirtualService, the traffic split works fine. Would appreciate any hints or pointers in the right direction ### Response: As you suggested, Istio derives protocol information from the protocol name and will try to use HTTPS on your port 80/8080. You should name them http-web. Additionally, your VS wants to match traffic via TLS-SNI-header on a port 80. Don't use a tls matcher but a http matcher and let it use the host-header to capture your traffic and then distribute to both versions. Istio will take care to add mTLS between two Istio-enabled pods (given that your trafficPolicy is set to ISTIO_MUTUAL which seems to be true in your istioctl output.
In our kubernetes cluster we are using istio, with mutual tls for the communication between the pods inside the mesh. Everything is working fine, but now we would like to introduce a VirtualService to able to do traffic shifting for canary deployments. We configured everything according to the istio documentation, but for some reason, the VirtualService seems just to be ignored, our canary version does not receive any traffic, even with a 50/50 traffic split. Note, we are only talking about traffic inside the mesh, there is no external traffic, it's exclusively between pods in the same namespace. Our setup: Service of our application 'parser-service' # service parser-service spec: clusterIP: 172.20.181.129 ports: - name: https-web port: 80 protocol: TCP targetPort: 8080 selector: service: parser-service type: ClusterIP Service of the canary version # service parser-service-canary spec: clusterIP: 172.20.30.101 ports: - name: https-web port: 80 protocol: TCP targetPort: 8080 selector: service: parser-service-canary type: ClusterIP This is what we tried, a VirtualService that should split traffic 50/50 spec: gateways: - mesh hosts: - parser-service tls: - match: - port: 80 sniHosts: - parser-service route: - destination: host: parser-service port: number: 80 weight: 50 - destination: host: parser-service-canary port: number: 80 weight: 50 I think we misunderstood something, but we can't figure out what it is. The traffic is still routed 100% to parser-service and istioctl x describe pod parser-service-xxx-xxx also shows no VirtualService, which suggests to me that the VirtualService is just ignored. Pod: parser-service-7cfd596dbb-hjqd9 Pod Revision: 1-14-6 Pod Ports: 8080 (parser-service), 15090 (istio-proxy) Suggestion: add 'version' label to pod for Istio telemetry. -------------------- Service: parser-service Port: https-web 80/HTTPS targets pod port 8080 DestinationRule: istio-mutual for "*.mynamespace.svc.cluster.local" Traffic Policy TLS Mode: ISTIO_MUTUAL -------------------- Effective PeerAuthentication: Workload mTLS mode: PERMISSIVE I think it has something to do with the fact that we named our ports https-web so that the traffic between the istio-sidecars is encrypted. When we use 'http-web' as port names and HTTP Match instead of tls in the VirtualService, the traffic split works fine. Would appreciate any hints or pointers in the right direction
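A minimal sketch of the http-based VirtualService the response recommends, applied with a heredoc; it assumes both Service ports have been renamed to http-web and that mynamespace is the namespace holding the parser services.

kubectl apply -n mynamespace -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: parser-service
spec:
  hosts:
  - parser-service
  http:
  - route:
    - destination:
        host: parser-service
        port:
          number: 80
      weight: 50
    - destination:
        host: parser-service-canary
        port:
          number: 80
      weight: 50
EOF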
You can use the Vault Secrets Operator to synchronize secrets from Vault to Kubernetes Secret resources. Once you've done that, you can then expose those secrets as environment variables using envFrom or vaultFrom directives in your deployment manifests, as described in the documentation. This method does not require overriding the entrypoint or arguments of your containers. It looks like Vault Secrets Operator is relatively new and the documentation seems a bit slim. You can achieve similar functionality using the External Secrets Operator, which has the added advantage that it supports a variety of secret store backends.
###Instruction: Trying to export Vault secrets as environment variables to a k8s pod using the Vault injector, following the Vault documentation: https://developer.hashicorp.com/vault/docs/platform/k8s/injector/examples#environment-variable-example As mentioned in the example, you need to source the config file inside the container, and it will override the ENTRYPOINT script/command that you are passing in the Dockerfile.
containers:
  - name: web
    image: alpine:latest
    command: ['sh', '-c']
    args: ['source /vault/secrets/config && <entrypoint script>']
In my setup, I don't have a static entrypoint script that I can put here in args. The Dockerfile has its own command/script running as the entrypoint script. I'm trying to find an alternative way to source this Vault config inside the container that allows me to not change anything in the entrypoint script in the Dockerfile. I'm not sure if k8s provides any way to do this with a post-hook or something that runs the entrypoint mentioned in the Dockerfile first and then executes other scripts/commands passed in the post-hook. ### Response: You can use the Vault Secrets Operator to synchronize secrets from Vault to Kubernetes Secret resources. Once you've done that, you can then expose those secrets as environment variables using envFrom or vaultFrom directives in your deployment manifests, as described in the documentation. This method does not require overriding the entrypoint or arguments of your containers. It looks like Vault Secrets Operator is relatively new and the documentation seems a bit slim. You can achieve similar functionality using the External Secrets Operator, which has the added advantage that it supports a variety of secret store backends.
Trying to export Vault secrets as environment variables to a k8s pod using the Vault injector, following the Vault documentation: https://developer.hashicorp.com/vault/docs/platform/k8s/injector/examples#environment-variable-example As mentioned in the example, you need to source the config file inside the container, and it will override the ENTRYPOINT script/command that you are passing in the Dockerfile.
containers:
  - name: web
    image: alpine:latest
    command: ['sh', '-c']
    args: ['source /vault/secrets/config && <entrypoint script>']
In my setup, I don't have a static entrypoint script that I can put here in args. The Dockerfile has its own command/script running as the entrypoint script. I'm trying to find an alternative way to source this Vault config inside the container that allows me to not change anything in the entrypoint script in the Dockerfile. I'm not sure if k8s provides any way to do this with a post-hook or something that runs the entrypoint mentioned in the Dockerfile first and then executes other scripts/commands passed in the post-hook.
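A hedged sketch of the envFrom approach from the response, once an operator has synced the Vault data into a Kubernetes Secret; the Secret name web-vault-secrets is a placeholder, and no command/args override is needed, which is the point of this approach.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: alpine:latest          # stand-in for your application image; its own entrypoint runs unmodified
        envFrom:
        - secretRef:
            name: web-vault-secrets   # Secret synced from Vault by the operator (name is hypothetical)
EOF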
To create a Google-managed certificate with DNS authorization, follow the official Google doc and the Terraform doc. Each DNS authorization stores information about the DNS record that you need to set up and covers a single domain plus its wildcard, for example example.com and *.example.com. You need to add both the domain name and the wildcard name under the same domain while creating the certificate. Then, using a certificate map and certificate map entries, map both the domain and the wildcard domain: create two certificate map entries, one for the domain and the other for the wildcard domain. This is what allows the certificate to become active. You can also refer to this GitHub example by fbozic for relevant info. A feature request has already been raised for broader wildcard support and the Google product team is working on it.
###Instruction: By default creating a managed certificate object on GKE creates a managed certificate of type "Load Balancer Authorization". How can I create one with DNS authorization through GKE? https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - DOMAIN_NAME1
    - DOMAIN_NAME2
I want to add wildcard domains and this is only possible with DNS authorization. How to generate Google-managed certificates for wildcard hostnames in GCP? ### Response: To create a Google-managed certificate with DNS authorization, follow the official Google doc and the Terraform doc. Each DNS authorization stores information about the DNS record that you need to set up and covers a single domain plus its wildcard, for example example.com and *.example.com. You need to add both the domain name and the wildcard name under the same domain while creating the certificate. Then, using a certificate map and certificate map entries, map both the domain and the wildcard domain: create two certificate map entries, one for the domain and the other for the wildcard domain. This is what allows the certificate to become active. You can also refer to this GitHub example by fbozic for relevant info. A feature request has already been raised for broader wildcard support and the Google product team is working on it.
By default creating a managed certificate object on GKE creates a managed certificate of type "Load Balancer Authorization". How can I create one with DNS authorization through GKE? https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - DOMAIN_NAME1
    - DOMAIN_NAME2
I want to add wildcard domains and this is only possible with DNS authorization. How to generate Google-managed certificates for wildcard hostnames in GCP?
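A sketch of the gcloud flow behind the answer (DNS authorization, wildcard certificate, certificate map, two map entries); the domain and resource names are placeholders and the exact flags should be checked against the linked doc.

# DNS authorization covers example.com and its wildcard
gcloud certificate-manager dns-authorizations create example-dns-auth --domain="example.com"

# Add the CNAME record printed above to your DNS zone, then request the certificate
gcloud certificate-manager certificates create example-wildcard-cert \
  --domains="example.com,*.example.com" \
  --dns-authorizations=example-dns-auth

# Map both the apex and the wildcard hostname to that certificate
gcloud certificate-manager maps create example-cert-map
gcloud certificate-manager maps entries create example-apex-entry \
  --map=example-cert-map --hostname="example.com" --certificates=example-wildcard-cert
gcloud certificate-manager maps entries create example-wildcard-entry \
  --map=example-cert-map --hostname="*.example.com" --certificates=example-wildcard-cert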
I have just modified the command posted in the above answer owing to OP's need of excluding certain namespaces as well. kubectl get pods --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace != "logging")] | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})' Result without exclusion: Result after exclusion: In here, I'm excluding the pods from logging namespace only but you can exclude more namespaces using the format: .metadata.namespace != "logging" and .metadata.namespace != "monitoring" Hope this helps!
###Instruction: Im looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command? Looking to exclude certain namespaces as well kubectl get pods gets me a list of every pod. ### Response: I have just modified the command posted in the above answer owing to OP's need of excluding certain namespaces as well. kubectl get pods --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace != "logging")] | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})' Result without exclusion: Result after exclusion: In here, I'm excluding the pods from logging namespace only but you can exclude more namespaces using the format: .metadata.namespace != "logging" and .metadata.namespace != "monitoring" Hope this helps!
Im looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command? Looking to exclude certain namespaces as well kubectl get pods gets me a list of every pod.
As Sathya S. noted, it appears that Velero doesn't (reliably) restore metadata.ownerReferences from its backups. I will add that SOMETIMES it does, and that's what throws me. It almost seems like it has a pattern when it does, at least in my case: if cb-dev-0000 has it, then the Services will also, but then the remaining CB pods won't. Otherwise all of them 'might' have it set, or none of them. At least in the example I've set up here. Couchbase notes in their docs that you should NOT include 'pods' and 'services' in the Velero backup. This had stuck in my mind but I kinda didn't trust it. Turns out THAT seems to be VITAL for Velero to properly restore my Couchbase cluster and avoid the "Pod ignored, no owner" issue seen in the Couchbase Operator logs. Once I removed 'pods' and 'services' from my scheduled backup and it created a backup, I kubectl deleted my Couchbase cluster. Then I ran velero restore create --from-backup and, voilà, the cluster came up. I'll also note that the indexes and bucket documents I'd created were restored as well. Most importantly for this issue, the metadata.ownerReferences were all set up properly. I've done this several times now before answering this issue, and this seems to be the important thing: don't include pods and services in the backup. "You may have noticed that neither pods nor services were backed up. This is because the Operator will be able to recreate them from the cluster ConfigMap, metadata attached to the persistent volume claims, and the CouchbaseCluster resource itself. Likewise the deployment will be able to recreate the Operator pod." ~ https://docs.couchbase.com/operator/current/tutorial-velero-backup.html#creating-a-velero-backup Ultimately all I had to do was remove pods and services from my scheduled backup's 'includedResources' yaml and delete/apply the schedule.
###Instruction: I'm new here so if this comes out stupid please forgive me 🙂 I've been using Couchbase over 10+ years on real hardware. I've been working on establishing CB in Kubernetes and that seems to be working just fine. I'm also using Couchbase Autonomous Operator. Works great, no complaints with normal functioning thus far. However, I've been working through performing Velero Backup and Restore of both the Cluster and the CB Operator. I thought I finally had it working earlier last week, but a recent attempt to restore from a Velero backup once again resulted in messages like this in the CBO's logs: {"level":"info","ts":1680529171.8283288,"logger":"cluster","msg":"Reconcile completed","cluster":"default/cb-dev"} {"level":"info","ts":1680529172.0289326,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0002"} {"level":"info","ts":1680529172.0289645,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0003"} {"level":"info","ts":1680529172.0289707,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0001"} {"level":"info","ts":1680529172.0289757,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0004"} I've tried to find what this really means. And I have some suspicions but I don't know how to resolve it. Of note in the msgs above is that 'cb-dev-0000' never appears in the recurring list of msgs. These messages appear every few seconds in the couchbase-operator pod logs. Additionally, if I delete one pod at a time, they will be recreated by K8s or CBO (not real sure) and then it disappears from the list that keeps repeating. Once I do that with all of them, this issue stops. Any ideas, questions, comments on this would really be greatly appreciated This is all just for testing at this point, nothing here is for production, I'm just trying to validate that Velero can indeed backup both Couchbase Operator and Couchbase Cluster and subsequently restore them from the below Schedule Backup. I am using the default couchbase operator install using 2.4.0 I am using a very basic, functional couchase server cluster installation yaml I tried to use this Schedule Velero Backup, and then restore from this backup, and I'm expecting that both the Couchbase Cluster, and Couchbase Operator will restore without any issues. But what happens is that I get a functional CB Cluster, and a CBO which logs constantly msgs like this: {"level":"info","ts":1680529171.8283288,"logger":"cluster","msg":"Reconcile completed","cluster":"default/cb-dev"} {"level":"info","ts":1680529172.0289326,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0002"} } This might be important I dunno I never see 'cb-dev-0000' listed in these msgs tho the pod does exist. I reiterate the restore CB Cluster is functioning 'normally' near as I can tell, and CB operator is the only thing reporting these types of errors. 
kubectl apply -f schedule.yaml Where schedule.yaml contains this: apiVersion: velero.io/v1 kind: Schedule metadata: name: dev-everything-schedule namespace: velero spec: schedule: 0 * * * * template: metadata: labels: velero.io/schedule-name: dev-everything-schedule storageLocation: default includeClusterResources: true includedNamespaces: - kube-public - kube-system - istio-system - velero - default - cert-manager - kube-node-lease excludedResources: includedResources: - authorizationpolicies.security.istio.io - backuprepositories.velero.io - backupstoragelocations.velero.io - backups.velero.io - certificaterequests.cert-manager.io - certificates.cert-manager.io - cert-manager-webhook - challenges.acme.cert-manager.io - clusterissuers.cert-manager.io - clusterrolebindings.rbac.authorization.k8s.io - clusterroles.rbac.authorization.k8s.io - configmaps - controllerrevisions - couchbaseautoscalers.couchbase.com - couchbasebackuprestores.couchbase.com - couchbasebackups.couchbase.com - couchbasebuckets.couchbase.com - couchbaseclusteroauths - couchbaseclusters.couchbase.com - couchbasecollectiongroups.couchbase.com - couchbasecollections.couchbase.com - couchbaseephemeralbuckets.couchbase.com - couchbaseevents - couchbasegroups.couchbase.com - couchbasememcachedbuckets.couchbase.com - couchbasemigrationreplications.couchbase.com - couchbasereplications.couchbase.com - couchbaserolebindings.couchbase.com - couchbasescopegroups.couchbase.com - couchbasescopes.couchbase.com - couchbaseusers.couchbase.com - cronjobs - csidrivers - csistoragecapacities - customresourcedefinitions.apiextensions.k8s.io - daemonsets - deletebackuprequests - deletebackuprequests.velero.io - deployments - destinationrules.networking.istio.io - downloadrequests.velero.io - endpoints - endpointslices - eniconfigs.crd.k8s.amazonaws.com - envoyfilters.networking.istio.io - events - gateways - gateways.networking.istio.io - horizontalpodautoscalers - ingressclassparams.elbv2.k8s.aws - ingresses - issuers.cert-manager.io - istiooperators.install.istio.io - item_istiooperators - item_wasmplugins - jobs - leases - limitranges - namespaces - networkpolicies - orders.acme.cert-manager.io - peerauthentications.security.istio.io - persistentvolumeclaims - persistentvolumes - poddisruptionbudgets - pods - podtemplates - podvolumebackups.velero.io - podvolumerestores.velero.io - priorityclasses.scheduling.k8s.io - proxyconfigs.networking.istio.io - replicasets - replicationcontrollers - requestauthentications.security.istio.io - resourcequotas - restores.velero.io - rolebindings.rbac.authorization.k8s.io - roles.rbac.authorization.k8s.io - schedules.velero.io - secrets - securitygrouppolicies.vpcresources.k8s.aws - serverstatusrequests.velero.io - serviceaccounts - serviceentries - serviceentries.networking.istio.io - services - sidecars.networking.istio.io - statefulsets - targetgroupbindings.elbv2.k8s.aws - telemetries.telemetry.istio.io - telemetry - validatingwebhookconfiguration.admissionregistration.k8s.io - virtualservices.networking.istio.io - volumesnapshotlocations.velero.io - wasmplugins.extensions.istio.io - workloadentries.networking.istio.io - workloadgroups.networking.istio.io ttl: 12h I kubectl delete the cluster, and operator and subsequently restore them from the Velero backup using something like this: velero restore create dev-everything-schedule-20230331160030 --from-backup dev-everything-schedule-20230331160030 It restores the cluster, and cbo and that's when I start seeing the logs in the couchbase-operator 
pods logs. UPDATE: Digging into the JSON files of the Velero Backup under pods/namespaces/default/cb-dev-0000.json and comparing that with cb-dev-0001.json I just spotted a major difference that probably relates to this issue: { "apiVersion": "v1", "kind": "Pod", "metadata": { ... "name": "cb-dev-0000", "namespace": "default", "ownerReferences": [ { "apiVersion": "couchbase.com/v2", "blockOwnerDeletion": true, "controller": true, "kind": "CouchbaseCluster", "name": "cb-dev", "uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxx" } ], "resourceVersion": "xxxxxxx", "uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx" } ... } and now the same thing for cb-dev-0001 (one of the ones getting logged constantly in CBO) { "apiVersion": "v1", "kind": "Pod", "metadata": { ... "name": "cb-dev-0001", "namespace": "default", "resourceVersion": "xxxxxxx", "uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx" } ... } ownerReferences is missing from the Velero backup for cb-dev-0001, 0002, 0003, 0004. Now I think I'm onto something. I don't know why Velero would find this and store it in the backup for ONE POD vs all of them. But that's a clue I think... Still hunting... UPDATE 2: I've confirmed that Velero is storing the backup for couchbase objects in it's JSON files correctly every time (from what I've seen so far). However, the velero restore is almost randomly not setting the metadata.ownerReferences in the restored Couchbase pods. Sometimes it's only in the Couchbase Services and the CB-dev-0000 pod. Sometimes it's not in any of them. Sometimes I've seen in (in the past) set in all of them (correctly?). SO it's still a mystery, but that's where I am so far. I've seen other people mentioning on various chat/forums that they've experienced similar issues with Velero. I'm secretly hoping I'll find a missing argument or annotation were I can specifically force ownerReferences to be restored for certain objects. But I haven't seen that yet... ### Response: As Sathya S. noted it appears that Velero doesn't (reliably) restore metadata.OwnerReferences from it's backups. I will add to that, that SOMETIMES it does. And that's what throws me. It almost seems like it has a pattern when it does at least in my case. if CB-dev-0000 has it, then the Services will also. But then the remaining CB pods won't. Otherwise all of them 'might' have it set, or none of them. At least in the example I've setup here. Couchbase notes in thier docs about NOT including 'pods' and 'services' in the Velero backup. This had stuck in my mind but I kinda didn't trust it. Turns out THAT seems to be VITAL for Velero to properly restore my Couchbase cluster and avoid the "Pod ignored, no owner" issue seen in Couchbase Operator logs. Once I removed 'pods' and 'services' from my scheduled backup and it created a backup, then I kubectl deleted my Couchbase cluster. Then I velero restore create --from-backup and wah-lah the cluster came up. Additionally I'll also note that the Indexes and Bucket documents I'd created also were restored. Most importantly to this issue is that metadata.ownerReferences were all setup properly. I've done this several times now before Answering this issue. And this seems to be the important thing. Don't include pods, and services in the backup. "You may have noticed that neither pods nor services were backed up. This is because the Operator will be able to recreate them from the cluster ConfigMap, metadata attached to the persistent volume claims, and the CouchbaseCluster resource itself. 
Likewise the deployment will be able to recreate the Operator pod." ~ https://docs.couchbase.com/operator/current/tutorial-velero-backup.html#creating-a-velero-backup Ultimately all I had to do was remove pods and services from my schedule backups 'includedResources' yaml and delete/apply the schedule.
I'm new here so if this comes out stupid please forgive me 🙂 I've been using Couchbase over 10+ years on real hardware. I've been working on establishing CB in Kubernetes and that seems to be working just fine. I'm also using Couchbase Autonomous Operator. Works great, no complaints with normal functioning thus far. However, I've been working through performing Velero Backup and Restore of both the Cluster and the CB Operator. I thought I finally had it working earlier last week, but a recent attempt to restore from a Velero backup once again resulted in messages like this in the CBO's logs: {"level":"info","ts":1680529171.8283288,"logger":"cluster","msg":"Reconcile completed","cluster":"default/cb-dev"} {"level":"info","ts":1680529172.0289326,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0002"} {"level":"info","ts":1680529172.0289645,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0003"} {"level":"info","ts":1680529172.0289707,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0001"} {"level":"info","ts":1680529172.0289757,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0004"} I've tried to find what this really means. And I have some suspicions but I don't know how to resolve it. Of note in the msgs above is that 'cb-dev-0000' never appears in the recurring list of msgs. These messages appear every few seconds in the couchbase-operator pod logs. Additionally, if I delete one pod at a time, they will be recreated by K8s or CBO (not real sure) and then it disappears from the list that keeps repeating. Once I do that with all of them, this issue stops. Any ideas, questions, comments on this would really be greatly appreciated This is all just for testing at this point, nothing here is for production, I'm just trying to validate that Velero can indeed backup both Couchbase Operator and Couchbase Cluster and subsequently restore them from the below Schedule Backup. I am using the default couchbase operator install using 2.4.0 I am using a very basic, functional couchase server cluster installation yaml I tried to use this Schedule Velero Backup, and then restore from this backup, and I'm expecting that both the Couchbase Cluster, and Couchbase Operator will restore without any issues. But what happens is that I get a functional CB Cluster, and a CBO which logs constantly msgs like this: {"level":"info","ts":1680529171.8283288,"logger":"cluster","msg":"Reconcile completed","cluster":"default/cb-dev"} {"level":"info","ts":1680529172.0289326,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0002"} } This might be important I dunno I never see 'cb-dev-0000' listed in these msgs tho the pod does exist. I reiterate the restore CB Cluster is functioning 'normally' near as I can tell, and CB operator is the only thing reporting these types of errors. 
kubectl apply -f schedule.yaml Where schedule.yaml contains this: apiVersion: velero.io/v1 kind: Schedule metadata: name: dev-everything-schedule namespace: velero spec: schedule: 0 * * * * template: metadata: labels: velero.io/schedule-name: dev-everything-schedule storageLocation: default includeClusterResources: true includedNamespaces: - kube-public - kube-system - istio-system - velero - default - cert-manager - kube-node-lease excludedResources: includedResources: - authorizationpolicies.security.istio.io - backuprepositories.velero.io - backupstoragelocations.velero.io - backups.velero.io - certificaterequests.cert-manager.io - certificates.cert-manager.io - cert-manager-webhook - challenges.acme.cert-manager.io - clusterissuers.cert-manager.io - clusterrolebindings.rbac.authorization.k8s.io - clusterroles.rbac.authorization.k8s.io - configmaps - controllerrevisions - couchbaseautoscalers.couchbase.com - couchbasebackuprestores.couchbase.com - couchbasebackups.couchbase.com - couchbasebuckets.couchbase.com - couchbaseclusteroauths - couchbaseclusters.couchbase.com - couchbasecollectiongroups.couchbase.com - couchbasecollections.couchbase.com - couchbaseephemeralbuckets.couchbase.com - couchbaseevents - couchbasegroups.couchbase.com - couchbasememcachedbuckets.couchbase.com - couchbasemigrationreplications.couchbase.com - couchbasereplications.couchbase.com - couchbaserolebindings.couchbase.com - couchbasescopegroups.couchbase.com - couchbasescopes.couchbase.com - couchbaseusers.couchbase.com - cronjobs - csidrivers - csistoragecapacities - customresourcedefinitions.apiextensions.k8s.io - daemonsets - deletebackuprequests - deletebackuprequests.velero.io - deployments - destinationrules.networking.istio.io - downloadrequests.velero.io - endpoints - endpointslices - eniconfigs.crd.k8s.amazonaws.com - envoyfilters.networking.istio.io - events - gateways - gateways.networking.istio.io - horizontalpodautoscalers - ingressclassparams.elbv2.k8s.aws - ingresses - issuers.cert-manager.io - istiooperators.install.istio.io - item_istiooperators - item_wasmplugins - jobs - leases - limitranges - namespaces - networkpolicies - orders.acme.cert-manager.io - peerauthentications.security.istio.io - persistentvolumeclaims - persistentvolumes - poddisruptionbudgets - pods - podtemplates - podvolumebackups.velero.io - podvolumerestores.velero.io - priorityclasses.scheduling.k8s.io - proxyconfigs.networking.istio.io - replicasets - replicationcontrollers - requestauthentications.security.istio.io - resourcequotas - restores.velero.io - rolebindings.rbac.authorization.k8s.io - roles.rbac.authorization.k8s.io - schedules.velero.io - secrets - securitygrouppolicies.vpcresources.k8s.aws - serverstatusrequests.velero.io - serviceaccounts - serviceentries - serviceentries.networking.istio.io - services - sidecars.networking.istio.io - statefulsets - targetgroupbindings.elbv2.k8s.aws - telemetries.telemetry.istio.io - telemetry - validatingwebhookconfiguration.admissionregistration.k8s.io - virtualservices.networking.istio.io - volumesnapshotlocations.velero.io - wasmplugins.extensions.istio.io - workloadentries.networking.istio.io - workloadgroups.networking.istio.io ttl: 12h I kubectl delete the cluster, and operator and subsequently restore them from the Velero backup using something like this: velero restore create dev-everything-schedule-20230331160030 --from-backup dev-everything-schedule-20230331160030 It restores the cluster, and cbo and that's when I start seeing the logs in the couchbase-operator 
pods logs. UPDATE: Digging into the JSON files of the Velero Backup under pods/namespaces/default/cb-dev-0000.json and comparing that with cb-dev-0001.json I just spotted a major difference that probably relates to this issue: { "apiVersion": "v1", "kind": "Pod", "metadata": { ... "name": "cb-dev-0000", "namespace": "default", "ownerReferences": [ { "apiVersion": "couchbase.com/v2", "blockOwnerDeletion": true, "controller": true, "kind": "CouchbaseCluster", "name": "cb-dev", "uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxx" } ], "resourceVersion": "xxxxxxx", "uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx" } ... } and now the same thing for cb-dev-0001 (one of the ones getting logged constantly in CBO) { "apiVersion": "v1", "kind": "Pod", "metadata": { ... "name": "cb-dev-0001", "namespace": "default", "resourceVersion": "xxxxxxx", "uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx" } ... } ownerReferences is missing from the Velero backup for cb-dev-0001, 0002, 0003, 0004. Now I think I'm onto something. I don't know why Velero would find this and store it in the backup for ONE POD vs all of them. But that's a clue I think... Still hunting... UPDATE 2: I've confirmed that Velero is storing the backup for couchbase objects in it's JSON files correctly every time (from what I've seen so far). However, the velero restore is almost randomly not setting the metadata.ownerReferences in the restored Couchbase pods. Sometimes it's only in the Couchbase Services and the CB-dev-0000 pod. Sometimes it's not in any of them. Sometimes I've seen in (in the past) set in all of them (correctly?). SO it's still a mystery, but that's where I am so far. I've seen other people mentioning on various chat/forums that they've experienced similar issues with Velero. I'm secretly hoping I'll find a missing argument or annotation were I can specifically force ownerReferences to be restored for certain objects. But I haven't seen that yet...
I haven't found a precise explanation for your question; I believe it's an implementation choice. But I'm sharing some resources that might clarify your ideas a bit. Resources are bound together in API groups - each group may have one or more versions that evolve independent of other API groups, and each version within the group has one or more resources. Group names are typically in domain name form - the Kubernetes project reserves use of the empty group, all single word names ("extensions", "apps"), and any group name ending in "*.k8s.io" for its sole use. When choosing a group name, we recommend selecting a subdomain your group or organization owns, such as "widget.mycompany.com". https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions https://stackoverflow.com/a/57854939/21404450
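To make the convention a bit more tangible, here is how the three flavours of group name show up in the apiVersion field of ordinary manifests (the kinds below are just common examples of each group):

# Core ("legacy") group: the group name is empty, so apiVersion is only the version
apiVersion: v1
kind: Pod
---
# Named group without a domain suffix: one of the reserved single-word names
apiVersion: batch/v1
kind: Job
---
# Named group using the reserved k8s.io domain
apiVersion: storage.k8s.io/v1
kind: StorageClass

You can see which group every resource belongs to on your own cluster with kubectl api-resources (the APIVERSION column) or kubectl api-versions.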
###Instruction: When you read about API Groups in Kubernetes, you may come across group names like core/v1, batch/v1, storage.k8s.io/v1, and scheduling.k8s.io/v1. Some of these groups have the k8s.io postfix, while others do not. It can be confusing when specifying the apiVersion in your Kubernetes manifests, as you need to specify it as like v1, batch/v1, and storage.k8s.io/v1. Knowing the reason will make me less confused. Thanks. ### Response: I haven't found a precise explanation for your question, I believe it's an implementation choice. But I'm sharing some resources that might clarify your ideas a bit. Resources are bound together in API groups - each group may have one or more versions that evolve independent of other API groups, and each version within the group has one or more resources. Group names are typically in domain name form - the Kubernetes project reserves use of the empty group, all single word names ("extensions", "apps"), and any group name ending in "*.k8s.io" for its sole use. When choosing a group name, we recommend selecting a subdomain your group or organization owns, such as "widget.mycompany.com". https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions https://stackoverflow.com/a/57854939/21404450
When you read about API Groups in Kubernetes, you may come across group names like core/v1, batch/v1, storage.k8s.io/v1, and scheduling.k8s.io/v1. Some of these groups have the k8s.io postfix, while others do not. It can be confusing when specifying the apiVersion in your Kubernetes manifests, as you need to specify it as like v1, batch/v1, and storage.k8s.io/v1. Knowing the reason will make me less confused. Thanks.
I found it out... When I installed Nginx using Helm, it added a hostPort: 80 to the pod definition by default, even though it is not in the values.yaml and not mentioned anywhere. So, if you remove that by hand, Nginx won't steal port 80 from the Apache installed outside of Kubernetes.
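If someone hits the same thing, a rough way to confirm and fix it is sketched below. The namespace, DaemonSet name and chart reference are placeholders, since they depend on how the chart was installed:

# Find the ingress DaemonSet and check whether it declares hostPorts
kubectl -n <ingress-namespace> get daemonset
kubectl -n <ingress-namespace> get daemonset <ds-name> -o yaml | grep -B2 -A2 hostPort

# Remove the hostPort: 80 / hostPort: 443 entries from the container ports by hand
kubectl -n <ingress-namespace> edit daemonset <ds-name>

# Or check whether the chart exposes a value to switch it off instead of hand-editing
helm show values <repo>/<nginx-ingress-chart> | grep -i -B2 -A2 hostport

Many ingress charts do expose such a toggle, but the exact key differs between the NGINX Inc. chart and the community ingress-nginx chart, so treat the last command as the authoritative check for your version.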
###Instruction: I have a very interesting situation here. :-) I reinstalled my Kubernetes cluster (bare metal). I have 1 master and 2 worker nodes. On worker no 2, on the host machine, I have an Apache http server listening on port 80 and serving a web page. NAME STATUS ROLES AGE VERSION kubemaster Ready control-plane 22h v1.26.3 kubenode-1 Ready <none> 21h v1.26.3 kubenode-2 Ready <none> 21h v1.26.3 I installed Nginx ingress controller on my cluster as a daemon set. So there are 2 controllers running on my 2 worker nodes. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES bkk-ingress-5p7b8 1/1 Running 0 31m 192.168.29.136 kubenode-1 <none> <none> bkk-ingress-rxjw4 1/1 Running 0 31m 192.168.77.72 kubenode-2 <none> <none> And here is the interesting part. When I want to visit the web page served by the Apache on the host machine the traffic is taken by the Nginx and I saw the Nginx ingress controller not found page instead of my web page. However only the apache is listening on port 80: [root@kubenode-2 ~]$ netstat -tulpn | grep --color :80 tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1099870/httpd [root@kubenode-2 ~]$ netstat -tulpn | grep --color :443 tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1099870/httpd The ingress service is a NodePort and it is not supposed to use port 80 or 443: apiVersion: v1 kind: Service metadata: annotations: meta.helm.sh/release-name: bkk meta.helm.sh/release-namespace: nginx creationTimestamp: "2023-03-30T21:00:17Z" labels: app.kubernetes.io/instance: bkk app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: bkk-nginx-ingress helm.sh/chart: nginx-ingress-0.16.2 name: bkk-nginx-ingress namespace: nginx resourceVersion: "159007" uid: 1f45c592-80e0-4fb5-acbd-a04c76772e6a spec: clusterIP: 10.104.170.119 clusterIPs: - 10.104.170.119 externalTrafficPolicy: Local internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: http nodePort: 31902 port: 801 protocol: TCP targetPort: 801 - name: https nodePort: 31903 port: 4431 protocol: TCP targetPort: 4431 selector: app: bkk-ingress sessionAffinity: None type: NodePort status: loadBalancer: {} And there is not a single ingress defined anywhere in my system: [root@kubemaster helm-chart]$ kubectl get ingresses --all-namespaces No resources found Does anyone have any idea what is going on here? Why the Nginx, living and running as a Kubernetes pod, steals the traffic from the host machine? ### Response: I found it out... When I installed Nginx using Helm, it adds a hostPort:80 to the pod definition by default. Even it is not in the values.yaml, and not mentioned anywhere. So, if you remove that by hand, then Nginx won't steal port 80 from the Apache installed outside of Kubernetes.
I have a very interesting situation here. :-) I reinstalled my Kubernetes cluster (bare metal). I have 1 master and 2 worker nodes. On worker no 2, on the host machine, I have an Apache http server listening on port 80 and serving a web page. NAME STATUS ROLES AGE VERSION kubemaster Ready control-plane 22h v1.26.3 kubenode-1 Ready <none> 21h v1.26.3 kubenode-2 Ready <none> 21h v1.26.3 I installed Nginx ingress controller on my cluster as a daemon set. So there are 2 controllers running on my 2 worker nodes. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES bkk-ingress-5p7b8 1/1 Running 0 31m 192.168.29.136 kubenode-1 <none> <none> bkk-ingress-rxjw4 1/1 Running 0 31m 192.168.77.72 kubenode-2 <none> <none> And here is the interesting part. When I want to visit the web page served by the Apache on the host machine the traffic is taken by the Nginx and I saw the Nginx ingress controller not found page instead of my web page. However only the apache is listening on port 80: [root@kubenode-2 ~]$ netstat -tulpn | grep --color :80 tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1099870/httpd [root@kubenode-2 ~]$ netstat -tulpn | grep --color :443 tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1099870/httpd The ingress service is a NodePort and it is not supposed to use port 80 or 443: apiVersion: v1 kind: Service metadata: annotations: meta.helm.sh/release-name: bkk meta.helm.sh/release-namespace: nginx creationTimestamp: "2023-03-30T21:00:17Z" labels: app.kubernetes.io/instance: bkk app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: bkk-nginx-ingress helm.sh/chart: nginx-ingress-0.16.2 name: bkk-nginx-ingress namespace: nginx resourceVersion: "159007" uid: 1f45c592-80e0-4fb5-acbd-a04c76772e6a spec: clusterIP: 10.104.170.119 clusterIPs: - 10.104.170.119 externalTrafficPolicy: Local internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: http nodePort: 31902 port: 801 protocol: TCP targetPort: 801 - name: https nodePort: 31903 port: 4431 protocol: TCP targetPort: 4431 selector: app: bkk-ingress sessionAffinity: None type: NodePort status: loadBalancer: {} And there is not a single ingress defined anywhere in my system: [root@kubemaster helm-chart]$ kubectl get ingresses --all-namespaces No resources found Does anyone have any idea what is going on here? Why the Nginx, living and running as a Kubernetes pod, steals the traffic from the host machine?
One viable solution is to utilize an Egress Router resource defined here, which redirects traffic to a specified IP using a dedicated source IP address: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: <egress_router> - name: EGRESS_GATEWAY value: <egress_gateway> - name: EGRESS_DESTINATION value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod An example configuration looks like follows: apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 203.0.113.25 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod The Egress Router pod is exposed by a Service and linked to the application that needs to send outbound SNMP traps: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: snmp port: 162 type: ClusterIP selector: name: egress-1 The application sends the SNMP trap to the ClusterIP/Service-Name of the Service exposing the Egress Router pod, and the pod redirects the request to the specified remote server. Once redirected, the source IP is changed to the Source IP specified in the Egress Router resource. For more information on implementing the egress router in redirection mode, see here. Note that depending on your network configuration, you might need to configure the assign-macvlan field to a different NIC interface and set it to the name of that interface, e.g. eth1.
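From the application pod's point of view, all that changes is the trap destination: it targets the Service name instead of the remote manager. Assuming the net-snmp tools are available in the application container and the Service lives in the default namespace, a quick test trap would look roughly like this:

# Send a test SNMPv2c coldStart trap to the egress router Service; the router pod
# forwards it to EGRESS_DESTINATION, and the NMS sees EGRESS_SOURCE as the sender
snmptrap -v 2c -c public egress-1.default.svc.cluster.local:162 '' 1.3.6.1.6.3.1.1.5.1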
###Instruction: I have a pod within a Kubernetes cluster that needs to send alarms via SNMP to an external network management system. However, the external system will only be able to identify the pod if it keeps a stable IP address. Considering the ephermal nature of pods, would it be possible to send/redirect requests to a system outside of the cluster with a static IP? The information I could gather by now only proposed solutions on how to reach the pod from outside the cluster with e.g. Services. I found the following answer that suggests using an egress gateway, but not much information is provided on how to approach the issue. ### Response: One viable solution is to utilize an Egress Router resource defined here, which redirects traffic to a specified IP using a dedicated source IP address: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: <egress_router> - name: EGRESS_GATEWAY value: <egress_gateway> - name: EGRESS_DESTINATION value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod An example configuration looks like follows: apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 203.0.113.25 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod The Egress Router pod is exposed by a Service and linked to the application that needs to send outbound SNMP traps: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: snmp port: 162 type: ClusterIP selector: name: egress-1 The application sends the SNMP trap to the ClusterIP/Service-Name of the Service exposing the Egress Router pod, and the pod redirects the request to the specified remote server. Once redirected, the source IP is changed to the Source IP specified in the Egress Router resource. For more information on implementing the egress router in redirection mode, see here. Note that depending on your network configuration, you might need to configure the assign-macvlan field to a different NIC interface and set it to the name of that interface, e.g. eth1.
I have a pod within a Kubernetes cluster that needs to send alarms via SNMP to an external network management system. However, the external system will only be able to identify the pod if it keeps a stable IP address. Considering the ephermal nature of pods, would it be possible to send/redirect requests to a system outside of the cluster with a static IP? The information I could gather by now only proposed solutions on how to reach the pod from outside the cluster with e.g. Services. I found the following answer that suggests using an egress gateway, but not much information is provided on how to approach the issue.
You need to:
- Enable HAProxy on pfSense.
- Create a frontend on your 80/443 ports.
- Create two backends pointing to the Docker and k8s environments.
- Write ACLs in your frontend that send traffic to the right backend, based on hostname and SNI (for SSL).
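In raw haproxy.cfg terms (roughly what the pfSense HAProxy package generates from the GUI), the result looks something like the sketch below. The domain names are placeholders, the backend IPs are the ones from the question, and 443 is passed through on SNI so Traefik and Nginx keep terminating their own certificates:

# Port 80: route on the HTTP Host header
frontend http_in
    bind :80
    mode http
    acl host_docker hdr(host) -i domain-a.example.com
    acl host_k8s    hdr(host) -i domain-b.example.com
    use_backend docker_http if host_docker
    use_backend k8s_http    if host_k8s

backend docker_http
    mode http
    server docker 10.0.0.10:80

backend k8s_http
    mode http
    server k8s 10.0.0.201:80

# Port 443: route on the TLS SNI without terminating TLS here
frontend https_in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    acl sni_docker req.ssl_sni -i domain-a.example.com
    acl sni_k8s    req.ssl_sni -i domain-b.example.com
    use_backend docker_https if sni_docker
    use_backend k8s_https    if sni_k8s

backend docker_https
    mode tcp
    server docker 10.0.0.10:443

backend k8s_https
    mode tcp
    server k8s 10.0.0.201:443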
###Instruction: I have in my home env two systems: a Docker server on 10.0.0.10/16, using Traefik reverse proxy with one of my public domains and a kubernetes cluster with VIP on 10.0.0.201/16, with Nginx and supposedly with another of my public domains my firewall is pfsense, I have reverse proxy installed; certificates are managed by nginx /traefik on docker /k8s How can I configure pfsense to direct domainA traffic to the Docker host and DomainB to the k8s-one? (no need for it do do ACME, just directing the traffic...) A step by step guide, or maybe some pointer to a good guide would be much appreciated... Thanks, G Tried setting up reverse proxy on pfsense, but clearly missing some bits ### Response: You need to: Enable Haproxy on pfsense Create a frontend on your 80,443 ports. Create two backends pointing to docker and k8s environments. Write ACLs in your frontend to point traffic to specific backend, according to hostname and SNI(for ssl).
I have in my home env two systems: a Docker server on 10.0.0.10/16, using Traefik reverse proxy with one of my public domains and a kubernetes cluster with VIP on 10.0.0.201/16, with Nginx and supposedly with another of my public domains my firewall is pfsense, I have reverse proxy installed; certificates are managed by nginx /traefik on docker /k8s How can I configure pfsense to direct domainA traffic to the Docker host and DomainB to the k8s-one? (no need for it do do ACME, just directing the traffic...) A step by step guide, or maybe some pointer to a good guide would be much appreciated... Thanks, G Tried setting up reverse proxy on pfsense, but clearly missing some bits