prompt | response
---|---|
<p>I am using the non-HA version of ArgoCD (v2.6.5) installed in a single-node k3s cluster.
The goal is to deploy a sample application together with kube-prometheus-stack, Loki, Tempo & MinIO via Helm.</p>
<p>However, when I create an "Application" in GitHub and reference it in ArgoCD, all of them are in the "Out of sync" state. Once they try to re-sync, their status changes to "Unknown".</p>
<p>The installation of ArgoCD was done with the following commands (basic install):</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>And, as an example, the kube-prometheus-stack Application I create in GitHub looks like this:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    chart: kube-prometheus-stack
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: 44.4.1
    helm:
      releaseName: kube-prometheus-stack
  destination:
    server: "https://kubernetes.default.svc"
    namespace: observability
</code></pre>
<p>Any idea what I could be missing?</p>
<p>Thanks!</p>
| <p>Try changing:</p>
<pre><code>FROM repoURL: https://prometheus-community.github.io/helm-charts
TO repoURL: git@github.com:prometheus-community/helm-charts.git
OR repoURL: https://github.com/prometheus-community/helm-charts.git
</code></pre>
<pre><code>FROM targetRevision: 44.4.1
TO targetRevision: kube-prometheus-stack-44.4.1
</code></pre>
<p>And under the <em>targetRevision</em> field, add:</p>
<pre><code>path: charts/kube-prometheus-stack
</code></pre>
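<p>Putting those changes together, the Application source section would look roughly like this (a sketch; the chart's path inside the git repository may differ):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/prometheus-community/helm-charts.git
    targetRevision: kube-prometheus-stack-44.4.1
    path: charts/kube-prometheus-stack
    helm:
      releaseName: kube-prometheus-stack
  destination:
    server: "https://kubernetes.default.svc"
    namespace: observability
</code></pre>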
|
<p>I have Superset installed via Helm chart in my Kubernetes environment. I took everything from the official documentation and repository: <a href="https://github.com/apache/superset" rel="nofollow noreferrer">https://github.com/apache/superset</a></p>
<p>I'm trying to achieve a data auto-refresh of the dashboards every 12 hours via the Helm chart and not via the UI; I read that this can be done by enabling the Superset cache, so data will be cached for 12 hours and then dynamically refreshed, and everyone that accesses the Superset UI can see the same values.</p>
<p>My problem is this: I can see the cache configuration in the superset/config.py file:</p>
<pre><code># Default cache for Superset objects
CACHE_CONFIG: CacheConfig = {"CACHE_TYPE": "NullCache"}
# Cache for datasource metadata and query results
DATA_CACHE_CONFIG: CacheConfig = {"CACHE_TYPE": "NullCache"}
# Cache for dashboard filter state (`CACHE_TYPE` defaults to `SimpleCache` when
# running in debug mode unless overridden)
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
    "CACHE_DEFAULT_TIMEOUT": int(timedelta(days=90).total_seconds()),
    # should the timeout be reset when retrieving a cached value
    "REFRESH_TIMEOUT_ON_RETRIEVAL": True,
}
# Cache for explore form data state (`CACHE_TYPE` defaults to `SimpleCache` when
# running in debug mode unless overridden)
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
    "CACHE_DEFAULT_TIMEOUT": int(timedelta(days=7).total_seconds()),
    # should the timeout be reset when retrieving a cached value
    "REFRESH_TIMEOUT_ON_RETRIEVAL": True,
}
</code></pre>
<p>As per the documentation I'm using the <strong>configOverrides</strong> section of the Helm chart to overwrite the default values and enable the cache for config, data, filter and explore, but I can't find any example of how to do it and everything I try always fails in the Helm release.</p>
<p>I tried to read the Helm chart, but it looks like it takes the whole <strong>configOverrides</strong> section, and I was not able to find where it overwrites those specific values.</p>
<p>Here is an example of what I try to overwrite; for instance, enabling some feature flags works without problems:</p>
<pre><code>configOverrides:
  enable_flags: |
    FEATURE_FLAGS = {
        "DASHBOARD_NATIVE_FILTERS": True,
        "ENABLE_TEMPLATE_PROCESSING": True,
        "DASHBOARD_CROSS_FILTERS": True,
        "DYNAMIC_PLUGINS": True,
        "VERSIONED_EXPORT": True,
        "DASHBOARD_RBAC": True,
    }
</code></pre>
<p>But if I try to overwrite one or more cache values it fails (config.py <a href="https://github.com/apache/superset/blob/master/superset/config.py" rel="nofollow noreferrer">https://github.com/apache/superset/blob/master/superset/config.py</a>). This is one of the different ways I tried, checking the Helm values file, the template and the Superset config.py (and checking other articles):</p>
<pre><code>configOverrides:
  cache_config: |
    CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_cache_'
    }
  data_cache_config: |
    DATA_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_data_'
    }
  filter_cache_config: |
    FILTER_STATE_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_filter_'
    }
  explore_cache_config: |
    EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_explore_'
    }
</code></pre>
<p>Any help please? Or a redirect to some good documentation that has examples! PS: the Redis installation I have is the default one created by the Helm chart; I didn't change anything on it.</p>
| <p><strong>TL;DR:</strong> your <code>configOverrides</code> should look like this:</p>
<pre><code>configOverrides:
  cache_config: |
    from datetime import timedelta
    from superset.superset_typing import CacheConfig

    CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_cache_'
    }
    DATA_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_data_'
    }
    FILTER_STATE_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_filter_'
    }
    EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_explore_'
    }
</code></pre>
<h2>Details:</h2>
<p>After running a helm install with your settings, your config file will look a bit like this:</p>
<pre><code>import os
from cachelib.redis import RedisCache
...
CACHE_CONFIG = {
    'CACHE_TYPE': 'redis',
    'CACHE_DEFAULT_TIMEOUT': 300,
    'CACHE_KEY_PREFIX': 'superset_',
    'CACHE_REDIS_HOST': env('REDIS_HOST'),
    'CACHE_REDIS_PORT': env('REDIS_PORT'),
    'CACHE_REDIS_PASSWORD': env('REDIS_PASSWORD'),
    'CACHE_REDIS_DB': env('REDIS_DB', 1),
}
DATA_CACHE_CONFIG = CACHE_CONFIG
...
# Overrides
# cache_config
CACHE_CONFIG: CacheConfig = {
    'CACHE_TYPE': 'RedisCache',
    'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
    'CACHE_KEY_PREFIX': 'superset_cache_'
}
# data_cache_config
DATA_CACHE_CONFIG: CacheConfig = {
    'CACHE_TYPE': 'RedisCache',
    'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
    'CACHE_KEY_PREFIX': 'superset_data_'
}
# enable_flags
FEATURE_FLAGS = {
    "DASHBOARD_NATIVE_FILTERS": True,
    "ENABLE_TEMPLATE_PROCESSING": True,
    "DASHBOARD_CROSS_FILTERS": True,
    "DYNAMIC_PLUGINS": True,
    "VERSIONED_EXPORT": True,
    "DASHBOARD_RBAC": True,
}
# explore_cache_config
EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
    'CACHE_TYPE': 'RedisCache',
    'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
    'CACHE_KEY_PREFIX': 'superset_explore_'
}
# filter_cache_config
FILTER_STATE_CACHE_CONFIG: CacheConfig = {
    'CACHE_TYPE': 'RedisCache',
    'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
    'CACHE_KEY_PREFIX': 'superset_filter_'
}
</code></pre>
<p>When I looked at the pod logs, there were a lot of errors due to the function <code>timedelta</code> not being defined, here is a sample of the logs I can see:</p>
<pre><code>  File "/app/pythonpath/superset_config.py", line 42, in <module>
    'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
NameError: name 'timedelta' is not defined
</code></pre>
<p>The file in question, <code>/app/pythonpath/superset_config.py</code> , is loaded via an import <a href="https://github.com/apache/superset/blob/master/superset/config.py#L1587" rel="nofollow noreferrer">here</a> as mentioned in <a href="https://github.com/apache/superset/blob/master/superset/config.py#L19" rel="nofollow noreferrer">the comment at the top of the file</a>.</p>
<p>Notice that you're writing a fresh new <code>.py</code> file, which means that you need to add <code>from datetime import timedelta</code> at the top of the configOverrides section.</p>
<p>However, since the docs in the Helm chart state the following warning <code>WARNING: the order is not guaranteed Files can be passed as helm --set-file configOverrides.my-override=my-file.py</code>, and you clearly want to use the function <code>timedelta</code>, we must combine all of these blocks under the same section like this:</p>
<pre><code>configOverrides:
  cache_config: |
    from datetime import timedelta
    CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_cache_'
    }
    DATA_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_data_'
    }
    FILTER_STATE_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_filter_'
    }
    EXPLORE_FORM_DATA_CACHE_CONFIG: CacheConfig = {
        'CACHE_TYPE': 'RedisCache',
        'CACHE_DEFAULT_TIMEOUT': int(timedelta(hours=6).total_seconds()),
        'CACHE_KEY_PREFIX': 'superset_explore_'
    }
</code></pre>
<p>Furthermore, you wanted to use the type <code>CacheConfig</code>, so we should also include an import for it at the top.</p>
|
<p>I am running Airflow via MWAA on AWS and the worker nodes are running on Kubernetes. The pods are getting scheduled just fine, but when I try to use pod_template_file with KubernetesPodOperator, it gives me a bunch of uncertain behavior.</p>
<p>My template file stored in S3:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: app1
spec:
  containers:
    - name: base
      image: "alpine:latest"
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hi>> /data/app.log; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: claim1
</code></pre>
<p>My DAG file</p>
<pre><code>t_1 = KubernetesPodOperator(
    task_id=job_name_1,
    namespace="app",
    name=job_name_1,
    get_logs=True,
    is_delete_operator_pod=False,
    in_cluster=False,
    config_file=kube_config_path,
    startup_timeout_seconds=240,
    cluster_context='test',
    pod_template_file="/opt/airflow/pod_template_file_example-1.yaml",
    dag=dag)
</code></pre>
<p>When I go with this, I get an error that the pod spec is invalid as it's missing the image field. This is surprising, as <code>image</code> is present in the pod template.</p>
<p>I also tried the below; it works, but it totally ignores the pod template file and just spins up an alpine container and exits. So it looks like it's totally ignoring the pod_template_file param.</p>
<pre><code>full_pod_spec = k8s.V1Pod(
    metadata=metadata_2,
    spec=k8s.V1PodSpec(containers=[
        k8s.V1Container(
            name="base",
            image="alpine:latest",
        )
    ], ))

t_1 = KubernetesPodOperator(
    task_id=job_name_1,
    namespace="mlops",
    name=job_name_1,
    get_logs=True,
    is_delete_operator_pod=False,
    in_cluster=False,
    config_file=kube_config_path,
    startup_timeout_seconds=240,
    cluster_context='aws',
    full_pod_spec=full_pod_spec,
    pod_template_file="/opt/airflow/pod_template_file_example-1.yaml",
    dag=dag)
</code></pre>
<p>What is the correct way to reference a pod_template_file in KubernetesPodOperator in airflow?</p>
<p>References : <a href="https://imsharadmishra.medium.com/different-ways-to-use-kubernetes-pod-operator-in-airflow-74e655221386" rel="nofollow noreferrer">medium</a></p>
| <p>I've never used <code>MWAA</code> on AWS before, but your problem seems to be:</p>
<p><strong>Is your template file stored in S3 available to the DAG script file through path <code>"/opt/airflow/pod_template_file_example-1.yaml"</code>?</strong></p>
<p>Check out the docs for <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#difference-between-kubernetespodoperator-and-kubernetes-object-spec" rel="nofollow noreferrer">Difference between KubernetesPodOperator and Kubernetes object spec</a>.</p>
<p>So <code>KubernetesPodOperator</code> just creates the equivalent YAML/JSON object spec for the Pod, and it contains 4 types of arguments for this purpose with the following order of precedence:</p>
<ol>
<li>KubernetesPodOperator arguments, like <code>name/namespace/image</code>...;</li>
<li>full pod spec, which is arg: <code>full_pod_spec</code>;</li>
<li>pod template file, which is arg: <code>pod_template_file</code>;</li>
<li>airflow connection;</li>
</ol>
|
<p>I have a Kafka topic that has 5 partitions. I have 5 pods currently consuming these 5 partitions. However, due to a particular need, I need each pod to only consume from its assigned partition. But because the pods all have an identical configuration on Kubernetes, I can't tell each pod which partition to consume from. What's the best way to achieve this? Should I just have 5 independent deployments instead?</p>
<p>That seems odd with all 5 having the same code. I've seen the use of stateful set, but that seems to rely on an index. Not sure how robust that approach is.</p>
| <p>If you're able to change your Deployment to a StatefulSet, each pod of your replicas will get a fixed sequence number starting from 0. The pod names are fixed, like my-kafka-consumer-0, my-kafka-consumer-1, and so on up to my-kafka-consumer-(N-1).</p>
<p>Next you should expose the pod name as an environment variable inside your pod, which can then be read by your application.</p>
<p>You can do that by setting the env variable like this:</p>
<pre><code>env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
</code></pre>
<p>If this is done, your application should be able to read this information from the environment variables. This is needed to identify a consumer inside the consumer group.</p>
<p>So the next thing you're looking for is called static membership in your Kafka client configuration.</p>
<p>You can achieve this by setting the group.id and the group.instance.id:</p>
<pre><code>'group.id': 'consumer-group'
'group.instance.id': '<MY_POD_NAME FROM THE ENVIRONMENT VARS>'
</code></pre>
<p>Not tested, but I think this should do the trick.</p>
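<p>Putting the pieces together, a minimal StatefulSet sketch (name and image are placeholders) that exposes the pod name would look something like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-kafka-consumer
spec:
  serviceName: my-kafka-consumer      # headless Service required by StatefulSets
  replicas: 5
  selector:
    matchLabels:
      app: my-kafka-consumer
  template:
    metadata:
      labels:
        app: my-kafka-consumer
    spec:
      containers:
        - name: consumer
          image: my-registry/my-consumer:latest   # placeholder image
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name        # my-kafka-consumer-0 ... my-kafka-consumer-4
</code></pre>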
|
<p>I want to change the kube-apiserver log level using the --v parameter. I checked all the documents, and none of them gave the value range of this parameter.</p>
<p>Checking the code, the value is of type int32. I tried filling in 0, 1, 2, and even 9999; all of them pass validation and kube-apiserver starts normally.</p>
<p>So I want to know what the value range of this parameter is. Is the value 9999 really meaningful?</p>
| <p>9999 isn't a valid value.</p>
<p><a href="https://i.stack.imgur.com/bcFc4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bcFc4.png" alt="Table" /></a></p>
<p><a href="https://docs.openshift.com/container-platform/4.8/rest_api/editing-kubelet-log-level-verbosity.html#log-verbosity-descriptions_editing-kubelet-log-level-verbosity" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.8/rest_api/editing-kubelet-log-level-verbosity.html#log-verbosity-descriptions_editing-kubelet-log-level-verbosity</a></p>
<p><a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md</a></p>
<p>I suggest not increasing the verbosity of the logs in the production environment (increase it only if necessary, otherwise you may have performance problems).</p>
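<p>For reference, on a kubeadm-based cluster the flag is usually set in the kube-apiserver static Pod manifest on the control-plane node; the kubelet restarts the apiserver when the file changes. A minimal sketch (the other flags are placeholders for whatever is already in your manifest):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --advertise-address=192.168.1.10   # existing flags stay as they are
        - --v=2                              # log verbosity; higher numbers are more verbose
</code></pre>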
|
<p>I am having a problem in my Kubernetes cluster. Currently I am running my Laravel application in Kubernetes with success. Now I am trying to make the storage folder in my app a persistent volume, because it can be used to store images and other files. My deployment looks like this now:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-api-app
  namespace: my-project
  labels:
    app.kubernetes.io/name: laravel-api-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-api-app
  template:
    metadata:
      labels:
        app: laravel-api-app
    spec:
      containers:
        - name: laravel-api-app
          image: me/laravel-api:v1.0.0
          ports:
            - name: laravel
              containerPort: 8080
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: laravel-api-secret
            - configMapRef:
                name: laravel-api-config
          volumeMounts:
            - name: storage
              mountPath: /var/www/html/storage
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: laravel-api-persistant-volume-claim
</code></pre>
<p>As you can see my claim is mounted to the <code>/var/www/html/storage</code> folder. Now in my Dockerfile I set all my folders to the user <code>nobody</code> like this:</p>
<pre><code>USER nobody
COPY --chown=nobody . /var/www/html
</code></pre>
<p>However, using this results in the following folder rights in my pod (<code>ls -la</code>):</p>
<pre><code>drwxrwxrwx 1 www-data www-data 4096 Mar 14 18:24 .
drwxr-xr-x 1 root root 4096 Feb 26 17:43 ..
-rw-rw-rw- 1 nobody nobody 48 Mar 12 22:27 .dockerignore
-rw-rw-rw- 1 nobody nobody 220 Mar 12 22:27 .editorconfig
-rw-r--r-- 1 nobody nobody 718 Mar 14 18:22 .env
-rw-rw-rw- 1 nobody nobody 660 Mar 14 18:22 .env.example
-rw-rw-rw- 1 nobody nobody 718 Mar 14 12:10 .env.pipeline
-rw-rw-rw- 1 nobody nobody 111 Mar 12 22:27 .gitattributes
-rw-rw-rw- 1 nobody nobody 171 Mar 14 12:10 .gitignore
drwxrwxrwx 2 nobody nobody 4096 Mar 14 12:30 .gitlab-ci-scripts
-rw-rw-rw- 1 nobody nobody 2336 Mar 14 01:13 .gitlab-ci.yml
-rw-rw-rw- 1 nobody nobody 174 Mar 12 22:27 .styleci.yml
-rw-rw-rw- 1 nobody nobody 691 Mar 14 10:02 Makefile
drwxrwxrwx 6 nobody nobody 4096 Mar 12 22:27 app
-rwxrwxrwx 1 nobody nobody 1686 Mar 12 22:27 artisan
drwxrwxrwx 1 nobody nobody 4096 Mar 12 22:27 bootstrap
-rw-rw-rw- 1 nobody nobody 1476 Mar 12 22:27 composer.json
-rw-rw-rw- 1 nobody nobody 261287 Mar 12 22:27 composer.lock
drwxrwxrwx 2 nobody nobody 4096 Mar 14 12:10 config
drwxrwxrwx 5 nobody nobody 4096 Mar 12 22:27 database
drwxrwxrwx 5 nobody nobody 4096 Mar 13 09:45 docker
-rw-rw-rw- 1 nobody nobody 569 Mar 14 12:27 docker-compose-test.yml
-rw-rw-rw- 1 nobody nobody 584 Mar 14 12:27 docker-compose.yml
-rw-rw-rw- 1 nobody nobody 1013 Mar 14 18:24 package.json
-rw-rw-rw- 1 nobody nobody 1405 Mar 12 22:27 phpunit.xml
drwxrwxrwx 5 nobody nobody 4096 Mar 14 18:23 public
-rw-rw-rw- 1 nobody nobody 3496 Mar 12 22:27 readme.md
drwxrwxrwx 6 nobody nobody 4096 Mar 12 22:27 resources
drwxrwxrwx 2 nobody nobody 4096 Mar 12 22:27 routes
drwxrwxrwx 2 nobody nobody 4096 Mar 12 22:27 scripts
-rw-rw-rw- 1 nobody nobody 563 Mar 12 22:27 server.php
drwxr-xr-x 2 root root 4096 Mar 14 18:18 storage
drwxrwxrwx 4 nobody nobody 4096 Mar 12 22:27 tests
drwxr-xr-x 38 nobody nobody 4096 Mar 14 18:22 vendor
-rw-rw-rw- 1 nobody nobody 538 Mar 12 22:27 webpack.mix.js
</code></pre>
<p>As you can see, my storage folder has <code>root/root</code> which I also want to be <code>nobody/nobody</code>. I thought about creating an initContainer like this:</p>
<pre><code>initContainers:
  - name: setup-storage
    image: busybox
    command: ['sh', '-c', '/path/to/setup-script.sh']
    volumeMounts:
      - name: storage
        mountPath: /path/to/storage/directory
</code></pre>
<p>With setup-script.sh containing:</p>
<pre><code>#!/bin/sh
chown -R nobody:nobody /path/to/storage/directory
chmod -R 755 /path/to/storage/directory
</code></pre>
<p>But I have a feeling that there should be (or is) something much simpler to get the result I want.</p>
<p>I already tried adding securityContext with id: 65534 like so:</p>
<pre><code>securityContext:
  runAsUser: 65534
  runAsGroup: 65534
  fsGroup: 65534
</code></pre>
<p>But that resulted in the same <code>root/root</code> <code>owner/group</code>. The last thing I tried was creating an initContainer like this:</p>
<pre><code>initContainers:
  - name: laravel-api-init
    image: me/laravel-api:v1.0.0
    args:
      - /bin/bash
      - -c
      - cp -Rnp /var/www/html/storage/* /mnt
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: laravel-api-secret
      - configMapRef:
          name: laravel-api-config
    volumeMounts:
      - name: storage
        mountPath: /mnt
</code></pre>
<p>This "should" copy all the content to <code>/mnt</code>, which is the mount location for the storage, and then start the real deployment which mounts the copied data in the app. Unfortunately this returns the error <code>Init:ExitCode:127 kubernetes</code>, which is weird, because both of those locations do exist. One other thing that should not happen with this approach (I don't know if it will) is that, once the volume contains data from a previous session (maybe after a server reboot), it should not tamper with the already existing data of the app.</p>
<h2>In short</h2>
<p>So after this explanation and my attempts, here is what I am trying to achieve. I want my Laravel application to have a Persistent Volume (the storage folder), so that I limit the developers of that Laravel app to a given amount of storage. For instance, when I create a PV of 5GB, they cannot store more than 5GB of data for their application. This storage has to be persistent, so that after a server reboot the storage is still there!</p>
<h2>Update</h2>
<p>Here is the updated yaml with security context:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-api-app
  namespace: my-project
  labels:
    app.kubernetes.io/name: laravel-api-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-api-app
  template:
    metadata:
      labels:
        app: laravel-api-app
    spec:
      containers:
        - name: laravel-api-init
          image: docker.argoplan.nl/clients/opus-volvere/laravel-api/production:v1.0.0
          args:
            - /bin/sh
            - -c
            - cp -Rnp /var/www/html/storage/* /mnt
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: laravel-api-secret
            - configMapRef:
                name: laravel-api-config
          volumeMounts:
            - name: storage
              mountPath: /mnt
      securityContext:
        fsGroup: 65534
        fsGroupChangePolicy: "OnRootMismatch"
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: laravel-api-persistant-volume-claim
</code></pre>
<p>For debugging purposes I copied my initContainer as an actual container, so I can see my container logs in ArgoCD. If it is an initContainer, I can't see any logs. Using the YAML above, I see this in the logs:</p>
<pre><code>cp: can't create directory '/mnt/app': Permission denied
cp: can't create directory '/mnt/framework': Permission denied
</code></pre>
<p>This is the live manifest, which apparently does not contain the new security context, even though I generated the app just now:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 0a4ce0e873c92442fdaf1ac8a1313966bd995ae65471b34f70b9de2634edecf9
    cni.projectcalico.org/podIP: 10.1.10.55/32
    cni.projectcalico.org/podIPs: 10.1.10.55/32
  creationTimestamp: '2023-03-17T09:17:58Z'
  generateName: laravel-api-app-74b7d9584c-
  labels:
    app: laravel-api-app
    pod-template-hash: 74b7d9584c
  name: laravel-api-app-74b7d9584c-4dc9h
  namespace: my-project
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: laravel-api-app-74b7d9584c
      uid: d2e2ab4d-0916-43fc-b294-3e5eb2778c0d
  resourceVersion: '4954636'
  uid: 12327d67-cdf9-4387-afe8-3cf536531dd2
spec:
  containers:
    - args:
        - /bin/sh
        - '-c'
        - cp -Rnp /var/www/html/storage/* /mnt
      envFrom:
        - secretRef:
            name: laravel-api-secret
        - configMapRef:
            name: laravel-api-config
      image: 'me/laravel-api:v1.0.0'
      imagePullPolicy: Always
      name: laravel-api-init
      resources: {}
      securityContext: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /mnt
          name: storage
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-8cfg8
          readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
    - name: regcred
  nodeName: tohatsu
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: laravel-api-persistant-volume-claim
    - name: kube-api-access-8cfg8
      projected:
        defaultMode: 420
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
                - key: ca.crt
                  path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      status: 'True'
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      message: 'containers with unready status: [laravel-api-init]'
      reason: ContainersNotReady
      status: 'False'
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      message: 'containers with unready status: [laravel-api-init]'
      reason: ContainersNotReady
      status: 'False'
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: '2023-03-17T09:17:58Z'
      status: 'True'
      type: PodScheduled
  containerStatuses:
    - containerID: >-
        containerd://eaf8e09f0e2aceec6cb26e09406518a5d9851f94dfb8f8be3ce3e65ee47e282c
      image: 'me/laravel-api:v1.0.0'
      imageID: >-
        me/laravel-api@secret
      lastState:
        terminated:
          containerID: >-
            containerd://eaf8e09f0e2aceec6cb26e09406518a5d9851f94dfb8f8be3ce3e65ee47e282c
          exitCode: 1
          finishedAt: '2023-03-17T09:20:53Z'
          reason: Error
          startedAt: '2023-03-17T09:20:53Z'
      name: laravel-api-init
      ready: false
      restartCount: 5
      started: false
      state:
        waiting:
          message: >-
            back-off 2m40s restarting failed container=laravel-api-init
            pod=laravel-api-app-74b7d9584c-4dc9h_my-project(12327d67-cdf9-4387-afe8-3cf536531dd2)
          reason: CrashLoopBackOff
  hostIP: 192.168.1.8
  phase: Running
  podIP: 10.1.10.55
  podIPs:
    - ip: 10.1.10.55
  qosClass: BestEffort
  startTime: '2023-03-17T09:17:58Z'
</code></pre>
| <p>You didn't mention your k8s version. My answer might not be suitable for you if you're using a k8s version below v1.23.</p>
<p>Kubernetes can set up the permissions for you. Use <code>fsGroup</code> and <code>fsGroupChangePolicy</code> and k8s will take over the job for you.</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: laravel-api-app
    image: me/laravel-api:v1.0.0
    ports:
      - name: laravel
        containerPort: 8080
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: laravel-api-secret
      - configMapRef:
          name: laravel-api-config
    volumeMounts:
      - name: storage
        mountPath: /var/www/html/storage
# this part is new
securityContext:
  # user/group of nobody should have the highest possible id
  fsGroup: 65534
  fsGroupChangePolicy: "OnRootMismatch"
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">Related configuration specs from k8s</a></p>
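<p>One detail worth stressing: <code>fsGroup</code> and <code>fsGroupChangePolicy</code> are pod-level fields (part of the Pod's <code>securityContext</code>, not the container's), so in a Deployment they belong under <code>spec.template.spec.securityContext</code>, as a sibling of <code>containers</code>. A minimal sketch of the placement:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  template:
    spec:
      securityContext:          # pod-level security context
        fsGroup: 65534
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: laravel-api-app
          image: me/laravel-api:v1.0.0
          # ...
</code></pre>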
|
<p>I got a configuration of a Postgres deployment, something like this:</p>
<p>on the volumeMounts level</p>
<pre><code>volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgres-persistance-storage-new
</code></pre>
<p>on the volumes level</p>
<pre><code>volumes:
  - name: postgres-persistance-storage-new
    persistentVolumeClaim:
      claimName: postgres-pv-claim-new
</code></pre>
<p>the PersistentVolumeClaim spec</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim-new # name of PVC essential for identifying the storage data
  labels:
    app: postgres
    tier: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>My problem is the following: when I delete the PersistentVolumeClaim via a kubectl command and then launch my Postgres deployment YAML spec again,
the PersistentVolume always seems to still be there, as the logs in the Postgres container say:</p>
<pre><code>PostgreSQL Database directory appears to contain a database; Skipping initialization
</code></pre>
<p>How is this possible?</p>
| <p>When you delete a PVC, if there is a resource that uses it (for example, if the volume is attached to a Deployment with running Pods), it remains ACTIVE.</p>
<p>This is the reason:
<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection</a></p>
<p>When you reapply the YAML describing the Deployment, Kubernetes will perform a rolling update.</p>
<blockquote>
<p>Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></p>
<p>This means that your "old" Pod will remain active until the "new" one becomes Up&Running (but since the new one keeps failing, the "old" one will never be killed, and consequently the PVC will never be destroyed, continuing to maintain the application's configuration).</p>
<p>To conclude, I suggest you delete the resources (postgresql Deployment in this case) that use the deleted PVC before re-installing them.</p>
|
<p>Our project is undergoing a refactor to a micro-services architecture, and we are currently considering different API gateway solutions.</p>
<p>We did our research, looked at the official sites for several solutions, went over some technical comparisons of different solutions, and read articles about our top picks.</p>
<p>So far our main contenders are <strong>Apache APISIX</strong> and <strong>Kong</strong>, but we are quite torn between them and would like to get a general opinion from actual users.</p>
<p>Below are outlined the different properties and requirements of the project, I would appreciate it if any of you can point out some pros and cons of a solution you are familiar with in regard to them, and it would be great if someone facing similar requirements could share their experience with actually integrating one.</p>
<p><strong>General Info</strong></p>
<ul>
<li>The project is of medium scale, has an active user base, and sees daily use around the clock with an incoming traffic count of a few thousand per minute on the backend.</li>
<li>The project is hosted in a private network, and no cloud services are utilized, so we are looking for a good on-prem solution.</li>
<li>Looking for a rather lightweight solution.</li>
</ul>
<p><strong>Technical Info and Requirements</strong></p>
<ul>
<li>AD FS-based authentication.</li>
<li>Significant reliance on JWT.</li>
<li>Using WebSocket in some micro-services, specifically Socket.io.</li>
<li>Kubernetes deployment, supported by Helm.</li>
<li>Full-stack under Monorepo.</li>
<li>Repository and CI/CD are hosted and managed on GitLab.</li>
<li>The team is trained in several coding languages but prefers working mainly with Typescript as we use React for the front-end, and NestJS for the back-end.</li>
</ul>
<p>Thank you!</p>
| <p>Both Kong and Apache APISIX are popular and feature-rich API gateway solutions. Choosing the right one depends on your specific requirements and use case.</p>
<ol>
<li><p>API Management Features: Both Kong and Apache APISIX provide a wide range of API management features including API authentication, rate limiting, caching, SSL/TLS termination, request/response transformations, and more.</p>
</li>
<li><p>Scalability: Both solutions are built to scale horizontally and vertically. However, Apache APISIX uses a more lightweight and efficient architecture, making it a better option for high performance and low-latency workloads.</p>
</li>
<li><p>Both solutions have a rich ecosystem of plugins and extensions, and can be installed and configured easily.</p>
</li>
</ol>
<p>In summary, for use cases with large-scale, high performance, and low-latency workloads, Apache APISIX might be a better fit.</p>
<p>There is a comparison page that may help you: <a href="https://api7.ai/apisix-vs-kong" rel="nofollow noreferrer">https://api7.ai/apisix-vs-kong</a></p>
|
<p>What does the <code>maxReplicas</code> property mean in the pipeline yaml in Azure in context of the k8s deployment?</p>
<p>E.g. in <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli" rel="nofollow noreferrer">this</a> documentation the <code>maxReplicas: 10</code> is mentioned. But there is no explanation about what it means. At least I was not able to find one. Would be grateful if someone will help me to find the documentation on that.</p>
<p>I have two assumptions.</p>
<p>First, it means that we need to duplicate pods. I.e. with the <code>maxReplicas: 10</code> we may have up to 10 clusters with identical pods.</p>
<p>Second assumption, the <code>maxReplicas: 10</code> means that in a k8s cluster we can have no more than 10 pods.</p>
| <p>It refers to the Kubernetes Horizontal Pod Autoscaler.</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
<p>Example: <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli</a></p>
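<p>Concretely, <code>maxReplicas</code> is a field of a HorizontalPodAutoscaler: the upper bound on how many pod replicas the HPA may scale the targeted workload (e.g. a Deployment) up to. It is not a cluster-wide pod limit and not a number of clusters. A minimal sketch (names and numbers are placeholders):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10      # never scale the Deployment above 10 pod replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
</code></pre>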
|
<p>We have <a href="https://github.com/jonashackt/tekton-argocd-eks" rel="nofollow noreferrer">an EKS setup provisioned</a> where we use a Cloud Native Buildpacks based Tekton Pipeline. Our <code>PipelineRuns</code> are stuck and pending forever without getting a Pod resource. We created a PersistentVolumeClaim like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: buildpacks-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
</code></pre>
<p>Having a look into the events of this PVC after creation there are the following events indicating something is wrong with our EKS setup:</p>
<p>The command <code>kubectl describe pvc buildpacks-source-pvc</code> gives the following event messages:</p>
<pre><code>Name: buildpacks-source-pvc
Namespace: default
StorageClass: gp2
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
volume.kubernetes.io/selected-node: ip-999-99-99-99.eu-central-1.compute.internal
volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: affinity-assistant-0b3d266b91-0
affinity-assistant-53a7c08baf-0
affinity-assistant-a375f28de3-0
affinity-assistant-e8cb1a6e15-0
buildpacks-test-pipeline-run-9rz4l-fetch-repository-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m43s (x561 over 143m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
</code></pre>
<p>What is this EBS CSI thing and how do we get our cluster working as before?</p>
| <p>From EKS 1.23 on <a href="https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/" rel="noreferrer">a Container Storage Interface (CSI) driver</a> is needed in order to get your PersisentVolumeClaims served by a PersistentVolume as you are used to from earlier EKS versions.</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html" rel="noreferrer">The docs tell us</a>, what needs to be configured:</p>
<h2>Solution: Configure Amazon EBS CSI driver for working PersistentVolumes in EKS</h2>
<p>In essence we need to enable the AWS EBS CSI driver as an EKS addon. But beforehand we need to enable the IAM OIDC provider and create the IAM role for the EBS CSI driver. The easiest way to do both is to use <a href="https://github.com/weaveworks/eksctl" rel="noreferrer">eksctl</a> (other ways like <a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html" rel="noreferrer">using plain <code>aws</code> cli or the AWS GUI are described in the docs</a>).</p>
<h4>1.) Install eksctl</h4>
<p>We assume here that <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html" rel="noreferrer">the aws cli is installed and configured</a> - and you have access to your EKS cluster. To use <code>eksctl</code> we need to install it first. On a Mac use brew like:</p>
<pre><code>brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
</code></pre>
<p>or on Linux use:</p>
<pre><code>curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
</code></pre>
<h4>2.) Enable IAM OIDC provider</h4>
<p>A prerequisite for the EBS CSI driver to work is to have an existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. This IAM OIDC provider can be enabled with the following command:</p>
<pre><code>eksctl utils associate-iam-oidc-provider --region=eu-central-1 --cluster=YourClusterNameHere --approve
</code></pre>
<h4>3.) Create Amazon EBS CSI driver IAM role</h4>
<p>Now having <code>eksctl</code> in place, create the IAM role:</p>
<pre><code>eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster YourClusterNameHere \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve \
--role-only \
--role-name AmazonEKS_EBS_CSI_DriverRole
</code></pre>
<p>As you can see AWS maintains a managed policy for us we can simply use (<code>AWS maintains a managed policy, available at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy</code>). Only if you use encrypted EBS drives <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md#installation-1" rel="noreferrer">you need to additionally add configuration to the policy</a>.</p>
<p>The command...</p>
<blockquote>
<p>...deploys an AWS CloudFormation stack that creates an IAM role,
attaches the IAM policy to it, and annotates the existing
ebs-csi-controller-sa service account with the Amazon Resource Name
(ARN) of the IAM role.</p>
</blockquote>
<h4>4.) Add the Amazon EBS CSI add-on</h4>
<p>Now we can finally add the EBS CSI add-on. Therefor we also need the AWS Account id which we can obtain by running <code>aws sts get-caller-identity --query Account --output text</code> (see <a href="https://stackoverflow.com/questions/33791069/quick-way-to-get-aws-account-number-from-the-aws-cli-tools">Quick way to get AWS Account number from the AWS CLI tools?</a>). Now the <code>eksctl create addon</code> command looks like this:</p>
<pre><code>eksctl create addon --name aws-ebs-csi-driver --cluster YourClusterNameHere --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/AmazonEKS_EBS_CSI_DriverRole --force
</code></pre>
<p>Now your PersistentVolumeClaim should get the status <code>Bound</code> while a EBS volume got created for you - and the Tekton Pipeline should run again.</p>
|
<p>I am trying to install the Keycloak image from the Helm chart repository so that MariaDB Galera is used as the database.</p>
<p><strong>Installation</strong></p>
<pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade keycloak bitnami/keycloak --create-namespace --install --namespace default --values values-keycloak.yaml --version 13.3.0
</code></pre>
<p><strong>values-keycloak.yaml</strong></p>
<pre><code>global:
  storageClass: "hcloud-volumes"
auth:
  adminUser: user
  adminPassword: "user"
tls:
  enabled: true
  autoGenerated: true
production: true
extraEnvVars:
  - name: KC_DB
    value: 'mariadb'
  - name: KC_DB_URL
    value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;'
replicaCount: 1
service:
  type: ClusterIP
ingress:
  enabled: true
  hostname: example.com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
  tls: true
postgresql:
  enabled: false
externalDatabase:
  host: "mariadb-galera.default.svc.cluster.local"
  port: 3306
  user: bn_keycloak
  database: bitnami_keycloak
  password: "password"
</code></pre>
<p><strong>Error</strong></p>
<pre><code>kubectl logs -n default keycloak-0
keycloak 23:50:06.59
keycloak 23:50:06.59 Welcome to the Bitnami keycloak container
keycloak 23:50:06.60 Subscribe to project updates by watching https://github.com/bitnami/containers
keycloak 23:50:06.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues
keycloak 23:50:06.60
keycloak 23:50:06.60 INFO ==> ** Starting keycloak setup **
keycloak 23:50:06.62 INFO ==> Validating settings in KEYCLOAK_* env vars...
keycloak 23:50:06.66 INFO ==> Trying to connect to PostgreSQL server mariadb-galera.default.svc.cluster.local...
keycloak 23:50:06.69 INFO ==> Found PostgreSQL server listening at mariadb-galera.default.svc.cluster.local:3306
keycloak 23:50:06.70 INFO ==> Configuring database settings
keycloak 23:50:06.78 INFO ==> Enabling statistics
keycloak 23:50:06.79 INFO ==> Configuring http settings
keycloak 23:50:06.82 INFO ==> Configuring hostname settings
keycloak 23:50:06.83 INFO ==> Configuring cache count
keycloak 23:50:06.85 INFO ==> Configuring log level
keycloak 23:50:06.89 INFO ==> Configuring proxy
keycloak 23:50:06.91 INFO ==> Configuring Keycloak HTTPS settings
keycloak 23:50:06.94 INFO ==> ** keycloak setup finished! **
keycloak 23:50:06.96 INFO ==> ** Starting keycloak **
Appending additional Java properties to JAVA_OPTS: -Djgroups.dns.query=keycloak-headless.default.svc.cluster.local
Changes detected in configuration. Updating the server image.
Updating the configuration and installing your custom providers, if any. Please wait.
2023-03-18 23:50:13,551 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice
2023-03-18 23:50:14,494 WARN [org.keycloak.services] (build-22) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice
2023-03-18 23:50:25,703 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 15407ms
Server configuration updated and persisted. Run the following command to review the configuration:
kc.sh show-config
Next time you run the server, just run:
kc.sh start --optimized -cf=/opt/bitnami/keycloak/conf/keycloak.conf
2023-03-18 23:50:28,160 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true
2023-03-18 23:50:30,398 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-03-18 23:50:31,267 WARN [io.agroal.pool] (agroal-11) Datasource '<default>': Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;
2023-03-18 23:50:31,269 WARN [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator] (JPA Startup Thread: keycloak-default) HHH000342: Could not obtain connection to query metadata: java.sql.SQLNonTransientConnectionException: Socket fail to connect to host:address=(host=mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;)(port=3306)(type=primary). mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;
at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:136)
at org.mariadb.jdbc.client.impl.StandardClient.<init>(StandardClient.java:103)
at org.mariadb.jdbc.Driver.connect(Driver.java:70)
at org.mariadb.jdbc.MariaDbDataSource.getXAConnection(MariaDbDataSource.java:225)
at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:232)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:535)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:516)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.UnknownHostException: mariadb-galera.default.svc.cluster.local;databaseName=bitnami_keycloak;
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:567)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
at java.base/java.net.Socket.connect(Socket.java:633)
at org.mariadb.jdbc.client.impl.ConnectionHelper.connectSocket(ConnectionHelper.java:130)
... 11 more
</code></pre>
<p>I would like to get the correct connection. Perhaps this link will help (<a href="https://passe-de-mode.uedasoft.com/tips/software/server/keycloak/keycloak01.html#conclusion" rel="nofollow noreferrer">https://passe-de-mode.uedasoft.com/tips/software/server/keycloak/keycloak01.html#conclusion</a>), but I can't figure it out.</p>
| <p>Try using</p>
<pre><code>jdbc:mariadb://host/database
jdbc:mariadb://mariadb-galera.default.svc.cluster.local/bitnami_keycloak
</code></pre>
<p>as KC_DB_URL value.</p>
<p><a href="https://www.keycloak.org/server/containers" rel="nofollow noreferrer">https://www.keycloak.org/server/containers</a> (chapter “Relevant options”)</p>
<p><a href="https://github.com/keycloak/keycloak/blob/fb315b57c3c308d5d5e6646b8cce1f86abf1d523/docs/tests-db.md#mariadb" rel="nofollow noreferrer">https://github.com/keycloak/keycloak/blob/fb315b57c3c308d5d5e6646b8cce1f86abf1d523/docs/tests-db.md#mariadb</a> (“Run tests:” step)</p>
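<p>Applied to the values file from the question, the relevant part would look roughly like this (a sketch; only the <code>KC_DB_URL</code> value changes, and the port can optionally be appended as <code>host:3306</code>):</p>
<pre><code>extraEnvVars:
  - name: KC_DB
    value: 'mariadb'
  - name: KC_DB_URL
    value: 'jdbc:mariadb://mariadb-galera.default.svc.cluster.local/bitnami_keycloak'
</code></pre>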
|
<p>The EKS docs in the page <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html" rel="nofollow noreferrer">Amazon EKS node IAM role</a> state that before you create worker nodes, you must create a role with the following policies:</p>
<ul>
<li>AmazonEKSWorkerNodePolicy</li>
<li>AmazonEC2ContainerRegistryReadOnly</li>
<li>AmazonEKS_CNI_Policy</li>
</ul>
<p>Regarding the last one, the docs state that:</p>
<blockquote>
<p>Rather than attaching the policy to this role however, we recommend that you attach the policy to a separate role used specifically for the Amazon VPC CNI add-on</p>
</blockquote>
<p>Can someone explain why is this recommended?</p>
| <p>The reason why it is recommended to attach the AmazonEKS_CNI_Policy to a separate role used specifically for the Amazon VPC CNI add-on is to follow the principle of least privilege.</p>
<p>The Amazon VPC CNI (Container Network Interface) is a plugin for Kubernetes that enables networking between pods and the rest of the cluster in a VPC (Virtual Private Cloud) environment. This plugin needs certain permissions to function properly, such as creating and managing network interfaces and route tables.</p>
<p>By creating a separate role for the Amazon VPC CNI add-on, you can ensure that this plugin has only the necessary permissions to perform its specific tasks, and not other permissions that may be included in the AmazonEKSWorkerNodePolicy. This helps to reduce the risk of accidental or intentional misuse of privileges, and makes it easier to audit and manage permissions for different components of your cluster.</p>
<p>Additionally, separating the Amazon VPC CNI permissions from the worker node IAM role can also help with troubleshooting, as it allows you to isolate issues related to the network plugin from other potential problems that may affect the worker nodes or other components of your cluster.</p>
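<p>As an illustration of what that separation ends up looking like (a sketch; the account ID and role name are placeholders), the VPC CNI's <code>aws-node</code> service account gets annotated with its own dedicated IAM role via IAM Roles for Service Accounts (IRSA), where that role has only the <code>AmazonEKS_CNI_Policy</code> attached, while the node role keeps the other two policies:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system
  annotations:
    # dedicated role with only AmazonEKS_CNI_Policy attached
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_CNI_Role
</code></pre>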
|
<p>Can I lint only the changes or pull requests touching my Kubernetes files, instead of linting all the Kubernetes files every time I make any change in the Kubernetes folder?</p>
<p>I was trying this <a href="https://docs.kubelinter.io/#/?id=using-docker" rel="nofollow noreferrer">https://docs.kubelinter.io/#/?id=using-docker</a></p>
| <p>I'm assuming that you are referring to the <a href="https://github.com/marketplace/actions/kube-linter" rel="nofollow noreferrer"><em>kube-linter</em> GitHub action</a>, because that's the one <a href="https://docs.kubelinter.io/#/?id=kubelinter-github-action" rel="nofollow noreferrer">referenced by the kube-linter documentation</a>.</p>
<p>Yes, that action can be given individual files, the <code>directory</code> parameter can be a single file, even though the name doesn't suggest this.</p>
<p>See the <a href="https://github.com/marketplace/actions/kube-linter#parameters" rel="nofollow noreferrer">documented parameters</a>:</p>
<blockquote>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Parameter name</th>
<th>Required?</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>directory</code></td>
<td><strong>(required)</strong></td>
<td><em><strong>Path of file</strong></em> or directory to scan, absolute or relative to the root of the repo.</td>
</tr>
</tbody>
</table>
</div></blockquote>
<p>(Bold italics emphasis mine).</p>
<p>The parameter is simply given to the <code>kube-linter</code> command line; see the <a href="https://github.com/stackrox/kube-linter-action/blob/3e0698d47a525061e50c1380af263c18824c748b/action.yml#L62-L68" rel="nofollow noreferrer">linting step in the <code>action.yml</code> definition file</a>:</p>
<blockquote>
<pre class="lang-bash prettyprint-override"><code>./kube-linter $CONFIG lint "${{ inputs.directory }}" --format "${{ inputs.format }}"
</code></pre>
</blockquote>
<p><code>$CONFIG</code> is set to <code>--config <filename></code> if you provided a <code>config</code> parameter.</p>
<p>In short, it acts <em>exactly</em> like <a href="https://docs.kubelinter.io/#/using-kubelinter?id=running-locally" rel="nofollow noreferrer">running the tool locally</a>, which explicitly states that it can take either an individual file or a directory:</p>
<blockquote>
<ul>
<li><p>The path to your Kubernetes <code>yaml</code> file:</p>
<pre><code>kube-linter lint /path/to/yaml-file.yaml
</code></pre>
</li>
<li><p>The path to a directory containing your Kubernetes <code>yaml</code> files:</p>
<pre><code>kube-linter lint /path/to/directory/containing/yaml-files/
</code></pre>
</li>
</ul>
</blockquote>
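<p>So a workflow step can point the action at one manifest at a time, for example (a sketch; the file path and version tag are placeholders, and computing which files actually changed in the pull request, e.g. via <code>git diff</code> or a changed-files step, is still up to you):</p>
<pre><code>- name: Lint a single manifest
  uses: stackrox/kube-linter-action@v1
  with:
    directory: deploy/my-app/deployment.yaml   # a single file works here, despite the parameter name
</code></pre>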
|
<p>Is it possible to run a deployment job that downloads a Helm chart from a Bitbucket repository and applies the Helm chart to Kubernetes:</p>
<pre><code>pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    withKubeConfig([credentialsId: 'my_kubeconfig']) {
                        sh '''
                        helm upgrade --install // download and install helm chart here
                        '''
                    }
                }
            }
        }
    } //
}
</code></pre>
<p>If possible, I want to first download the Helm chart from the Bitbucket repository and then use the Kubernetes service account to apply the chart.</p>
| <p>Yes, you are right: first you need to clone the Helm template repo and build the files.</p>
<p>You can review this file; although I am not using Helm, most of the steps are similar, you might just need to <strong>tweak</strong> the last step: <a href="https://github.com/harsh4870/node-js-aws-codebuild-basic-ci-cd/blob/main/Jenkinsfile" rel="nofollow noreferrer">https://github.com/harsh4870/node-js-aws-codebuild-basic-ci-cd/blob/main/Jenkinsfile</a></p>
|
<p>So I am creating a system composed of different components that are installed via Helm charts. Since I needed different customizations for each of the charts, I created my own separate repositories, copied the charts there and added the customizations.</p>
<p>The question is: how do I conveniently upgrade the charts versions, while keeping the customizations?</p>
<p>For some of the components it was an easy task, since custom templates were moved to different folder and when upgrade is needed I simply replace old files with the new one.</p>
<p>The issue is that in some charts I needed to alter the original files directly, for example adding necessary stuff in the middle of a template. So, during the upgrade of such a component I have to go through all the files, compare them, remember which changes are mine and then update the whole repository. So yes, you guessed it: it's a manual, inconvenient and error-prone task.</p>
<p>I tried looking for good practices how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice.</p>
| <p>Unfortunately it often happens that the Helm Chart repositories of an application aren't suitable for the environment in which you are going to install them.</p>
<p>I suggest modifying the templates following the patterns of how they were written and sending PRs to the application's GitHub repository.</p>
<p>Unless these are "reasonable" configurations that upstream would accept (rather than things that complicate the application for whatever reason), there's no other way than to do a "git diff" before downloading the updated repo.</p>
|
<p>I am using a StatefulSet to deploy my application and I have 10 replicas. Because each pod waits for the previous pod to be ready, it takes a long time to spin up my cluster. I found the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies" rel="nofollow noreferrer">podManagementPolicy option</a> on the Kubernetes documentation. I want to set <code>podManagementPolicy</code> to <code>Parallel</code> so my pods don't wait for each other (which I don't need) and my build time is shorter.</p>
<p>However, I am getting the following error when I try to set <code>podManagementPolicy</code> to <code>Parallel</code>:</p>
<blockquote>
<p>The StatefulSet "xxx" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden</p>
</blockquote>
<p>Here is the yaml file for my StatefulSet:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: xxx
  serviceName: "xxx"
  replicas: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
        - name: xxx
          image: xxx.com/xxx:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: dockerregistrykey
</code></pre>
<p>I am using a <a href="https://www.linode.com/products/kubernetes/" rel="nofollow noreferrer">Linode LKE</a> cluster and the Kubernetes version is 1.25.</p>
| <p>It looks like you are on K8s version <strong>1.24</strong> or above; as mentioned in the error, you can only update a few fields of a <strong>StatefulSet</strong>, unlike a <strong>Deployment</strong> or <strong>Pod</strong>.</p>
<p>So the only solution is to <strong>delete</strong> the StatefulSet and <strong>re-apply</strong> it with the new changes.</p>
<p>You can also create a <strong>new</strong> <strong>StatefulSet</strong> with a different name and keep the <strong>labels</strong> the same as the existing running one.</p>
<p>Once the new StatefulSet is up & running, you can remove the <strong>older</strong> one and the <strong>Service</strong> will forward the traffic to the new running replicas; guessing there won't be any issues with data, as you mentioned it being fine to run in <strong>parallel</strong>.</p>
<p>In case you have a requirement to use the existing <strong>PVCs</strong> to store the <strong>data</strong>, deleting the <strong>StatefulSet</strong> is the only solution I am seeing.</p>
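<p>A sketch of the "new StatefulSet with a different name" approach mentioned above (names are placeholders): the Service keeps selecting on the shared label, so pods of both the old and the new StatefulSet receive traffic during the switchover:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: xxx
spec:
  selector:
    app: xxx          # both StatefulSets' pods carry this label
  ports:
    - port: 8000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx-v2        # new name, so it can be created alongside the old one
spec:
  podManagementPolicy: Parallel
  serviceName: "xxx"
  replicas: 10
  selector:
    matchLabels:
      app: xxx
  template:
    metadata:
      labels:
        app: xxx
    spec:
      containers:
        - name: xxx
          image: xxx.com/xxx:latest
</code></pre>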
|
<p>I am deploying version 1.16 but the pods are crashing; below are the pod errors.</p>
<p>istiod pod:</p>
<p>2023-03-21T11:58:09.768255Z info kube controller "extensions.istio.io/v1alpha1/WasmPlugin" is syncing... controller=crd-controller
2023-03-21T11:58:09.868998Z info kube controller "extensions.istio.io/v1alpha1/WasmPlugin" is syncing... controller=crd-controller
2023-03-21T11:58:09.887383Z info klog k8s.io/client-go@v0.25.2/tools/cache/reflector.go:169: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User "system:serviceaccount:istio-system:istiod-service-account" cannot list resource "wasmplugins" in API group "extensions.istio.io" at the cluster scope
2023-03-21T11:58:09.887472Z error watch error in cluster Kubernetes: failed to list *v1alpha1.WasmPlugin: wasmplugins.extensions.istio.io is forbidden: User "system:serviceaccount:istio-system:istiod-service-account" cannot list resource "wasmplugins" in API group "extensions.istio.io" at the cluster scope</p>
<p>external-dns:
time="2023-03-21T12:17:22Z" level=fatal msg="failed to sync cache: timed out waiting for the condition"</p>
<p>Version
istioctl version:</p>
<p>client version: 1.17.1
control plane version: 1.16.2
data plane version: none</p>
<p>kubectl version --short:</p>
<p>Client Version: v1.24.10
Kustomize Version: v4.5.4
Server Version: v1.24.10-eks-48e63af</p>
| <p>The error is self-explanatory: the ServiceAccount istiod-service-account has no privileges on the CRD extensions.istio.io/v1alpha1/WasmPlugin.</p>
<p>The solution to your problem is documented here:
<a href="https://github.com/istio/istio/issues/36886#issue-1107794465" rel="nofollow noreferrer">https://github.com/istio/istio/issues/36886#issue-1107794465</a></p>
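<p>In short, the fix is to grant the missing permissions. If you want to patch it manually while you sort out the version mismatch, a rough sketch of the RBAC (the resource and ServiceAccount names are taken from the error message; the ClusterRole name is hypothetical) would be:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istiod-wasmplugins-reader   # hypothetical name
rules:
  - apiGroups: ["extensions.istio.io"]
    resources: ["wasmplugins"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istiod-wasmplugins-reader   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istiod-wasmplugins-reader
subjects:
  - kind: ServiceAccount
    name: istiod-service-account
    namespace: istio-system
</code></pre>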
|
<p>I'm using NGINX (v4.5.2) on a Kubernetes cluster, and I'm trying to configure it to redirect non-www URLs to the www equivalent. For example...</p>
<blockquote>
<p><a href="https://example.com" rel="nofollow noreferrer">https://example.com</a> -> <a href="https://www.example.com" rel="nofollow noreferrer">https://www.example.com</a><br />
<a href="https://example.com/about" rel="nofollow noreferrer">https://example.com/about</a> -> <a href="http://www.example.com/about" rel="nofollow noreferrer">http://www.example.com/about</a></p>
</blockquote>
<p>So this is what I've added to the annotations (I'll carry on using <code>example.com</code> as the URL host)...</p>
<pre><code>annotations:
...
nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($host = 'example.com' ) {
rewrite ^ https://www.$request_uri permanent;
}
</code></pre>
<p>However, if I now navigate to <code>https://example.com</code> I get redirected to <code>https://www.</code>.</p>
<p><a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri" rel="nofollow noreferrer">The docs</a> say <code>$request_uri</code> contains ...</p>
<blockquote>
<p>full original request URI (with arguments)</p>
</blockquote>
<p>... so why is <code>$request_uri</code> apparently empty in my case?</p>
| <p><code>$request_uri</code> contains the path and query arguments, but not the host, which is why your redirect target collapses to <code>https://www.</code>. To get your desired redirect you probably want to use <code>$host$request_uri</code>, as sketched below.
For examples of <code>$request_uri</code> usage, see <a href="https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/" rel="nofollow noreferrer">https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/</a>, or <a href="https://www.webhosting24.com/understanding-nginx-request_uri/" rel="nofollow noreferrer">https://www.webhosting24.com/understanding-nginx-request_uri/</a>, which also explains it quite well.</p>
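<p>In other words, a sketch of the corrected snippet (keeping your annotation layout, with <code>example.com</code> still as a placeholder host):</p>
<pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'example.com' ) {
    rewrite ^ https://www.$host$request_uri permanent;
  }
</code></pre>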
|
<p>I have an application in a Docker container which connects with the host application using the SCTP protocol. When this container is deployed in a Kubernetes pod, connectivity to this pod from another pod inside the cluster works fine.
I have tried exposing this pod externally using a LoadBalancer service and a NodePort service. When the host application tries to connect to this pod, I get an intermittent "Connection Reset By Peer" error, sometimes after the 1st request and sometimes after the 3rd request.
I have also tried other SCTP-based demo containers besides my application and hit the same issue, where after a certain number of requests I get a connection reset by peer error. So it isn't a problem with my application.</p>
<p>My application is listening to the correct port. Below is the output of the command "netstat -anp" inside the pod.</p>
<pre><code>Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 10.244.0.27:80 0.0.0.0:* LISTEN 4579/./build/bin/AM
sctp 10.244.0.27:38412 LISTEN 4579/./build/bin/AM
</code></pre>
<p>My Service file is given below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
clusterIP: 10.100.0.2
selector:
app: my-app
type: NodePort
ports:
- name: sctp
protocol: SCTP
port: 38412
targetPort: 38412
nodePort : 31000
- name: tcp
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>I have this whole setup on Minikube. I haven't used any CNI. I am stuck due to this. Am I missing something? I have only been working with K8s for the last 2 weeks. Please help with this issue, and if possible mention any resource regarding SCTP on Kubernetes, since I could find very little.</p>
<p>The following is the tcpdump collected from inside the pod running the sctp connection.</p>
<pre><code>tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:59:02.410219 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100)
10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134]
20:59:02.410260 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324)
amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342]
20:59:02.410308 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296)
10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO]
20:59:02.410348 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [COOKIE ACK]
20:59:02.410552 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 100)
10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664]
20:59:02.410590 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 324)
amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521]
20:59:02.410640 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 296)
10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [COOKIE ECHO]
20:59:02.410673 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [COOKIE ACK]
20:59:04.643163 IP (tos 0x2,ECT(0), ttl 64, id 58512, offset 0, flags [DF], proto SCTP (132), length 92)
amf-6584c544-cvvrs.31000 > host.minikube.internal.5369: sctp (1) [HB REQ]
20:59:05.155162 IP (tos 0x2,ECT(0), ttl 64, id 58513, offset 0, flags [DF], proto SCTP (132), length 92)
amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ]
20:59:05.411135 IP (tos 0x2,ECT(0), ttl 64, id 60101, offset 0, flags [DF], proto SCTP (132), length 92)
amf-6584c544-cvvrs.31000 > charles-02.41024: sctp (1) [HB REQ]
20:59:05.411293 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
charles-02.41024 > amf-6584c544-cvvrs.31000: sctp (1) [ABORT]
20:59:06.179159 IP (tos 0x2,ECT(0), ttl 64, id 58514, offset 0, flags [DF], proto SCTP (132), length 92)
amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ]
20:59:06.403172 IP (tos 0x2,ECT(0), ttl 64, id 58515, offset 0, flags [DF], proto SCTP (132), length 92)
amf-6584c544-cvvrs.31000 > host.minikube.internal.5369: sctp (1) [HB REQ]
20:59:06.695155 IP (tos 0x2,ECT(0), ttl 64, id 58516, offset 0, flags [DF], proto SCTP (132), length 92)
amf-6584c544-cvvrs.31000 > charles-02.5369: sctp (1) [HB REQ]
20:59:06.695270 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
charles-02.5369 > amf-6584c544-cvvrs.31000: sctp (1) [ABORT]
20:59:09.584088 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 116)
10.244.0.1.41024 > amf-6584c544-cvvrs.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c]
20:59:09.584112 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
amf-6584c544-cvvrs.31000 > 10.244.0.1.41024: sctp (1) [ABORT]
20:59:10.530610 IP (tos 0x2,ECT(0), ttl 63, id 1, offset 0, flags [DF], proto SCTP (132), length 40)
10.244.0.1.5369 > amf-6584c544-cvvrs.31000: sctp (1) [SHUTDOWN]
20:59:10.530644 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
amf-6584c544-cvvrs.31000 > 10.244.0.1.5369: sctp (1) [ABORT]
</code></pre>
<p>The following is the tcpdump collected from the host trying to connect.</p>
<pre><code>tcpdump: listening on br-c54f52300570, link-type EN10MB (Ethernet), capture size 262144 bytes
02:29:02.410177 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100)
charles-02.58648 > 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2798567257] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 2733196134]
02:29:02.410282 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324)
192.168.49.2.31000 > charles-02.58648: sctp (1) [INIT ACK] [init tag: 1165596116] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 4194554342]
02:29:02.410299 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296)
charles-02.58648 > 192.168.49.2.31000: sctp (1) [COOKIE ECHO]
02:29:02.410360 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
192.168.49.2.31000 > charles-02.58648: sctp (1) [COOKIE ACK]
02:29:02.410528 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 100)
charles-02.54336 > 192.168.49.2.31000: sctp (1) [INIT] [init tag: 2156436948] [rwnd: 106496] [OS: 2] [MIS: 100] [init TSN: 823324664]
02:29:02.410610 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 324)
192.168.49.2.31000 > charles-02.54336: sctp (1) [INIT ACK] [init tag: 2865549963] [rwnd: 106496] [OS: 2] [MIS: 2] [init TSN: 1236428521]
02:29:02.410630 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 296)
charles-02.54336 > 192.168.49.2.31000: sctp (1) [COOKIE ECHO]
02:29:02.410686 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
192.168.49.2.31000 > charles-02.54336: sctp (1) [COOKIE ACK]
02:29:04.643276 IP (tos 0x2,ECT(0), ttl 63, id 58512, offset 0, flags [DF], proto SCTP (132), length 92)
192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ]
02:29:04.643303 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT]
02:29:05.155288 IP (tos 0x2,ECT(0), ttl 63, id 58513, offset 0, flags [DF], proto SCTP (132), length 92)
192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ]
02:29:05.155322 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT]
02:29:06.179324 IP (tos 0x2,ECT(0), ttl 63, id 58514, offset 0, flags [DF], proto SCTP (132), length 92)
192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ]
02:29:06.179376 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT]
02:29:06.403290 IP (tos 0x2,ECT(0), ttl 63, id 58515, offset 0, flags [DF], proto SCTP (132), length 92)
192.168.49.2.31000 > charles-02.5369: sctp (1) [HB REQ]
02:29:06.403332 IP (tos 0x2,ECT(0), ttl 64, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
charles-02.5369 > 192.168.49.2.31000: sctp (1) [ABORT]
02:29:09.584056 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 116)
charles-02.58648 > 192.168.49.2.31000: sctp (1) [DATA] (B)(E) [TSN: 2733196134] [SID: 0] [SSEQ 0] [PPID 0x3c]
02:29:09.584132 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
192.168.49.2.31000 > charles-02.58648: sctp (1) [ABORT]
02:29:10.530566 IP (tos 0x2,ECT(0), ttl 64, id 1, offset 0, flags [DF], proto SCTP (132), length 40)
charles-02.54336 > 192.168.49.2.31000: sctp (1) [SHUTDOWN]
02:29:10.530668 IP (tos 0x2,ECT(0), ttl 63, id 0, offset 0, flags [DF], proto SCTP (132), length 36)
192.168.49.2.31000 > charles-02.54336: sctp (1) [ABORT]
</code></pre>
| <p>You cannot expose port 38412 itself as a NodePort, because node ports must fall within the configured range.</p>
<blockquote>
<p>If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag <strong>(default: 30000-32767)</strong>. Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p>
<p>Take a look at this link to understand how to translate the port:
<a href="https://stackoverflow.com/questions/71100744/unable-to-expose-sctp-server-running-in-a-kubernetes-pod-using-nodeport">Unable to expose SCTP server running in a kubernetes pod using NodePort</a></p>
<p>Also, make sure you are using Calico as a network plugin (minimum version 3.3).</p>
<blockquote>
<p>Kubernetes 1.12 includes alpha Stream Control Transmission Protocol (SCTP) support. Calico v3.3 has been updated to support SCTP if included in your network policy spec.</p>
</blockquote>
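<p>If you are on a reasonably recent minikube, one way to switch to Calico (a sketch, assuming your minikube version supports the built-in <code>--cni</code> flag) is to recreate the cluster with it enabled:</p>
<pre class="lang-bash prettyprint-override"><code>minikube delete
minikube start --cni=calico
</code></pre>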
|
<p>I am building a Kubernetes cluster using kubeadm and have an issue with a single node.<br />
The worker nodes are running with sub-interfacing and policy based routing, which work as intended; however, out of the 4 worker nodes, if pods are moved to one of them, they fail liveness and readiness checks over http.<br />
I am using Kubernetes version 1.26.1, calico 3.25.0, metallb 0.13.9, and ingress-nginx 4.5.0.
The cluster stood up with little issue, apart from getting the policy-based routing on the nodes worked out. Calico and MetalLB stood up and work as well.
The issue now is when I stand up the ingress-nginx controllers and force the pods onto a specific worker node. Standing them up and running them on the other nodes works and I can curl the LoadBalancer IP; however, while testing, when the ingress-nginx pods are moved to one specific node, the liveness and readiness checks fail. Moving the pods back to any other worker node, they come up and run just fine.
I've been verifying the routes and iptables on all the nodes, as well as watching the interfaces via tcpdump, but I've not narrowed down the issue.</p>
<p>For the simple things:</p>
<ul>
<li>kernel parameters and loaded modules between the nodes are the same</li>
<li>No logs in messages/crio is showing an issue with starting the pod</li>
<li>the calico and metallb pods are working on the problem node</li>
<li>I've rebuilt the cluster since noticing the issue, and prior builds cert-manager was having issues on the node, as well as a few other random test deployments I've tried</li>
</ul>
<ul>
<li>From within the pods while they are running, I can hit external websites via curl (DNS and outbound traffic work)</li>
<li>Using tcpdump on the 'any' interface of the problem node, I can see the pod and the Kubernetes internal API IP communicate</li>
<li>I can't hit the pod's IP, the service IP, or anything else from the problem node or another member node</li>
<li>The namespace events aren't showing any issues except for the liveness and readiness probes failing</li>
<li>The endpoints for the services aren't being filled while on the problem node (although this isn't a surprise)</li>
<li>Watching the traffic over the vxlan.calico interface isn't showing only one-way traffic - there are responses to traffic that is making it through</li>
</ul>
<p>I'm at a loss on where to look for the root issue. This has been going on for over a week and I could use some help.</p>
| <p>I found out what I was doing to cause the issue in the first place, so will document it just in case someone runs across the same scenario.</p>
<p>Some more background on this, as it is very niche. Due to some limitations we face, the worker nodes have 1 physical interface which is broken out into 2 additional sub-interfaces to allow for VLAN-tagged traffic. This being the case, I wrote iproute policy rules to direct traffic between the logical interfaces. In summary, eth2 (the one actually cabled up) has logical interfaces eth2, eth2.3 and eth2.4, all on different subnets.</p>
<p>The issue I caused was writing rules for the primary interface, eth2. This was causing the kubelet traffic for liveness and readiness probes to be mis-routed and not actually follow the kube-proxy iptables rules and calico's felix routing rules. Once the policy rules for the primary interface were removed and the pods restarted (this last bit was more of my impatience) traffic flowed as it should and the pods came up and the probes finished satisfactorily.</p>
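<p>For illustration only, the shape of the final rule set looked roughly like this (the subnets and table names below are placeholders, not the real ones from my cluster):</p>
<pre class="lang-bash prettyprint-override"><code># Policy rules only for the tagged sub-interfaces (eth2.3 / eth2.4)
ip rule add from 192.0.2.0/24 lookup vlan3
ip rule add from 198.51.100.0/24 lookup vlan4

# Deliberately NO "ip rule add from <eth2 subnet> ..." for the primary interface,
# so kubelet/probe traffic stays on the main table and follows the
# kube-proxy iptables rules and Calico felix routes.
ip rule show
</code></pre>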
|
<p>I am currently working on Spring micro-service(Eureka Implementation) project. To manage the distributed configuration we are using Consul KV. We are deploying services on Kubernetes cluster.</p>
<p>The issue I am facing is that whenever I restart the Consul cluster, it deletes all the KV data. I am creating the Kubernetes cluster locally with a Docker image using a Deployment.yaml file.
Please refer to the Deployment.yaml file for Consul below.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: consul
labels:
app: consul
spec:
clusterIP: None
ports:
- port: 8500
name: consul
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 1
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
containers:
- name: consul
image: hashicorp/consul:latest
imagePullPolicy: Always
ports:
- containerPort: 8500
---
apiVersion: v1
kind: Service
metadata:
name: consul-lb
labels:
app: consul
spec:
selector:
app: consul
type: NodePort
ports:
- port: 80
targetPort: 8500
</code></pre>
<p>After some research I found that we can specify the -data-dir location in config, so I have modified StatefulSet kind yaml as below:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 1
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
containers:
- name: consul
image: hashicorp/consul:latest
imagePullPolicy: Always
ports:
- containerPort: 8500
args:
- "agent"
- "-server"
- "-data-dir=/home/consul/data"
</code></pre>
<p>But after this the Consul UI does not start, so I would like some help resolving this so that the data is kept even after I delete the Consul cluster.
PS: I tried deploying the cluster with Helm, and it was persisting the data, but I did not know how to make that cluster a StatefulSet so I can refer to it from other services with a static URL.
Thanks!</p>
| <p>Please note that k8s <code>pods</code> are by default ephemeral even if you deploy them as <code>StatefulSet</code>.</p>
<p><code>StatefulSet</code> gives you <code>pod</code>s with defined names, e.g. <code>consul-0</code> rather than the standard <code>consul-<<random string>></code>. It also keeps track of where to deploy a <code>pod</code> in case you have different zones and you need to deploy the <code>pod</code> in the same zone as its <code>storage</code>.</p>
<p>What is missing in your manifest are the <code>volumeMounts</code> and <code>volumeClaimTemplates</code> sections. If you set your data directory to <code>/home/consul/data</code>, your manifest should look similar to this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 1
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
containers:
- name: consul
image: hashicorp/consul:latest
imagePullPolicy: Always
ports:
- containerPort: 8500
args:
- "agent"
- "-server"
- "-data-dir=/home/consul/data"
volumeMounts:
- name: consul-data
mountPath: /home/consul/data
volumeClaimTemplates: # volume claim template will create volume for you so you don't need to define PVC
- metadata:
name: consul-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class" # you can get this with kubectl get sc
resources:
requests:
storage: 1Gi
</code></pre>
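<p>Once the volume is in place you can verify persistence with a quick smoke test, for example (a sketch, assuming the pod is named <code>consul-0</code>, which is what the StatefulSet above will call it):</p>
<pre class="lang-bash prettyprint-override"><code># Write a key, delete the pod, then check the key survives the restart
kubectl exec consul-0 -- consul kv put test/persistence "hello"
kubectl delete pod consul-0
# wait for consul-0 to be Running again, then:
kubectl exec consul-0 -- consul kv get test/persistence
</code></pre>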
<p>Regarding your second problem with the <code>consul</code> UI, I can't help much since I have never used <code>consul</code>, but I would advise deploying the <code>helm chart</code> once again and checking how the arguments are passed there.</p>
|
<p>I have a local Kubernetes created by Rancher Desktop. I have deployed a named Cloudflared Tunnel based on <a href="https://github.com/cloudflare/argo-tunnel-examples/tree/86a2dccc880669ef3b5f9f2e6c2f034242c08f12/named-tunnel-k8s" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>I recently started to get error:</p>
<blockquote>
<p>failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See <a href="https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size" rel="nofollow noreferrer">https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size</a> for details.</p>
</blockquote>
<p>Note this does not affect the actual functioning of the Cloudflared Tunnel; it is more like a warning. However, I do hope to fix it.</p>
<p>I have read the content in the link. However, this is running in a pod, so I am not sure how to fix it.</p>
<p>Below is full log:</p>
<pre class="lang-bash prettyprint-override"><code>2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF Starting tunnel tunnelID=c9aa4140-fee8-4862-a479-3c1faacbd816
2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF Version 2023.3.1
2023-03-18 00:27:51.450Z 2023-03-18T00:27:51Z INF GOOS: linux, GOVersion: go1.19.7, GoArch: arm64
2023-03-18 00:27:51.451Z 2023-03-18T00:27:51Z INF Settings: map[config:/etc/cloudflared/config/config.yaml cred-file:/etc/cloudflared/creds/credentials.json credentials-file:/etc/cloudflared/creds/credentials.json metrics:0.0.0.0:2000 no-autoupdate:true]
2023-03-18 00:27:51.453Z 2023-03-18T00:27:51Z INF Generated Connector ID: a2d07b8a-3343-4b28-bbb5-a0cc951d5093
2023-03-18 00:27:51.453Z 2023-03-18T00:27:51Z INF Initial protocol quic
2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z INF ICMP proxy will use 10.42.0.32 as source for IPv4
2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z INF ICMP proxy will use fe80::3c91:31ff:fe74:68ee in zone eth0 as source for IPv6
2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z WRN The user running cloudflared process has a GID (group ID) that is not within ping_group_range. You might need to add that user to a group within that range, or instead update the range to encompass a group the user is already in by modifying /proc/sys/net/ipv4/ping_group_range. Otherwise cloudflared will not be able to ping this network error="Group ID 65532 is not between ping group 1 to 0"
2023-03-18 00:27:51.456Z 2023-03-18T00:27:51Z WRN ICMP proxy feature is disabled error="cannot create ICMPv4 proxy: Group ID 65532 is not between ping group 1 to 0 nor ICMPv6 proxy: socket: permission denied"
2023-03-18 00:27:51.460Z 2023-03-18T00:27:51Z INF Starting Hello World server at 127.0.0.1:34545
2023-03-18 00:27:51.460Z 2023-03-18T00:27:51Z INF Starting metrics server on [::]:2000/metrics
2023-03-18 00:27:51.462Z 2023/03/18 00:27:51 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2023-03-18 00:27:51.592Z 2023-03-18T00:27:51Z INF Connection ca329025-1f06-4f36-a8b2-27eda979345d registered with protocol: quic connIndex=0 ip=198.41.192.107 location=LAX
2023-03-18 00:27:51.760Z 2023-03-18T00:27:51Z INF Connection a25fdab3-adff-4be5-8eb3-c22d593dfbc5 registered with protocol: quic connIndex=1 ip=198.41.200.193 location=SJC
2023-03-18 00:27:52.670Z 2023-03-18T00:27:52Z INF Connection ef583d03-d123-4e8e-b8ad-37eed817d2da registered with protocol: quic connIndex=2 ip=198.41.200.113 location=SJC
2023-03-18 00:27:53.684Z 2023-03-18T00:27:53Z INF Connection 25609514-8c37-451e-b4ac-1fb9fba2b9b8 registered with protocol: quic connIndex=3 ip=198.41.192.37 location=LAX
</code></pre>
| <p>My <code>cloudflared</code> pod is running under the <code>hm-cloudflared</code> namespace.</p>
<p>So I can get the node name by:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -o wide -n hm-cloudflared
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cloudflared-7cdf78df46-x5fb7 0/1 CrashLoopBackOff 13 (93s ago) 26m 10.42.0.82 lima-rancher-desktop <none> <none>
</code></pre>
<p>Once you have the node name that the pod is running on, you can SSH into the Kubernetes node with <a href="https://github.com/luksa/kubectl-plugins" rel="nofollow noreferrer">kubectl-plugins</a>:</p>
<pre class="lang-bash prettyprint-override"><code># Install kubectl-plugins
git clone https://github.com/luksa/kubectl-plugins $HOME/kubectl-plugins
export PATH=$PATH:$HOME/kubectl-plugins
# SSH into the Kubernetes node by kubectl-plugins
kubectl ssh node lima-rancher-desktop
</code></pre>
<p>Inside the Kubernetes node, based on <a href="https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size#non-bsd" rel="nofollow noreferrer">https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size#non-bsd</a>,
I can increase the UDP receive buffer size by:</p>
<pre class="lang-bash prettyprint-override"><code>sysctl -w net.core.rmem_max=2500000
</code></pre>
<p>This command would increase the maximum receive buffer size to roughly 2.5 MB.</p>
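<p>Note that <code>sysctl -w</code> only changes the running kernel, so the setting is lost when the node VM restarts. A sketch of making it persistent, assuming the node image keeps <code>/etc/sysctl.d</code> across reboots:</p>
<pre class="lang-bash prettyprint-override"><code>echo 'net.core.rmem_max=2500000' | sudo tee /etc/sysctl.d/99-udp-buffer.conf
sudo sysctl --system   # reload all sysctl configuration files
</code></pre>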
<p>Now just restart the <code>cloudflared</code> pod and the issue should be gone! Hopefully this helps save some people time in the future!</p>
|
<p>I have a <a href="https://konghq.com/" rel="nofollow noreferrer">Kong</a> deployment.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: local-test-kong
labels:
app: local-test-kong
spec:
replicas: 1
selector:
matchLabels:
app: local-test-kong
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: local-test-kong
spec:
automountServiceAccountToken: false
containers:
- envFrom:
- configMapRef:
name: kong-env-vars
image: kong:2.6
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- /bin/sleep 15 && kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: status
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8100
name: status
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: status
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: # ToDo
limits:
cpu: 256m
memory: 256Mi
requests:
cpu: 256m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /kong_prefix/
name: kong-prefix-dir
- mountPath: /tmp
name: tmp-dir
- mountPath: /kong_dbless/
name: kong-custom-dbless-config-volume
terminationGracePeriodSeconds: 30
volumes:
- name: kong-prefix-dir
- name: tmp-dir
- configMap:
defaultMode: 0555
name: kong-declarative
name: kong-custom-dbless-config-volume
</code></pre>
<p>I applied this YAML in <strong>GKE</strong>. Then i ran <code>kubectl describe</code> on its pod.</p>
<pre class="lang-yaml prettyprint-override"><code>➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
local-test-kong-678598ffc6-ll9s8 1/1 Running 0 25m
➜ kubectl describe pod/local-test-kong-678598ffc6-ll9s8
Name: local-test-kong-678598ffc6-ll9s8
Namespace: local-test-kong
Priority: 0
Node: gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl/10.128.64.95
Start Time: Wed, 23 Nov 2022 00:12:56 +0800
Labels: app=local-test-kong
pod-template-hash=678598ffc6
Annotations: kubectl.kubernetes.io/restartedAt: 2022-11-23T00:12:56+08:00
Status: Running
IP: 10.128.96.104
IPs:
IP: 10.128.96.104
Controlled By: ReplicaSet/local-test-kong-678598ffc6
Containers:
proxy:
Container ID: containerd://1bd392488cfe33dcc62f717b3b8831349e8cf573326add846c9c843c7bf15e2a
Image: kong:2.6
Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
Ports: 8000/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 23 Nov 2022 00:12:58 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 256m
memory: 256Mi
Requests:
cpu: 256m
memory: 256Mi
Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
kong-env-vars ConfigMap Optional: false
Environment: <none>
Mounts:
/kong_dbless/ from kong-custom-dbless-config-volume (rw)
/kong_prefix/ from kong-prefix-dir (rw)
/tmp from tmp-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kong-prefix-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kong-custom-dbless-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kong-declarative
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned local-test-kong/local-test-kong-678598ffc6-ll9s8 to gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl
Normal Pulled 25m kubelet Container image "kong:2.6" already present on machine
Normal Created 25m kubelet Created container proxy
Normal Started 25m kubelet Started container proxy
➜
</code></pre>
<p>I applied the same YAML in my localhost's <strong>MicroK8S</strong> (on MacOS) and then I ran <code>kubectl describe</code> on its pod.</p>
<pre class="lang-yaml prettyprint-override"><code>➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
local-test-kong-54cfc585cb-7grj8 1/1 Running 0 86s
➜ kubectl describe pod/local-test-kong-54cfc585cb-7grj8
Name: local-test-kong-54cfc585cb-7grj8
Namespace: local-test-kong
Priority: 0
Node: microk8s-vm/192.168.64.5
Start Time: Wed, 23 Nov 2022 00:39:33 +0800
Labels: app=local-test-kong
pod-template-hash=54cfc585cb
Annotations: cni.projectcalico.org/podIP: 10.1.254.79/32
cni.projectcalico.org/podIPs: 10.1.254.79/32
kubectl.kubernetes.io/restartedAt: 2022-11-23T00:39:33+08:00
Status: Running
IP: 10.1.254.79
IPs:
IP: 10.1.254.79
Controlled By: ReplicaSet/local-test-kong-54cfc585cb
Containers:
proxy:
Container ID: containerd://d60d09ca8b77ee59c80ea060dcb651c3e346c3a5f0147b0d061790c52193d93d
Image: kong:2.6
Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
Ports: 8000/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 23 Nov 2022 00:39:37 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 256m
memory: 256Mi
Requests:
cpu: 256m
memory: 256Mi
Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
kong-env-vars ConfigMap Optional: false
Environment: <none>
Mounts:
/kong_dbless/ from kong-custom-dbless-config-volume (rw)
/kong_prefix/ from kong-prefix-dir (rw)
/tmp from tmp-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kong-prefix-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kong-custom-dbless-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kong-declarative
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 92s default-scheduler Successfully assigned local-test-kong/local-test-kong-54cfc585cb-7grj8 to microk8s-vm
Normal Pulled 90s kubelet Container image "kong:2.6" already present on machine
Normal Created 90s kubelet Created container proxy
Normal Started 89s kubelet Started container proxy
Warning Unhealthy 68s kubelet Readiness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 68s kubelet Liveness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
➜
</code></pre>
<p>It's the exact same deployment YAML. However, the deployment created inside the GKE cluster is running fine with no complaints, while the deployment created inside my localhost microk8s (on macOS) is showing probe failures.</p>
<h2>What could I be missing here while deploying to microk8s (on macOS)?</h2>
| <p>I had exactly the same problem using Microk8s. I had the following plugins enabled: <code>hostpath-storage</code> and <code>dns</code>. I don't deploy Kong, <a href="https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html" rel="nofollow noreferrer">but RabbitMQ</a> (here's my example project).</p>
<p>I got the following error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 53s default-scheduler Successfully assigned default/hello-rabbit-server-0 to microk8s-vm
Normal Pulled 52s kubelet Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine
Normal Created 52s kubelet Created container setup-container
Normal Started 52s kubelet Started container setup-container
Normal Pulled 21s kubelet Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine
Normal Created 21s kubelet Created container rabbitmq
Normal Started 21s kubelet Started container rabbitmq
Warning Unhealthy 3s kubelet Readiness probe failed: dial tcp 10.1.254.78:5672: connect: connection refused
</code></pre>
<p>What fixed the issue for me was to <a href="https://microk8s.io/docs/addon-host-access" rel="nofollow noreferrer">enable the <code>host-access</code> addon</a> in Microk8s:</p>
<pre><code>microk8s enable host-access
</code></pre>
<p>Now the readiness probes are working fine.</p>
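<p>You can double-check which addons are enabled afterwards with:</p>
<pre class="lang-bash prettyprint-override"><code>microk8s status --wait-ready
</code></pre>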
|
<p>Suppose I have the below code snippet which sets up a reconciler that watches the external resource "External":</p>
<pre><code>// SetupWithManager sets up the controller with the Manager.
func (r *SomethingReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&api.Something{}).
WithOptions(controller.Options{
MaxConcurrentReconciles: stdruntime.NumCPU(),
RecoverPanic: true,
}).
Watches(
&source.Kind{Type: &somev1.External{}},
handler.EnqueueRequestsFromMapFunc(r.findInternalObjectsForExternal),
builder.WithPredicates(predicate.Funcs{
UpdateFunc: func(ue event.UpdateEvent) bool { return true },
DeleteFunc: func(de event.DeleteEvent) bool { return true },
}),
).
Complete(r)
}
</code></pre>
<p>My problem is that I cannot import the <code>somev1.External</code> type into my project, because importing the Go module containing this type would break my current project's dependencies. <br>
Is there a way in kubebuilder to watch external resources without having to explicitly import their types? Like by GVK or something?</p>
| <p>Yes it's possible.</p>
<p>You can create a REST client for your resource in <code>main.go</code> as below:</p>
<pre class="lang-golang prettyprint-override"><code>gvkExternal := schema.GroupVersionKind{
Group: "some.group.io",
Version: "v1",
Kind: "External",
}
restClient, err := apiutil.RESTClientForGVK(gvkExternal, false, mgr.GetConfig(), serializer.NewCodecFactory(mgr.GetScheme()))
if err != nil {
setupLog.Error(err, "unable to create REST client")
}
</code></pre>
<p>Then add a field for this REST client (<code>rest.Interface</code>) to your reconciler (<code>yournativeresource_controller.go</code>) struct such as:</p>
<pre class="lang-golang prettyprint-override"><code>type YourNativeResourceReconciler struct {
client.Client
Scheme *runtime.Scheme
// add this
RESTClient rest.Interface
}
</code></pre>
<p>Last, initialize your reconciler with this REST client (<code>main.go</code>):</p>
<pre class="lang-golang prettyprint-override"><code>if err = (&controllers.YourNativeResourceReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
RESTClient: restClient,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "YourNativeResource")
os.Exit(1)
}
</code></pre>
<p>Do not forget to add an RBAC marker to your project (preferably on the reconciler) that will generate RBAC rules allowing you to manipulate the <code>External</code> resource:</p>
<pre><code>//+kubebuilder:rbac:groups=some.group.io,resources=externals,verbs=get;list;watch;create;update;patch;delete
</code></pre>
<p>After these steps, you can use REST client for manipulating <code>External</code> resource over <code>YourNativeResource</code> reconciler using <code>r.RESTClient</code>.</p>
<p><strong>EDIT:</strong></p>
<p>If you want to watch resources, dynamic clients may help. Create a dynamic client in <code>main.go</code>:</p>
<pre class="lang-golang prettyprint-override"><code>dynamicClient, err := dynamic.NewForConfig(mgr.GetConfig())
if err != nil {
setupLog.Error(err, "unable to create dynamic client")
}
</code></pre>
<p>Apply the steps above, add it to your reconciler, etc. Then you will be able to watch the <code>External</code> resource as below:</p>
<pre class="lang-golang prettyprint-override"><code>resourceInterface := r.DynamicClient.Resource(schema.GroupVersionResource{
Group: "some.group.io",
    Version:    "v1",
Resource: "externals",
})
externalWatcher, err := resourceInterface.Watch(ctx, metav1.ListOptions{})
if err != nil {
return err
}
defer externalWatcher.Stop()
select {
case event := <-externalWatcher.ResultChan():
if event.Type == watch.Deleted {
logger.Info("FINALIZER: An external resource is deleted.")
}
}
</code></pre>
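<p>If you want these events to keep driving reconciles (rather than the one-off <code>select</code> above), a shared informer on the same dynamic client is another option. This is only a sketch of the general client-go pattern, not something kubebuilder-specific, and the enqueue logic is left as a stub:</p>
<pre class="lang-golang prettyprint-override"><code>import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
)

gvr := schema.GroupVersionResource{Group: "some.group.io", Version: "v1", Resource: "externals"}

// 0 disables periodic resync; add/update/delete events are still delivered.
factory := dynamicinformer.NewDynamicSharedInformerFactory(r.DynamicClient, 0)
informer := factory.ForResource(gvr).Informer()

informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	UpdateFunc: func(oldObj, newObj interface{}) {
		// map the unstructured External object to your own resource and enqueue it
	},
	DeleteFunc: func(obj interface{}) {
		// same mapping you would do in findInternalObjectsForExternal
	},
})

factory.Start(ctx.Done())
factory.WaitForCacheSync(ctx.Done())
</code></pre>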
|
<p>I am trying to migrate a dashboard which shows the count of Readiness and Liveness probe failures from Kibana (Elasticsearch) to a Grafana dashboard (Sauron). In Kibana we can get both probe failures separately, using <code>kubernetes.event.message : Liveness probe failed</code> for Liveness failures and a similar event message for Readiness, but in Sauron or Thanos (which acts as the datasource for Grafana) the k8s event messages are not picked up. So I am unable to find a suitable PromQL query which will give me the count of each probe failure individually.</p>
<p>The closest PromQL I have found is <code>kube_event_count{reason="Unhealthy"}</code>, which gives me the sum of both probe failure counts. I need the counts individually. Another PromQL I have tried is <code>kube_pod_container_status_ready</code>, which probably gives the readiness status of the containers, but I am not sure about it.</p>
| <p>The following two queries will do the trick for you:</p>
<pre><code>prober_probe_total{probe_type="Readiness",result="failed"}
</code></pre>
<pre><code>prober_probe_total{probe_type="Liveness",result="failed"}
</code></pre>
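<p>These are counters, so for a dashboard panel you will usually want failures over a time window rather than the raw counter value; something like the following (a sketch; adjust the window and grouping labels to whatever your kubelet/Prometheus setup exposes):</p>
<pre><code>sum(increase(prober_probe_total{probe_type="Readiness",result="failed"}[1h])) by (namespace, pod)
sum(increase(prober_probe_total{probe_type="Liveness",result="failed"}[1h])) by (namespace, pod)
</code></pre>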
|
<p>I have a small hobby website I want to release on my server. I chose to use kubernetes for that, as I partly use it at work as well, so I'd like to get familiar. I bought hetzner debian server and installed k3s. Now are deploying PostgreSQL container (version 15.2, not cluster) as per this <a href="https://www.containiq.com/post/deploy-postgres-on-kubernetes" rel="nofollow noreferrer">tutorial</a>. (I did a minor changes I saw from other tutorials, should not be relevant).</p>
<p>It runs fine and I was happy with it. But I tried restarting the deployment to make sure data is not lost if the server goes down for some reason. After a few restarts, the database got corrupted.</p>
<p>Once I saw:</p>
<pre><code>PANIC: invalid magic number 0000 in log segment 000000010000000000000000, offset 0
</code></pre>
<p>another time:</p>
<pre><code>invalid contrecord length 1174 (expected 48430224) at 0/195BC90
</code></pre>
<p>another time:</p>
<pre><code>PANIC: could not locate a valid checkpoint record
</code></pre>
<p>When I tried to google how to recover from this, I did not find any safe options and mostly the suggestions were to restore backup.</p>
<p>So my question is, how do I safely restart/shutdown PostgreSQL container? Am I missing some shutdown config for PostgreSQL pod in k8s?</p>
<p>Update 1:</p>
<p>I was restarting deployment from k9s with <code>r</code> command. I think UI made it look like it was rotated right away, but it probably takes some time. So I think I triggered multiple restarts every 10 seconds and that might have corrupted the DB. Anyway I added <code>terminationGracePeriodSeconds: 60</code> and used <code>preStop</code> hook from answer. Thanks</p>
<p>Update 2:
I imported DB, did a restart and again same issue:</p>
<blockquote>
<p>could not locate a valid checkpoint record</p>
</blockquote>
<p>Update 3:
<strong>I replaced <code>Deployment</code> with <code>StatefulSet</code> and it seems to be handling restarts better. Tried over 10 restarts and no issues. Whereas before it crashed around 4th restart.</strong></p>
| <p>Of course the best practice is using an operator like <a href="https://cloudnative-pg.io/" rel="nofollow noreferrer">cloudnative-pg</a> or <a href="https://postgres-operator.readthedocs.io/en/latest/" rel="nofollow noreferrer">postgres-operator</a>, but they are pretty big and probably have way more features than you need for a simple workload. Here is a simple solution for your problem.</p>
<h2>Solution</h2>
<p>Add the following <code>lifecycle</code> hook to the postgres container in your pod spec (note the command needs to be split into arguments or wrapped in a shell; a single string would be treated as one binary name):</p>
<pre class="lang-yaml prettyprint-override"><code>lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "/usr/local/bin/pg_ctl stop -D /var/lib/postgresql/data -w -t 60 -m fast"]
</code></pre>
<h2>Explanation</h2>
<p>Basically when you kill a pod, Kubernetes signals <code>SIGTERM</code> and gives your pod 30 seconds; after that time it sends <code>SIGKILL</code>. When postgres receives <code>SIGTERM</code> it won't accept new connections, but it won't terminate existing connections either, so any connected client will block the database's shutdown, and after 30 seconds the pod will receive <code>SIGKILL</code>, which is very bad for postgres (see the <a href="https://www.postgresql.org/docs/current/server-shutdown.html" rel="nofollow noreferrer">doc</a>). So you need to shut postgres down safely somehow, and with a <code>preStop</code> hook you can.</p>
<h3>Kubernetes</h3>
<p>This is the exact chronological order of your pod:</p>
<ol>
<li>Set <code>state=Terminating</code> from Pod controller</li>
<li><code>terminationGracePeriodSeconds</code> timer starts (default is 30 seconds)</li>
<li><code>preStop</code> hook runs: <code>pg_ctl ...</code></li>
<li><code>SIGTERM</code> is sent: Postgres won't accept new connections</li>
<li>k8s waits until <code>terminationGracePeriodSeconds</code> elapses (configurable in the yaml)</li>
<li>If the app is still alive, <code>SIGKILL</code> is sent</li>
</ol>
<p>Also you need to set <code>.spec.strategy.type==Recreate</code> in Deployment.</p>
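<p>That setting just makes sure the old pod is fully stopped before the new one starts, so two postgres instances never point at the same data directory during a rollout. A sketch of the relevant part of the manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres   # your deployment name here
spec:
  strategy:
    type: Recreate   # stop the old pod completely before creating the new one
  # ... rest of the spec unchanged
</code></pre>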
<h2>Postgres</h2>
<p>For the <code>pg_ctl</code> shutdown modes you can refer to this summary of the corresponding signals; the most useful one for you looks like <code>-m fast</code>.</p>
<p><code>SIGTERM</code>:</p>
<ul>
<li>"Smart Shutdown Mode"</li>
<li>Disallows new connections</li>
<li>Lets existing connections continue</li>
</ul>
<p><code>SIGINT</code>:</p>
<ul>
<li>"Fast Shutdown Mode"</li>
<li>Disallow new connections</li>
<li>Sends <code>SIGTERM</code> to existing server processes; they'll exit promptly</li>
</ul>
<p><code>SIGQUIT</code>:</p>
<ul>
<li>"Immediate Shutdown Mode"</li>
<li>Sends <code>SIGQUIT</code> to all child processes; if they don't terminate in 5 seconds, sends <code>SIGKILL</code></li>
</ul>
|
<p>Is it possible to make the k3s ingress route a certain path to a certain IP or port of a service which is not running inside the Kubernetes, but on same physical machine?</p>
<p><strong>My use-case</strong></p>
<p>Using single node k3s setup.</p>
<p>I have a special server running on the same host that the k3s is running on.
I'd like to expose it as an HTTP endpoint in the ingress.</p>
<p>e.g:</p>
<pre><code>foo.example.com --> k3s ingress --> 127.0.0.1:99 (port on k3s machine)
</code></pre>
<p>or</p>
<pre><code>foo.example.com --> k3s ingress --> 192.168.1.7:99 (something in the local network)
</code></pre>
<p>Is something like this possible or should there be some reverse proxy before the k3s server?</p>
| <blockquote>
<p>Is it possible to make the k3s ingress route a certain path to a
certain IP or port of a service which is not running inside the
Kubernetes, but on same physical machine?</p>
</blockquote>
<p>Yes you can do it with the <strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a></strong> service.</p>
<p>You define a <strong>service</strong> of type <code>ExternalName</code> and it maps the in-cluster service name to an external DNS name (Kubernetes returns it as a CNAME record, so it has to be a hostname rather than a bare IP):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: ExternalName
  externalName: my.database.example.com
</code></pre>
<p>Further, you can use this <strong>service</strong> like any normal Kubernetes service, but requests will get forwarded to the <strong>external</strong> endpoint. If all you have is a plain <strong>IP</strong> (as in your <code>192.168.1.7:99</code> example), use a Service without a selector plus an <code>Endpoints</code> object instead, as sketched below.</p>
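<p>A minimal sketch for the plain-IP case (the port and IP below are just the values from your question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - port: 99
      targetPort: 99
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.7   # the host on your local network
    ports:
      - port: 99
</code></pre>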
|
<p>After I uninstall a release (with --keep-history), a release history with "uninstalled" status remains.</p>
<p>Then if I want to install this release again, <code>install</code> and <code>upgrade --install</code> both fail.</p>
<p><code>install</code> fails because of "cannot re-use a name that is still in use",
while <code>upgrade --install</code> fails because of "xxx has no deployed releases".</p>
<p>Is the only way to remove the history, or to uninstall without keeping history?</p>
<p>I tried to use the <code>install</code> and <code>upgrade --install</code> commands; both failed.</p>
| <p>As described in this <a href="https://phoenixnap.com/kb/helm-has-no-deployed-releases" rel="nofollow noreferrer">doc</a> by phoenixnap.</p>
<p><strong>There are several ways to fix the “helm has no deployed releases” error; one way is by running the following command</strong>:</p>
<pre><code>kubectl -n kube-system patch configmap [release name].[release version] --type=merge -p '{"metadata":{"labels":{"STATUS":"DEPLOYED"}}}'
</code></pre>
<p>[release name] is the name of the release you want to update.</p>
<p>[release version] is the current version of your release.</p>
<p>Helm 3 stores the deployment history as <a href="https://phoenixnap.com/kb/kubernetes-secrets" rel="nofollow noreferrer">Kubernetes secrets</a>. Check the deployment secrets:</p>
<pre><code>kubectl get secrets
</code></pre>
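<p>Helm 3 names these secrets <code>sh.helm.release.v1.[release name].v[revision]</code> and labels them, so filtering can save some scrolling (a sketch; replace the release name):</p>
<pre><code>kubectl get secrets --all-namespaces -l "owner=helm,name=[release name]"
</code></pre>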
<p>Find the secret referring to the failed deployment, then use the following command to change the deployment status:</p>
<pre><code>kubectl patch secret [name-of-secret-related-to-deployment] --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}'
</code></pre>
<p>You can also refer this <a href="https://jacky-jiang.medium.com/how-to-fix-helm-upgrade-error-has-no-deployed-releases-mystery-3dd67b2eb126" rel="nofollow noreferrer">blog</a> by Jacky Jiang for more information about how to upgrade helm</p>
|
<p>I have a Google Cloud Composer environment set up that has 3 nodes in the worker pool. Each node has 16GB of memory (n1-standard-4 instances). I have tasks inside a DAG that take around 7-8GB of memory. The allocatable memory for the worker nodes is roughly 12GB, and hence these tasks should run without encountering an OOM error.</p>
<p>My intuition is that each worker node has some (variable) number of pods, and when the Airflow scheduler queues tasks, the tasks run inside a worker pod rather than on the node itself. And there might be memory limits set for the pods inside the worker nodes. Note that the worker nodes are in a Kubernetes node pool exclusively used by my Composer instance.</p>
<p>How can I make sure that all my tasks run without encountering OOM given that my nodes have enough memory? Is there a way to set the pod memory limits to be higher?</p>
<p>I looked at the workloads for the worker inside Kubernetes workloads and I can see that memory limit is 3.7GB which I guess is the limit for the pods.</p>
<p>What should I do?</p>
<p>Thanks in advance.</p>
| <p>It's certainly good practice to pre-assess the resources available in your node-pool and know in advance how "big" your Pods will be.</p>
<p>Once you know how many resources you have available, you can do 2 things:</p>
<p>1. Set up resources/limits for all your Pods, making sure you never reach the maximum available in your node-pool (see the sketch after the link below);</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
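<p>For point 1, a minimal sketch of a generic requests/limits block (the values are purely illustrative; keep in mind that on Cloud Composer the Airflow worker pods themselves are managed for you, so this mainly applies to workloads you control directly):</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    cpu: "500m"
    memory: "2Gi"
  limits:
    cpu: "1"
    memory: "8Gi"
</code></pre>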
<p>2. Try to understand, in case of a malfunction or the deployment of other non-estimated resources in your K8s cluster, which Pods you would be willing to sacrifice and in which order.</p>
<p>This will allow you to shut down critical services ONLY after all other services have already been shut down.</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/</a></p>
<p>Obviously there are points 3 and 4 in case you want to scale horizontally and vertically, with autoscaling set up for the node-pool and for your Deployments, but they are not strictly related to your question.</p>
|
<p>When releasing a Helm chart, Kubernetes resources have Helm's <code>.Release.Name</code> prepended to their names. In the chart I'm writing, separate namespaces with default resource naming are enough, and the additional prefix is verbose and unwanted. Vault is a dependency of my chart.</p>
<p>Is there a way I can achieve this using Helm? Ideally, I should only be able to deploy this chart once per namespace, so supplying both a namespace and a release name feels redundant.</p>
<pre class="lang-bash prettyprint-override"><code># Specify namespace and release name
helm install --create-namespace -n my-namespace my-release hashicorp/vault
# Vault is named "my-release-vault-..."
kubectl get svc -n my-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-release-vault ClusterIP 10.98.169.234 <none> 8200/TCP,8201/TCP 8s
</code></pre>
| <p>I took a quick look at the Chart and I don't think there is the possibility of doing what is requested.</p>
<p>What you can do is modify the templates yourself, adding the override parameter for names/namespaces and try doing a PR against the Hashicorp repo; maybe they didn't think this feature could be useful.</p>
<p>Otherwise, you simply create a custom CHANGELOG where you track all these changes, as a reminder of the modifications you have to re-apply every time you download a new version. It's a maintenance job you can't ignore, but it often happens in enterprise and highly customized environments that you have to modify the vendor templates.</p>
<p>This is where the definition of the name happens:
<a href="https://raw.githubusercontent.com/hashicorp/vault-helm/main/templates/_helpers.tpl" rel="nofollow noreferrer">https://raw.githubusercontent.com/hashicorp/vault-helm/main/templates/_helpers.tpl</a>
(First block)</p>
|
<p>I deployed a Spring Boot app on AWS Elastic Kubernetes Service. I am facing a 502 Bad Gateway error. I cannot find anything useful from the logs, there is no event to check, it works fine locally and the docker image is also running without any issue.</p>
<p>Right now it's just a simple hello world app.
Here are the YAML files for reference.</p>
<p>deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-deployment
namespace: my-namespace
labels:
app: backend-java
spec:
replicas: 1
selector:
matchLabels:
app: backend-java
template:
metadata:
labels:
app: backend-java
spec:
containers:
- name: backend-java
image: <docker-image-location>
ports:
- containerPort: 81
resources:
limits:
cpu: "4000m"
memory: "2048Mi"
requests:
cpu: "100m"
memory: "1024Mi"
</code></pre>
<p>service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: my-namespace
name: backend-service
spec:
type: NodePort
selector:
app: backend-java
ports:
- port: 81
targetPort: 8080
nodePort: 30019
</code></pre>
<p>ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "my-app-ingress"
namespace: "my-namespace"
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
spec:
ingressClassName: alb
rules:
- host: myapp.aws.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: "backend-service"
port:
number: 81
</code></pre>
<p>A similar configuration has worked for deploying a React app, which works as expected. Only while deploying the backend does it give '502 Bad Gateway'.</p>
| <p>Your <code>targetPort</code> in the <code>Service</code> and the <code>containerPort</code> in the <code>Deployment</code> do not match. You can fix it by changing the targetPort in the Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: my-namespace
name: backend-service
spec:
type: NodePort
selector:
app: backend-java
ports:
- port: 81
targetPort: 81
nodePort: 30019
</code></pre>
<p>Read more about the difference between port and targetPort <a href="https://stackoverflow.com/questions/49981601/difference-between-targetport-and-port-in-kubernetes-service-definition">here</a>.</p>
|
<p>I am using k9s tool for managing kubernetes cluster(GKE on Google Cloud Platform).</p>
<p>Recently k9s has stopped working and stopped connecting to k8s cluster.</p>
<p>When I launch k9s(i.e. <code>k9s -n default</code>) it shows me all clusters's context and when I click on one of the contexts then</p>
<p>Error is</p>
<pre><code>Unable to connect to context "XXXXXXXXXX"
</code></pre>
<p>it shows me below error as displayed in screenshot.</p>
<p><a href="https://i.stack.imgur.com/0zNdg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0zNdg.jpg" alt="enter image description here" /></a></p>
<p>However with <code>kubectl</code> client, it is working fine and showing all the data of cluster.</p>
<p>I tried reinstalling k9s and updating its version, but the problem persists.</p>
<p>How can I debug the issue and fix the problem?</p>
| <p>After investigating, I checked the logs; you can get the log location by running:</p>
<pre class="lang-bash prettyprint-override"><code>$ k9s info
____ __.________
| |/ _/ __ \______
| < \____ / ___/
| | \ / /\___ \
|____|__ \ /____//____ >
\/ \/
Configuration: /Users/xyx/Library/Applications/k9s/config.yml
Logs: /var/folders/8r/t5bx6ckdchjdacj3nz7qyq0b4ys7mwh0000gp/T/k9s-shubcbsj.log
Screen Dumps: /var/folders/8r/t5bx6ckdchjdacj3nz7qyq0b4ys7mwh0000gp/T/k9s-screens-chakhcahkcha
</code></pre>
<p>The logs showed me these errors:</p>
<pre class="lang-bash prettyprint-override"><code>9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details"
9:08PM ERR ClusterUpdater failed error="Conn check failed (1/5)"
9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details"
9:08PM ERR ClusterUpdater failed error="Conn check failed (2/5)"
9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details"
9:08PM ERR ClusterUpdater failed error="Conn check failed (3/5)"
9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details"
9:08PM ERR ClusterUpdater failed error="Conn check failed (4/5)"
9:08PM ERR Unable to connect to api server error="The gcp auth plugin has been removed.\nPlease use the \"gke-gcloud-auth-plugin\" kubectl/client-go credential plugin instead.\nSee https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details"
9:08PM ERR Conn check failed (5/5). Bailing out!
</code></pre>
<p>I realized it is because my kubectl client was recently updated, and k9s stopped connecting to k8s because of that.</p>
<p>I followed the link below, as there have been some changes in kubectl authentication for GKE in newer kubectl versions.</p>
<p><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke" rel="nofollow noreferrer">https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke</a></p>
<p>Then I re-authenticated with my clusters:</p>
<pre><code>gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION_NAME --project $PROJECT_NAME
</code></pre>
<p>It worked again.</p>
|
<p>I have installed microk8s, traefik and cert-manager. When I try to receive a letsencrypt certificate, a new pod for answering the challenge is created, but the request from the letsencryt server does not reach this pod. Instead, the request is forwarded to the pod that serves the website.</p>
<p>It looks like the ingressroute routing the traffic to the web pod has higher priority than the ingress that routes the <code>/.well-known/acme-challenge/...</code> requests to the correct pod. What am I missing?</p>
<p><code>kubectl edit clusterissuer letsencrypt-prod</code>:</p>
<pre><code>kind: ClusterIssuer
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cert-manager.io/v1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"office@mydomain.com","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"traefik"}}}]}}}
creationTimestamp: "2022-07-11T14:32:15Z"
generation: 11
name: letsencrypt-prod
resourceVersion: "49979842"
uid: 40c4e26d-9c94-4cda-aa3a-357491bdb25a
spec:
acme:
email: office@mydomain.com
preferredChain: ""
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress: {}
status:
acme:
lastRegisteredEmail: office@mydomain.com
uri: https://acme-v02.api.letsencrypt.org/acme/acct/627190636
conditions:
- lastTransitionTime: "2022-07-11T14:32:17Z"
message: The ACME account was registered with the ACME server
observedGeneration: 11
reason: ACMEAccountRegistered
status: "True"
type: Ready
</code></pre>
<p><code>kubectl edit ingressroute webspace1-tls</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"traefik.containo.us/v1alpha1","kind":"IngressRoute","metadata":{"annotations":{},"name":"w271a19-tls","namespace":"default"},"spec":{"entryPoints":["websecure"],"routes":[{"kind":"Rule","match":"Host(`test1.mydomain.com`)","middlewares":[{"name":"test-compress"}],"priority":10,"services":[{"name":"w271a19","port":80}]}],"tls":{"secretName":"test1.mydomain.com-tls"}}}
creationTimestamp: "2022-10-05T20:01:38Z"
generation: 7
name: w271a19-tls
namespace: default
resourceVersion: "45151920"
uid: 77e9b7ac-33e7-4810-9baf-579f00e2db6b
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`test1.mydomain.com`)
middlewares:
- name: test-compress
priority: 10
services:
- name: w271a19
port: 80
tls:
secretName: test1.mydomain.com-tls
</code></pre>
<p><code>kubectl edit ingress cm-acme-http-solver-rz9mm</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
creationTimestamp: "2023-03-22T13:00:18Z"
generateName: cm-acme-http-solver-
generation: 1
labels:
acme.cert-manager.io/http-domain: "2306410973"
acme.cert-manager.io/http-token: "1038683769"
acme.cert-manager.io/http01-solver: "true"
name: cm-acme-http-solver-rz9mm
namespace: default
ownerReferences:
- apiVersion: acme.cert-manager.io/v1
blockOwnerDeletion: true
controller: true
kind: Challenge
name: test1.mydomain.com-glnrn-2096762198-4162956557
uid: db8b5c78-8549-4f13-b43d-c6c7bba7468d
resourceVersion: "52806119"
uid: 6b27e02a-ee65-4809-b391-95c03f9ebb36
spec:
ingressClassName: traefik
rules:
- host: test1.mydomain.com
http:
paths:
- backend:
service:
name: cm-acme-http-solver-ll2zr
port:
number: 8089
path: /.well-known/acme-challenge/9qtVY8FjfMIWd_wBNhP3PEPJZo4lFTw8WfWLMucRqAQ
pathType: ImplementationSpecific
status:
loadBalancer: {}
</code></pre>
<p><code>get_cert.yaml</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: test1.mydomain.com
namespace: default
spec:
secretName: test1.mydomain.com-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: test1.mydomain.com
dnsNames:
- test1.mydomain.com
</code></pre>
<p>In the webserver log of the web pod I see the requests to /.well-known... coming in.</p>
| <p>Shouldn't this annotation be added to the ingress?</p>
<pre><code>cert-manager.io/cluster-issuer: letsencrypt-prod
</code></pre>
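<p>For reference, with ingress-shim the annotation goes on the Ingress you want the certificate for, using the issuer name from your ClusterIssuer. A sketch (host, service and secret names are taken from the question; the plain Ingress itself is illustrative, since you currently route with a Traefik IngressRoute):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: w271a19
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  rules:
  - host: test1.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: w271a19
            port:
              number: 80
  tls:
  - hosts:
    - test1.mydomain.com
    secretName: test1.mydomain.com-tls
</code></pre>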
|
<p>I have scenario when my cluster consist of two microservices.</p>
<p>In service <strong>A</strong> I have a .CSV (15 MB) file, which is also needed in service <strong>B</strong>.</p>
<p>I don't want to keep a copy of this file in each repo.</p>
<p>During deployment of service <strong>A</strong> I want to place this .csv file in some kind of shared volume, so that the pod containing service <strong>B</strong> can consume and process it.
Any ideas and best practices on how to do it?</p>
<p>Best regards</p>
| <p>The easiest solution would be to build the file into the docker image.</p>
|
<p>So, according to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">Kubernetes documentation</a>, when you have an external LoadBalancer service set with <code>externalTrafficPolicy=Local</code>, you can explicitly define a healthCheckNodePort.</p>
<p>If I understood correctly, since LB services with this specific externalTrafficPolicy are unable to determine if a pod is running inside of a node, this healthCheckNodePort should be used to evaluate just that. As a result, a properly set healthCheckNodePort should avoid situations in which our request is routed to improper nodes.</p>
<p>An example would be the image below, extracted from <a href="https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies" rel="nofollow noreferrer">https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies</a>. If I properly understood this concept, the healthCheckNodePort should let the LB know that all traffic should be directed to the left node, as the right one doesn't have a pod to handle it.</p>
<p><a href="https://i.stack.imgur.com/0HjGf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0HjGf.png" alt="enter image description here" /></a></p>
<p>My question here is: how does the proxy attached to healthCheckNodePort checks for the pod existence? Is this something I should implement from my side, or is Kubernetes auto-assigned port able to handle this health-check on its own? It seems weirdly convenient that any port number would do, so I would like some more insight on how this works.</p>
| <p>It's actually very simple; I report a part of OpenShift documentation:</p>
<blockquote>
<p>healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. <strong>External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type).</strong></p>
</blockquote>
<p><a href="https://docs.openshift.com/container-platform/4.10/rest_api/network_apis/service-v1.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.10/rest_api/network_apis/service-v1.html</a></p>
<p>From the point of view of the <strong>kube-proxy</strong>: it serves this health check itself, answering on the <code>healthCheckNodePort</code> with HTTP 200 when the node has local endpoints for the service and with an error status (503) when it does not, so there is nothing you need to implement on your side.</p>
<p>I suggest you take a look at this example:
<a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer</a></p>
|
<p>I have a Kubernetes cluster running a Docker image. My web app is containerized and uses an environment variable that changes every couple of days.</p>
<p>I am wondering if there is a way to externally store the variables or if I just have to enter the new variables and push a new image every couple of days?</p>
| <p>Store the <code>Env</code> in a <code>ConfigMap</code> and mount it on your <code>Deployment</code>. Just roll out the deployment again whenever you want to update the <code>Env</code>s in the <code>pod</code>s (containers).</p>
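<p>A minimal sketch of that pattern (all names and values are illustrative):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: web-app-config
data:
  MY_ROTATING_VALUE: "abc123"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: your-image:latest
        envFrom:
        - configMapRef:
            name: web-app-config   # every key becomes an env var
</code></pre>
<p>When the value changes, edit the ConfigMap and run <code>kubectl rollout restart deployment/web-app</code> so the pods pick it up; no new image needed.</p>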
|
<p>Say I have an RLS policy on a table, and I realize I goofed and forgot a command in the policy, so can't <code>ALTER</code> it; it must be <code>DROP</code>/<code>CREATE</code>ed. I'm using rolling updates in k8s, and my DB migrations happen in an init container. I deploy my new application version that <code>DROP</code>s the policy and <code>CREATE</code>s the new one with the correct set of commands. Is there a window, however brief between <code>CREATE</code> and <code>DROP</code>, where the still running old pod now has access to everything? Intuition says "definitely", but testing a fraction of a second race condition is hard. Can I prevent this window by wrapping the <code>DROP</code>/<code>CREATE</code> in a transaction? Are there other methods to make this safe?</p>
<p>Real world, the correct answer is "your deployed version has a security vulnerability. Scale to 0 and deploy the correct version", but I'm trying to think through the vulnerabilities this set up brings, and others doing the deploy in the future may not be so careful.</p>
<p>My current setup is running 9.6, but there's a plan in the medium-term to work it up to 15.1, so answers for either version are welcome.</p>
| <p>A row level security policy allows a role to do something, so dropping the policy will <em>reduce</em> what the user can do. If you are worried that this can cause errors or bad results for concurrent queries, wrap the <code>DROP POLICY</code> and <code>CREATE POLICY</code> statements in a single transaction; then all concurrent queries get blocked for that brief moment. That shouldn't be a problem, because both statements are fast.</p>
|
<p>I am just learning containers and kubernetes and everything around it. There has been a use case to build a reliable setup where we can store all our Python scripts (small, use-case-defined scripts that do only one job each). There are some scripts in other languages like Perl too.</p>
<p>Not sure if this is the correct place to ask, but I will ask anyway.</p>
<p>The requirement is to build a solution that will have little to no dependency on the underlying operating system, so that even if we were to switch operating systems/servers in the future, the scripts can remain and run as they are.</p>
<p>I was thinking of building a 2-node Kubernetes cluster and running each script in a container, triggering them using a cron job. Not sure if this is an optimal and efficient approach. Python virtual environments are not the way we want to go, given the Python version is symlinked back to the Python version on the server, causing a server/OS dependency.</p>
<p>I'd appreciate any ideas and advice if someone else has done something similar. I've googled enough for such use cases but didn't find solutions that specifically match my need. Please feel free to share ideas, thoughts and any good reads too. Thanks!</p>
<p>Note: The server operating system is RHEL 8 and above</p>
| <p>The idea of containerizing your scripts allows you to have a highly customized "environment" that doesn't change wherever you deploy it.</p>
<p>For the management of these containers, you decide according to your needs... If they are management scripts, you can think of creating a management Pod that always stays up and running (see the link below for an example of how to do that).</p>
<p><a href="https://stackoverflow.com/questions/31870222/how-can-i-keep-a-container-running-on-kubernetes">How can I keep a container running on Kubernetes?</a></p>
<p>Otherwise, it may be a good idea to prepare a Job and run it as needed.</p>
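<p>For the scheduled scripts you mention, a Kubernetes CronJob is the native fit. A minimal sketch, assuming you have pushed an image containing one of your scripts (the image name and schedule are illustrative):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-script
spec:
  schedule: "0 2 * * *"      # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: nightly-script
            image: registry.example.com/my-python-script:1.0
            command: ["python", "/app/script.py"]
          restartPolicy: OnFailure
</code></pre>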
<p>In the case of PROD, remember to have at least 3 nodes (HA), do a pre-assessment to understand how many resources you can assign to your Pods (assuming that the resource consumption of these scripts has already been tested), and think about the roles assigned to worker nodes, so as to avoid Pods being scheduled at random everywhere (perhaps where there is a business-critical workload that risks having its resources saturated), plus autoscaling, etc.</p>
|
<p>The following pod definition <em>successfully</em> executes a <code>readinessProbe</code>, which makes a request to the service <code>service-am-i-ready</code> that connects to pods on the same cluster.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ready-if-service-ready
name: ready-if-service-ready
spec:
containers:
- image: nginx:1.16.1-alpine
name: ready-if-service-ready
resources: {}
livenessProbe:
exec:
command:
- 'true'
readinessProbe:
exec:
command:
- sh
- -c
- 'wget -T2 -O- http://service-am-i-ready:80'
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>However, if I change the <code>readinessProbe.exec.command</code> to <code>readinessProbe.httpGet</code> it doesn't work anymore:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ready-if-service-ready
name: ready-if-service-ready
spec:
containers:
- image: nginx:1.16.1-alpine
name: ready-if-service-ready
resources: {}
livenessProbe:
exec:
command:
- 'true'
readinessProbe:
httpGet: # Only changed this method
host: service-am-i-ready
path: /
port: 80
scheme: HTTP
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>This is the error message I get running <code>kubectl po describe ready-if-service-ready</code>:</p>
<pre><code>Warning Unhealty 3m10s (x139 over 23m) kubelet Readiness probe failed: Get "http://service-am-i-ready:80/": dial tcp: lookup service-am-i-ready: no such host
</code></pre>
<p>Running <code>kubectl get po ready-if-service-ready</code> gives:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ready-if-service-ready 0/1 Running 0 27m
</code></pre>
<p>Why is the first readinessProbe working, but not the second one? It looks like the second readinessProbe makes a request to the same endpoint as the <code>wget -T2 -O- http://service-am-i-ready:80</code> command.</p>
| <p>@glv's answer is also correct, but let me explain why it is not working and what the other way is.</p>
<p>The reason is <strong>wget</strong> uses the <strong>DNS resolver configured in the pod</strong>, which is set to the Kubernetes DNS service by default.</p>
<p>The <strong>httpGet probe</strong>, on the other hand, is executed by the kubelet on the node hosting the pod. The kubelet <strong>does not use the DNS resolver configured in the pod</strong>, but rather the host's DNS resolver configuration.</p>
<p>You can modify your readiness probe to use the IP address of the pod providing the service instead of the hostname.</p>
<pre><code>readinessProbe:
httpGet:
host: (POD_IP)
path: /
port: 80
scheme: HTTP
</code></pre>
<p>You can also assign the IP dynamically this way:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ready-if-service-ready
name: ready-if-service-ready
spec:
containers:
- image: nginx:1.16.1-alpine
name: ready-if-service-ready
resources: {}
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
livenessProbe:
exec:
command:
- 'true'
readinessProbe:
httpGet:
host: $(MY_POD_IP)
path: /
port: 80
scheme: HTTP
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
|
<p>We currently have several microservices running on-premises, and we use shell scripts to monitor their performance, including JVM, heap size, and number of running threads. However, we are planning to move to the cloud using Kubernetes. We want to monitor service level stats like the number of threads open, JVM, heap size, etc., in the cloud. We are planning to use Prometheus, Grafana, and Elasticsearch for this purpose.</p>
<p>Can anyone suggest a way to monitor microservice performance in a Kubernetes cluster using Prometheus, Grafana, and Elasticsearch?</p>
<p>We believe this can be done using Spring Boot Actuator, but we don't have a clear idea of how it works on Kubernetes for multiple services.</p>
| <p>Your question requires a series of decisions that you haven't spelled out. For example, which Cloud provider are you going to? Why do you want to create a monitoring stack (Prometheus/Alertmanager/Grafana) and a logging stack (Elasticsearch/fluentd) yourself?
Is there a particular reason why you want to remain "untied" from the Provider's products?
On most Cloud Service Providers, you already have these tools as a service.</p>
<p>Anyway, for the "monitoring" stack you can use the Prometheus operator; this provides all the tools you need in one solution.</p>
<p><a href="https://prometheus-operator.dev/docs/prologue/introduction/" rel="nofollow noreferrer">https://prometheus-operator.dev/docs/prologue/introduction/</a></p>
<p>On the application side, you will have to export the metrics you want to monitor and add the corresponding "scrape jobs" to your Prometheus. After that you can have fun creating dashboards of all kinds (you will find a lot of docs online).</p>
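<p>With Spring Boot that usually means adding <code>micrometer-registry-prometheus</code> so Actuator exposes <code>/actuator/prometheus</code> (JVM heap, GC and thread metrics come out of the box), and then letting the operator discover each service. A sketch of a ServiceMonitor, assuming your Service carries the label <code>app: my-service</code> and names its port <code>http</code> (both assumptions):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service
  labels:
    release: prometheus        # must match the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-service
  endpoints:
  - port: http                 # named port on the Service
    path: /actuator/prometheus
    interval: 30s
</code></pre>
<p>One ServiceMonitor per microservice (or one with a shared label selector) is enough; Prometheus then scrapes every pod behind each matching Service.</p>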
<p>For the logging stack, you'll need a tool like fluentd to "fetch" and collect logs from your Kubernetes cluster, and a tool that allows you to intelligently view and process this information like Elasticsearch.</p>
<p>The tools in question are not as closely related as the monitoring ones, so it's up to you to decide how to install them. Surely I would create a single namespace for Logging and consider using the Helm Charts provided by the Vendors.</p>
|
<p>When join node :
<code>sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2</code></p>
<p>The outputs:</p>
<pre><code>[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
</code></pre>
<p>And <code>systemctl status kubelet</code>:</p>
<pre><code>node@node:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 26716 (kubelet)
Tasks: 16 (limit: 1111)
CGroup: /system.slice/kubelet.service
└─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml -
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found
</code></pre>
<p>Regarding the <code>Unauthorized</code> errors, I checked on the master with <code>kubeadm token list</code>; the token is valid.
So what's the problem? Thanks a lot.</p>
| <p>On the worker nodes, run</p>
<pre><code>sudo kubeadm reset
</code></pre>
<p>and then rejoin; this will solve the issue.</p>
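<p>If the reset alone doesn't help, it can also be worth generating a fresh join command on the master instead of reusing the old token (a sketch):</p>
<pre><code># on the master
kubeadm token create --print-join-command
# then run the printed 'kubeadm join ...' on the worker after the reset
</code></pre>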
|
<p>I have scenario when my cluster consist of two microservices.</p>
<p>In service <strong>A</strong> I have a .CSV (15 MB) file, which is also needed in service <strong>B</strong>.</p>
<p>I don't want to keep a copy of this file in each repo.</p>
<p>During deployment of service <strong>A</strong> I want to place this .csv file in some kind of shared volume, so that the pod containing service <strong>B</strong> can consume and process it.
Any ideas and best practices on how to do it?</p>
<p>Best regards</p>
| <p>If you need both microservices to be able to read and write the file, then you need shared storage that supports the <code>ReadWriteMany</code> access mode. There are a couple of options in Kubernetes, like:</p>
<ul>
<li>NFS</li>
<li>CephFS</li>
<li>Glusterfs</li>
</ul>
<p>You could find more on this topic <a href="https://stackoverflow.com/questions/31693529/how-to-share-storage-between-kubernetes-pods">here</a>.</p>
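<p>A minimal sketch of a shared claim both Deployments could mount (the storage class name is an assumption; it has to map to a provisioner that actually supports <code>ReadWriteMany</code>, e.g. an NFS provisioner):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-csv
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 100Mi
</code></pre>
<p>Service A writes the .csv into the mounted path and service B reads it from the same claim.</p>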
<p>Another solution would be using object storages like <code>s3</code>.</p>
|
<p>I'm trying to integrate Grafana with managed prometheus service provided by Kubernetes Engine in GCP.</p>
<p>I configured the Managed Prometheus service and I'm able to see the metrics well, but I'm not able to integrate the managed Prometheus service with Grafana <strong>on the same Kubernetes cluster</strong>.</p>
<p>Below are the managed Prometheus metrics that are available.</p>
<p><a href="https://i.stack.imgur.com/IM9lJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IM9lJ.png" alt="enter image description here" /></a></p>
<p>I believe that without an endpoint URL we cannot create a Grafana dashboard.
The issue is with creating the endpoint for Managed Prometheus on GKE.</p>
<p><a href="https://i.stack.imgur.com/AxKXH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AxKXH.png" alt="enter image description here" /></a></p>
<p>I crawled all over the web but couldn't find a way to create an endpoint for Managed Prometheus on GKE.
Can you please confirm whether it is actually possible to create an endpoint for Managed Prometheus for a GKE cluster? If yes, can you please guide me on how to grab that endpoint URL?</p>
<p>Thanks in advance.</p>
| <p>You need to deploy the <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/manifests#expandable-6" rel="nofollow noreferrer">frontend</a> service. More details on using Grafana with Managed Prometheus can be found <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/query#ui-grafana" rel="nofollow noreferrer">here</a>, but here's the manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 2
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
automountServiceAccountToken: true
nodeSelector:
kubernetes.io/os: linux
kubernetes.io/arch: amd64
containers:
- name: frontend
image: "gke.gcr.io/prometheus-engine/frontend:v0.5.0-gke.0"
args:
- "--web.listen-address=:9090"
- "--query.project-id=$PROJECT_ID"
ports:
- name: web
containerPort: 9090
readinessProbe:
httpGet:
path: /-/ready
port: web
livenessProbe:
httpGet:
path: /-/healthy
port: web
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
clusterIP: None
selector:
app: frontend
ports:
- name: web
port: 9090
</code></pre>
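<p>Once the frontend is running, point Grafana's Prometheus data source at it from inside the cluster; assuming you applied the manifest above to a namespace called <code>monitoring</code>, the URL would be:</p>
<pre><code>http://frontend.monitoring.svc:9090
</code></pre>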
|
<p>I have scenario when my cluster consist of two microservices.</p>
<p>In service <strong>A</strong> I have a .CSV (15 MB) file, which is also needed in service <strong>B</strong>.</p>
<p>I don't want to keep a copy of this file in each repo.</p>
<p>During deployment of service <strong>A</strong> I want to place this .csv file in some kind of shared volume, so that the pod containing service <strong>B</strong> can consume and process it.
Any ideas and best practices on how to do it?</p>
<p>Best regards</p>
| <p>If you don't want to bake the file into the Docker image, you can use an <strong>initContainer</strong> to download it when the POD starts, and place it on a <strong>hostPath</strong> when <strong>service A</strong> starts.</p>
<p>When you say <strong>service A</strong> has the file, I'm not sure whether it's in the <strong>repo</strong> or part of the <strong>Docker image</strong> build. If it's not part of the repo and it's stored in some central place like a bucket, you can download it when <strong>service B</strong> starts with an <strong>initContainer</strong>.</p>
<p>Example of downloading the file and sharing it via a hostPath:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: save-data
mountPath: /data-dir
initContainers:
- name: install
    image: curlimages/curl   # the plain ubuntu image does not ship curl; any image with curl works
    command: ['sh', '-c', 'curl -o /data-dir/file.csv download-link-path']
volumeMounts:
- name: save-data
mountPath: "/data-dir"
dnsPolicy: Default
volumes:
- name: save-data
hostPath:
path: /data-dir
type: Directory
</code></pre>
<p>The other option is to set up a shared file system which multiple PODs can attach to and use at the same time, with the <strong><a href="https://stackoverflow.com/a/57798369/5525824">ReadWriteMany</a></strong> or <strong><a href="https://stackoverflow.com/a/57798369/5525824">ReadOnlyMany</a></strong> access mode.</p>
<p><a href="https://min.io/" rel="nofollow noreferrer">MinIO</a> is a very good option to use for the above scenario.</p>
|
<p>I wrote a <strong>CronJob</strong> that periodically takes a volume snapshot of my <strong>PVC</strong>. Below is the <strong>CronJob</strong> file:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: mycronjob
spec:
schedule: "*/2 * * * *" # Run the job every 2 minutes
jobTemplate:
spec:
template:
spec:
containers:
- name: webserver-container
image: kubectl:latest
command:
- /bin/sh
- -c
- /app/script.sh
volumeMounts:
- name: script-volume
mountPath: /app
restartPolicy: OnFailure
volumes:
- name: script-volume
configMap:
name: script-configmap
</code></pre>
<p>The <strong>Volume Snapshot</strong> file looks like this:</p>
<pre><code>apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: snap-shot-test
spec:
volumeSnapshotClassName: csi-snapclass
source:
persistentVolumeClaimName: my-test-apps
</code></pre>
<p>And here is the <strong>script</strong> file:</p>
<pre><code>#!/bin/bash
kubectl apply -f volumesnapshot.yml
</code></pre>
<p>The first time, the cronjob executes successfully, but after that it says
<em>volumesnapshot.snapshot.storage.k8s.io/snap-shot-test unchanged</em>.</p>
<p>How can I periodically take a volume snapshot of a PVC while keeping only the latest 2 copies of the snapshot?</p>
| <p>It seems to me that what you describe mirrors the YAML files you shared; there is nothing that changes the name of the backup and therefore it will never create a new one.</p>
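<p>One way to get a new snapshot on every run is to have the script stamp the name before applying it; a sketch reusing the file and resource names from the question (it assumes GNU <code>head</code>/<code>xargs</code> are available in the image):</p>
<pre><code>#!/bin/bash
SNAP_NAME="snap-shot-test-$(date +%Y%m%d%H%M%S)"
# rewrite metadata.name on the fly and create a fresh VolumeSnapshot
sed "s/name: snap-shot-test$/name: ${SNAP_NAME}/" volumesnapshot.yml | kubectl apply -f -
# keep only the two most recent snapshots
kubectl get volumesnapshot -o name --sort-by=.metadata.creationTimestamp \
  | grep snap-shot-test- | head -n -2 | xargs -r kubectl delete
</code></pre>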
<p>If you want to make a backup of one PVC to another in a "raw" way with scripts, maybe think of a job that mounts 2 Volumes (source and destination) and executes a simple <code>cp -Rp /volume-source/* /volume-destination/</code></p>
<p>Otherwise, if you want to get the job done right, consider using a tool like Velero.</p>
<p><a href="https://velero.io/" rel="nofollow noreferrer">https://velero.io/</a></p>
|
<p>I have an EksNodeGroup with the following taints:</p>
<pre class="lang-js prettyprint-override"><code> const ssdEksNodeGroupPublicLargeSubnet = new aws.eks.EksNodeGroup(
this,
"ssdEksNodeGroupPublicLargeSubnet",
{
// ... other stuff...
taint: [
{
key: "app",
value: "strick",
effect: "NO_SCHEDULE",
},
],
}
);
</code></pre>
<p>Elsewhere in my code, I'm trying to iterate over my nodeGroup taints to dynamically create kubernetes pod tolerations.</p>
<pre class="lang-js prettyprint-override"><code> const nodeGrouop = ssdEksNodeGroupPublicLargeSubnet
const tolerations: k8s.DeploymentSpecTemplateSpecToleration[] = [];
for (let i = 0; i < Fn.lengthOf(nodeGroup.taint); i++) {
const taint = nodeGroup.taint.get(i);
tolerations.push({
key: taint.key,
value: taint.value,
effect: taint.effect,
operator: "Equal"
});
}
console.log("##################", tolerations)
</code></pre>
<p>However, when I try to run this, I see that the log statement prints an empty array, and when my pod/deployment is created it has no tolerations.</p>
<p>Here's the full declaration of my Kubernetes deployment:</p>
<pre class="lang-js prettyprint-override"><code> const pausePodDeployment = new k8s.Deployment(
this,
pausePodDeploymentName,
{
metadata: {
name: pausePodDeploymentName,
namespace: namespace.metadata.name,
},
spec: {
replicas: "1",
selector: {
matchLabels: {
app: pausePodDeploymentName,
},
},
template: {
metadata: {
labels: {
app: pausePodDeploymentName,
},
},
spec: {
priorityClassName: priorityClass.metadata.name,
terminationGracePeriodSeconds: 0,
container: [
{
name: "reserve-resources",
image: "k8s.gcr.io/pause",
resources: {
requests: {
cpu: "1",
},
},
},
],
toleration: tolerations,
nodeSelector: {
...nodeGroupLabels,
},
},
},
},
}
);
</code></pre>
<p>and here's the full output from CDK (note that there aren't any tolerations):</p>
<pre><code># kubernetes_deployment.overprovisioner_strick-overprovisioner-pause-pods_B5F26972 (overprovisioner/strick-overprovisioner-pause-pods) will be created
+ resource "kubernetes_deployment" "overprovisioner_strick-overprovisioner-pause-pods_B5F26972" {
+ id = (known after apply)
+ wait_for_rollout = true
+ metadata {
+ generation = (known after apply)
+ name = "strick-overprovisioner-pause-pods"
+ namespace = "overprovisioner"
+ resource_version = (known after apply)
+ uid = (known after apply)
}
+ spec {
+ min_ready_seconds = 0
+ paused = false
+ progress_deadline_seconds = 600
+ replicas = "1"
+ revision_history_limit = 10
+ selector {
+ match_labels = {
+ "app" = "strick-overprovisioner-pause-pods"
}
}
+ strategy {
+ type = (known after apply)
+ rolling_update {
+ max_surge = (known after apply)
+ max_unavailable = (known after apply)
}
}
+ template {
+ metadata {
+ generation = (known after apply)
+ labels = {
+ "app" = "strick-overprovisioner-pause-pods"
}
+ name = (known after apply)
+ resource_version = (known after apply)
+ uid = (known after apply)
}
+ spec {
+ automount_service_account_token = true
+ dns_policy = "ClusterFirst"
+ enable_service_links = true
+ host_ipc = false
+ host_network = false
+ host_pid = false
+ hostname = (known after apply)
+ node_name = (known after apply)
+ node_selector = {
+ "diskType" = "ssd"
}
+ priority_class_name = "overprovisioner"
+ restart_policy = "Always"
+ service_account_name = (known after apply)
+ share_process_namespace = false
+ termination_grace_period_seconds = 0
+ container {
+ image = "k8s.gcr.io/pause"
+ image_pull_policy = (known after apply)
+ name = "reserve-resources"
+ stdin = false
+ stdin_once = false
+ termination_message_path = "/dev/termination-log"
+ termination_message_policy = (known after apply)
+ tty = false
+ resources {
+ limits = (known after apply)
+ requests = {
+ "cpu" = "1"
}
}
}
+ image_pull_secrets {
+ name = (known after apply)
}
+ readiness_gate {
+ condition_type = (known after apply)
}
+ volume {
+ name = (known after apply)
+ aws_elastic_block_store {
+ fs_type = (known after apply)
+ partition = (known after apply)
+ read_only = (known after apply)
+ volume_id = (known after apply)
}
+ azure_disk {
+ caching_mode = (known after apply)
+ data_disk_uri = (known after apply)
+ disk_name = (known after apply)
+ fs_type = (known after apply)
+ kind = (known after apply)
+ read_only = (known after apply)
}
+ azure_file {
+ read_only = (known after apply)
+ secret_name = (known after apply)
+ secret_namespace = (known after apply)
+ share_name = (known after apply)
}
+ ceph_fs {
+ monitors = (known after apply)
+ path = (known after apply)
+ read_only = (known after apply)
+ secret_file = (known after apply)
+ user = (known after apply)
+ secret_ref {
+ name = (known after apply)
+ namespace = (known after apply)
}
}
+ cinder {
+ fs_type = (known after apply)
+ read_only = (known after apply)
+ volume_id = (known after apply)
}
+ config_map {
+ default_mode = (known after apply)
+ name = (known after apply)
+ optional = (known after apply)
+ items {
+ key = (known after apply)
+ mode = (known after apply)
+ path = (known after apply)
}
}
+ csi {
+ driver = (known after apply)
+ fs_type = (known after apply)
+ read_only = (known after apply)
+ volume_attributes = (known after apply)
+ node_publish_secret_ref {
+ name = (known after apply)
}
}
+ downward_api {
+ default_mode = (known after apply)
+ items {
+ mode = (known after apply)
+ path = (known after apply)
+ field_ref {
+ api_version = (known after apply)
+ field_path = (known after apply)
}
+ resource_field_ref {
+ container_name = (known after apply)
+ divisor = (known after apply)
+ resource = (known after apply)
}
}
}
+ empty_dir {
+ medium = (known after apply)
+ size_limit = (known after apply)
}
+ fc {
+ fs_type = (known after apply)
+ lun = (known after apply)
+ read_only = (known after apply)
+ target_ww_ns = (known after apply)
}
+ flex_volume {
+ driver = (known after apply)
+ fs_type = (known after apply)
+ options = (known after apply)
+ read_only = (known after apply)
+ secret_ref {
+ name = (known after apply)
+ namespace = (known after apply)
}
}
+ flocker {
+ dataset_name = (known after apply)
+ dataset_uuid = (known after apply)
}
+ gce_persistent_disk {
+ fs_type = (known after apply)
+ partition = (known after apply)
+ pd_name = (known after apply)
+ read_only = (known after apply)
}
+ git_repo {
+ directory = (known after apply)
+ repository = (known after apply)
+ revision = (known after apply)
}
+ glusterfs {
+ endpoints_name = (known after apply)
+ path = (known after apply)
+ read_only = (known after apply)
}
+ host_path {
+ path = (known after apply)
+ type = (known after apply)
}
+ iscsi {
+ fs_type = (known after apply)
+ iqn = (known after apply)
+ iscsi_interface = (known after apply)
+ lun = (known after apply)
+ read_only = (known after apply)
+ target_portal = (known after apply)
}
+ local {
+ path = (known after apply)
}
+ nfs {
+ path = (known after apply)
+ read_only = (known after apply)
+ server = (known after apply)
}
+ persistent_volume_claim {
+ claim_name = (known after apply)
+ read_only = (known after apply)
}
+ photon_persistent_disk {
+ fs_type = (known after apply)
+ pd_id = (known after apply)
}
+ projected {
+ default_mode = (known after apply)
+ sources {
+ config_map {
+ name = (known after apply)
+ optional = (known after apply)
+ items {
+ key = (known after apply)
+ mode = (known after apply)
+ path = (known after apply)
}
}
+ downward_api {
+ items {
+ mode = (known after apply)
+ path = (known after apply)
+ field_ref {
+ api_version = (known after apply)
+ field_path = (known after apply)
}
+ resource_field_ref {
+ container_name = (known after apply)
+ divisor = (known after apply)
+ resource = (known after apply)
}
}
}
+ secret {
+ name = (known after apply)
+ optional = (known after apply)
+ items {
+ key = (known after apply)
+ mode = (known after apply)
+ path = (known after apply)
}
}
+ service_account_token {
+ audience = (known after apply)
+ expiration_seconds = (known after apply)
+ path = (known after apply)
}
}
}
+ quobyte {
+ group = (known after apply)
+ read_only = (known after apply)
+ registry = (known after apply)
+ user = (known after apply)
+ volume = (known after apply)
}
+ rbd {
+ ceph_monitors = (known after apply)
+ fs_type = (known after apply)
+ keyring = (known after apply)
+ rados_user = (known after apply)
+ rbd_image = (known after apply)
+ rbd_pool = (known after apply)
+ read_only = (known after apply)
+ secret_ref {
+ name = (known after apply)
+ namespace = (known after apply)
}
}
+ secret {
+ default_mode = (known after apply)
+ optional = (known after apply)
+ secret_name = (known after apply)
+ items {
+ key = (known after apply)
+ mode = (known after apply)
+ path = (known after apply)
}
}
+ vsphere_volume {
+ fs_type = (known after apply)
+ volume_path = (known after apply)
}
}
}
}
}
</code></pre>
<p>How can I dynamically iterate over the taints of a nodegroup to construct the correct pod tolerations for that node group?</p>
| <p>This is due to the <a href="https://developer.hashicorp.com/terraform/cdktf/concepts/tokens" rel="nofollow noreferrer">Token System</a> powering CDKTF. Instead of real run-time values, accessing <code>ssdEksNodeGroupPublicLargeSubnet.taint</code> gives you a token that is a reference to the taints. This reference is later used by Terraform, e.g. for ordering the deployments of the individual resources.</p>
<p>If your input is static you can use the input directly by accessing <code>ssdEksNodeGroupPublicLargeSubnet.taintInput</code>.</p>
<p>If you want to loop over the taints on the terraform side instead of during synthetisation you can use <a href="https://developer.hashicorp.com/terraform/cdktf/concepts/iterators#using-iterators-for-list-attributes" rel="nofollow noreferrer">dynamic blocks</a>, e.g.</p>
<pre><code>const iterator = TerraformIterator.fromList(ssdEksNodeGroupPublicLargeSubnet.taint);
const tolerations = iterator.dynamic({
key: cdktf.propertyAccess(iterator.value, "key"),
value: cdktf.propertyAccess(iterator.value, "value"),
effect: cdktf.propertyAccess(iterator.value, "effect"),
operator: "Equal"
})
</code></pre>
|
<p>I set up a local kubernetes cluster with minikube. On my cluster I have only one deployment running and one service attached to it. I used a NodePort on port 30100 to expose the service, so I can access it from my browser or via curl.</p>
<p>here is the <code>python-server.yml</code> file I use to setup the cluster:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: python-server-deployment
namespace: kubernetes-hello-world
labels:
app: python-server
spec:
replicas: 1
selector:
matchLabels:
app: python-server
template:
metadata:
labels:
app: python-server
spec:
containers:
- name: python-hello-world
image: hello-world-python:latest
imagePullPolicy: Never
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: python-server-internal-service
namespace: kubernetes-hello-world
spec:
type: NodePort
selector:
app: python-server
ports:
- protocol: TCP
port: 80
targetPort: 5000
nodePort: 30100
</code></pre>
<p>my <code>python-hello-world</code> image is based on this python file:</p>
<pre class="lang-py prettyprint-override"><code>from http.server import BaseHTTPRequestHandler, HTTPServer
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
html = """
<!DOCTYPE html>
<html>
<head>
<title>Hello World</title>
<meta charset="utf-8">
</head>
<body>
<h1>Hello World</h1>
</body>
</html>
"""
self.send_response(200)
self.send_header('Access-Control-Allow-Origin', '*')
self.send_header('Content-type', 'text/html')
self.end_headers()
self.wfile.write(bytes(html, "utf-8"))
def run():
addr = ('', 5000)
httpd = HTTPServer(addr, MyServer)
httpd.serve_forever()
if __name__ == '__main__':
run()
</code></pre>
<p>When I run the cluster I can, as expected, receive the hello world HTML with <code>curl {node_ip}:30100</code>. But when I try to access my service via my browser with the same ip:port I get a timeout.
I read that this can be caused by missing headers, but I think I have all the necessary ones covered in my Python file, so what else could cause this?</p>
| <p>It is not guaranteed that you can actually reach the IP of your node from your browser (you should provide some more information about the environment if necessary).</p>
<p>But you could port forward the service and reach it easily.</p>
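<p>For example, with the names from your manifest (forwarding to local port 8080 is an arbitrary choice):</p>
<pre><code>kubectl -n kubernetes-hello-world port-forward service/python-server-internal-service 8080:80
# then open http://localhost:8080 in the browser
</code></pre>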
<p>Take a look here:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</a></p>
<p>Some other doc:
<a href="https://stackoverflow.com/questions/51468491/how-kubectl-port-forward-works">How kubectl port-forward works?</a></p>
|
<p>I have written a server-side SSE test API like this:</p>
<pre><code>public SseEmitter testStream(String question) {
SseEmitter emitter = new SseEmitter();
// Start asynchronous processing
new Thread(() -> {
try {
for (int i = 0; i < 10; i++) {
// Generate some event data
String eventData = "Event data " + i;
// Create Server-Sent Event object
ServerSentEvent event = ServerSentEvent.builder()
.event("message")
.data(eventData)
.build();
// Serialize event to string and send to client
String serializedEvent = JSON.toJSONString(event);
emitter.send(serializedEvent);
// Wait for one second before sending the next event
Thread.sleep(1000);
}
// Complete the SSE stream
emitter.complete();
} catch (Exception e) {
emitter.completeWithError(e);
}
}).start();
return emitter;
}
</code></pre>
<p>Then I exposed the API as a REST API:</p>
<pre><code>@GetMapping(path="/test",produces = MediaType.TEXT_EVENT_STREAM_VALUE)
SseEmitter testStream(@RequestParam(required = true) String question);
</code></pre>
<p>When I invoke this API on my local machine, the SSE messages are returned one by one every second, which is what I want. But after I deploy this API to the remote cloud server, the API waits a long time and returns the whole result at once. Am I missing something? What should I do to make the remote server return the messages one by one instead of all at once after a long wait? This is my test command:</p>
<pre><code># localhost test
curl -X GET -H 'Content-Type: application/json' -N http://localhost:11014/ai/stream/chat/test\?question\=1
# remote server test
curl -X GET -H 'Content-Type: application/json' -H 'x-request-id:1' -H 'x-access-token: eyJhbGciOiJIUzUx.balabala.sDM53g' -N https://ai.example.top/ai/stream/chat/test\?question\=1
</code></pre>
<p>The route to the remote service is like this:</p>
<p>client user>dns>nginx>kubernetes traefik>spring gateway in kubernetes>kubernetes service>kubernetes pod</p>
<p>I have tried adding a cache-control request header in the curl command, like <code>-H 'Cache-Control:no-cache'</code>, and the following response headers:</p>
<pre><code>response.setContentType("text/event-stream");
response.setCharacterEncoding("UTF-8");
response.setHeader("Cache-Control", "no-cache");
response.setHeader("Connection", "keep-alive");
</code></pre>
<p>That did not seem to work. I also tried to tell nginx not to cache the content, like this:</p>
<pre><code>server {
listen 443 ssl;
server_name ai.example.top;
ssl_certificate /etc/letsencrypt/live/poemhub/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/poemhub/privkey.pem;
if ($allowed = 0) {
return 403;
}
location / {
client_max_body_size 1m;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 3600;
proxy_pass http://127.0.0.1:8000;
}
location ^~ /post/websocket {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location ^~ /ai/stream {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8000;
proxy_cache_bypass $http_pragma;
}
}
</code></pre>
| <p>Finally I found that nginx was buffering the SSE messages; configuring nginx like this fixed the issue:</p>
<pre><code>location ^~ /ai/stream {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8000;
proxy_cache_bypass $http_pragma;
proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_buffering off;
proxy_cache off;
gzip off;
chunked_transfer_encoding off;
}
</code></pre>
|
<p>When I run the Kubernetes Dashboard in Windows Docker Desktop and click on "pods", either nothing is shown</p>
<blockquote>
<p>There is nothing to display here No resources found.</p>
</blockquote>
<p>or I get this error:</p>
<blockquote>
<p>deployments.apps is forbidden: User
"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard"
cannot list resource "deployments" in API group "apps" in the
namespace "default"</p>
</blockquote>
<p>Was there anything running? Yes.</p>
<p><a href="https://i.stack.imgur.com/GAgsW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GAgsW.png" alt="enter image description here" /></a></p>
<blockquote>
<p><strong>How can I get an overview of my pods?</strong></p>
</blockquote>
<p>What's the config? In the Windows Docker Desktop environment, I started with a fresh Kubernetes. I removed any old user "./kube/config" file.</p>
<p>To get the Kubernetes dashboard running, I followed this procedure:</p>
<ol>
<li><p>Get the dashboard: kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml</a></p>
</li>
<li><p>Because generating tokens via a standard procedure (as found in many places) did not work, I took the alternative short-cut:</p>
</li>
</ol>
<p>kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]'</p>
<ol start="3">
<li><p>After typing "kubectl proxy" the result is: Starting to serve on 127.0.0.1:8001</p>
</li>
<li><p>In a browser I started the dashboard:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default</p>
</li>
</ol>
<p>After clicking the "Skip" button, the dashboard opened.</p>
<p>Clicking on "Pods" (and nearly all other items) gave this error:</p>
<blockquote>
<p>pods is forbidden: User
"system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard"
cannot list resource "pods" in API group "" in the namespace
"kubernetes-dashboard" (could be "default" as well)</p>
</blockquote>
<p>It did not matter whether I chose the default namespace.</p>
<p><strong>ALTERNATIVE:</strong> As an alternative I tried to bind the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole.</p>
<ol>
<li>Preparations: create this file:</li>
</ol>
<blockquote>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
</blockquote>
<pre><code>$ kubectl apply -f s.yml
</code></pre>
<p>Create this file:</p>
<blockquote>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
</blockquote>
<pre><code>$ kubectl apply -f r.yml
</code></pre>
<p>Then run this command:</p>
<pre><code>$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
</code></pre>
<p>This (or similar alternative) command gives a lot of errors.</p>
<p>Breaking this command down in parts: kubectl -n kubernetes-dashboard get sa/admin-user ... gives:</p>
<p><a href="https://i.stack.imgur.com/35VdV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35VdV.png" alt="enter image description here" /></a></p>
<p>This command: kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}" gives no result.</p>
| <p>It's definitely a Permissions issue.</p>
<p>Bind the kubernetes-dashboard ServiceAccount to the cluster-admin ClusterRole.</p>
<p>Otherwise it doesn't have the privileges to be able to collect data from the cluster.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: dashboard-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: NAMESPACE-WHERE-DASHBOARD-IS
</code></pre>
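<p>If you prefer the token login over <code>--enable-skip-login</code>: on recent Kubernetes versions ServiceAccount token Secrets are no longer created automatically, which is why the <code>jsonpath</code> lookup in the question comes back empty. With kubectl v1.24+ you can request a token directly (using the <code>admin-user</code> ServiceAccount you already created):</p>
<pre><code>kubectl -n kubernetes-dashboard create token admin-user
</code></pre>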
|
<p>I have a 3-node test cluster: 1 head and 2 worker nodes (all Raspberry Pis). I would like to expose a specific folder on the head for read/write to each pod on the cluster (regardless of which node they are running on).</p>
<p>E.g. my head node has this folder: <code>/media/usb/test</code>. I want to mount this to my pods' filesystem to the <code>mnt/test</code> directory. Is it possible?</p>
<p>My pods are deployed with this config:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deploy
spec:
replicas: 2
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
terminationGracePeriodSeconds: 5
containers:
- name: test
image: rpi.home:5000/test
</code></pre>
| <p>You could use <code>hostPath</code>. More details <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">here</a>.
Note, though, that it is generally recommended not to use it:</p>
<blockquote>
<p>HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.</p>
</blockquote>
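<p>A minimal sketch of how that would look in the Deployment from the question; keep in mind that <code>hostPath</code> exposes the directory of whichever node each Pod is scheduled on, so you may also want to pin the Pods to the head node (e.g. with a <code>nodeSelector</code>):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      terminationGracePeriodSeconds: 5
      containers:
      - name: test
        image: rpi.home:5000/test
        volumeMounts:
        - name: usb-test
          mountPath: /mnt/test
      volumes:
      - name: usb-test
        hostPath:
          path: /media/usb/test
          type: Directory
</code></pre>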
|
<p>I have several images that do different things. Now, I expose them with commands like these:</p>
<pre><code>kubectl create deployment work_deployment_1 --image=username/work_image_1:0.0.1-SNAPSHOT
kubectl expose deployment work_deployment_1 --type=LoadBalancer --port=8000
</code></pre>
<p>and then</p>
<pre><code>kubectl create deployment work_deployment_2 --image=username/work_image_2:0.0.1-SNAPSHOT
kubectl expose deployment work_deployment_2 --type=LoadBalancer --port=9000
</code></pre>
<p>After creating and exposing the deployments, I check them with <code>kubectl get service</code>; the result looks like:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
work_deployment_1 LoadBalancer 10.245.197.226 159.65.210.104 8000:30798/TCP 30m
work_deployment_2 LoadBalancer 10.245.168.156 159.65.129.201 9000:32105/TCP 51s
</code></pre>
<p>Can I make the deployment (or deployments) expose <code>same_external-ip:8000</code> and <code>same_external-ip:9000</code>, instead of the addresses above (<code>159.65.210.104:8000</code> and <code>159.65.129.201:9000</code>)?</p>
| <p>You will need to have an Ingress controller installed in your cluster to handle incoming traffic and route it to the appropriate service. Examples of Ingress controllers include Nginx Ingress, Traefik, and Istio.</p>
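<p>For example, with a controller such as ingress-nginx installed, a single Ingress (one external IP) can fan the two workloads out by path instead of by port. A sketch with illustrative service names (the services themselves can then stay as plain <code>ClusterIP</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: work-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /work1
        pathType: Prefix
        backend:
          service:
            name: work-service-1
            port:
              number: 8000
      - path: /work2
        pathType: Prefix
        backend:
          service:
            name: work-service-2
            port:
              number: 9000
</code></pre>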
<p>Another way around this, which we use in Azure and Google Cloud, is to expose the services via an App Gateway in Azure or an HTTPS Global LB in GCP.</p>
<p>In the GCP case, the services are exposed on the LB's single Anycast IP.</p>
<p>In the GCP case the workflow is:
<em>Create a Kubernetes service > Create a backend service that references each Kubernetes service > Create a URL map that maps the incoming requests to the appropriate backend service based on the requested URL or hostname > Create a target HTTP proxy that references the URL map > Create a Google Cloud HTTPS load balancer and configure it to use the target HTTP proxy</em></p>
<p>Each time, the front end uses the SAME Anycast IP with different ports.</p>
<p>In your private cloud case I would suggest using Traefik; you can follow their documentation on this: <a href="https://doc.traefik.io/traefik/providers/kubernetes-ingress/" rel="nofollow noreferrer">https://doc.traefik.io/traefik/providers/kubernetes-ingress/</a></p>
|
<p>I have a Nodejs microservice and a Kafka broker running in the same cluster.</p>
<p>The kafka broker and zookeeper are running without errors, but I am not sure how to connect to them.</p>
<p><strong>kafka.yaml</strong></p>
<pre><code># create namespace
apiVersion: v1
kind: Namespace
metadata:
name: "kafka"
labels:
name: "kafka"
---
# create zookeeper service
apiVersion: v1
kind: Service
metadata:
labels:
app: zookeeper-service
name: zookeeper-service
namespace: kafka
spec:
type: NodePort
ports:
- name: zookeeper-port
port: 2181
nodePort: 30181
targetPort: 2181
selector:
app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: zookeeper
name: zookeeper
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
spec:
containers:
- image: wurstmeister/zookeeper
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
---
# deploy kafka broker
apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-broker
name: kafka-service
namespace: kafka
spec:
ports:
- port: 9092
selector:
app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kafka-broker
name: kafka-broker
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: kafka-broker
template:
metadata:
labels:
app: kafka-broker
spec:
hostname: kafka-broker
containers:
- env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
# value: 10.244.0.35:2181
value: zookeeper-service:2181
- name: KAFKA_LISTENERS
value: PLAINTEXT://:9092
# - name: KAFKA_ADVERTISED_HOST_NAME
# value: kafka-broker
# - name: KAFKA_ADVERTISED_PORT
# value: "9092"
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://kafka-broker:9092
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
name: kafka-broker
ports:
- containerPort: 9092
</code></pre>
<p><a href="https://levelup.gitconnected.com/how-to-deploy-apache-kafka-with-kubernetes-9bd5caf7694f" rel="nofollow noreferrer">source</a></p>
<p>Connecting using <code>kafka-service:9092</code> or <code>kafka-broker:9092</code> doesn't work and leads to a timeout.</p>
<p><strong>kafka.js</strong></p>
<pre><code>const { Kafka } = require('kafkajs')
const kafka = new Kafka({
clientId: 'my-app',
brokers: ['PLAINTEXT://kafka-broker:9092'], // !!! connection string
})
async function createProducer() {
const producer = kafka.producer()
await producer.connect()
await producer.send({
topic: 'test-topic',
messages: [{ value: 'Hello KafkaJS user!' }],
})
await producer.disconnect()
}
createProducer()
</code></pre>
<pre><code>[auth-pod] {"level":"WARN","timestamp":"2023-03-24T15:35:41.511Z","logger":"kafkajs","message":"KafkaJS v2.0.0 switched default partitioner. To retain the same partitioning behavior as in previous versions, create the producer with the option \"createPartitioner: Partitioners.LegacyPartitioner\". See the migration guide at https://kafka.js.org/docs/migration-guide-v2.0.0#producer-new-default-partitioner for details. Silence this warning by setting the environment variable \"KAFKAJS_NO_PARTITIONER_WARNING=1\""}
[auth-pod] Listening on port 3000...
[auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:41.586Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":0,"retryTime":292}
[auth-pod] Connected to: mongodb://auth-mongo-srv:27017/auth
[auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:41.881Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":1,"retryTime":596}
[auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:42.479Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":2,"retryTime":1184}
[auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:43.665Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":3,"retryTime":2782}
[auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:46.449Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":4,"retryTime":5562}
[auth-pod] {"level":"ERROR","timestamp":"2023-03-24T15:35:52.015Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).","retryCount":5,"retryTime":12506}
[auth-pod] node:internal/process/promises:288
[auth-pod] triggerUncaughtException(err, true /* fromPromise */);
[auth-pod] ^
[auth-pod]
[auth-pod] KafkaJSNonRetriableError
[auth-pod] Caused by: KafkaJSConnectionError: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).
[auth-pod] at /app/node_modules/kafkajs/src/network/connection.js:254:11
[auth-pod] ... 8 lines matching cause stack trace ...
[auth-pod] at async createProducer (/app/src/kakfka/connect.js:11:3) {
[auth-pod] name: 'KafkaJSNumberOfRetriesExceeded',
[auth-pod] retriable: false,
[auth-pod] helpUrl: undefined,
[auth-pod] retryCount: 5,
[auth-pod] retryTime: 12506,
[auth-pod] [cause]: KafkaJSConnectionError: Failed to connect: Port should be >= 0 and < 65536. Received type number (NaN).
[auth-pod] at /app/node_modules/kafkajs/src/network/connection.js:254:11
[auth-pod] at new Promise (<anonymous>)
[auth-pod] at Connection.connect (/app/node_modules/kafkajs/src/network/connection.js:167:12)
[auth-pod] at ConnectionPool.getConnection (/app/node_modules/kafkajs/src/network/connectionPool.js:56:24)
[auth-pod] at Broker.connect (/app/node_modules/kafkajs/src/broker/index.js:86:52)
[auth-pod] at async /app/node_modules/kafkajs/src/cluster/brokerPool.js:93:9
[auth-pod] at async /app/node_modules/kafkajs/src/cluster/index.js:107:14
[auth-pod] at async Cluster.connect (/app/node_modules/kafkajs/src/cluster/index.js:146:5)
[auth-pod] at async Object.connect (/app/node_modules/kafkajs/src/producer/index.js:219:7)
[auth-pod] at async createProducer (/app/src/kakfka/connect.js:11:3) {
[auth-pod] retriable: true,
[auth-pod] helpUrl: undefined,
[auth-pod] broker: 'PLAINTEXT:NaN',
[auth-pod] code: undefined,
[auth-pod] [cause]: undefined
[auth-pod] }
[auth-pod] }
[auth-pod]
[auth-pod] Node.js v18.15.0
</code></pre>
<p>If I use the IP of the pod <code>kafka-broker-5c7f7d4f77-nxlwm</code> directly <code>brokers: ['10.244.0.94:9092']</code>, I also get an error. Using the default namespace instead of a separate namespace didn't make a difference.</p>
<p>After switching to a StatefulSet based on <a href="https://stackoverflow.com/a/57261043/20898396">this</a> answer, I can connect using the IP of <code>kafka-broker-0</code> <code>'10.244.0.110:9092'</code>, but I get another error: <code>KafkaJSProtocolError: Replication-factor is invalid</code>. I don't know why the dns resolution would fail, but using the name <code>'kafka-broker-0:9092'</code>, leads to the same error as before <code>"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout"</code>.</p>
<p>Based on</p>
<blockquote>
<p>If you have multiple REST Proxy pods running, Kubernetes will route
the traffic to one of them. <a href="https://www.confluent.io/wp-content/uploads/Recommendations-for-Deploying-Apache-Kafka-on-Kubernetes.pdf" rel="nofollow noreferrer">source</a></p>
</blockquote>
<p>I should be able to use the Kubernetes service <code>kafka-service</code> to load balance requests without hard coding an IP address. (There wasn't a <code>targetPort</code>, but it still doesn't work after adding <code>targetPort: 9092</code>, although I am not sure which protocol to use)</p>
<hr />
<p>I looked at the logs of the kafka-broker pod and noticed an exception.</p>
<pre><code>[2023-03-24 18:01:25,123] WARN [Controller id=1, targetBrokerId=1] Error connecting to node kafka-broker:9092 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: kafka-broker
at java.base/java.net.InetAddress$CachedAddresses.get(Unknown Source)
at java.base/java.net.InetAddress.getAllByName0(Unknown Source)
at java.base/java.net.InetAddress.getAllByName(Unknown Source)
at java.base/java.net.InetAddress.getAllByName(Unknown Source)
at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:111)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:513)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:467)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:172)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:985)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:311)
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:65)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:292)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:246)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
</code></pre>
<p>I think that specifying <code>KAFKA_ADVERTISED_LISTENERS</code> should be sufficient (<a href="https://stackoverflow.com/a/51632885/20610346">answer</a>), so I am guessing there is a problem with dns resolution.</p>
<p>Using a headless service by adding <code>clusterIP: "None"</code> and changing the name to <code>kafka-broker</code> in case that <code>PLAINTEXT://kafka-broker:9092</code> uses the service and not the deployment didn't help.</p>
<pre><code># create namespace
apiVersion: v1
kind: Namespace
metadata:
name: "kafka"
labels:
name: "kafka"
---
# create zookeeper service
apiVersion: v1
kind: Service
metadata:
labels:
app: zookeeper-service
name: zookeeper-service
namespace: kafka
spec:
type: NodePort
ports:
- name: zookeeper-port
port: 2181
nodePort: 30181
targetPort: 2181
selector:
app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: zookeeper
name: zookeeper
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
spec:
containers:
- image: wurstmeister/zookeeper
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
---
# deploy kafka broker
apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-broker
name: kafka-broker
namespace: kafka
spec:
clusterIP: "None"
# ports:
# - protocol: TCP
# port: 9092
# targetPort: 9092
selector:
app: kafka-broker
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: kafka-broker
name: kafka-broker
namespace: kafka
spec:
# replicas: 1
selector:
matchLabels:
app: kafka-broker
template:
metadata:
labels:
app: kafka-broker
spec:
hostname: kafka-broker
containers:
- env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
# value: 10.244.0.35:2181
value: zookeeper-service:2181
- name: KAFKA_LISTENERS
value: PLAINTEXT://:9092
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://kafka-broker:9092
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
name: kafka-broker
ports:
- containerPort: 9092
</code></pre>
<p><a href="https://github.com/bogdan-pechounov/microservices-quiz-app/tree/f9c547cc160c2317222bd8bda99e10bff56818f6" rel="nofollow noreferrer">full code</a></p>
<p>Edit:
Not sure why I had a <code>KafkaJSProtocolError: Replication-factor is invalid</code> error, but changing the service as follows prevents it. (It might be because I was using the same name for the service and deployment. I don't fully understand headless services, but I also added a port.)</p>
<pre><code># create namespace
apiVersion: v1
kind: Namespace
metadata:
name: "kafka"
labels:
name: "kafka"
---
# create zookeeper service
apiVersion: v1
kind: Service
metadata:
labels:
app: zookeeper-service
name: zookeeper-service
namespace: kafka
spec:
# type: NodePort
ports:
- name: zookeeper-port
port: 2181
# nodePort: 30181
targetPort: 2181
selector:
app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: zookeeper
name: zookeeper
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
spec:
containers:
- image: wurstmeister/zookeeper
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
---
# deploy kafka broker
apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-srv
name: kafka-srv
namespace: kafka
spec:
# headless service
clusterIP: "None"
ports:
- name: foo
port: 9092
selector:
app: kafka-broker
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: kafka-broker
name: kafka-broker
namespace: kafka
spec:
# replicas: 1
selector:
matchLabels:
app: kafka-broker
template:
metadata:
labels:
app: kafka-broker
spec:
hostname: kafka-broker
containers:
- env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper-service:2181
- name: KAFKA_LISTENERS
value: PLAINTEXT://:9092
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://kafka-broker:9092
image: wurstmeister/kafka
imagePullPolicy: IfNotPresent
name: kafka-broker
ports:
- containerPort: 9092
</code></pre>
<pre><code>const { Kafka } = require('kafkajs')
const kafka = new Kafka({
clientId: 'my-app',
brokers: ['10.244.0.64:9092'],
})
async function createProducer() {
const producer = kafka.producer()
try {
await producer.connect()
console.log('connected', producer)
// await producer.send({
// topic: 'test-topic',
// messages: [{ value: 'Hello KafkaJS user!' }],
// })
// await producer.disconnect()
} catch (err) {
console.log("Couldn' connect to broker")
console.error(err)
}
}
</code></pre>
<pre><code>[auth-pod] connected {
[auth-pod] connect: [AsyncFunction: connect],
[auth-pod] disconnect: [AsyncFunction: disconnect],
[auth-pod] isIdempotent: [Function: isIdempotent],
[auth-pod] events: {
[auth-pod] CONNECT: 'producer.connect',
[auth-pod] DISCONNECT: 'producer.disconnect',
[auth-pod] REQUEST: 'producer.network.request',
[auth-pod] REQUEST_TIMEOUT: 'producer.network.request_timeout',
[auth-pod] REQUEST_QUEUE_SIZE: 'producer.network.request_queue_size'
[auth-pod] },
[auth-pod] on: [Function: on],
[auth-pod] send: [AsyncFunction: send],
[auth-pod] sendBatch: [AsyncFunction: sendBatch],
[auth-pod] transaction: [AsyncFunction: transaction],
[auth-pod] logger: [Function: getLogger]
[auth-pod] }
</code></pre>
<p>Edit 2: When connecting successfully using the ip address, I also get <code>java.net.UnknownHostException: kafka-broker</code> in the kafka-broker-0 pod. (The error keeps repeating as well) I thought that the pod was being reached by kafkajs and then the pod threw an error, but the error happens regardless. Matching the service name with the advertised host name prevents it.</p>
<pre><code>---
# deploy kafka broker
apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-broker
name: kafka-broker
# namespace: kafka
spec:
# headless service
clusterIP: "None"
ports:
- name: foo
port: 9092
selector:
app: kafka-broker
</code></pre>
<p>I can't connect to the pod <code>kafka-broker-0:9092</code> directly, but now using the service name <code>kafka-broker:9092</code> works.</p>
| <p>From outside k8s, your app cannot resolve cluster DNS names. You'll need to use a ClusterIP / NodePort address (but you've set that to None). This will also require you to advertise it. Refer to <a href="https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/" rel="nofollow noreferrer">https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/</a></p>
<ol>
<li><p>Never use IPs for services. This includes Zookeeper connect property for Kafka. Ref. <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
</li>
<li><p>I highly recommend not writing your own Kafka/Zookeeper specs. Use a Helm Chart or Operator such as <a href="https://strimzi.io" rel="nofollow noreferrer">https://strimzi.io</a> - mentioned in blog above</p>
</li>
</ol>
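<p>If you do stick with your own manifests, one common pattern is to define two listeners: an internal one advertised with the Service's cluster DNS name, and an external one advertised with a node-reachable address exposed through a NodePort. The snippet below is only an illustrative sketch for the wurstmeister image — the listener names, the NodePort value 30094 and the node address are assumptions you would need to adapt:</p>
<pre><code>        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
        - name: KAFKA_LISTENERS
          value: INTERNAL://:9092,EXTERNAL://:9094
        - name: KAFKA_ADVERTISED_LISTENERS
          # internal clients use the Service DNS name, external clients the node address + NodePort
          value: INTERNAL://kafka-broker.kafka.svc.cluster.local:9092,EXTERNAL://<node-ip>:30094
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: INTERNAL
</code></pre>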
|
| <p><strong>I am deploying a PostgreSQL cluster on Kubernetes running 3 instances. How do you add the PostgreSQL connection string URL using a Kubernetes YAML file?</strong></p>
<blockquote>
<p>postgresql://bigdata:bigdata@dbhost1:5432,dbhost2:5432/bigdata?target_session_attrs=primary</p>
</blockquote>
| <p>Try like this:</p>
<pre><code>jdbc:postgresql://<database_host>:<port>/<database_name>
</code></pre>
<p>Credentials will need to be managed via Secrets.</p>
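<p>For example, a minimal sketch (the Secret name, key names and environment variable names here are just placeholders):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  username: bigdata
  password: bigdata
---
# in your Deployment's container spec
env:
- name: DB_URL
  value: jdbc:postgresql://dbhost1:5432/bigdata
- name: DB_USERNAME
  valueFrom:
    secretKeyRef:
      name: postgres-credentials
      key: username
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgres-credentials
      key: password
</code></pre>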
|
<p>We need to backup PV on an on-premise Kubernetes cluster, so we installed Velero, linked to MinIO, with Velero's File System Backup. No PV are backed up and no error is shown, only this mention appears in the logs "Persistent volume is not a supported volume type for snapshots, skipping". Does someone have a clue to be able to backup PV on an on-premise cluster without having to use external Cloud providers ?</p>
<h3>Details</h3>
<p>Velero was installed using the following command (credentials-minio containing the MinIO's bucket access keys):</p>
<pre><code>velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket ka-backup \
--secret-file ./credentials-minio \
--use-node-agent \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio-1679064025.minio.svc:9000 \
--snapshot-location-config region=minio
</code></pre>
<p>The result of a backup of namespaces shows no error, nor warnings as seen below and the status of the phase is completed.</p>
<pre><code>velero backup describe acme -n velero --kubeconfig ../VKATEST.config
Name: acme
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.25.6
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=25
Phase: Completed
Errors: 0
Warnings: 0
Namespaces:
Included: acme
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
CSISnapshotTimeout: 10m0s
Hooks: <none>
Backup Format Version: 1.1.0
Started: 2023-03-20 14:40:18 +0100 CET
Completed: 2023-03-20 14:40:29 +0100 CET
Expiration: 2023-04-19 15:40:18 +0200 CEST
Total items to be backed up: 437
Items backed up: 437
Velero-Native Snapshots: <none included>
</code></pre>
<p>In the logs we can read at the end of the following extract, that: "Persistent volume is not a supported volume type for snapshots, skipping".</p>
<pre><code>level=info msg="Backing up item" backup=velero/acme logSource="pkg/backup/item_backupper.go:132" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= resource=persistentvolumes
level=info msg="Executing takePVSnapshot" backup=velero/acme logSource="pkg/backup/item_backupper.go:412" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= resource=persistentvolumes
level=info msg="label \"topology.kubernetes.io/zone\" is not present on PersistentVolume, checking deprecated label..." backup=velero/acme logSource="pkg/backup/item_backupper.go:445" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes
level=info msg="label \"failure-domain.beta.kubernetes.io/zone\" is not present on PersistentVolume" backup=velero/acme logSource="pkg/backup/item_backupper.go:449" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes
level=info msg="zone info not available in nodeAffinity requirements" backup=velero/acme logSource="pkg/backup/item_backupper.go:454" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes
level=info msg="No volume ID returned by volume snapshotter for persistent volume" backup=velero/acme logSource="pkg/backup/item_backupper.go:478" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes volumeSnapshotLocation=default
level=info msg="Persistent volume is not a supported volume type for snapshots, skipping." backup=velero/acme logSource="pkg/backup/item_backupper.go:489" name=cassandra-logs-local-storage-fra-vkatest-ml5 namespace= persistentVolume=cassandra-logs-local-storage-fra-vkatest-ml5 resource=persistentvolumes
</code></pre>
<p>Please let us know if you have a clue to be able to backup PV on an on-premise cluster without having to use external Cloud providers.</p>
| <p>Have you added the option --default-volumes-to-restic (for older versions like 1.7) or --default-volumes-to-fs-backup (newer versions) when creating the Velero backup?
It could be the same problem as: <a href="https://stackoverflow.com/questions/75803242/velero-volume-snapshots-contain-no-data-when-restoring-a-backup">Velero - Volume Snapshots contain no data when restoring a backup</a></p>
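<p>For example (a sketch — substitute your own backup name and namespaces):</p>
<pre><code>velero backup create acme-fsb --include-namespaces acme --default-volumes-to-fs-backup
</code></pre>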
|
<p>We have recently introduced Karpenter autoscaler in our Kubernetes infrastructure.</p>
<p>One of its strength is the <a href="https://karpenter.sh/preview/concepts/deprovisioning/" rel="nofollow noreferrer">deprovisioning mechanism</a>.</p>
<p>However, given that we have a lot of stateful workloads, this can cause some serious issues at times. We are taking several measures to somewhat control it.</p>
<p>Meanwhile, we realize that we have no ways to observe how the measures we take mitigate the issue.</p>
<p>Hence the question: Is there a way to monitor Kubernetes POD movements, i.e. tracking how many times a POD (not the containers) may have been re-scheduled / restarted to move from one node to another, etc.?</p>
| <p>There is a metric from kube-state-metrics, <code>kube_pod_status_scheduled_time</code>, that you could use. If the <code>start time</code> is later than the <code>scheduled time</code>, the pod has been (re)scheduled:</p>
<pre><code>kube_pod_start_time{namespace="my-ns"} - kube_pod_status_scheduled_time{namespace="my-ns"} > 0
</code></pre>
<p>You can also find the status of pods (<code>Evicted</code>, <code>NodeAffinity</code>, <code>Shutdown</code>, ...) with the metric <code>kube_pod_status_reason</code>.</p>
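<p>For instance, a sketch of a query that lists pods currently reporting one of those reasons (this assumes kube-state-metrics is being scraped, and the reason label values shown are only examples):</p>
<pre><code>sum by (namespace, pod, reason) (kube_pod_status_reason{reason=~"Evicted|NodeAffinity|Shutdown"} == 1)
</code></pre>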
|
| <p>How can I do a health check for cnosdb in Kubernetes? As we know, health checks are important in Kubernetes; commonly, a binary should provide a RESTful API that Kubernetes can call periodically for health checking. Does cnosdb provide such an API?</p>
<p>What is the best practice for health-checking cnosdb in Kubernetes?</p>
| <p>It's a very general question, which in my opinion leaves too much room for interpretation.</p>
<p>In general, however, you should think about configuring a Monitoring stack, perhaps based on Prometheus, Grafana and Alertmanager that allows you to extrapolate the metrics that are exposed by the DB.</p>
<p>Otherwise, you can do something very rough by hitting the metrics endpoints directly with curl:</p>
<pre><code>curl http://127.0.0.1:21001/metrics
curl http://127.0.0.1:21002/metrics
curl http://127.0.0.1:21003/metrics
</code></pre>
<p><a href="https://docs.cnosdb.com/en/cluster/cluster.html#meta-custer-startup-process" rel="nofollow noreferrer">https://docs.cnosdb.com/en/cluster/cluster.html#meta-custer-startup-process</a></p>
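<p>If you want Kubernetes itself to perform the check, those same endpoints could back a probe. A minimal sketch (the port 21001 and path /metrics are taken from the curl examples above — adjust them to the ports your cnosdb pods actually expose):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /metrics
    port: 21001
  initialDelaySeconds: 30
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /metrics
    port: 21001
  periodSeconds: 10
</code></pre>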
|
<p>I have apache flink deployed to AWS EKS (1.21) with version 1.17-SNAPSHOT and state storage in AWS S3. This setup works great.</p>
<p>I am now trying to deploy the same version to Azure AKS (1.22 - minimum available version for AKS) and store the state in Azure Blob.</p>
<p>In both cases I use the <a href="https://github.com/apache/flink-kubernetes-operator" rel="nofollow noreferrer">apache flink kubernetes operator</a>, version 1.3.1.</p>
<p>If I disable checkpoints and savepoints my application works great.</p>
<p>Once enabled, I get the following error at job manager startup:</p>
<pre><code>Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not found
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2592) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2712) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.<init>(Groups.java:107) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.<init>(Groups.java:102) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:338) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:300) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:575) ~[?:?]
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1425) ~[?:?]
at org.apache.flink.fs.azurefs.AbstractAzureFSFactory.create(AbstractAzureFSFactory.java:78) ~[?:?]
at org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:62) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:508) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:67) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage.createCheckpointStorage(FileSystemCheckpointStorage.java:324) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:333) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:248) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:524) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:321) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:163) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:365) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:210) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:136) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:371) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:348) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
at java.lang.Thread.run(Unknown Source) ~[?:?]
</code></pre>
<p>I followed this guide: <a href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/</a> , the <code>azure-fs-hadoop</code> is properly configured.</p>
<p>I tried settings <code>hadoop.flink.flink.hadoop.hadoop.security.group.mapping: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</code> in flink_conf.yaml</p>
<p>The checkpoint and savepoint configuration:</p>
<pre><code>state.checkpoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/<cp_path>
state.savepoints.dir: wasb://checkpoints@<account>.blob.core.windows.net/<sp_path>
</code></pre>
<p>If I try to set the properties this way, I get messages about anonymous access:</p>
<pre><code>state.checkpoints.dir: wasb://checkpoints@$<account>.blob.core.windows.net/<cp_path>
state.savepoints.dir: wasb://checkpoints@$<account>.blob.core.windows.net/<sp_path>
</code></pre>
<p>Message:</p>
<pre><code>The account access key is not configured for wasb://checkpoints@$<account>.blob.core.windows.net/<cp_path>. Now try anonymous access.
</code></pre>
<p>any suggestions will be appreciated.</p>
| <p>This issue should be fixed in flink 1.17.1 or higher: <a href="https://issues.apache.org/jira/browse/FLINK-31612" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/FLINK-31612</a></p>
|
<h2>Background</h2>
<p>I have two Kubernetes nodepools - <code>A</code> and <code>B</code>.</p>
<p><code>A</code> nodepool is the right hardware and the preferred one. <code>B</code> be nodepool is less preferred since it using different hardware.</p>
<p>My pods allocation preferring nodepool <code>A</code>. But, <code>A</code> is not always available. Its available 70% of the time.</p>
<p>Once a pod spawned in nodepool <code>B</code>, it will stay there forever. This is inefficient for me.</p>
<h2>Question:</h2>
<p>I want to schedule auto termination of a pod, if its living inside of nodepool <code>B</code>. The auto termination can be done after a fix time (or even better random time between X to Y minutes).</p>
<p>How to implement this?</p>
| <p>The need is very clear.</p>
<p>You can add this piece of code to your Deployment (.spec.template.spec.affinity.nodeAffinity):</p>
<pre><code>      nodeAffinity:
        # Soft preference for nodes in nodepool A
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
            - key: nodepool-a
              operator: In
              values:
              - "true"
</code></pre>
<p>Assuming that the nodes in nodepool-a carry the label "nodepool-a=true" (it could be any other label), with this configuration the Pods will preferentially be assigned to nodepool-a; if nodepool-a is not available, they are scheduled on other nodepools that don't match the label.</p>
<p>With this configuration, you just need to schedule a rollout restart of your Deployment to move the Pods to nodepool-a when it is available.</p>
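<p>For example (replace the Deployment name and namespace with your own):</p>
<pre><code>kubectl rollout restart deployment/<your-deployment> -n <your-namespace>
</code></pre>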
<p>Unfortunately there is no other way to achieve your goal; maybe you could automate the Deployment rollout restart triggered by a monitoring event, which after some time "notifies" that the nodepool-a is available.</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<p>Taking a look at the <em>nodeaffinity</em> code, I understand that <em>requiredDuringSchedulingRequiredDuringExecution</em> will be implemented, which should even evict Pods that are on nodes that do not match the label.</p>
<p>There are no details on when it will be implemented though.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/v1.26.3/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go#L36" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.26.3/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go#L36</a> (line 258)</p>
|
<pre><code>echo "apiVersion: v1
kind: Node
metadata:
name: host-cluster-control-plane-64j47
labels:
beta.kubernetes.io/arch: amd64
" | yq -o p
</code></pre>
<p>Result:</p>
<pre><code>apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.beta.kubernetes.io/arch = amd64
</code></pre>
<p>That's almost what I want. I am looking for the key to get values.</p>
<p>I could use <code>metadata.name</code> like this:</p>
<pre><code>echo "apiVersion: v1
kind: Node
metadata:
name: host-cluster-control-plane-64j47
labels:
beta.kubernetes.io/arch: amd64
" | yq '.metadata.name'
</code></pre>
<p>But the <code>-o p</code> option of <code>yq</code> does not quote the key, if needed.</p>
<p>I can't use <code>metadata.labels.beta.kubernetes.io/arch</code> as key, since the correct syntax is <code>metadata.labels["beta.kubernetes.io/arch"]</code>.</p>
<p>Is there an automated way to get the keys of a yaml file so that I can use the keys in <code>yq</code> (or <code>jq</code>)?</p>
<p>The desired output would be something like this:</p>
<pre><code>apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels["beta.kubernetes.io/arch"] = amd64
</code></pre>
<p>I am looking for the valid key, because I want to create a second command line to select these values.</p>
<p>For example:</p>
<pre><code>❯ k get nodes -o yaml | yq '.items[].metadata.labels["beta.kubernetes.io/arch"]'
amd64
amd64
amd64
</code></pre>
| <p>You can get close by doing something like:</p>
<pre class="lang-bash prettyprint-override"><code>yq '(.. | key | select(test("\."))) |= ("[\"" + . + "\"]")' file.yaml -op
apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.["beta.kubernetes.io/arch"] = amd64
</code></pre>
<p>Or you could do:</p>
<pre><code>yq '(.. | key | select(test("\."))) |= sub("\.", "\.")' file.yaml -op
apiVersion = v1
kind = Node
metadata.name = host-cluster-control-plane-64j47
metadata.labels.beta\\.kubernetes\\.io/arch = amd64
</code></pre>
<p>BTW - I'm not sure how it's supposed to be escaped in property files; I'd be willing to update yq to do it natively if someone raises a bug with details on GitHub...</p>
<p>Disclaimer: I wrote yq</p>
|
<p>I'd like to downgrade the load balancer of my GKE Service from Premium tier, to Standard tier. To do that, I added <code>cloud.google.com/network-tier: Standard</code> to the annotations of my service. The problem now is that no load balancer is getting created in the <code>Load Balancer</code> section, and I can't connect to my kubernetes service anymore.</p>
<p>The service itself was installed by helm, but here's the resulting YAML from GKE:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/network-tier: Standard
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
creationTimestamp: "2023-03-29T22:15:04Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.5.1
helm.sh/chart: ingress-nginx-4.4.2
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.70.128.216
clusterIPs:
- 10.70.128.216
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
loadBalancerIP: <<REDACTED>>
ports:
- name: http
nodePort: 31109
port: 80
protocol: TCP
targetPort: http
- name: https
nodePort: 31245
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer: {}
</code></pre>
| <p>I found the solution: the problem was that my reserved IP address was of Premium tier. Now that I have changed it, everything seems to be in working order.</p>
|
<p>I'm using Grafana with Helm <a href="https://github.com/grafana/helm-charts/tree/main/charts/grafana" rel="nofollow noreferrer">https://github.com/grafana/helm-charts/tree/main/charts/grafana</a>. I would like to switch from SQLite 3 to PostgreSQL as my backend database. However, I'm concerned about the security of my database credentials, which are currently stored in the values.yaml file as plain text.</p>
<p>What is the recommended way to switch to PostgreSQL and hide the database credentials in a secure way? Can I use Kubernetes secrets or some other mechanism to achieve this? (Please I need to know where, in the values.yaml file, I have to do the configuration)</p>
<p>I'm connecting Grafana with the PostgreSQL database inside the grafana.ini section in the values.yaml, E.g.:</p>
<pre><code>grafana.ini:
database:
type: "postgres"
host: "db.postgres.database.azure.com"
name: "grafana-db"
user: "grafana-db-user"
password: ""grafana-db-pass"
ssl_mode: "require"
</code></pre>
<p>Thanks in advance for your help!</p>
<p>I've tried to include use the env section but it's not working.</p>
| <blockquote>
<p>Had you already seen this section from your link? How to securely reference secrets in grafana.ini –
jordanm</p>
</blockquote>
<p>Thank you so much @jordanm :)</p>
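<p>For reference, one possible way to wire this up (an illustrative sketch, not an exact quote of that section — the Secret name <code>grafana-db-credentials</code> and its keys are assumptions): create the Secret yourself, expose it to the pod via the chart's <code>envFromSecret</code> value, and reference the variables in <code>grafana.ini</code> with Grafana's <code>$__env{}</code> syntax.</p>
<pre><code># values.yaml (sketch)
envFromSecret: grafana-db-credentials   # Secret containing keys DB_USER and DB_PASSWORD

grafana.ini:
  database:
    type: postgres
    host: db.postgres.database.azure.com
    name: grafana-db
    user: $__env{DB_USER}
    password: $__env{DB_PASSWORD}
    ssl_mode: require
</code></pre>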
|
<p>All my kustomize declarations for my kubernetes cluster are stored in a folder with this structure:</p>
<pre><code>.
├── apps
│ └── nginx-test
│ ├── base
│ │ ├── deployment.yaml
│ │ ├── ingress.yaml
│ │ ├── kustomization.yaml
│ │ ├── namespace.yaml
│ │ └── service.yaml
│ ├── dev
│ │ ├── ingress.yaml
│ │ └── kustomization.yaml
│ └── tollana
│ ├── ingress.yaml
│ └── kustomization.yaml
├── infra
│ ├── cert-manager
│ │ ├── base
│ │ │ ├── helmChart.yaml
│ │ │ ├── kustomization.yaml
│ │ │ ├── namespace.yaml
│ │ │ └── values.yaml
│ │ ├── dev
│ │ │ └── kustomization.yaml
│ │ └── tollana
│ │ └── kustomization.yaml
│ ├── enroute
│ │ ├── base
│ │ │ ├── helmChart.yaml
│ │ │ ├── kustomization.yaml
│ │ │ ├── namespace.yaml
│ │ │ └── values.yaml
│ │ ├── dev
│ │ │ ├── add-namespace.yaml
│ │ │ ├── externalIP.yaml
│ │ │ └── kustomization.yaml
│ │ └── tollana
│ │ ├── add-namespace.yaml
│ │ ├── externalIP.yaml
│ │ └── kustomization.yaml
</code></pre>
<p>I want to have tilt to sync all kustomizations in the dev directories. So far I got this:</p>
<pre><code>baseFolders = ["./apps", "./infra"]
for folder in baseFolders:
appFolders = listdir(folder)
for appFolder in appFolders:
if os.path.exists("kubectl kustomize --enable-helm {}/dev/kustomization.yaml").format(os.path.join(folder, appFolder)):
k8s_yaml(local("kubectl kustomize --enable-helm {}/dev").format(os.path.join(folder, appFolder)))
</code></pre>
<p>But <code>listdir("directory")</code> only lists the files in <em>directory</em>.
Is there a way to get all folders in a specific directory?</p>
<p><code>os.listdir()</code> doesn't work:</p>
<pre><code>Error: module has no .listdir field or method
</code></pre>
| <p>This behaviour is not supported (yet). There is an open issue for it: <a href="https://github.com/tilt-dev/tilt/issues/5268" rel="nofollow noreferrer">https://github.com/tilt-dev/tilt/issues/5268</a></p>
|
<p>Since Kubernetes 1.25, the <a href="https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">ephemeral containers</a> are stable.</p>
<p>I can inject ephemeral container into running pod but when the purpose of debug container ends I'd like to remove the container from the pod but I still see it with Terminated state.</p>
<p>The docs currently say that to delete the container I must delete the whole pod (similar to a copied pod), but I don't think that is right.</p>
<p>How can I delete an ephemeral container from a running pod without destroying the pod?</p>
| <p>Unfortunately it isn't possible to do what you say.</p>
<blockquote>
<p>Ephemeral containers are created using a special <em>ephemeralcontainers</em> handler in the API rather than by adding them directly to <em>pod.spec</em>, so it's not possible to add an ephemeral container using <em>kubectl edit</em>.</p>
</blockquote>
<blockquote>
<p>Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#understanding-ephemeral-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#understanding-ephemeral-containers</a></p>
|
| <p>I would like to know how the AKS cluster autoscaler chooses which node pool to add a node to in a multiple node pool environment.</p>
<p>For instance, if I have a node pool tainted for a specific applications, will the autoscaler automatically detect the taint and only scale the node pool up if there are some pending pods which can be scheduled on the nodes ? Or will it scale a random node pool in the cluster ?</p>
<p>There is nothing about it on <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler</a>.</p>
| <p>Below are my test results:</p>
<p><strong>Scenario1:</strong></p>
<p>If there are multiple nodepools and all of them have taints applied, then the cluster autoscaler will scale only the nodepool whose taints match the tolerations of the pending pods, i.e. it will scale only the nodepool that matches the corresponding taints/tolerations.</p>
<p><strong>Scenario2:</strong></p>
<p>If you have 3 nodepools and a taint is applied to only one of them - once that nodepool is full, the pending pods can go to the other nodepools (on which no taints were applied), and there is a high chance of randomly auto-scaling those other nodepools as well!</p>
<p><strong>Please Note:</strong> Taints & Tolerations alone will not guarantee that the pods stick to the corresponding nodepools. But if you apply Taints/Tolerations along with NodeAffinity, that will make sure the pods are deployed only on the corresponding nodepools!</p>
<p>All those conclusions are based upon the tests which I did locally in my AKS cluster!</p>
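<p>For illustration, a minimal sketch of combining both in a Pod spec (the taint <code>workload=special:NoSchedule</code> and the node label <code>agentpool=special</code> are assumptions — replace them with the taint and label of your own nodepool):</p>
<pre><code>spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: agentpool
            operator: In
            values:
            - special
</code></pre>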
|
<p>I have deployed Azure AKS using the below official terraform docs</p>
<p><a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster</a></p>
<p>I see a bunch of resources created automatically, as well as a load balancer called <strong>Kubernetes</strong>.</p>
<p>After this I have deployed the demo app with ingress, as mentioned in the docs below, for hello world one & two.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli</a></p>
<p>When I check the ingress resource I don't see any EXTERNAL IP address allocated & the columns are blank.</p>
<p>I want to set up a sample end to end AKS cluster with load balancing & a DNS record.</p>
<p><strong>Can someone let me know what I am doing wrong, or is there any other repo with end-to-end examples?</strong></p>
| <p>I usually recommend the following, which gives you control over the IP address etc.</p>
<ul>
<li>Deploy an Public IP address using Terraform (or ARM) alongside your AKS cluster</li>
<li>Give the AKS Kubelet identity "Network Contributor" permissions on that PIP</li>
<li>When deploying your Ingress, <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard#specify-the-load-balancer-ip-address" rel="nofollow noreferrer">reference that existing PIP</a> (and its resource group). AKS will then use that IP for the deployed service.</li>
</ul>
<p>This way you can, for example, control whether the PIP is static or dynamic, if it's coming out of a given prefix range, etc.</p>
<p>Full example here: <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/static-ip</a></p>
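<p>A rough sketch of the last step — assuming the Public IP lives in a resource group named <code>my-ip-rg</code> (the annotation shown is the standard Azure load balancer one, but verify it against your AKS version):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    # resource group that contains the pre-created Public IP
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-rg
spec:
  type: LoadBalancer
  loadBalancerIP: 20.10.10.10   # the static Public IP you created
  ports:
  - port: 80
    targetPort: 80
</code></pre>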
|
<p>Below is my <code>testfile.yaml</code>:</p>
<pre><code>---
kind: Pod
metadata:
name: amazing-application
---
kind: Deployment
metadata:
name: amazing-deployment
---
kind: Service
metadata:
name: amazing-deployment
---
kind: Service
metadata:
name: tea-service
</code></pre>
<p>My goal is to split this into 4 files where the filename is <code>.metadata.name</code> and the dir that file goes into is <code>.kind</code>.</p>
<p>I have achieved what I want with this:</p>
<pre><code>for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq);
do
mkdir "$kind"
cd "$kind"
yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name'
cd ..;
done
</code></pre>
<p>What I want to know is how to get a unique together mapping, or somehow using multple criteria to split the testfile rather than through the loop.</p>
<p>Is there a way to use <code>yq</code> and <code>-s</code> or <code>select</code> to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')?</p>
<p>Because if you do <code>yq -s '.kind' testfile.yaml</code> it will yield three yaml files, not four. Same for <code>yq -s '.metadata.name' testfile.yaml</code>; we get three files as not all <code>name</code> are unique - one gets lost.</p>
| <p>According to your shell-scripted workaround, you want to store each document into its own file named after <code>.metadata.name</code> in a subdirectory named after <code>.kind</code>. However, yq's <a href="https://mikefarah.gitbook.io/yq/usage/split-into-multiple-files#split-single-document-into-files" rel="nofollow noreferrer">split option</a> <code>-s</code> cannot create subdirectories, and would fail when provided with naming schemes such as <code>-s '.kind + "/" + .metadata.name'</code>.</p>
<p>Alternatives:</p>
<ul>
<li>You can try submitting a <a href="https://github.com/mikefarah/yq/issues" rel="nofollow noreferrer">feature request</a>. The author is quite open for adaptations towards new use-cases. In fact, the <code>-s</code> option by itself also came to life <a href="https://github.com/mikefarah/yq/issues/966" rel="nofollow noreferrer">this way</a>.</li>
<li>You could (temporarily) replace the <code>/</code> character from above for something else (valid but not contained in the names otherwise), and then with a shell script just iterate over the files moving and renaming them into the right places (altogether resulting in just one call to <code>yq</code>). Example using <code>_</code> instead:
<pre class="lang-bash prettyprint-override"><code>yq -s '.kind + "_" + .metadata.name' . testfile.yaml && for f in *_*
do mkdir -p "${f%%_*}" && mv "$f" "${f%%_*}/${f#*_}"; done
</code></pre>
</li>
<li>Depending on your overall goal with the files/documents, you could consider working with copies of the original file, each reduced to just those documents "belonging together", i.e. replacing the subdirectories with files, and keeping a grouped set of documents within those files. Example using the <code>eval-all</code> (or <code>ea</code>) <a href="https://mikefarah.gitbook.io/yq/commands/evaluate-all" rel="nofollow noreferrer">command</a> in combination with <code>[.]</code> to generate an array of documents, then <code>group_by</code> on <code>.kind</code> to collect into sub-arrays those that share the criteria given, and finally <code>.[] | split_doc</code> to turn those groups into documents again, in order to be captured by <code>-s</code>.
<pre class="lang-bash prettyprint-override"><code>yq ea -s '.[0].kind' '[.] | group_by(.kind) | .[] | split_doc' testfile.yaml
</code></pre>
Given your example, here you'd intentionally get only three files, but with one containing two documents, and two containing one. Note that technically the outputs are still just arrays of (former) documents, not "real" multi-document files, They'd need another <code>.[] | split_doc</code> applied to each one of them in order to properly re-convert them as such (not covered here).</li>
</ul>
|
| <p>I have a problem.
There is a preStop option in the manifest file and an OOMKill happened.
The pod was restarted but no heap dump was created.</p>
<pre><code>lifecycle:
  preStop:
    exec:
      command: ["/tmp/preStop.sh"]
</code></pre>
<p>heapdump works when I manually terminate the pod.</p>
<p>So I wonder: if the pod is restarted, is preStop not supposed to be executed?</p>
<p>I thought that when the pod is restarted, Kubernetes first sends a TERM signal to the application, executes preStop, terminates the container and starts the pod again. Am I wrong?</p>
<p>Thanks
Best Regards.</p>
| <blockquote>
<p>when the pod is restarted, first send TermSignal to the application
and execute preStop and terminate and start pod again. Am I wrong?</p>
</blockquote>
<p>As per the official <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p><code>PreStop</code> hooks are not executed asynchronously from the signal to
stop the Container; the hook must complete its execution before the
<strong>TERM</strong> signal can be sent.</p>
<p>If a <code>PreStop</code> hook hangs during execution, the Pod's phase will be
<strong>Terminating</strong> and remain there until the Pod is killed after its <code>terminationGracePeriodSeconds</code> expires. This grace period applies to
the total time it takes for both the PreStop hook to execute and for
the Container to stop normally.</p>
</blockquote>
<p>Hope the above information is useful to you.</p>
|
| <p>In k8s, a DNS name can stay unchanged while an IP is not flexible. The cnosdb GitHub repo provides IP-based configuration for clusters, but in Kubernetes the cluster should use DNS names. Please provide a workaround for this configuration.</p>
<p>I would like to know the best practice to deploy cnosdb in k8s.</p>
| <p>I don't know the code of the tool you indicate in the question, but not giving the possibility to configure a DNS name in favor of a static IP is generally an anti-pattern, especially on Kubernetes.</p>
<p>However, Network plug-ins like Calico allow you to reserve a static IP address for your Pod.</p>
<p>Take a look here: <a href="https://docs.tigera.io/calico/latest/networking/ipam/use-specific-ip" rel="nofollow noreferrer">https://docs.tigera.io/calico/latest/networking/ipam/use-specific-ip</a></p>
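<p>For example, with Calico IPAM you can pin a Pod to a specific address via an annotation. A minimal sketch (the IP must belong to one of your cluster's configured IP pools; the pod name and image are just placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod           # e.g. one of your cnosdb pods
  annotations:
    # Calico IPAM: request this specific address (must be inside a configured IP pool)
    cni.projectcalico.org/ipAddrs: "[\"192.168.0.10\"]"
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
</code></pre>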
|
| <p>When I run the application locally, it is up and running, but when I deploy the same application in the Kubernetes cluster I am getting the error below.</p>
<p><strong>Error</strong></p>
<pre><code>java.lang.NoClassDefFoundError: org/springframework/core/env/Profiles
at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$null$3(PropertySourceUtils.java:69)
at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:239)
at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:167)
at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:139)
at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.createProperties(YamlPropertiesFactoryBean.java:135)
at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.getObject(YamlPropertiesFactoryBean.java:115)
at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$yamlParserGenerator$4(PropertySourceUtils.java:77)
at java.util.function.Function.lambda$andThen$1(Function.java:88)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.processAllEntries(ConfigMapPropertySource.java:149)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.getData(ConfigMapPropertySource.java:100)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.<init>(ConfigMapPropertySource.java:78)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.getMapPropertySourceForSingleConfigMap(ConfigMapPropertySourceLocator.java:96)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.lambda$locate$0(ConfigMapPropertySourceLocator.java:79)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.locate(ConfigMapPropertySourceLocator.java:78)
at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:94)
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:628)
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:364)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:305)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230)
at com.daimler.daivb.msl.MbappsSnapLocalSearchServiceApplication.main(MbappsSnapLocalSearchServiceApplication.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: java.lang.ClassNotFoundException: org.springframework.core.env.Profiles
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:93)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
</code></pre>
<p>Dependencies I am using in the application are</p>
<ol>
<li>spring-boot-starter-web - 2.0.8.RELEASE</li>
<li>gson - 2.3.1</li>
<li>json-lib - 2.3</li>
<li>spring-cloud-starter-kubernetes-config -1.1.10.RELEASE</li>
<li>json - 20230227</li>
<li>xmlrpc-client - 3.1.3</li>
<li>spring-security-oauth2-autoconfigure - 2.0.8.RELEASE</li>
<li>spring-security-config</li>
<li>spring-security-web</li>
<li>spring-cloud-starter-openfeign - 2.0.0.RELEASE</li>
<li>spring-cloud-starter-netflix-ribbon - 2.0.0.RELEASE</li>
<li>spring-boot-starter-actuator</li>
<li>commons-lang3 - 3.8.1</li>
<li>lombok</li>
<li>spring-cloud-starter-config - 2.0.3.RELEASE</li>
<li>micrometer-registry-prometheus - 1.2.2</li>
<li>micrometer-core - 1.2.2</li>
<li>spring-boot-starter-test</li>
<li>spring-cloud-dependencies - Finchley.SR3</li>
</ol>
| <p>The version of Spring Cloud Kubernetes that you are using (1.1.10.RELEASE) requires Spring Boot 2.2.x. You are using 2.0.x. This older version of Spring Boot uses an older version of Spring Framework that does not contain the <code>org.springframework.core.env.Profiles</code> class. It was introduced in Spring Framework 5.1 and Spring Boot 2.0.x uses Spring Framework 5.0.x.</p>
<p>You should update your dependency versions to ensure that they're compatible. To make it easier to do so, I would recommend using the <code>spring-cloud-dependencies</code> bom as shown on its <a href="https://spring.io/projects/spring-cloud" rel="nofollow noreferrer">project page</a>.</p>
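<p>A rough Maven sketch of importing the BOM (the Hoxton.SR12 release train shown here is only an example — pick the train that matches the Spring Boot version you upgrade to, per the compatibility table on that page):</p>
<pre><code><dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>Hoxton.SR12</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
</code></pre>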
|
<p>I am trying to update the eks add-on named "vpc-cni". This plugin, does the following:</p>
<p>"The CNI plugin allows Kubernetes Pods to have the same IP address as they do on the VPC network. More specifically, all containers inside the Pod share a network namespace, and they can communicate with each-other using local ports."</p>
<p>I am however getting the following "Conflict" when updating:</p>
<pre><code>Conflicts: ClusterRole.rbac.authorization.k8s.io aws-node - .rules DaemonSet.apps aws-node - .spec.template.spec.containers[name="aws-node"].image DaemonSet.apps aws-node - .spec.template.spec.initContainers[name="aws-vpc-cni-init"].image
</code></pre>
<p>I don't really know where to begin in remediating this, or even what this error conflict is saying is conflicting.</p>
<p>Any help appreciated.</p>
| <p>Workaround: when deploying this add-on in the AWS console, click on the "advanced options" in the first panel after specifying the version and the IAM role. At the very bottom there is a button that can be selected to override conflicts, and this allows the installation to succeed.</p>
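<p>The same can be done from the CLI by telling EKS how to resolve the conflicts, for example (a sketch — substitute your cluster name and the add-on version you are targeting):</p>
<pre><code>aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.12.6-eksbuild.2 \
  --resolve-conflicts OVERWRITE
</code></pre>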
|
<p>Can I target a K8s service to a Pod without labels?</p>
<p>That is, I have a K8s Pod created with the following configuration.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nofrills-first-pod
spec:
containers:
- name: nofrills-container
image: nofrills/to-deploy:0.0.1
ports:
- containerPort: 3000
</code></pre>
<p>I would like to expose this pod as a K8s service. Normally, I would do this by creating a Service configuration that looked something like this</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-nofrills-service
spec:
type: NodePort
selector:
## ?????? no labels to target?
ports:
- protocol: TCP
port: 3000
targetPort: 3000
nodePort: 32525
</code></pre>
<p>However, since the pod doesn't have any labels I don't know how to tell the Service which pod to use. I suppose another way of asking this questions is "Can a K8s selector target an object without any labels?"</p>
<p>I realize I could (in many scenarios) easily add labels to the Pod -- but I'm specifically interested in the abilities of K8s selectors here.</p>
| <p>You can define a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">Service without specifying a selector</a> to match Pods. Because this Service has no selector, the corresponding EndpointSlice (and legacy Endpoints) objects are not created automatically.</p>
<p>You can map the Service to the network address and port where it's running, by adding an EndpointSlice object manually.</p>
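<p>As a sketch, for the Pod above it could look like this (the Pod IP 10.244.1.23 is an assumption for illustration — look it up with <code>kubectl get pod nofrills-first-pod -o wide</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-nofrills-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 32525
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: test-nofrills-service-1
  labels:
    # this label ties the EndpointSlice to the Service
    kubernetes.io/service-name: test-nofrills-service
addressType: IPv4
ports:
  - name: ""           # empty string matches the unnamed Service port
    protocol: TCP
    port: 3000
endpoints:
  - addresses:
      - "10.244.1.23"  # the Pod's IP (assumed for this example)
</code></pre>
<p>Keep in mind the Pod IP is not stable across restarts, which is exactly why label selectors are the usual mechanism.</p>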
|
<p>I have a kubernetes cluster on which I have deployed a opensearch cluster and opensearch dashboard using Helm, I am also able to deploy logstash using helm successfully but I am confused on how to integrate those, I want to feed data to my Opensearch using logstash as my OBJECTIVE as I am not able to find much documentation on it as well. Any help is appreciated....Thanks in advance!</p>
<p>Deployed opensearch using Helm and logstash as well but unable to integrate them</p>
<p><strong>Update here!!!</strong></p>
<p>Have made a few changes to simplify the deployment and more control over the function,</p>
<p>I am testing deployment and service files this time, I will add the files below</p>
<p>Opensearch deployment file</p>
<pre><code>
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: logging
name: opensearch
labels:
component: opensearch
spec:
selector:
matchLabels:
component: opensearch
replicas: 1
serviceName: opensearch
template:
metadata:
labels:
component: opensearch
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: opensearch
securityContext:
capabilities:
add:
- IPC_LOCK
image: opensearchproject/opensearch
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "cluster.name"
value: "opensearch-cluster"
- name: "network.host"
value: "0.0.0.0"
- name: "discovery.seed_hosts"
value: "[]"
- name: discovery.type
value: single-node
- name: OPENSEARCH_JAVA_OPTS
value: -Xmx512M -Xms512M
- name: "plugins.security.disabled"
value: "false"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: os-mount
mountPath: /data
volumes:
- name: os-mount
persistentVolumeClaim:
claimName: nfs-pvc-os-logging
</code></pre>
<p>Opensearch svc file</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: opensearch
namespace: logging
labels:
service: opensearch
spec:
type: ClusterIP
selector:
component: opensearch
ports:
- port: 9200
targetPort: 9200
</code></pre>
<p>Opensearch dashboard deployment</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: open-dash
namespace: logging
spec:
replicas: 1
selector:
matchLabels:
app: open-dash
template:
metadata:
labels:
app: open-dash
spec:
# securityContext:
# runAsUser: 0
containers:
- name: opensearch-dashboard
image: opensearchproject/opensearch-dashboards:latest
ports:
- containerPort: 80
env:
# - name: ELASTICSEARCH_URL
# value: https://opensearch.logging:9200
# - name: "SERVER_HOST"
# value: "localhost"
# - name: "opensearch.hosts"
# value: https://opensearch.logging:9200
- name: OPENSEARCH_HOSTS
value: '["https://opensearch.logging:9200"]'
</code></pre>
<p>Opensearch Dashboard svc</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: opensearch
namespace: logging
labels:
service: opensearch
spec:
type: ClusterIP
selector:
component: opensearch
ports:
- port: 9200
targetPort: 9200
</code></pre>
<p>with the above configuration I am able to get the Dashboard UI open but in Dashboard pod logs I can see a 400 code logs can anyone please try to reproduce this issue, Also I need to integrate the logstash with this stack.</p>
<blockquote>
<p>{"type":"response","@timestamp":"2023-02-20T05:05:34Z","tags":[],"pid":1,"method":"head","statusCode":400,"req":{"url":"/app/home","method":"head","headers":{"connection":"Keep-Alive","content-type":"application/json","host":"3.108.199.0:30406","user-agent":"Manticore 0.9.1","accept-encoding":"gzip,deflate","securitytenant":"<strong>user</strong>"},"remoteAddress":"10.244.1.1","userAgent":"Manticore 0.9.1"},"res":{"statusCode":400,"responseTime":2,"contentLength":9},"message":"HEAD /app/home 400 2ms - 9.0B</p>
</blockquote>
<p>When deploying a <strong>logstash</strong> pod I get an error that</p>
<blockquote>
<p>[WARN ] 2023-02-20 05:13:52.212 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:xxxxxx@opensearch.logging:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:xxxxxx@opensearch.logging:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"}</p>
</blockquote>
<p>Can somebody please try to help me with this puzzle</p>
<p>@Benla have made changes as per your recommendation to image now I am getting the following logs in logstash</p>
<blockquote>
<p>[2023-02-20T05:18:43,028][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-02-20T05:18:43,147][INFO ][org.reflections.Reflections] Reflections took 70 ms to scan 1 urls, producing 127 keys and 444 values
[2023-02-20T05:18:43,351][INFO ][logstash.javapipeline ] Pipeline <code>main</code> is configured with <code>pipeline.ecs_compatibility: v8</code> setting. All plugins in this pipeline will default to <code>ecs_compatibility => v8</code> unless explicitly configured otherwise.
[2023-02-20T05:18:43,370][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3bf49916@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"}
[2023-02-20T05:18:43,811][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.44}
[2023-02-20T05:18:43,816][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-02-20T05:18:43,821][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-02-20T05:18:43,835][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-02-20T05:18:43,869][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044</p>
</blockquote>
<p>I started getting these endless loop of logs</p>
<blockquote>
<p>[WARN ] 2023-02-20 05:13:37.191 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-opensearch-2.0.1-java/lib/logstash/outputs/opensearch/http_client/pool.rb:217] opensearch - Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://logstash:xxxxxx@opensearch.logging:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://logstash:xxxxxx@opensearch.logging:9200/][Manticore::ClientProtocolException] opensearch.logging:9200 failed to respond"}</p>
</blockquote>
| <p>For OpenSearch SSO (Keycloak), please use the steps below.</p>
<p>OpenSearch:</p>
<ol>
<li><p>Make a custom image for OpenSearch; for this, create the 2 files below.
i. config.yml (for the OpenSearch security plugin)</p>
<pre><code>_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    http:
      anonymous_auth_enabled: false
    authc:
      internal_auth:
        order: 0
        description: "HTTP basic authentication using the internal user database"
        http_enabled: true
        transport_enabled: true
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
          config:
            subject_key: preferred_username
            roles_key: roles
            openid_connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration"
        authentication_backend:
          type: noop
</code></pre>
</li>
</ol>
<p>ii.
log4j2.properties (this file turns on trace logging for the security JWT handling in OpenSearch, so we can see the authentication logs that are otherwise turned off)</p>
<pre><code>logger.securityjwt.name = com.amazon.dlic.auth.http.jwt
logger.securityjwt.level = trace
</code></pre>
<p>iii. Dockerfile</p>
<pre><code>FROM opensearchproject/opensearch:2.5.0
RUN mkdir /usr/share/opensearch/plugins/opensearch-security/securityconfig
COPY config.yml /usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml
COPY config.yml /usr/share/opensearch/config/opensearch-security/config.yml
COPY log4j2.properties /usr/share/opensearch/config/log4j2.properties
</code></pre>
<ol start="2">
<li><p>Deploy OpenSearch with the OpenSearch Helm chart (change the image to the custom image built from the configs above).
OpenSearch will deploy 3 pods. Now exec into each pod and run the command below to initialize the security plugin (do this only once for each OpenSearch pod).</p>
<pre><code>/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
  -cacert /usr/share/opensearch/config/root-ca.pem \
  -cert /usr/share/opensearch/config/kirk.pem \
  -key /usr/share/opensearch/config/kirk-key.pem \
  -cd /usr/share/opensearch/config/opensearch-security \
  -h localhost
</code></pre>
<p>Make sure all 3 pods are up and in a ready state.</p>
<p>opensearch-dashboard</p>
</li>
</ol>
<p>3. Now we will configure opensearch-dashboards.
i. In the values.yml of the opensearch-dashboards Helm chart, search for <code>config</code>:</p>
<pre><code>---
config:
opensearch_dashboards.yml: |
opensearch.hosts: [https://localhost:9200]
opensearch.ssl.verificationMode: none
opensearch.username: admin
opensearch.password: admin
opensearch.requestHeadersWhitelist: [authorization, securitytenant]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: [Private, Global]
opensearch_security.readonly_mode.roles: [kibana_read_only]
opensearch_security.cookie.secure: false
server.host: '0.0.0.0'
opensearch_security.auth.type: "openid"
opensearch_security.openid.connect_url: "https://keycloak-url/realms/realm-name/.well-known/openid-configuration"
opensearch_security.openid.client_id: "admin"
opensearch_security.openid.client_secret: "asgduasdjsadk"
opensearch_security.openid.scope: "email openid"
opensearch_security.openid.base_redirect_url: "https://opensearch_dashboards-url.io"
---
</code></pre>
<p>ii. deploy opensearch_dashboards.</p>
<p>Now once opensearch_dashboards is deployed and the pod is up and in a ready state, you can go to https://opensearch_dashboards-url.io (your opensearch_dashboards URL) and you will see the Keycloak login form.</p>
|
<p>I'm trying to delete resources of a particular kind in a k8s cluster using client-go.</p>
<p>I'm using this code but it requires a specific namespace to be declared, but i want to delete this resource in all namespaces.</p>
<pre><code> u.SetName("test")
u.SetNamespace(v1.NamespaceAll)
u.SetGroupVersionKind(schema.GroupVersionKind{
Group: "group",
Kind: "kind",
Version: "v1",
})
err := k8sClient.Delete(context.TODO(), u)
if err != nil {
fmt.Println(err.Error())
return err
}
</code></pre>
<p>Found the example here - <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client" rel="noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client</a>
but it doesn't mention anything about all namespaces.
Could someone plz provide a way to figure this out.</p>
<p>NOTE:
This is custom resource. not default kind such as pod or deployment etc</p>
| <p>Use the <code>List</code> method to get a list of all resources in <code>all namespaces</code> and then loop through the list and delete each resource using the <code>Delete</code> method.</p>
<pre class="lang-golang prettyprint-override"><code>// List requires the generated *List type for your custom resource
crList := &v1alpha1.CustomResourceList{}
// Get a list of all instances of your custom resource in all namespaces
listOpts := []client.ListOption{
    client.InNamespace(v1.NamespaceAll),
}
err := k8sClient.List(context.Background(), crList, listOpts...)
if err != nil {
    return err
}
// Loop through the list and delete each instance of your custom resource
for i := range crList.Items {
    item := crList.Items[i]
    err = k8sClient.Delete(context.Background(), &item)
    if err != nil {
        return err
    }
}
</code></pre>
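<p>If your custom resource is namespaced, another option is controller-runtime's <code>DeleteAllOf</code>, which removes every instance in a given namespace in one call. A minimal sketch (assuming the same hypothetical <code>v1alpha1</code> types as above and <code>corev1</code> imported from <code>k8s.io/api/core/v1</code>):</p>
<pre class="lang-golang prettyprint-override"><code>// List all namespaces, then delete every instance of the custom resource in each one
nsList := &corev1.NamespaceList{}
if err := k8sClient.List(context.Background(), nsList); err != nil {
    return err
}
for _, ns := range nsList.Items {
    if err := k8sClient.DeleteAllOf(
        context.Background(),
        &v1alpha1.CustomResource{},
        client.InNamespace(ns.Name),
    ); err != nil {
        return err
    }
}
</code></pre>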
|
<p>I wanted to know if there is any way that I can set a DNS for the Pod in the StatefulSet, so I can call them directly.</p>
<p>I have read the Kubernetes documentation, but I haven't seen anything useful.</p>
| <p>You can directly hit the Pod if you are using a StatefulSet with a <strong>headless</strong> service.</p>
<p>So if you have three replicas running web-0, web-1, web-2 you can use curl</p>
<pre><code>web-0.<service-name>.<namespace-name>.svc.cluster.local
</code></pre>
<p>POD name</p>
<pre><code><pod-name>.<service-name>.<namespace-name>.svc.cluster.local
</code></pre>
<p>But the important part is your service should be <strong>headless</strong></p>
<p><strong>Example</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 3
minReadySeconds: 10
template:
metadata:
labels:
app: nginx
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: registry.k8s.io/nginx
ports:
- containerPort: 80
name: web
</code></pre>
<p>Official doc ref : <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id</a></p>
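<p>To verify the records, you can start a throwaway Pod and resolve the names (a quick check, assuming the manifests above are deployed in the <code>default</code> namespace):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup web-0.nginx.default.svc.cluster.local
</code></pre>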
|
<p>I am facing an issue in kubernetes. I have a deployment and in replicaset we have given value as 2. After updating my release it is showing 3 replicas. 2 of them are running properly but one is in CrashLoopBackOff. I tried deleting it but it again comes up with same error.</p>
<p>There are 2 containers running in the po. In one container I am able to login but not able to login into nginx-cache container</p>
<pre class="lang-none prettyprint-override"><code>deployment-5bd9ff7f9d 1/2 CrashLoopBackOff 297 (2m19s ago) 24h (this is the error)
deployment-ffbf89fcd 2/2 Running 0 36d
deployment-ffbf89fcd 2/2 Running 0 36d
</code></pre>
<p>Kubectl describe pod</p>
<pre><code>Warning Failed 44m (x4 over 44m) kubelet Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "107374182400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podc22d1a88-befe-4680-8eec-2ad69a4cc890/nginx-cache/cpu.cfs_quota_us: invalid argument: unknown
Normal Pulled 43m (x5 over 44m) kubelet Container image "abcd2.azurecr.io/ab_cde/nginx-cache:0.2-ROOT" already present on machine
</code></pre>
<p>How to remove that error</p>
| <p>As seen from your <em>get pods</em>, the Pod in <strong>CrashLoopBackOff</strong> has a different pod-template-hash from the other 2; it would appear that it is being managed by a different ReplicaSet than the other 2.</p>
<blockquote>
<p>The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.</p>
</blockquote>
<blockquote>
<p>This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label</a></p>
<p>Try running <code>kubectl -n YOUR-NAMESPACE get replicasets</code>; if you find 2, delete the one that corresponds to the Pod with the error.</p>
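<p>For example, this is roughly what to look for (names are illustrative and taken from your <em>get pods</em> output; the hash in a Pod's name tells you which ReplicaSet owns it):</p>
<pre><code>kubectl -n YOUR-NAMESPACE get replicasets
# NAME                    DESIRED   CURRENT   READY   AGE
# deployment-ffbf89fcd    2         2         2       36d
# deployment-5bd9ff7f9d   1         1         0       24h

# the failing Pod "deployment-5bd9ff7f9d-..." is owned by the ReplicaSet with the same hash
kubectl -n YOUR-NAMESPACE delete replicaset deployment-5bd9ff7f9d
</code></pre>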
|
<p>The below container.conf works fine in Kubernetes 1.23 but fails after migrating to 1.25. I have also specified the deamonset that I have used to push the logs to cloudwatch. When I look into the logs of the fluentd deamonset I could see a lot of below errors</p>
<p>2023-04-03 01:32:06 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-04-03T01:32:02.9256618Z stdout F [2023-04-03T01:32:02.925Z] DEBUG transaction-677fffdfc4-tc4rx-18/TRANSPORTER: NATS client pingTimer: 1"</p>
<pre><code>
container.conf
==============
<source>
@type tail
@id in_tail_container_logs
@label @containers
path /var/log/containers/*.log
exclude_path ["/var/log/containers/fluentd*"]
pos_file /var/log/fluentd-containers.log.pos
tag *
read_from_head true
<parse>
@type json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
<label @containers>
<filter **>
@type kubernetes_metadata
@id filter_kube_metadata
</filter>
<filter **>
@type record_transformer
@id filter_containers_stream_transformer
<record>
stream_name ${tag_parts[3]}
</record>
</filter>
<match **>
@type cloudwatch_logs
@id out_cloudwatch_logs_containers
region "#{ENV.fetch('AWS_REGION')}"
log_group_name "/k8s-nest/#{ENV.fetch('AWS_EKS_CLUSTER_NAME')}/containers"
log_stream_name_key stream_name
remove_log_stream_name_key true
auto_create_stream true
<buffer>
flush_interval 5
chunk_limit_size 2m
queued_chunks_limit_size 32
retry_forever true
</buffer>
</match>
</label>
Deamonset
==========
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
k8s-app: fluentd-cloudwatch
name: fluentd-cloudwatch
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: fluentd-cloudwatch
template:
metadata:
labels:
k8s-app: fluentd-cloudwatch
annotations:
iam.amazonaws.com/role: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- env:
- name: AWS_REGION
value: us-west-1
- name: AWS_EKS_CLUSTER_NAME
value: dex-eks-west
#image: 'fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch'
image: 'fluent/fluentd-kubernetes-daemonset:v1.15.3-debian-cloudwatch-1.1'
imagePullPolicy: IfNotPresent
name: fluentd-cloudwatch
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /config-volume
name: config-volume
- mountPath: /fluentd/etc
name: fluentdconf
- mountPath: /var/log
name: varlog
- mountPath: /var/lib/docker/containers
name: varlibdockercontainers
readOnly: true
- mountPath: /run/log/journal
name: runlogjournal
readOnly: true
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- '-c'
- cp /config-volume/..data/* /fluentd/etc
image: busybox
imagePullPolicy: Always
name: copy-fluentd-config
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /config-volume
name: config-volume
- mountPath: /fluentd/etc
name: fluentdconf
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: fluentd-config
name: config-volume
- emptyDir: {}
name: fluentdconf
- hostPath:
path: /var/log
type: ''
name: varlog
- hostPath:
path: /var/lib/docker/containers
type: ''
name: varlibdockercontainers
- hostPath:
path: /run/log/journal
type: ''
name: runlogjournal
</code></pre>
| <p>I had the same problem a while ago.</p>
<blockquote>
<p>It seems to be an issue between the logs being emitted from the container and what is being written to the log file. Something is prefixing all logs with the <stdout/stderr> <?> </p>
</blockquote>
<p>Ref. <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-747173567" rel="nofollow noreferrer">https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434#issuecomment-747173567</a></p>
<p>Try following the discussion in the link I pasted you above; I solved it like this:</p>
<pre><code> <parse>
@type regexp
expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
</parse>
</code></pre>
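<p>In your container.conf that means swapping the <code>@type json</code> parser of the tail source for the regexp one; a sketch based on the config you posted (the regexp is the same one shown above):</p>
<pre><code><source>
  @type tail
  @id in_tail_container_logs
  @label @containers
  path /var/log/containers/*.log
  exclude_path ["/var/log/containers/fluentd*"]
  pos_file /var/log/fluentd-containers.log.pos
  tag *
  read_from_head true
  <parse>
    @type regexp
    expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
  </parse>
</source>
</code></pre>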
|
<p>I have a backend with golang that talks to k8s. I want to reformulate the error response that i get from k8s and send it to the frontend.</p>
<p>I want to return a meaningful validation error messages for the user, when he add a non valid name, something already exist ...</p>
<p>And i want something generic not hardcoded in each endpoint's controller.</p>
<p>I am using <code>kubernetes/client-go</code>.</p>
<ol>
<li><strong>First error</strong>:</li>
</ol>
<p>For example lets say i want to add a hotel to the <code>etcd</code>, when i try to add the hotel's name: hotel123, that's already exist.</p>
<ul>
<li>I get this error message: <code>\"hotel123\" already exists</code>.</li>
<li>What i want : <code>hotel123 already exists</code>.</li>
</ul>
<ol start="2">
<li><strong>second error</strong>:</li>
</ol>
<p>For example lets say i want to add a hotel to the <code>etcd</code>, when i try to add the hotel name: hotel_123, that's alerady exist.</p>
<ul>
<li>I get this error message: <code>\"hotel_123\" is invalid, Invalid value: \"hotel_123\"...</code></li>
<li>What i want: <code>hotel_123 is invalid</code></li>
</ul>
<p>How to return a custom user friendly error message ?</p>
<p>PS: i have multiple functions, so the validation should be generic.</p>
| <p>In general (although there are workarounds), if you want to trap an error in order to return a more useful error, you want to ensure the following conditions are met:</p>
<ol>
<li>The error you're trapping has a <a href="https://sourcegraph.com/github.com/kubernetes/client-go/-/blob/tools/clientcmd/validation.go" rel="nofollow noreferrer">meaningful type</a></li>
<li>You're using go version >= 1.13 which ships with <a href="https://go.dev/blog/go1.13-errors" rel="nofollow noreferrer">useful helper functions</a></li>
</ol>
<p>In the following example I'm trying to read a config file that doesn't exist. My code checks that the error returned is a <code>fs.PathError</code> and then throws it's own more useful error. You can extend this general idea to your use case.</p>
<pre><code>package main
import (
"errors"
"fmt"
"io/fs"
"k8s.io/client-go/tools/clientcmd"
)
func main() {
var myError error
config, originalError := clientcmd.BuildConfigFromFlags("", "/some/path/that/doesnt/exist")
if originalError != nil {
var pathError *fs.PathError
switch {
case errors.As(originalError, &pathError):
myError = fmt.Errorf("there is no config file at %s", originalError.(*fs.PathError).Path)
default:
myError = fmt.Errorf("there was an error and it's type was %T", originalError)
}
fmt.Printf("%#v", myError)
} else {
fmt.Println("There was no error")
fmt.Println(config)
}
}
</code></pre>
<p>In your debugging, you will find the <code>%T</code> formatter <a href="https://pkg.go.dev/fmt" rel="nofollow noreferrer">useful</a>.</p>
<p>For your specific use-case, you can use a Regex to parse out the desired text.</p>
<p>The regex below says:</p>
<ol>
<li><code>^\W*</code> start with any non-alphanumeric characters</li>
<li><code>(\w+)</code> capture the alphanumeric string following</li>
<li><code>\W*\s?</code> match non-alphanumeric characters</li>
<li><code>(is\sinvalid)</code> capture "is invalid"</li>
</ol>
<pre class="lang-golang prettyprint-override"><code>func MyError(inError error) error {
pattern, _ := regexp.Compile(`^\W*(\w+)\W*\s?(is\sinvalid)(.*)$`)
myErrorString := pattern.ReplaceAll([]byte(inError.Error()), []byte("$1 $2"))
return errors.New(string(myErrorString))
}
</code></pre>
<p>As seen on this playground:</p>
<p><a href="https://goplay.tools/snippet/bcZO7wa8Vnl" rel="nofollow noreferrer">https://goplay.tools/snippet/bcZO7wa8Vnl</a></p>
|
<p>I have a local cluster running using minikube. I installed elastic operator with official helm chart and have a elastic cluster running with this yaml:</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: es-qs
spec:
version: 8.6.2
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
</code></pre>
<p>I have never worked with elasticsearch on ECK so my question is: What is the easiest way to manually add datasets to my elastic search cluster for testing purposes?</p>
| <p>You can use simple <strong>curl</strong> commands to configure <strong>Elasticsearch</strong> manually; that will be the easiest way.</p>
<p>Ref doc : <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html</a></p>
<p>You can also use <strong>Postman</strong> to manage these <strong>curl</strong> requests, or you may find a ready-made <strong>collection</strong> of the necessary <strong>ES</strong> APIs online.</p>
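<p>For example, from your workstation you could create an index and add a test document like this — a sketch assuming the default ECK naming for your <code>es-qs</code> cluster, i.e. the <code>es-qs-es-http</code> service and the password stored in the <code>es-qs-es-elastic-user</code> secret:</p>
<pre><code># grab the elastic user's password
PASSWORD=$(kubectl get secret es-qs-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

# forward the HTTP port locally
kubectl port-forward service/es-qs-es-http 9200 &

# create an index and add a test document
curl -k -u "elastic:$PASSWORD" -X PUT "https://localhost:9200/my-test-index"
curl -k -u "elastic:$PASSWORD" -X POST "https://localhost:9200/my-test-index/_doc" \
     -H 'Content-Type: application/json' \
     -d '{"message": "hello from ECK", "timestamp": "2023-01-01T00:00:00Z"}'
</code></pre>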
|
<p>I have a backend with golang that talks to k8s. I want to reformulate the error response that i get from k8s and send it to the frontend.</p>
<p>I want to return a meaningful validation error messages for the user, when he add a non valid name, something already exist ...</p>
<p>And i want something generic not hardcoded in each endpoint's controller.</p>
<p>I am using <code>kubernetes/client-go</code>.</p>
<ol>
<li><strong>First error</strong>:</li>
</ol>
<p>For example lets say i want to add a hotel to the <code>etcd</code>, when i try to add the hotel's name: hotel123, that's already exist.</p>
<ul>
<li>I get this error message: <code>\"hotel123\" already exists</code>.</li>
<li>What i want : <code>hotel123 already exists</code>.</li>
</ul>
<ol start="2">
<li><strong>second error</strong>:</li>
</ol>
<p>For example lets say i want to add a hotel to the <code>etcd</code>, when i try to add the hotel name: hotel_123, that's alerady exist.</p>
<ul>
<li>I get this error message: <code>\"hotel_123\" is invalid, Invalid value: \"hotel_123\"...</code></li>
<li>What i want: <code>hotel_123 is invalid</code></li>
</ul>
<p>How to return a custom user friendly error message ?</p>
<p>PS: i have multiple functions, so the validation should be generic.</p>
| <p>String <code>err.Error()</code> is the original, meaningful and best error message you can get from Kubernetes server for the user (Or you have to translate it by yourself).</p>
<p><strong>Explains:</strong></p>
<p>You need to look beyond the surface of <code>kubernetes/client-go</code> client library.</p>
<p>Each client talks to k8s server through <strong>HTTP REST APIs</strong>, which sends back response in <code>json</code>. It's the <code>client-go</code> library that decodes the response body and stores the result into object, if possible.</p>
<p>As for your case, let me give you some examples through the <code>Namespace</code> resource:</p>
<ol>
<li><strong>First error:</strong></li>
</ol>
<pre class="lang-json prettyprint-override"><code>POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create
Response Status: 409 Conflict
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "namespaces \"hotel123\" already exists",
"reason": "AlreadyExists",
"details": {
"name": "hotel123",
"kind": "namespaces"
},
"code": 409
}
</code></pre>
<ol start="2">
<li><strong>second error:</strong></li>
</ol>
<pre class="lang-json prettyprint-override"><code>POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create
Response Status: 422 Unprocessable Entity
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Namespace \"hotel_123\" is invalid: metadata.name: Invalid value: \"hotel_123\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]\r\n([-a-z0-9]*[a-z0-9])?')",
"reason": "Invalid",
"details": {
"name": "hotel_123",
"kind": "Namespace",
"causes": [
{
"reason": "FieldValueInvalid",
"message": "Invalid value: \"hotel_123\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')",
"field": "metadata.name"
}
]
},
"code": 422
}
</code></pre>
<ol start="3">
<li><strong>normal return:</strong></li>
</ol>
<pre class="lang-json prettyprint-override"><code>POST https://xxx.xx.xx.xx:6443/api/v1/namespaces?fieldManager=kubectl-create
Response Status: 201 Created
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "hotel12345",
"uid": "7a301d8b-37cd-45a5-8345-82wsufy88223456",
"resourceVersion": "12233445566",
"creationTimestamp": "2023-04-03T15:35:59Z",
"managedFields": [
{
"manager": "kubectl-create",
"operation": "Update",
"apiVersion": "v1",
"time": "2023-04-03T15:35:59Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:status": {
"f:phase": {}
}
}
}
]
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"phase": "Active"
}
}
</code></pre>
<p>In a word, <strong>if the HTTP Status is not 2xx, the returned object is of type Status and has .Status != StatusSuccess, the additional information(<code>message</code> in this case) in Status will be used to enrich the error</strong>, just as the code snippets below:</p>
<pre class="lang-golang prettyprint-override"><code>createdNamespace, err := clientset.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
if err != nil {
// print "namespaces \"hotel123\" already exists" or so
fmt.Println(err.Error())
return err.Error()
}
fmt.Printf("Created Namespace %+v in the cluster\n", createdNamespace)
return ""
</code></pre>
|
<p>I have a service for which i dont want to use autoscaling, so i created a copy of it and now i want to have it behind an internal LB using nginx ingress, one of them will be always active and other one will be passive which can be get through probes status, ingress is as below</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: appcj-ingress
spec:
ingressClassName: nginx
rules:
- host: "{{ .Values.ingress.appcjudomain }}"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: {{ .Values.Name }}-appcjuengineprihttp
port:
number: 81
service:
name: {{ .Values.Name }}-appcjuenginesechttp
port:
number: 81
</code></pre>
<p>This is more like a traditional load balancer, like having multiple servers behind a loadbalancer, having multiple backend services for same host, is it possible?</p>
| <p>Kubernetes Ingress does NOT allow you to do this.</p>
<p>You must necessarily distinguish by host or by path.</p>
<p>The only thing you could do is implement a structure like this: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout</a></p>
<p>Take a look here: <a href="https://stackoverflow.com/questions/46373616/how-do-i-map-multiple-services-to-one-kubernetes-ingress-path">How do I map multiple services to one Kubernetes Ingress path?</a></p>
<p>Otherwise consider replacing the Ingress Kubernetes with a level 7 Load Balancer.</p>
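<p>For reference, the fanout pattern with your two services would look roughly like this (a sketch — each backend gets its own path, since a single host+path cannot point at two Services):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: appcj-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: "{{ .Values.ingress.appcjudomain }}"
      http:
        paths:
          - path: /primary
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.Name }}-appcjuengineprihttp
                port:
                  number: 81
          - path: /secondary
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.Name }}-appcjuenginesechttp
                port:
                  number: 81
</code></pre>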
|
I have a cluster on Digital Ocean. The API of my project is in DO. My image is from google/cloud-sdk:alpine and also I installed gke-gcloud-auth-plugin with the following command
<pre><code>gcloud components install gke-gcloud-auth-plugin
</code></pre>
<p>I have another cluster on Google Kubernetes Engine. I want to create, list, delete pods on gke cluster from my api which is in DO. I use kubernetes go client. I give the kubeconfig file of gke cluster to go client. But when I try to do an operation (like listing pods) I get the following error:</p>
<pre><code> cred.go:145] print credential failed with error: Failed to retrieve access token:: failure while executing gcloud, with args [config config-helper --format=json]: exit status 1
2023/04/04 07:28:21 code: get_pod_error, error: Get "https://..../api/v1/namespaces/default/pods/cloud-flow-80117068-9715-4374-b91b-799472d647be": getting credentials: exec: executable gke-gcloud-auth-plugin failed with exit code 1
</code></pre>
<p>I used the same method to create or delete pods on another cluster in DO and worked successfuly.</p>
<p>Briefly I want to connect GKE cluster from DO cluster by using kubeconfig file of GKE
Is there anyone who encounter with the same problem?
Thanks..</p>
| <p>Finally I solved like following:</p>
<p>1: Create a secret with a token and service account. Then associate them.<br/>
2: Create a clusterrole (give necessary permissions) and clusterrolebinding. Associate them. <br/>
3: Associate the service account and clusterrolebinding.</p>
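<p>A minimal sketch of those objects (names are placeholders; adjust the ClusterRole rules to what your API actually needs):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: remote-api
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: remote-api-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: remote-api
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-manager
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: remote-api-pod-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-manager
subjects:
  - kind: ServiceAccount
    name: remote-api
    namespace: default
</code></pre>
<p>The token itself can then be read with <code>kubectl get secret remote-api-token -o go-template='{{.data.token | base64decode}}'</code> and used as the bearer token in the code below.</p>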
<p>Now you can use the token to connect gke cluster without using gke-gcloud-auth-plugin. <br/>
If I dont use the token argument, the output is: <br/></p>
<p><a href="https://i.stack.imgur.com/v31lx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v31lx.png" alt="enter image description here" /></a></p>
<p>But , if I use the token, it works successfully
<a href="https://i.stack.imgur.com/fYyfo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fYyfo.png" alt="enter image description here" /></a></p>
<p>Also the go code that uses token and prints the pods is like following:</p>
<pre><code>package main
import (
"context"
"fmt"
"os"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
func main() {
// Path to the kubeconfig file
kubeconfigPath := "/home/latif/.kube/config"
// Build the configuration from the kubeconfig file
config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
fmt.Printf("Error building config from kubeconfig: %v", err)
os.Exit(1)
}
// Set the authentication token
config.BearerToken = "my_token"
// Create a new Kubernetes client
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
fmt.Printf("Error creating Kubernetes client: %v", err)
os.Exit(1)
}
// Get the list of pods in the default namespace
pods, err := clientset.CoreV1().Pods("default").List(context.Background(), v1.ListOptions{})
if err != nil {
fmt.Printf("Error getting pods: %v", err)
os.Exit(1)
}
// Print the name of each pod
for _, pod := range pods.Items {
fmt.Println(pod.Name)
}
}
</code></pre>
|
<p>I have a kubernetes cronjob object defined as follows:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: cloud-backup
namespace: test-backup
spec:
schedule: "0 */50 * * *"
.....
</code></pre>
<p>However, it's running every single day at 8pm which makes no sense to me. Isn't this cron format configured to run once every 50 hours? I'm using this exact cron schedule in a concourse pipeline at it runs when expected there so not sure what I'm missing.</p>
<p>Thanks!</p>
| <p>A cron expression only allows certain values per field; for the hour field the values 0-23 are allowed (see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax</a>). A step value like <code>*/50</code> in the hour field therefore only matches hour 0, so <code>0 */50 * * *</code> effectively runs once a day at 00:00 (UTC by default), which most likely shows up as 8pm in your local timezone.</p>
<p>If you want to achieve something that cannot be expressed as a cron interval, you might want to have a CronJob that runs once per hour, uses a ConfigMap to store the execution state, and schedules a regular Job programmatically.</p>
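<p>A rough sketch of that pattern (names and the image are placeholders, and the Job's ServiceAccount needs RBAC to get/update the ConfigMap):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloud-backup
  namespace: test-backup
spec:
  schedule: "0 * * * *"            # check every hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cloud-backup
          restartPolicy: Never
          containers:
            - name: check-and-run
              image: bitnami/kubectl:latest   # any image with kubectl and a shell works
              command:
                - /bin/sh
                - -c
                - |
                  last=$(kubectl get configmap backup-state -o jsonpath='{.data.lastRun}' 2>/dev/null || echo 0)
                  last=${last:-0}
                  now=$(date +%s)
                  # 50 hours = 180000 seconds
                  if [ $((now - last)) -ge 180000 ]; then
                    echo "50h elapsed - doing the real work"
                    # ... run the actual backup here ...
                    kubectl create configmap backup-state \
                      --from-literal=lastRun=$now \
                      --dry-run=client -o yaml | kubectl apply -f -
                  else
                    echo "only $((now - last))s since last run - skipping"
                  fi
</code></pre>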
|
<p>I have to inject DB URL, DB Username and Password variables into a Java Spring Boot Application. I know that these properties reside in the application.properties file under res/ folder.</p>
<p>Now as a design change I want to adopt K8s ConfigMaps and Secrets to be used in place of hardcoding in the the application.properties file.</p>
<p>How can I link the CMs and Secrets from K8s to the application. How should I set placeholders in application.properties file?</p>
<p>Does defining the CM and Secret dependency in application deployment.yaml does everything?</p>
| <p>First you need to enable <code>spring.cloud.kubernetes.secrets.enabled</code>
then you can use Kubernetes Secrets via <code>SecretsPropertySource</code>.</p>
<p>You can use secret name via: <code>-Dspring.cloud.kubernetes.secrets.name=my-secret</code>
where <code>my-secret</code> is the name of a secret defined in Kubernetes,
or you can access it in multiple other methods described in the <a href="https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/1.0.0.M2/multi/multi__secrets_propertysource.html" rel="nofollow noreferrer">Documentation</a>.</p>
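<p>For illustration, a sketch of the bootstrap configuration (property names as documented for Spring Cloud Kubernetes; the exact set depends on your Spring Cloud version):</p>
<pre><code># bootstrap.properties
spring.cloud.kubernetes.secrets.enabled=true
spring.cloud.kubernetes.secrets.name=my-secret
# or, if the secret is mounted into the pod as files:
# spring.cloud.kubernetes.secrets.paths=/etc/secrets
</code></pre>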
|
<p>I have to inject DB URL, DB Username and Password variables into a Java Spring Boot Application. I know that these properties reside in the application.properties file under res/ folder.</p>
<p>Now as a design change I want to adopt K8s ConfigMaps and Secrets to be used in place of hardcoding in the the application.properties file.</p>
<p>How can I link the CMs and Secrets from K8s to the application. How should I set placeholders in application.properties file?</p>
<p>Does defining the CM and Secret dependency in application deployment.yaml does everything?</p>
| <p>You have two options: one can be achieved without extra dependencies and one with Spring Boot Cloud.</p>
<h2>Plain way</h2>
<p>You define enviroment variable placeholders in your <code>application.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>spring:
datasource:
driver-class-name: org.postgresql.Driver
url: jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}
username: ${DB_USER}
password: ${DB_PASSWORD}
</code></pre>
<p>You then define the environment variables in your k8s deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: your-config-map
key: dbHost
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: your-config-map
key: dbPort
- name: DB_NAME
valueFrom:
configMapKeyRef:
name: your-config-map
key: dbName
- name: DB_USER
valueFrom:
secretKeyRef:
name: your-secret
key: dbUser
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: your-secret
key: dbPassword
</code></pre>
<p>More on defining environment variables for containers can be found in the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">k8s documentation</a>.</p>
<h2>Spring Boot Cloud Kubernetes</h2>
<p>There is a whole section in the reference called <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#configmap-propertysource" rel="nofollow noreferrer">Using a ConfigMap PropertySource</a> and <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#secrets-propertysource" rel="nofollow noreferrer">Secrets PropertySource</a>. I suggest you go and look it up there.</p>
|
<p>I have several PHP applications which I am trying to configure inside K8s.
Currently I have issue with one custom service - it is using own server (listening 8800 port) and nginx server which is redirecting traffic from 80/443 (we can use 80 this time to simplify example) to 8800.</p>
<p>Here is part of nginx conf:</p>
<pre><code>location / {
add_header Access-Control-Allow-Origin $http_origin always;
add_header Access-Control-Allow-Credentials true always;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache off;
proxy_pass http://localhost:8800;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_hide_header Access-Control-Allow-Origin;
proxy_hide_header Access-Control-Allow-Credentials;
}
</code></pre>
<p>I already configured deployments + services for my pods but I am new in NGINX Ingres.
Currently I am working with this ingress but I am getting 500 error (502 Bad Gateway) and looks like my traffic does not reach my service.</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-[service]
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: [my domain name]
http:
paths:
- path: /api/
pathType: Prefix
backend:
service:
name: [service]
port:
number: 8800
ingressClassName: nginx
</code></pre>
<p>Did I convert nginx to ingres config correctly?</p>
<p>UPDATE: deployment + service (my last variant)</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: some-service
spec:
replicas: 1
...
spec:
containers:
- name: [some-name]
image: [image]
imagePullPolicy: Always
env:
...
ports:
- containerPort: 8800
name: some-web-svc
---
apiVersion: v1
kind: Service
metadata:
name: some-service
spec:
type: ClusterIP
ports:
- name: some-service-port
protocol: TCP
port: 8800
targetPort: some-web-svc
# - name: tcp
# port: 8800
# targetPort: 8800
selector:
app: some-service
</code></pre>
| <ul>
<li>Run the command below to check whether your service is mapped to your
podIP:port properly.</li>
<li>If the ENDPOINTS column in the output of the command below does not show podIP:port, then your service is most likely misconfigured - Label in the deployment is not matching with the selector in the service</li>
<li>Fix the selectors to solve the issue</li>
</ul>
<pre><code>kubectl get ep -n <namespace>
</code></pre>
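<p>If the selector matches, the ENDPOINTS column will show the Pod IPs, e.g. (illustrative output):</p>
<pre><code>NAME           ENDPOINTS                          AGE
some-service   10.244.1.15:8800,10.244.2.7:8800   5m
</code></pre>
<p>In the manifests above that means the Deployment's pod template must actually carry the <code>app: some-service</code> label, since that is what the Service selector matches on.</p>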
|
<p>I am using aws EKS with a managed node group. Twice in the passed couple of weeks I had a case where the Kubelet in one of the nodes crashed or stopped reporting back to the control plane.</p>
<p>In this case I would expect the Autoscaling group to identify this node as unhealthy, and replace it. However, this is not what happens. I have recreated the issue by creating a node and manually stopping the Kubelet, see image below:</p>
<p><a href="https://i.stack.imgur.com/nleip.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nleip.png" alt="enter image description here" /></a></p>
<p>My first thought was to create an Event Bus alert that would trigger a lambda to take care of this but I couldn't find the EKS service in the list of services in Event Bus, so …</p>
<p>Does anyone know of a tool or configuration that would help with this?
To be clear I am looking for something that would:</p>
<ol>
<li>Detect that that kubelet isn't connecting to the control plane</li>
<li>Delete the node in the cluster</li>
<li>Terminate the EC2</li>
</ol>
<p>THANKS!!</p>
| <p>I would suggest looking at the <a href="https://github.com/kubernetes/node-problem-detector" rel="nofollow noreferrer">node-problem-detector</a> or this <a href="https://blog.cloudflare.com/automatic-remediation-of-kubernetes-nodes/" rel="nofollow noreferrer">blog</a> by Cloudflare. There is an <a href="https://github.com/aws/containers-roadmap/issues/928" rel="nofollow noreferrer">issue</a> on the EKS roadmap for automated node health checking. I would upvote the issue if it's important to you.</p>
|
<p>My team is experiencing an issue with longhorn where sometimes our RWX PVCs are indefinitely terminating after running <code>kubectl delete</code>. A symptom of this is that the finalizers never get removed.</p>
<p>It was explained to me that the longhorn-csi-plugin containers should execute <code>ControllerUnpublishVolume</code> when no workload is using the volume and then execute <code>DeleteVolume</code> to remove the finalizer. Upon inspection of the logs when this issue occurs, the <code>ControllerUnpublishVolume</code> event looks unsuccessful and <code>DeleteVolume</code> is never called. It looks like the response to <code>ControllerUnpublishVolume</code> is <code>{}</code> which does not seem right to me. The following logs are abridged and only include lines relevant to the volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1:</p>
<pre><code>2023-04-04T19:28:52.993226550Z time="2023-04-04T19:28:52Z" level=info msg="CreateVolume: creating a volume by API client, name: pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1, size: 21474836480 accessMode: rwx"
...
2023-04-04T19:29:01.119651932Z time="2023-04-04T19:29:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume created at 2023-04-04 19:29:01.119514295 +0000 UTC m=+2789775.717296902"
2023-04-04T19:29:01.123721718Z time="2023-04-04T19:29:01Z" level=info msg="CreateVolume: rsp: {\"volume\":{\"capacity_bytes\":21474836480,\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}}"
...
2023-04-04T19:29:01.355417228Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
...
2023-04-04T19:29:01.362958346Z time="2023-04-04T19:29:01Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com"
2023-04-04T19:29:01.363013363Z time="2023-04-04T19:29:01Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com"
...
2023-04-04T19:29:13.477036437Z time="2023-04-04T19:29:13Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:29:13.476922567 +0000 UTC m=+2789788.074705223"
2023-04-04T19:29:13.479320941Z time="2023-04-04T19:29:13Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com"
...
2023-04-04T19:31:59.230234638Z time="2023-04-04T19:31:59Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T19:31:59.233597451Z time="2023-04-04T19:31:59Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com"
...
2023-04-04T19:32:01.242531135Z time="2023-04-04T19:32:01Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:01.242373423 +0000 UTC m=+2789955.840156051"
2023-04-04T19:32:01.245744768Z time="2023-04-04T19:32:01Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com"
...
2023-04-04T19:32:01.268399507Z time="2023-04-04T19:32:01Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T19:32:01.270584270Z time="2023-04-04T19:32:01Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com"
...
2023-04-04T19:32:02.512117513Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
...
2023-04-04T19:32:02.528810094Z time="2023-04-04T19:32:02Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node2.example.com"
2023-04-04T19:32:02.528829340Z time="2023-04-04T19:32:02Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node2.example.com"
...
2023-04-04T19:32:03.273890290Z time="2023-04-04T19:32:03Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 19:32:03.272811565 +0000 UTC m=+2789957.870594214"
2023-04-04T19:32:03.289152604Z time="2023-04-04T19:32:03Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com"
...
2023-04-04T19:32:03.760644399Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":5}},\"volume_context\":{\"fromBackup\":\"\",\"fsType\":\"ext4\",\"numberOfReplicas\":\"3\",\"recurringJobSelector\":\"[{\\\"name\\\":\\\"backup-1-c9964a87-77074ba4\\\",\\\"isGroup\\\":false}]\",\"share\":\"true\",\"staleReplicaTimeout\":\"30\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1677846786942-8081-driver.longhorn.io\"},\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T19:32:03.770050254Z time="2023-04-04T19:32:03Z" level=debug msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 is ready to be attached, and the requested node is node1.example.com"
2023-04-04T19:32:03.770093689Z time="2023-04-04T19:32:03Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx requesting publishing to node1.example.com"
...
2023-04-04T19:32:04.654700819Z time="2023-04-04T19:32:04Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:04.654500435 +0000 UTC m=+2789959.252283106"
2023-04-04T19:32:04.657991819Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node2.example.com"
2023-04-04T19:32:04.658583043Z time="2023-04-04T19:32:04Z" level=info msg="ControllerPublishVolume: rsp: {}"
...
2023-04-04T19:32:05.822264526Z time="2023-04-04T19:32:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume published at 2023-04-04 19:32:05.82208573 +0000 UTC m=+2789960.419868382"
2023-04-04T19:32:05.826506892Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 with accessMode rwx published to node1.example.com"
2023-04-04T19:32:05.827051042Z time="2023-04-04T19:32:05Z" level=info msg="ControllerPublishVolume: rsp: {}"
...
2023-04-04T20:07:03.798730851Z time="2023-04-04T20:07:03Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node1.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T20:07:03.802360032Z time="2023-04-04T20:07:03Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node1.example.com"
2023-04-04T20:07:05.808796454Z time="2023-04-04T20:07:05Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:05.808607472 +0000 UTC m=+2792060.406390073"
2023-04-04T20:07:05.811653301Z time="2023-04-04T20:07:05Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node1.example.com"
...
2023-04-04T20:07:11.017524059Z time="2023-04-04T20:07:11Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T20:07:11.024127188Z time="2023-04-04T20:07:11Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com"
...
2023-04-04T20:07:13.047834933Z time="2023-04-04T20:07:13Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com"
2023-04-04T20:07:13.047839690Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}"
2023-04-04T20:07:13.378731066Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: req: {\"node_id\":\"node2.example.com\",\"volume_id\":\"pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1\"}"
2023-04-04T20:07:13.384575838Z time="2023-04-04T20:07:13Z" level=debug msg="requesting Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 detachment for node2.example.com"
...
2023-04-04T20:07:13.385792532Z time="2023-04-04T20:07:13Z" level=info msg="ControllerUnpublishVolume: rsp: {}"
2023-04-04T20:07:15.386784410Z time="2023-04-04T20:07:15Z" level=debug msg="Polling volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 state for volume unpublished at 2023-04-04 20:07:15.386596264 +0000 UTC m=+2792069.984378910"
2023-04-04T20:07:15.391059508Z time="2023-04-04T20:07:15Z" level=debug msg="Volume pvc-52d816a8-bbb2-4ac8-9e79-0b9950eafdc1 unpublished from node2.example.com"
</code></pre>
<p>We are using Longhorn v1.2.2 on Rancher RKE v2.6.5.</p>
<p>We would expect that DeleteVolume would be called, the finalizers would be removed, and the PVC would be deleted, but none of those events occur.</p>
<p>As a workaround we tried forcefully removing the finalizer using the command <code>kubectl patch pvc my-pvc -p '{"metadata":{"finalizers":null}}' --type=merge</code>. This worked, but is not ideal to do every time.</p>
<p>Any ideas about what is wrong? If not, what should be my next steps in investigating this issue?</p>
| <p>Probably some Pod mounts the volume you are trying to remove.</p>
<p>Check out this old answer of mine: <a href="https://stackoverflow.com/a/75768413/21404450">https://stackoverflow.com/a/75768413/21404450</a></p>
|
<p>I'm reading helm documentation,</p>
<blockquote>
<p>The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes.</p>
</blockquote>
<p>I have lots of different templates in my template folder, I'm looking for a way to skip those templates that start with "y" and "z" and don't send it to Kubernetes, is there any way I can achieve that? I want to be flexible let's say if statementProvider is x, skip all manifests starting with y and z and do not send it to Kubernetes.</p>
<p>I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it:</p>
<pre><code>{{- define "statement.resource"}}
{{- $statementProvider := lower ( $.Values.statementProvider ) -}}
{{- $statementFiles := list -}}
{{- range $path, $bytes := .Files.Glob "templates/**" }}
{{- if eq $statementProvider "x" -}}
{{- if not (or (hasPrefix $path "y") (hasPrefix $path "z")) -}}
{{- $statementFiles = append $statementFiles $path -}}
{{- end }}
{{- $statementFiles -}}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
| <p>It can be done with a simple <code>if</code> statement.</p>
<p>Your <code>template</code> file</p>
<pre><code>{{- if .Values.serviceAccount.enabled -}}
...
{{- end }}
</code></pre>
<p>Your <code>values</code> file</p>
<pre><code>serviceAccount:
enabled: true/false
</code></pre>
<p>You can also do the same for resources that are nested, and conditionally add a <code>volume</code> to your <code>deployment</code>.</p>
|
<p>I am currently facing an issue with a Kubernetes configuration on my cluster running in Google Kubernetes Engine in Autopilot mode in the us-west1 region. The configuration requires 40 replicas, each with a CPU limit of 1000m. I have an Nginx load balancer with an external IP that distributes load to these pods, and its CPU limit is 250m.</p>
<p>However, when I attempt to deploy this configuration, only 26 pods are created, and the remaining 14 remain in Unschedulable status. On the cluster page, I see two warnings: "Can't scale up nodes" and "Pods unschedulable."</p>
<p>Upon checking the quota page, I discovered that Google is calculating my current usage incorrectly. Although I am using 26.25 CPUs, Google shows the current usage as 64. Additionally, while there are 27 pods in total, Google calculates it as 32.</p>
<p>Here is the screenshot from quotas page:</p>
<p><a href="https://i.stack.imgur.com/esTs6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/esTs6.png" alt="enter image description here" /></a></p>
<p>This miscalculation by Google is preventing my pods from scaling, and I am unsure how to resolve this issue. Can anyone offer guidance on how to avoid this situation?</p>
| <p>Even though Autopilot handles node management for you, behind the scenes it is still creating nodes which count against your CPU quota. While you only pay for the CPU/Memory requested by your pods, the nodes which are spun up behind the scenes actually use more CPU/Memory than that as they also run system pods which you don't pay for. Autopilot tends to provision smaller nodes to optimize for scale down without disrupting workloads.</p>
<p>So in your case what is happening is that Autopilot is provisioning nodes which each use 2 vCPUs which results in 32 nodes being provisioned. You can see the nodes / number of nodes using <code>kubectl get nodes|wc -l</code> (well technically this command will be # of nodes + 1). This results in 64 vCPUs being used, which is why you are hitting the CPU quota. Additionally, seems like your Autopilot cluster is actually a public cluster, which results in a public IP being assigned to each of the 32 nodes and that's how you hit the in-use IP address quota.</p>
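<p>To verify this on your side, you can list the Autopilot-provisioned nodes together with their machine types (the label used below is the standard well-known instance-type label, so it should be present on any recent cluster):</p>
<pre><code>kubectl get nodes -L node.kubernetes.io/instance-type   # shows each node and its machine type
kubectl get nodes --no-headers | wc -l                  # total node count
</code></pre>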
<p>To avoid the in-use IP address quota, you should create a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">private Autopilot cluster</a>. Unfortunately, the best way to do this would be to create a brand new Autopilot cluster. If you are unable to create a new cluster, then you'll need to request a quota increase for in-use IP addresses (64 should probably be enough). But I'd highly recommend creating a new private cluster if at all possible.</p>
<p>To resolve issues with CPU quota, I'd recommend requesting double what you expect your total request/limits to be and rounding to the nearest power of 2, which in your case I'd suggest something like 128 vCPUs. You'll need to make sure that your total CPU quota (the one in your image) and your E2 CPU quota are both set (your E2 default quota is probably fine).</p>
|
<p>I'm looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command?</p>
<p>Looking to exclude certain namespaces as well</p>
<p>kubectl get pods gets me a list of every pod.</p>
| <p>Please use the below command:</p>
<pre><code>kubectl get pods --all-namespaces -o json | jq '.items | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})'
</code></pre>
<p>Output format:</p>
<pre><code>[
{
"namespace": "keda",
"NoOfPods": 3
},
{
"namespace": "kube-system",
"NoOfPods": 12
},
{
"namespace": "prod",
"NoOfPods": 1
},
{
"namespace": "stage",
"NoOfPods": 1
}
]
</code></pre>
|
<p>Below is my <code>testfile.yaml</code>:</p>
<pre><code>---
kind: Pod
metadata:
name: amazing-application
---
kind: Deployment
metadata:
name: amazing-deployment
---
kind: Service
metadata:
name: amazing-deployment
---
kind: Service
metadata:
name: tea-service
</code></pre>
<p>My goal is to split this into 4 files where the filename is <code>.metadata.name</code> and the dir that file goes into is <code>.kind</code>.</p>
<p>I have achieved what I want with this:</p>
<pre><code>for kind in $(yq e '.kind' testfile.yaml | awk '!/^(---)/' | uniq);
do
mkdir "$kind"
cd "$kind"
yq 'select(.kind == "'$kind'")' ../testfile.yaml | yq -s '.metadata.name'
cd ..;
done
</code></pre>
<p>What I want to know is how to get a unique-together mapping, or somehow use multiple criteria to split the testfile, rather than going through the loop.</p>
<p>Is there a way to use <code>yq</code> and <code>-s</code> or <code>select</code> to select where kind and metadata.name are unique together in that individual document (document as in separated by '---')?</p>
<p>Because if you do <code>yq -s '.kind' testfile.yaml</code> it will yield three yaml files, not four. Same for <code>yq -s '.metadata.name' testfile.yaml</code>; we get three files as not all <code>name</code> are unique - one gets lost.</p>
| <p>There are a few ways you can do this direct in yq.</p>
<p>First off, you can use string concatenation with another property to come up with a unique filename:</p>
<pre><code>yq -s '(.kind | downcase) + "_" + .metadata.name' testfile.yaml
</code></pre>
<p>That will create files like:</p>
<pre><code>deployment_amazing-deployment.yml
pod_amazing-application.yml
service_amazing-deployment.yml
service_tea-service.yml
</code></pre>
<p>Or you can use the built-in $index to make the filenames unique:</p>
<pre><code>yq -s '.metadata.name + "_" + $index'
</code></pre>
<p>Which will create:</p>
<pre><code>amazing-application_0.yml
amazing-deployment_1.yml
amazing-deployment_2.yml
tea-service_3.yml
</code></pre>
<p>Disclaimer: I wrote yq</p>
|
<p>In our kubernetes cluster we are using istio, with mutual tls for the communication between the pods inside the mesh. Everything is working fine, but now we would like to introduce a VirtualService to able to do traffic shifting for canary deployments.
We configured everything according to the Istio documentation, but for some reason the VirtualService seems to be ignored: our canary version does not receive any traffic, even with a 50/50 traffic split.</p>
<p>Note, we are only talking about traffic <em>inside the mesh</em>, there is no external traffic, it's exclusively between pods in the same namespace.</p>
<p>Our setup:</p>
<p>Service of our application 'parser-service'</p>
<pre><code># service parser-service
spec:
clusterIP: 172.20.181.129
ports:
- name: https-web
port: 80
protocol: TCP
targetPort: 8080
selector:
service: parser-service
type: ClusterIP
</code></pre>
<p>Service of the canary version</p>
<pre><code># service parser-service-canary
spec:
clusterIP: 172.20.30.101
ports:
- name: https-web
port: 80
protocol: TCP
targetPort: 8080
selector:
service: parser-service-canary
type: ClusterIP
</code></pre>
<p>This is what we tried, a VirtualService that should split traffic 50/50</p>
<pre><code>spec:
gateways:
- mesh
hosts:
- parser-service
tls:
- match:
- port: 80
sniHosts:
- parser-service
route:
- destination:
host: parser-service
port:
number: 80
weight: 50
- destination:
host: parser-service-canary
port:
number: 80
weight: 50
</code></pre>
<p>I think we misunderstood something, but we can't figure out what it is. The traffic is still routed 100% to parser-service and <code>istioctl x describe pod parser-service-xxx-xxx</code> also shows no VirtualService, which suggests to me that the VirtualService is just ignored.</p>
<pre><code>Pod: parser-service-7cfd596dbb-hjqd9
Pod Revision: 1-14-6
Pod Ports: 8080 (parser-service), 15090 (istio-proxy)
Suggestion: add 'version' label to pod for Istio telemetry.
--------------------
Service: parser-service
Port: https-web 80/HTTPS targets pod port 8080
DestinationRule: istio-mutual for "*.mynamespace.svc.cluster.local"
Traffic Policy TLS Mode: ISTIO_MUTUAL
--------------------
Effective PeerAuthentication:
Workload mTLS mode: PERMISSIVE
</code></pre>
<p>I think it has something to do with the fact that we named our ports <code>https-web</code> so that the traffic between the istio-sidecars is encrypted. When we use 'http-web' as port names and HTTP Match instead of tls in the VirtualService, the traffic split works fine.</p>
<p>Would appreciate any hints or pointers in the right direction</p>
| <p>As you suggested, Istio derives the protocol from the port name and will try to treat traffic on your port 80/8080 as HTTPS. You should name the ports http-web instead.
Additionally, your VirtualService tries to match traffic via the TLS SNI header on port 80. Don't use a tls matcher; use an http matcher and let it use the Host header to capture your traffic and then distribute it to both versions.
Istio will take care of adding mTLS between two Istio-enabled pods (given that your trafficPolicy is set to ISTIO_MUTUAL, which seems to be true in your istioctl output).</p>
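<p>As a hedged sketch, reusing the hosts and weights from your example, the VirtualService could then look roughly like this:</p>
<pre><code>spec:
  gateways:
    - mesh
  hosts:
    - parser-service
  http:
    - route:
        - destination:
            host: parser-service
            port:
              number: 80
          weight: 50
        - destination:
            host: parser-service-canary
            port:
              number: 80
          weight: 50
</code></pre>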
|
<p>Trying to export Vault secrets as environment variables to a k8s pod using the Vault injector, following the Vault documentation: <a href="https://developer.hashicorp.com/vault/docs/platform/k8s/injector/examples#environment-variable-example" rel="nofollow noreferrer">https://developer.hashicorp.com/vault/docs/platform/k8s/injector/examples#environment-variable-example</a></p>
<p>As mentioned in the example, you need to source the config file inside the container, and this overrides the ENTRYPOINT script/command that you pass in the Dockerfile.</p>
<pre><code> containers:
- name: web
image: alpine:latest
command:
['sh', '-c']
args:
['source /vault/secrets/config && <entrypoint script>']
</code></pre>
<p>In my setup, I don't have a static entrypoint script that I can put here in args; the Dockerfile has its own command/script running as the entrypoint.</p>
<p>I'm trying to find an alternative way to source this Vault config inside the container that doesn't require changing the entrypoint script in the Dockerfile. I'm not sure if Kubernetes provides any way to do this with a post-hook or something similar that runs the entrypoint from the Dockerfile first and then executes other scripts/commands passed in the post-hook.</p>
| <p>You can use the <a href="https://github.com/hashicorp/vault-secrets-operator/" rel="nofollow noreferrer">Vault Secrets Operator</a> to synchronize secrets from Vault to Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a> resources.</p>
<p>Once you've done that, you can then expose those secrets as environment variables using <code>envFrom</code> or <code>valueFrom</code> directives in your deployment manifests, as described <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/" rel="nofollow noreferrer">in the documentation</a>.</p>
<p>This method does not require overriding the entrypoint or arguments of your containers.</p>
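<p>For illustration, a minimal hedged sketch of consuming such a synced Secret (the Secret name <code>vso-synced-secret</code> is a placeholder for whatever the operator writes) without touching the entrypoint:</p>
<pre><code>containers:
  - name: web
    image: alpine:latest
    envFrom:
      - secretRef:
          name: vso-synced-secret   # placeholder: the Kubernetes Secret synced from Vault
</code></pre>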
<hr />
<p>It looks like Vault Secrets Operator is relatively new and the documentation seems a bit slim. You can achieve similar functionality using the <a href="https://external-secrets.io/" rel="nofollow noreferrer">External Secrets Operator</a>, which has the added advantage that it supports a variety of secret store backends.</p>
|
<p>By default creating a managed certificate object on GKE creates a managed certificate of type "Load Balancer Authorization". How can I create one with DNS authorization through GKE?</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs</a></p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
spec:
domains:
- DOMAIN_NAME1
- DOMAIN_NAME2
</code></pre>
<p>I want to add wildcard domains and this only possible with DNS authorization.</p>
<p><a href="https://stackoverflow.com/questions/73734679/how-to-generate-google-managed-certificates-for-wildcard-hostnames-in-gcp">How to generate Google-managed certificates for wildcard hostnames in GCP?</a></p>
| <p>To create a Google-managed certificate with DNS authorization, follow this <a href="https://cloud.google.com/certificate-manager/docs/deploy-google-managed-dns-auth#create_a_google-managed_certificate_referencing_the_dns_authorization" rel="nofollow noreferrer">Google official doc</a> and the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/certificate_manager_dns_authorization" rel="nofollow noreferrer">Terraform doc</a>.</p>
<blockquote>
<p>Each DNS authorization stores information about the DNS record that
you need to set up and covers a single domain plus its wildcard—for
example, example.com and *.example.com.</p>
</blockquote>
<ul>
<li>You need to add both the domain name and the wildcard name to the same certificate when creating it.</li>
<li>Using a certificate map and certificate map entries, you need to map both the domain and the wildcard domain.</li>
<li>Create two certificate map entries: one for the domain and the other for the wildcard domain. This allows the certificate to become active. You can also refer to this <a href="https://github.com/hashicorp/terraform-provider-google/issues/11037#issuecomment-1362628852" rel="nofollow noreferrer">GitHub comment</a> by fbozic for relevant info.</li>
</ul>
<p>A <a href="https://issuetracker.google.com/issues/123290919" rel="nofollow noreferrer">feature request</a> has already been raised for broader wildcard support, and the Google product team is working on it.</p>
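<p>For reference, a hedged sketch of the gcloud steps described in the linked doc (all names and domains below are placeholders):</p>
<pre><code># Create the DNS authorization and read back the CNAME record you must add to your zone
gcloud certificate-manager dns-authorizations create my-auth --domain="example.com"
gcloud certificate-manager dns-authorizations describe my-auth

# Create a certificate covering the domain and its wildcard, referencing the authorization
gcloud certificate-manager certificates create my-cert \
  --domains="example.com,*.example.com" \
  --dns-authorizations=my-auth

# Map both hostnames to the certificate
gcloud certificate-manager maps create my-map
gcloud certificate-manager maps entries create my-entry-root \
  --map=my-map --hostname="example.com" --certificates=my-cert
gcloud certificate-manager maps entries create my-entry-wildcard \
  --map=my-map --hostname="*.example.com" --certificates=my-cert
</code></pre>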
|
<p>I'm looking to get the number of pods on a cluster by namespace. Is this possible with a kubectl command?</p>
<p>Looking to exclude certain namespaces as well</p>
<p>kubectl get pods gets me a list of every pod.</p>
| <p>I have just modified the command posted in the above answer to address the OP's need to exclude certain namespaces as well.</p>
<pre><code>kubectl get pods --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace != "logging")] | group_by(.metadata.namespace) | map({"namespace": .[0].metadata.namespace, "NoOfPods": (length)})'
</code></pre>
<p>Result without exclusion:</p>
<p><a href="https://i.stack.imgur.com/QJ5iv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QJ5iv.png" alt="enter image description here" /></a></p>
<p>Result after exclusion:</p>
<p><a href="https://i.stack.imgur.com/YBydy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YBydy.png" alt="enter image description here" /></a></p>
<p>In here, I'm excluding the pods from <code>logging</code> namespace only but you can exclude more namespaces using the format:</p>
<pre><code>.metadata.namespace != "logging" and .metadata.namespace != "monitoring"
</code></pre>
<p>Hope this helps!</p>
|
<p>I'm new here so if this comes out stupid please forgive me 🙂 I've been using Couchbase over 10+ years on real hardware. I've been working on establishing CB in Kubernetes and that seems to be working just fine. I'm also using Couchbase Autonomous Operator. Works great, no complaints with normal functioning thus far.</p>
<p>However, I've been working through performing Velero Backup and Restore of both the Cluster and the CB Operator. I thought I finally had it working earlier last week, but a recent attempt to restore from a Velero backup once again resulted in messages like this in the CBO's logs:</p>
<pre><code>{"level":"info","ts":1680529171.8283288,"logger":"cluster","msg":"Reconcile completed","cluster":"default/cb-dev"}
{"level":"info","ts":1680529172.0289326,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0002"}
{"level":"info","ts":1680529172.0289645,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0003"}
{"level":"info","ts":1680529172.0289707,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0001"}
{"level":"info","ts":1680529172.0289757,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0004"}
</code></pre>
<p>I've tried to find what this really means. And I have some suspicions but I don't know how to resolve it.</p>
<p>Of note in the messages above is that 'cb-dev-0000' never appears in the recurring list. These messages appear every few seconds in the couchbase-operator pod logs.</p>
<p>Additionally, if I delete one pod at a time, they will be recreated by K8s or CBO (not real sure) and then it disappears from the list that keeps repeating. Once I do that with all of them, this issue stops.</p>
<p>Any ideas, questions, comments on this would really be greatly appreciated</p>
<p>This is all just for testing at this point, nothing here is for production, I'm just trying to validate that Velero can indeed backup both Couchbase Operator and Couchbase Cluster and subsequently restore them from the below Schedule Backup.</p>
<p>I am using the default Couchbase Operator install, version 2.4.0.</p>
<p>I am using a very basic, functional Couchbase Server cluster installation YAML.</p>
<p>I tried to use the Velero Schedule backup below and then restore from that backup, expecting that both the Couchbase Cluster and the Couchbase Operator would restore without any issues.</p>
<p>But what happens is that I get a functional CB Cluster, and a CBO which logs constantly msgs like this:</p>
<pre><code>{"level":"info","ts":1680529171.8283288,"logger":"cluster","msg":"Reconcile completed","cluster":"default/cb-dev"}
{"level":"info","ts":1680529172.0289326,"logger":"cluster","msg":"Pod ignored, no owner","cluster":"default/cb-dev","name":"cb-dev-0002"}
}
</code></pre>
<p>This might be important: I never see 'cb-dev-0000' listed in these messages, though the pod does exist. I reiterate that the restored CB cluster is functioning 'normally' as near as I can tell, and the CB Operator is the only thing reporting these types of errors.</p>
<p>kubectl apply -f schedule.yaml</p>
<p>Where schedule.yaml contains this:</p>
<pre><code>apiVersion: velero.io/v1
kind: Schedule
metadata:
name: dev-everything-schedule
namespace: velero
spec:
schedule: 0 * * * *
template:
metadata:
labels:
velero.io/schedule-name: dev-everything-schedule
storageLocation: default
includeClusterResources: true
includedNamespaces:
- kube-public
- kube-system
- istio-system
- velero
- default
- cert-manager
- kube-node-lease
excludedResources:
includedResources:
- authorizationpolicies.security.istio.io
- backuprepositories.velero.io
- backupstoragelocations.velero.io
- backups.velero.io
- certificaterequests.cert-manager.io
- certificates.cert-manager.io
- cert-manager-webhook
- challenges.acme.cert-manager.io
- clusterissuers.cert-manager.io
- clusterrolebindings.rbac.authorization.k8s.io
- clusterroles.rbac.authorization.k8s.io
- configmaps
- controllerrevisions
- couchbaseautoscalers.couchbase.com
- couchbasebackuprestores.couchbase.com
- couchbasebackups.couchbase.com
- couchbasebuckets.couchbase.com
- couchbaseclusteroauths
- couchbaseclusters.couchbase.com
- couchbasecollectiongroups.couchbase.com
- couchbasecollections.couchbase.com
- couchbaseephemeralbuckets.couchbase.com
- couchbaseevents
- couchbasegroups.couchbase.com
- couchbasememcachedbuckets.couchbase.com
- couchbasemigrationreplications.couchbase.com
- couchbasereplications.couchbase.com
- couchbaserolebindings.couchbase.com
- couchbasescopegroups.couchbase.com
- couchbasescopes.couchbase.com
- couchbaseusers.couchbase.com
- cronjobs
- csidrivers
- csistoragecapacities
- customresourcedefinitions.apiextensions.k8s.io
- daemonsets
- deletebackuprequests
- deletebackuprequests.velero.io
- deployments
- destinationrules.networking.istio.io
- downloadrequests.velero.io
- endpoints
- endpointslices
- eniconfigs.crd.k8s.amazonaws.com
- envoyfilters.networking.istio.io
- events
- gateways
- gateways.networking.istio.io
- horizontalpodautoscalers
- ingressclassparams.elbv2.k8s.aws
- ingresses
- issuers.cert-manager.io
- istiooperators.install.istio.io
- item_istiooperators
- item_wasmplugins
- jobs
- leases
- limitranges
- namespaces
- networkpolicies
- orders.acme.cert-manager.io
- peerauthentications.security.istio.io
- persistentvolumeclaims
- persistentvolumes
- poddisruptionbudgets
- pods
- podtemplates
- podvolumebackups.velero.io
- podvolumerestores.velero.io
- priorityclasses.scheduling.k8s.io
- proxyconfigs.networking.istio.io
- replicasets
- replicationcontrollers
- requestauthentications.security.istio.io
- resourcequotas
- restores.velero.io
- rolebindings.rbac.authorization.k8s.io
- roles.rbac.authorization.k8s.io
- schedules.velero.io
- secrets
- securitygrouppolicies.vpcresources.k8s.aws
- serverstatusrequests.velero.io
- serviceaccounts
- serviceentries
- serviceentries.networking.istio.io
- services
- sidecars.networking.istio.io
- statefulsets
- targetgroupbindings.elbv2.k8s.aws
- telemetries.telemetry.istio.io
- telemetry
- validatingwebhookconfiguration.admissionregistration.k8s.io
- virtualservices.networking.istio.io
- volumesnapshotlocations.velero.io
- wasmplugins.extensions.istio.io
- workloadentries.networking.istio.io
- workloadgroups.networking.istio.io
ttl: 12h
</code></pre>
<p>I kubectl delete the cluster and operator, and subsequently restore them from the Velero backup using something like this:</p>
<p>velero restore create dev-everything-schedule-20230331160030 --from-backup dev-everything-schedule-20230331160030</p>
<p>It restores the cluster and the CBO, and that's when I start seeing these types of messages in the couchbase-operator pod's logs.</p>
<p><strong>UPDATE</strong>:</p>
<p>Digging into the JSON files of the Velero Backup under pods/namespaces/default/cb-dev-0000.json and comparing that with cb-dev-0001.json I just spotted a major difference that probably relates to this issue:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
...
"name": "cb-dev-0000",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "couchbase.com/v2",
"blockOwnerDeletion": true,
"controller": true,
"kind": "CouchbaseCluster",
"name": "cb-dev",
"uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxx"
}
],
"resourceVersion": "xxxxxxx",
"uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
}
...
}
</code></pre>
<p>and now the same thing for cb-dev-0001 (one of the ones getting logged constantly in CBO)</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
...
"name": "cb-dev-0001",
"namespace": "default",
"resourceVersion": "xxxxxxx",
"uid": "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
}
...
}
</code></pre>
<p><strong>ownerReferences</strong> is missing from the Velero backup for cb-dev-0001, 0002, 0003, 0004. Now I think I'm onto something.</p>
<p>I don't know why Velero would find this and store it in the backup for ONE POD vs all of them. But that's a clue I think...</p>
<p>Still hunting...</p>
<p><strong>UPDATE 2</strong>:</p>
<p>I've confirmed that Velero is storing the backup for the Couchbase objects in its JSON files correctly every time (from what I've seen so far).</p>
<p>However, the Velero restore is almost randomly not setting the metadata.ownerReferences in the restored Couchbase pods. Sometimes it's only set in the Couchbase Services and the cb-dev-0000 pod. Sometimes it's not set in any of them. Sometimes I've seen it (in the past) set in all of them (correctly?).</p>
<p>So it's still a mystery, but that's where I am so far. I've seen other people mention on various chats/forums that they've experienced similar issues with Velero.</p>
<p>I'm secretly hoping I'll find a missing argument or annotation where I can specifically force ownerReferences to be restored for certain objects. But I haven't seen that yet...</p>
| <p>As Sathya S. noted, it appears that Velero doesn't (reliably) restore metadata.ownerReferences from its backups.</p>
<p><strong>I will add to that that SOMETIMES it does.</strong> And that's what throws me. It almost seems like there's a pattern when it does, at least in my case: if cb-dev-0000 has it, then the Services will also, but the remaining CB pods won't. Otherwise all of them 'might' have it set, or none of them. At least in the example I've set up here.</p>
<p>Couchbase notes in their docs that 'pods' and 'services' should NOT be included in the Velero backup. This had stuck in my mind, but I kind of didn't trust it.</p>
<p>Turns out THAT seems to be VITAL for Velero to properly restore my Couchbase cluster and avoid the "Pod ignored, no owner" issue seen in Couchbase Operator logs.</p>
<p>Once I removed 'pods' and 'services' from my scheduled backup and it had created a backup, I kubectl deleted my Couchbase cluster. Then I ran velero restore create --from-backup and, voilà, the cluster came up. Additionally, I'll note that the indexes and bucket documents I'd created were restored as well.</p>
<p>Most important to this issue is that the metadata.ownerReferences were all set up properly. I've done this several times now before answering this question, and this seems to be the key point: don't include pods and services in the backup.</p>
<p>"You may have noticed that neither pods nor services were backed up. This is because <strong>the Operator will be able to recreate them from the cluster ConfigMap</strong>, metadata attached to the persistent volume claims, and the CouchbaseCluster resource itself. Likewise the deployment will be able to recreate the Operator pod." ~ <a href="https://docs.couchbase.com/operator/current/tutorial-velero-backup.html#creating-a-velero-backup" rel="nofollow noreferrer">https://docs.couchbase.com/operator/current/tutorial-velero-backup.html#creating-a-velero-backup</a></p>
<p>Ultimately, all I had to do was remove pods and services from my scheduled backup's 'includedResources' YAML and delete/re-apply the schedule.</p>
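<p>If you prefer not to touch the long includedResources list, an equivalent (hedged) sketch is to exclude the two kinds explicitly, since Velero excludes take precedence over includes:</p>
<pre><code>spec:
  template:
    excludedResources:
      - pods
      - services
</code></pre>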
|
<p>When you read about API Groups in Kubernetes, you may come across group names like <code>core/v1</code>, <code>batch/v1</code>, <code>storage.k8s.io/v1</code>, and <code>scheduling.k8s.io/v1</code>. Some of these groups have the <code>k8s.io</code> suffix, while others do not. It can be confusing when specifying the apiVersion in your Kubernetes manifests, as you need to specify it as <code>v1</code>, <code>batch/v1</code>, or <code>storage.k8s.io/v1</code>.</p>
<p>Knowing the reason will make me less confused.</p>
<p>Thanks.</p>
| <p>I haven't found a precise explanation for your question; I believe it's an implementation choice.</p>
<p>But I'm sharing some resources that might clarify your ideas a bit.</p>
<blockquote>
<p>Resources are bound together in API groups - each group may have one or more versions that evolve independent of other API groups, and each version within the group has one or more resources. Group names are typically in domain name form - the Kubernetes project reserves use of the empty group, all single word names ("extensions", "apps"), and any group name ending in "*.k8s.io" for its sole use. When choosing a group name, we recommend selecting a subdomain your group or organization owns, such as "widget.mycompany.com".</p>
</blockquote>
<p><a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions</a></p>
<p><a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#api-conventions</a></p>
<p><a href="https://stackoverflow.com/a/57854939/21404450">https://stackoverflow.com/a/57854939/21404450</a></p>
|
<p>I have a very interesting situation here. :-) I reinstalled my Kubernetes cluster (bare metal). I have 1 master and 2 worker nodes. On worker no 2, on the host machine, I have an Apache http server listening on port 80 and serving a web page.</p>
<pre><code>NAME STATUS ROLES AGE VERSION
kubemaster Ready control-plane 22h v1.26.3
kubenode-1 Ready <none> 21h v1.26.3
kubenode-2 Ready <none> 21h v1.26.3
</code></pre>
<p>I installed Nginx ingress controller on my cluster as a daemon set. So there are 2 controllers running on my 2 worker nodes.</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bkk-ingress-5p7b8 1/1 Running 0 31m 192.168.29.136 kubenode-1 <none> <none>
bkk-ingress-rxjw4 1/1 Running 0 31m 192.168.77.72 kubenode-2 <none> <none>
</code></pre>
<p>And here is the interesting part. When I want to visit the web page served by Apache on the host machine, the traffic is taken by Nginx and I see the Nginx ingress controller "not found" page instead of my web page. However, only Apache is listening on port 80:</p>
<pre><code>[root@kubenode-2 ~]$ netstat -tulpn | grep --color :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1099870/httpd
[root@kubenode-2 ~]$ netstat -tulpn | grep --color :443
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1099870/httpd
</code></pre>
<p>The ingress service is a NodePort and it is not supposed to use port 80 or 443:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: bkk
meta.helm.sh/release-namespace: nginx
creationTimestamp: "2023-03-30T21:00:17Z"
labels:
app.kubernetes.io/instance: bkk
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: bkk-nginx-ingress
helm.sh/chart: nginx-ingress-0.16.2
name: bkk-nginx-ingress
namespace: nginx
resourceVersion: "159007"
uid: 1f45c592-80e0-4fb5-acbd-a04c76772e6a
spec:
clusterIP: 10.104.170.119
clusterIPs:
- 10.104.170.119
externalTrafficPolicy: Local
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http
nodePort: 31902
port: 801
protocol: TCP
targetPort: 801
- name: https
nodePort: 31903
port: 4431
protocol: TCP
targetPort: 4431
selector:
app: bkk-ingress
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>And there is not a single ingress defined anywhere in my system:</p>
<pre><code>[root@kubemaster helm-chart]$ kubectl get ingresses --all-namespaces
No resources found
</code></pre>
<p>Does anyone have any idea what is going on here? Why does Nginx, living and running as a Kubernetes pod, steal the traffic from the host machine?</p>
| <p>I found it out... When I installed Nginx using Helm, it added a hostPort: 80 to the pod definition by default, even though it is not in the values.yaml and not mentioned anywhere. So, if you remove that by hand, Nginx won't steal port 80 from the Apache installed outside of Kubernetes.</p>
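<p>To confirm this on your own cluster, a hedged sketch (assuming the DaemonSet is named <code>bkk-ingress</code> in the <code>nginx</code> namespace, as in your pod listing):</p>
<pre><code># Show the container ports of the ingress pods; look for "hostPort": 80/443
kubectl get pods -n nginx -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].ports}{"\n"}{end}'

# Remove the hostPort entries from the DaemonSet by hand
kubectl edit daemonset bkk-ingress -n nginx
</code></pre>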
|
<p>I have a pod within a Kubernetes cluster that needs to send alarms via SNMP to an external network management system. However, the external system will only be able to identify the pod if it keeps a stable IP address. Considering the ephermal nature of pods, would it be possible to send/redirect requests to a system outside of the cluster with a static IP?</p>
<p>The information I could gather so far only proposes solutions for reaching the pod from outside the cluster, e.g. with Services. I found the following <a href="https://stackoverflow.com/a/59488628/11783513">answer</a> that suggests using an egress gateway, but not much information is provided on how to approach the issue.</p>
| <p>One viable solution is to utilize an Egress Router resource defined <a href="https://docs.openshift.com/container-platform/4.12/networking/openshift_sdn/using-an-egress-router.html" rel="nofollow noreferrer">here</a>, which redirects traffic to a specified IP using a dedicated source IP address:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: egress-1
labels:
name: egress-1
annotations:
pod.network.openshift.io/assign-macvlan: "true"
spec:
initContainers:
- name: egress-router
image: registry.redhat.io/openshift4/ose-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE
value: <egress_router>
- name: EGRESS_GATEWAY
value: <egress_gateway>
- name: EGRESS_DESTINATION
value: <egress_destination>
- name: EGRESS_ROUTER_MODE
value: init
containers:
- name: egress-router-wait
image: registry.redhat.io/openshift4/ose-pod
</code></pre>
<p>An example configuration looks like follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: egress-multi
labels:
name: egress-multi
annotations:
pod.network.openshift.io/assign-macvlan: "true"
spec:
initContainers:
- name: egress-router
image: registry.redhat.io/openshift4/ose-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE
value: 192.168.12.99/24
- name: EGRESS_GATEWAY
value: 192.168.12.1
- name: EGRESS_DESTINATION
value: |
203.0.113.25
- name: EGRESS_ROUTER_MODE
value: init
containers:
- name: egress-router-wait
image: registry.redhat.io/openshift4/ose-pod
</code></pre>
<p>The Egress Router pod is exposed by a Service and linked to the application that needs to send outbound SNMP traps:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: egress-1
spec:
ports:
- name: snmp
port: 162
type: ClusterIP
selector:
name: egress-1
</code></pre>
<p>The application sends the SNMP trap to the ClusterIP/Service-Name of the Service exposing the Egress Router pod, and the pod redirects the request to the specified remote server. Once redirected, the source IP is changed to the Source IP specified in the Egress Router resource. For more information on implementing the egress router in redirection mode, see <a href="https://docs.openshift.com/container-platform/4.12/networking/openshift_sdn/deploying-egress-router-layer3-redirection.html#deploying-egress-router-layer3-redirection" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Note that depending on your network configuration, you might need to configure the <em>assign-macvlan</em> field to a different NIC interface and set it to the name of that interface, e.g. <em>eth1</em></strong>.</p>
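<p>Once the Service is in place, a hedged test from an application pod could look like this (using net-snmp's <code>snmptrap</code>; the community string and OID below are placeholders):</p>
<pre><code># Send a test trap to the egress router Service, which forwards it to EGRESS_DESTINATION
snmptrap -v 2c -c public egress-1.default.svc.cluster.local:162 '' 1.3.6.1.4.1.8072.2.3.0.1
</code></pre>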
|
<p>I have in my home env two systems:</p>
<ul>
<li>a Docker server on 10.0.0.10/16, using Traefik reverse proxy with one of my public domains</li>
<li>and a Kubernetes cluster with a VIP on 10.0.0.201/16, using Nginx and meant to serve another of my public domains</li>
</ul>
<p>My firewall is pfSense and I have a reverse proxy package installed; certificates are managed by Nginx/Traefik on Docker/k8s.</p>
<p>How can I configure pfSense to direct domainA traffic to the Docker host and domainB to the k8s one? (No need for it to do ACME, just directing the traffic...)</p>
<p>A step by step guide, or maybe some pointer to a good guide would be much appreciated...</p>
<p>Thanks,
G</p>
<p>I tried setting up a reverse proxy on pfSense, but I'm clearly missing some bits.</p>
| <p>You need to:</p>
<ol>
<li>Enable <a href="https://docs.netgate.com/pfsense/en/latest/packages/haproxy.html" rel="nofollow noreferrer">Haproxy on pfsense</a></li>
<li>Create a frontend on ports 80 and 443.</li>
<li>Create two backends pointing to the Docker and k8s environments.</li>
<li>Write ACLs in your frontend to direct traffic to the right backend, according to hostname and SNI (for SSL).</li>
</ol>
|