###Instruction: My architecture looks like this:
Here, the HTTPS requests first go to the route53 service for DNS resolution. Route53 forwards the request to the Network Load balancer. This service redirects the traffic to HAProxy pods running inside a Kubernetes cluster.
The HAProxy servers are required to read a specific request header and, based on its value, route the traffic to a backend. To keep things simple, I have kept a single K8s backend cluster, but assume that there is more than one such backend cluster running.
Considering this architecture:
What is the best place to perform TLS termination? Should we do it at NLB (green box) or implement it at HAProxy (Orange box)?
What are the advantages and disadvantages of each scenario?
### Response: Since you are using an NLB (a layer-4 load balancer that can pass TCP traffic straight through), you can also achieve end-to-end HTTPS; however, this forces the services behind it to handle TLS themselves.
You can terminate at the load balancer level if you have multiple load balancers, each backed by a cluster; leveraging AWS Certificate Manager with the load balancer is an easy way to manage certificates across multiple setups.
There is no guarantee that someone who gets into your network cannot exploit a bug and intercept traffic between services. The Software Defined Network (SDN) in your VPC is secure and protects against spoofing, but it is not an absolute guarantee.
So there is an advantage to using TLS/SSL inside the VPC as well.
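If you terminate TLS at HAProxy instead, the decrypted request headers are available for the routing decision described in the question. A minimal sketch of such a frontend (the certificate path, header name, and backend addresses here are all hypothetical, not taken from the question):

```
# Hypothetical HAProxy frontend terminating TLS and routing on a request header
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/example.pem   # assumed certificate bundle path
    # Route based on an assumed header name "X-Backend-Id"
    use_backend cluster-a if { req.hdr(X-Backend-Id) -m str cluster-a }
    default_backend cluster-b

backend cluster-a
    server s1 10.0.1.10:8080 check

backend cluster-b
    server s1 10.0.2.10:8080 check
```

With termination at the NLB instead, HAProxy would receive plain HTTP and the `ssl crt` part moves to an ACM certificate on the load balancer listener.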
###Instruction: I am trying to specify the Windows location of the .kube/config file for a Go client application without specifying the absolute path.
kubeconfig := flag.String("kubeconfig", "%USERPROFILE%/.kube/config", "location to the Kube config file")
Output :
panic: runtime error: invalid memory address or nil pointer dereference
When I use echo %USERPROFILE% in a cmd, the output is C:\Users\<username>, so I thought that this is because the different usage of \ and / in the code and path.
I tried to specify the path using \ instead of / but it gives out a syntax error.
Can anyone suggest me with a solution to use windows environmental variables to specify paths in a Go application?
Thanks in advance.
### Response: flag.String itself does not expand environment variables, but you can use os.ExpandEnv to do that. Note that os.ExpandEnv requires the Unix notation for environment variables, e.g. $USERPROFILE or ${USERPROFILE}. You can then get a clean file path for your specific OS (Windows in your case) using filepath.Clean.
Example:
kubeconfig := flag.String("kubeconfig", "$USERPROFILE/.kube/config", "location to the Kube config file")
flag.Parse() // parse the command line before reading flag values
fmt.Println(*kubeconfig)
fmt.Println(os.ExpandEnv(*kubeconfig))
fmt.Println(filepath.Clean(os.ExpandEnv(*kubeconfig)))
This will output the following on Windows:
$USERPROFILE/.kube/config
C:\Users\username/.kube/config
C:\Users\username\.kube\config
###Instruction: I am experiencing a lot of CPU throttling (see nginx graph below, other pods often 25% to 50%) in my Kubernetes cluster (k8s v1.18.12, running 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux).
Due to backports, I do not know whether my cluster contains the Linux kernel bug described in https://lkml.org/lkml/2019/5/17/581. How can I find out? Is there a simple way to check or measure?
If I have the bug, what is the best approach to get the fix? Or should I mitigate otherwise, e.g. not use CFS quota (--cpu-cfs-quota=false or no CPU limits) or reduce cfs_period_us and cfs_quota_us?
CPU Throttling Percentage for nginx (scaling horizontally around 15:00 and removing CPU limits around 19:30):
### Response: Recently I have been debugging a CPU throttling issue, and with the following five tests I was able to reproduce the bug on an affected kernel (Linux version 4.18.0-041800rc4-generic).
Each test case is intended to hit 100% throttling for the 5000 ms test run: 5000 ms / 100 ms periods = 50 periods, and with a 10 ms quota per period a kernel without this bug should report CPU usage of about 500 ms.
You can try these tests to check whether your kernel will be throttled.
[Multi Thread Test 1]
./runfibtest 1; ./runfibtest 8
From <https://github.com/indeedeng/fibtest>
[Result]
Throttled
./runfibtest 1
Iterations Completed(M): 465
Throttled for: 52
CPU Usage (msecs) = 508
./runfibtest 8
Iterations Completed(M): 283
Throttled for: 52
CPU Usage (msecs) = 327
[Multi Thread Test 2]
docker run -it --rm --cpu-quota 10000 --cpu-period 100000 hipeteryang/fibtest:latest /bin/sh -c "runfibtest 8 && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"
[Result]
Throttled
Iterations Completed(M): 192
Throttled for: 53
CPU Usage (msecs) = 227
nr_periods 58
nr_throttled 56
throttled_time 10136239267
267434463
209397643 2871651 8044402 4226146 5891926 5532789 27939741 4104364
[Multi Thread Test 3]
docker run -it --rm --cpu-quota 10000 --cpu-period 100000 hipeteryang/stress-ng:cpu-delay /bin/sh -c "stress-ng --taskset 0 --cpu 1 --timeout 5s & stress-ng --taskset 1-7 --cpu 7 --cpu-load-slice -1 --cpu-delay 10 --cpu-method fibonacci --timeout 5s && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"
Result
Throttled
nr_periods 56
nr_throttled 53
throttled_time 7893876370
379589091
330166879 3073914 6581265 724144 706787 5605273 29455102 3849694
For the following Kubernetes tests, we can use "kubectl logs pod-name" to get the result once the job is done.
[Multi Thread Test 4]
apiVersion: batch/v1
kind: Job
metadata:
  name: fibtest
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: fibtest
        image: hipeteryang/fibtest
        command: ["/bin/bash", "-c", "runfibtest 8 && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"]
        resources:
          requests:
            cpu: "50m"
          limits:
            cpu: "100m"
      restartPolicy: Never
Result
Throttled
Iterations Completed(M): 195
Throttled for: 52
CPU Usage (msecs) = 230
nr_periods 56
nr_throttled 54
throttled_time 9667667360
255738571
213621103 4038989 2814638 15869649 4435168 5459186 4549675 5437010
[Multi Thread Test 5]
apiVersion: batch/v1
kind: Job
metadata:
  name: stress-ng-test
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: stress-ng-test
        image: hipeteryang/stress-ng:cpu-delay
        command: ["/bin/bash", "-c", "stress-ng --taskset 0 --cpu 1 --timeout 5s & stress-ng --taskset 1-7 --cpu 7 --cpu-load-slice -1 --cpu-delay 10 --cpu-method fibonacci --timeout 5s && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"]
        resources:
          requests:
            cpu: "50m"
          limits:
            cpu: "100m"
      restartPolicy: Never
Result
Throttled
nr_periods 53
nr_throttled 50
throttled_time 6827221694
417331622
381601924 1267814 8332150 3933171 13654620 184120 6376208 2623172
Feel free to leave any comment, I’ll reply as soon as possible.
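To quickly summarize the cpu.stat output these tests print, a small helper like the following computes the fraction of periods that were throttled (this helper is my own sketch; the path shown in the usage comment is the cgroup v1 layout used above):

```shell
# Summarize CFS throttling from a cpu.stat file (cgroup v1 layout).
# Usage: throttle_summary /sys/fs/cgroup/cpu,cpuacct/cpu.stat
throttle_summary() {
    awk '/^nr_periods/   {p = $2}
         /^nr_throttled/ {t = $2}
         END {printf "%d of %d periods throttled (%.0f%%)\n", t, p, (p ? 100 * t / p : 0)}' "$1"
}
```

On an affected kernel the tests above show nearly every period throttled; a ratio close to 100% for a workload that should only use a fraction of its quota is the signature of the bug.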
###Instruction: Does anyone know what I am doing wrong with my Kubernetes secret yaml and why it is not able to successfully create one programmatically?
I am trying to programmatically create a secret in Kubernetes cluster with credentials to pull an image from a private registry but it is failing with the following:
"Secret "secrettest" is invalid: data[.dockerconfigjson]: Invalid value: "<secret contents redacted>": invalid character 'e' looking for beginning of value"
This is the yaml I tried to use to create the secret with. It is yaml output from a secret previously created in my kubernetes cluster using the command line except without a few unnecessary properties. So I know this is valid yaml:
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoZWxsb3dvcmxkLmF6dXJlY3IuaW8iOnsidXNlcm5hbWUiOiJoZWxsbyIsInBhc3N3b3JkIjoid29ybGQiLCJhdXRoIjoiYUdWc2JHODZkMjl5YkdRPSJ9fX0=
kind: Secret
metadata:
  name: secrettest
  namespace: default
type: kubernetes.io/dockerconfigjson
This is the decoded value of the ".dockerconfigjson" property which seems to be throwing the error but not sure why if the value is supposed to be encoded per documentation:
{"auths":{"helloworld.azurecr.io":{"username":"hello","password":"world","auth":"aGVsbG86d29ybGQ="}}}
According to the documentation, my yaml is valid, so I'm not sure what the issue is:
Customize secret yaml
Note: I tried creating the Secret using the Kubernetes client and "PatchNamespacedSecretWithHttpMessagesAsync" in C#
Referenced documentaion: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
### Response: I was getting a similar error when I was trying to create a secret using client-go.
The error says the string begins with an invalid character 'e', because the parser expects raw JSON starting with '{'. To solve this, do not base64-encode the value yourself; pass it as-is, and it will be encoded later by the client when the request is serialized.
###Instruction: As the documentation states:
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod
receives one PersistentVolumeClaim. In the nginx example above, each
Pod receives a single PersistentVolume with a StorageClass of
my-storage-class and 1 Gib of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod
is (re)scheduled onto a node, its volumeMounts mount the
PersistentVolumes associated with its PersistentVolume Claims. Note
that, the PersistentVolumes associated with the Pods' PersistentVolume
Claims are not deleted when the Pods, or StatefulSet are deleted. This
must be done manually.
The part I'm interested in is this: If no StorageClass is specified, then the default StorageClass will be used
I create a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ches
  name: ches
spec:
  serviceName: ches
  replicas: 1
  selector:
    matchLabels:
      app: ches
  template:
    metadata:
      labels:
        app: ches
    spec:
      serviceAccountName: ches-serviceaccount
      nodeSelector:
        ches-worker: "true"
      volumes:
        - name: data
          hostPath:
            path: /data/test
      containers:
        - name: ches
          image: [here I have the repo]
          imagePullPolicy: Always
          securityContext:
            privileged: true
          args:
            - server
            - --console-address
            - :9011
            - /data
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: ches-keys
                  key: access-key
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: ches-keys
                  key: secret-key
          ports:
            - containerPort: 9000
              hostPort: 9011
          resources:
            limits:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: data
              mountPath: /data
      imagePullSecrets:
        - name: edge-storage-token
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Of course I have already created the secrets, imagePullSecrets etc and I have labeled the node as ches-worker.
When I apply the yaml file, the pod is in Pending status and kubectl describe pod ches-0 -n ches gives the following error:
Warning FailedScheduling 6s default-scheduler 0/1 nodes are
available: 1 pod has unbound immediate PersistentVolumeClaims.
preemption: 0/1 nodes are available: 1 Preemption is not helpful for
scheduling
Am I missing something here?
### Response: You need to create a PV in order to get a PVC bound. If you want PVs to be created automatically from PVC claims, you need a provisioner installed in your cluster.
First create a PV with at least the amount of space needed by your PVC.
Then you can apply your deployment yaml, which contains the PVC claim.
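For example, a minimal hostPath PV that could satisfy the 1Gi claim above might look like the following sketch; the PV name, reclaim policy, and path are assumptions, and the capacity and access mode must match what the PVC requests:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ches-data-pv          # assumed name
spec:
  capacity:
    storage: 1Gi              # at least as large as the PVC request
  accessModes:
    - ReadWriteOnce           # must match the PVC's access mode
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/test          # assumed path on the node
```

Once a PV with matching capacity and access mode exists (or a provisioner creates one), the pending PVC binds and the pod can be scheduled.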
###Instruction: I see the following error when I run my deployment:
Error from server (NotFound): error when creating "n3deployment.yaml": namespaces "n2" not found
My n3deployment.yaml has no reference to n2?
Step By Step
Ensure everything is empty
c:\temp\k8s>kubectl get pods
No resources found.
c:\temp\k8s>kubectl get svc
No resources found.
c:\temp\k8s>kubectl get deployments
No resources found.
c:\temp\k8s>kubectl get namespaces
NAME STATUS AGE
default Active 20h
docker Active 20h
kube-public Active 20h
kube-system Active 20h
Create files
n3namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: n3
n3service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-n3
  namespace: n3
  labels:
    app: my-app-n3
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app-n3
n3deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n3
  labels:
    app: my-app-n3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n3
  template:
    metadata:
      labels:
        app: my-app-n3
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
Apply configuration
c:\temp\k8s>kubectl apply -f n3namespace.yaml
namespace "n3" created
c:\temp\k8s>kubectl apply -f n3service.yaml
service "my-app-n3" created
c:\temp\k8s>kubectl apply -f n3deployment.yaml
Error from server (NotFound): error when creating "n3deployment.yaml": namespaces "n2" not found
I used to have a namespace called n2 but, as you can see, it no longer exists.
### Response: In my case, the step I had missed was to run this first:
kubectl create namespace n3
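As a guard against this class of error, the namespace creation can be made idempotent before applying the rest (a plain CLI sketch that assumes cluster access is already configured):

```
kubectl get namespace n3 >/dev/null 2>&1 || kubectl create namespace n3
kubectl apply -f n3deployment.yaml
```

It is also worth checking the current kubectl context with `kubectl config view --minify`, since a context whose default namespace no longer exists can produce "namespaces ... not found" errors for manifests that never mention that namespace.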
###Instruction: I am running load tests using K6 as a tool. I am sending 1000 requests per second to a .NET API (without database) that I deployed in Kubernetes. Sending this many requests impacts the performance of my application. This is noticeable in the request duration (as expected).
The K8s cluster has two nodes, and one of them has high CPU usage because of all the requests I am sending (which is also expected). However, the second node has low CPU usage. While monitoring the CPU usage of the pods I am seeing the same results: both pods use about 10% - 20% of their CPU. How can these metrics be explained? Shouldn't the load balancer (ingress) balance the load evenly? And why am I seeing a decrease in performance if I still have resources left?
Any advice/help would be greatly appreciated.
Node 1, CPU usage: https://i.stack.imgur.com/r0QPF.png
Node 2, CPU usage: https://i.stack.imgur.com/Ovub5.png
Pod 1, CPU usage: https://i.stack.imgur.com/ofUmm.png
Pod 2, CPU usage: https://i.stack.imgur.com/Jj3sC.png
### Response: Did you make sure that the pods are scheduled one on each node? Otherwise, they could both be scheduled on the same node.
For your test with one pod per node, a DaemonSet would be sufficient.
Otherwise, consider topology spread constraints:
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
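A pod-template fragment along the lines of the linked docs would spread the replicas across nodes; the label `app: my-api` here is an assumed pod label, not one from the question:

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1                         # allow at most 1 pod of difference between nodes
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-api                    # assumed pod label
```

With this in place, a two-replica deployment lands one pod per node, which makes per-node CPU comparisons during load tests meaningful.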
Not sure, but if you look at the error, it is trying to fetch gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be/SHORT_SHA rather than gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:SHORT_SHA, i.e. it is not using : before the tag. Are you building and pushing the image?
Update:
I just tested this file with Cloud Build and GKE and it is working for me: https://github.com/harsh4870/basic-ci-cd-cloudbuild/blob/main/cloudbuild-gke.yaml
Alternatively, you can use this cloudbuild.yaml:
substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
  _CLOUDSDK_CONTAINER_CLUSTER: standard-cluster-1 # default value

steps:
- id: 'build test image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA', '.']
- id: 'push test core image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
- id: 'set test core image in yamls'
  name: 'ubuntu'
  args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'deployment.yaml']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
deployment.yaml
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "java-kubernetes-clusters-test"
  namespace: "default"
  labels:
    app: "java-kubernetes-clusters-test"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "java-kubernetes-clusters-test"
  template:
    metadata:
      labels:
        app: "java-kubernetes-clusters-test"
    spec:
      containers:
      - name: "technology-radar-be-1"
        image: TEST_IMAGE_NAME
###Instruction: I'm trying to deploy my code, placed in a Bitbucket repo, to the Kubernetes clusters available in Google Cloud. I've created a trigger which reacts when I push new changes to the Bitbucket repo.
I mean:
Based on the guidance at https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments I've created a cloudbuild.yaml file with this content:
steps:
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=${_TECH_RADAR_KUB_CONFIG_LOCATION}
  - --image=gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:${SHORT_SHA}
  - --location=${_TECH_RADAR_GKE_LOCATION}
  - --cluster=${_TECH_RADAR_CLUSTER_NAME}
  env:
  - 'SHORT_SHA=$SHORT_SHA'
options:
  logging: CLOUD_LOGGING_ONLY
In my repo I additionally have a deployment.yaml file with the Kubernetes config:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "java-kubernetes-clusters-test"
  namespace: "default"
  labels:
    app: "java-kubernetes-clusters-test"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "java-kubernetes-clusters-test"
  template:
    metadata:
      labels:
        app: "java-kubernetes-clusters-test"
    spec:
      containers:
      - name: "technology-radar-be-1"
        image: "gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:SHORT_SHA"
My build starts but gets stuck on a MANIFEST_UNKNOWN issue like below:
How can I solve this problem? Can I somehow let the Kubernetes engine know that such a manifest exists, in an additional pre-step in the cloudbuild.yaml file?
I would be grateful for help.
Thanks!
UPDATE:
I'm trying to use cloudbuild.yaml like below:
substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
  _CLOUDSDK_CONTAINER_CLUSTER: kubernetes-cluster-test # default value

steps:
- id: 'build test image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/technology-radar-be/master:$SHORT_SHA', '.']
- id: 'push test core image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/technology-radar-be/master:$SHORT_SHA:$SHORT_SHA']
- id: 'set test core image in yamls'
  name: 'ubuntu'
  args: ['bash','-c','sed -i "s,${_TECH_CONTAINER_IMAGE},gcr.io/$PROJECT_ID/technology-radar-be/$master:$SHORT_SHA," deployment.yaml']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'deployment.yaml']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
options:
  logging: CLOUD_LOGGING_ONLY
The last error message is:
Is there something wrong with my Dockerfile definition?
FROM maven:3.8.2-jdk-11 as builder
ARG JAR_FILE=target/docker-demo.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.6</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>docker-kubernetes</artifactId>
<version>0.0.1</version>
<name>docker-kubernetes</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>11</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
<finalName>docker-demo</finalName>
</build>
</project>
Screenshot from project tree:
Local deployment works:
### Response: Not sure, but if you look at the error, it's trying to fetch
gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be/SHORT_SHA, not gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:SHORT_SHA; it's not using ':' before the tag. Are you building the images?
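To see why the missing ':' matters: in an image reference, a ':' in the last path segment introduces the tag; without it, the SHA silently becomes another path component. A rough sketch of that parsing (simplified for illustration; it ignores digests and bare registry hosts with ports):

```python
def split_image_ref(ref: str):
    """Split a container image reference into (repository, tag).

    Simplified sketch: a ':' in the last path segment introduces the tag;
    otherwise the tag defaults to 'latest'.
    """
    repo, _, tail = ref.rpartition("/")
    if ":" in tail:
        name, tag = tail.split(":", 1)
        return (f"{repo}/{name}" if repo else name), tag
    return ref, "latest"

# With ':' the SHA is a tag; without it, it becomes part of the repository path.
print(split_image_ref("gcr.io/proj/technology-radar-be:abc123"))
print(split_image_ref("gcr.io/proj/technology-radar-be/abc123"))
```

This is why the registry answers MANIFEST UNKNOWN: it looks up the tag "latest" on a repository path that was never pushed.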
Update:
Just tested this file with Cloud Build and GKE; it's working for me: https://github.com/harsh4870/basic-ci-cd-cloudbuild/blob/main/cloudbuild-gke.yaml
Alternatively, you can use this cloudbuild.yaml:
substitutions:
_CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
_CLOUDSDK_CONTAINER_CLUSTER: standard-cluster-1 # default value
steps:
- id: 'build test image'
name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA', '.']
- id: 'push test core image'
name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
- id: 'set test core image in yamls'
name: 'ubuntu'
args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
- name: 'gcr.io/cloud-builders/kubectl'
args: ['apply', '-f', 'deployment.yaml']
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
deployment.yaml
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "java-kubernetes-clusters-test"
namespace: "default"
labels:
app: "java-kubernetes-clusters-test"
spec:
replicas: 3
selector:
matchLabels:
app: "java-kubernetes-clusters-test"
template:
metadata:
labels:
app: "java-kubernetes-clusters-test"
spec:
containers:
- name: "technology-radar-be-1"
image: TEST_IMAGE_NAME
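As a side note, the substitution step can be sanity-checked in isolation. The snippet below (GNU sed assumed; the image reference is a placeholder) mimics what the 'set test core image in yamls' step does to deployment.yaml:

```shell
# Write a minimal deployment fragment containing the placeholder.
cat > deployment.yaml <<'EOF'
    spec:
      containers:
        - name: "technology-radar-be-1"
          image: TEST_IMAGE_NAME
EOF

# Same sed expression as the Cloud Build step, with a literal image reference.
sed -i "s,TEST_IMAGE_NAME,gcr.io/my-proj/my-repo/main:abc1234," deployment.yaml
grep 'image:' deployment.yaml   # the placeholder is now the full image reference
```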
kubectl get certificate
System throwing this exception
Error from server: conversion webhook for cert-manager.io/v1alpha2, Kind=Certificate failed: Post https://cert-manager-webhook.default.svc:443/convert?timeout=30s: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")
I have checked the pods also
The "cert-manager-webhook" is in running state. When I check logs of this pod, I get the following response
I have also tried to apply cluster-issuer after deleting it but face same issue
kubectl apply -f cluster-issuer.yaml
I have also done research on this but could not find a suitable solution. What's the issue here? Can someone please help me with this? Thanks.
### Response: In my case, I was attempting to install an older version of cert-manager onto my cluster, and simply pulling the latest version of cert-manager (1.10.1 at time of writing) and installing that worked.
When attempting to install an older version of cert-manager I saw the following error from the cainjector pod.
error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
I assume that the admissionregistration.k8s.io/v1beta1 API was removed between K8s versions 1.21 and 1.24 (in fact, in v1.22), and that's why I encountered the issue.
###Instruction: I have three servers:
1 master: 192.168.1.131 k8s
1 node: 192.168.1.132 k8s
1 rancher: 192.168.1.133 rancher 2.6
I have created a Docker image in a private registry running on the node, reachable as 192.168.1.132:5000/test.
Both master and node can push and pull the image. But when I use Rancher to deploy with the image 192.168.1.132:5000/test, I get this error:
Failed to pull image “192.168.1.132:5000/test-demo”: rpc error: code = Unknown desc = failed to pull and unpack image “192.168.1.132:5000/test-demo:latest”: failed to resolve reference “192.168.1.132:5000/test-demo:latest”: failed to do request: Head “https://192.168.1.132:5000/v2/test-demo/manifests/latest”: http: server gave HTTP response to HTTPS client.
My registry uses HTTP, not HTTPS, but Rancher sends HTTPS requests.
This is the image pull problem.
### Response: You may try fixing this by adding the registry to the insecure-registries list in the daemon.json file.
On Linux the configuration file is /etc/docker/daemon.json (on Windows: 'C:\ProgramData\Docker\config\daemon.json'). You can create this file if it doesn't already exist.
{
  "insecure-registries": [
    "192.168.1.132:5000"
  ]
}
Then restart the Docker service; hopefully that solves the issue.
Ref:
Configure the Docker daemon
Configure Docker with a configuration file.
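Note that the error text ("failed to pull and unpack image") is a containerd-style message, so if the cluster nodes run containerd rather than the Docker engine (as k3s/RKE2-provisioned clusters do), the plain-HTTP registry must instead be declared in a registries.yaml. A sketch under that assumption:

```yaml
# k3s: /etc/rancher/k3s/registries.yaml   RKE2: /etc/rancher/rke2/registries.yaml
mirrors:
  "192.168.1.132:5000":
    endpoint:
      - "http://192.168.1.132:5000"   # force plain HTTP for this registry
```

Restart the k3s/rke2 agent afterwards so containerd picks up the change.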
###Instruction: I'm using a Kubernetes server with API version 1.25.2. When I try to run a kubectl command, I get the error below:
TRONBQQ2:~$ kubectl get nodes
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1
From the same terminal I'm able to access the Kubernetes server with version 1.23.12.
Is this due to an old kubectl client version?
TRONBQQ2:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean",
BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
TRONBQQ2:~$ sudo apt-get install -y kubectl
Reading package lists... Done
Building dependency tree
Reading state information... Done
kubectl is already the newest version (1.19.4-00).
0 upgraded, 0 newly installed, 0 to remove and 313 not upgraded.
I even tried to upgrade kubectl. Even after the upgrade, the version remains at v1.19.4. I am not sure if this is the reason for the above-mentioned error.
### Response: Try to check whether the following command prints the options below.
>>> kubelogin -h
Login to azure active directory and populate kubeconfig with AAD tokens
Usage:
kubelogin [flags]
kubelogin [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
convert-kubeconfig convert kubeconfig to use exec auth module
get-token get AAD token
help Help about any command
remove-tokens Remove all cached tokens from filesystem
Flags:
-h, --help help for kubelogin
--logtostderr log to standard error instead of files (default true)
-v, --v Level number for the log level verbosity
--version version for kubelogin
It seems I had a different kubelogin that was missing the command options specified above. So I installed the new version of kubelogin using
az aks install-cli
If that doesn't work, then you can refer to "" to get the kubelogin brew package.
Also export the below path
export PATH="/usr/local/bin:$PATH"
Once kubelogin is available, run the command below to convert your kubeconfig:
kubelogin convert-kubeconfig ./kube/config
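For context, the "unknown flag: --environment" error at the top usually comes from the kubeconfig itself: after conversion, kubectl invokes kubelogin through an exec plugin stanza, and an old kubelogin binary (or a different tool with the same name) does not understand the flags listed there. The converted user entry looks roughly like this (names and IDs are placeholders; exact args vary by login mode):

```yaml
users:
  - name: my-aks-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubelogin
        args:
          - get-token
          - --environment
          - AzurePublicCloud
          - --server-id
          - <AAD server application ID>   # placeholder
          - --login
          - devicecode
```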
###Instruction: I am getting the following when installing Cilium:
Warning FailedScheduling 4m21s (x17 over 84m) default-scheduler 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
How can I see the rule and can I change it?
If I do kubectl describe node, it does not show any nodeAffinity settings. And the node has Taints: <none>.
### Response: Run kubectl get pods; the pod shows a Pending status (kubectl get pods -o wide gives more detail).
To inspect the pod, run kubectl describe pod POD_NAME; it shows the warning as part of the events. If that doesn't work, try, as suggested by @Chris, kubectl get pod <name> -o yaml. There you'll find spec.affinity.
After identifying which anti-affinity rule triggers the warning, you can choose either to rectify the rule or to make some changes in the cluster to support the rule.
For example, take this case: try to deploy 4 replicas of an Nginx deployment with a podAntiAffinity rule in a 3-node cluster. Here the last replica cannot be scheduled because no available nodes are left.
You can choose to reduce the number of replicas, increase the number of nodes, adjust the rule to use soft/preference requirements, or remove the podAntiAffinity rule.
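To make the example concrete, a manifest like the one below (names are illustrative) uses a hard anti-affinity rule that forbids two replicas with the same app label on one node, so on a 3-node cluster the fourth replica stays Pending with exactly this kind of warning:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4                      # one more replica than there are nodes
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx       # never co-locate two pods with this label
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx
```

Switching to preferredDuringSchedulingIgnoredDuringExecution turns this into a soft preference, letting the fourth replica schedule anyway.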
###Instruction: First, I have created a service account jenkins
> kubectl create serviceaccount jenkins
serviceaccount/jenkins created
Second, create a token for this service account.
> kubectl create token jenkins
eyJhbGc****************iQS-AVXfIzA
Then, I run kubectl describe serviceaccount jenkins command to check the tokens of newly created service account.
But the output shows None tokens.
> kubectl describe serviceaccount jenkins
Name: jenkins
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none> <===== look at this!
Events: <none>
Questions
The output shows "None" tokens, how do I know there is a token associated with this service account?
If I run the kubectl create token jenkins command multiple times, will Kubernetes create multiple tokens for this account, or will the latest one overwrite the previous one?
# first time
kubectl create token jenkins
# second time
kubectl create token jenkins
# third time
kubectl create token jenkins
What's the mapping relation between the service account and the token? Is it 1:n?
### Response: The mechanism of service accounts and tokens has changed (moved to stable in v1.22).
In the past, you created a service account. Then, the first time you ran a pod with that service account, Kubernetes created a long-lived, never-expiring token that resided in a secret of type kubernetes.io/service-account-token. Kubernetes attached this token to the pod via a volume mount.
Due to the insecure nature of long-lived tokens, this has been changed to something called Bound Service Account Token Volumes.
In short, Kubernetes no longer creates a secret with a token for the service account; instead, the kubelet injects a short-lived token with a default lifespan of 1 hour into the pod and refreshes it when it expires.
When you ran kubectl describe serviceaccount jenkins, you saw <none> in the Tokens section because that section represents the 'old-fashioned' static tokens which, as mentioned, are no longer created by default.
You can manually create such a static token with
> cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: jenkins
annotations:
kubernetes.io/service-account.name: jenkins
EOF
and then when you run describe again, you will see the new token:
> kubectl describe serviceaccount jenkins
Name: jenkins
Namespace: jenkins-demo
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: jenkins
Events: <none>
You can create multiple tokens with different names and you will see all of them in the describe output.
BUT it is a bad practice to create these static tokens, because they never expire. You should use short-lived tokens, which you can create with the command you mentioned: kubectl create token jenkins.
You can control the duration with --duration <seconds>s and create a token with an expiration of up to 48h. The default is 1h.
of kubectl create token jenkins in jwt.io.
Same as with the static token. You can run
kubectl get secret jenkins --output=jsonpath='{.data.token}' | base64 -d
and paste the output in jwt.io. You can notice this token doesn't have an expiration date.
Reference:
https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount
https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens
https://github.com/kubernetes/enhancements/issues/542
###Instruction: I'm currently using the Kubernetes Python client (V1 API).
There is a way to list node details, but I could not find how to tell which node is the master.
k8s_api = client.CoreV1Api()
logger.info("Getting k8s nodes...")
response = k8s_api.list_node()
### Response: Check each node's labels; control-plane (master) nodes carry a role label:
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
k8s_api = client.CoreV1Api()
response = k8s_api.list_node()
# Check for the master label; newer clusters use "node-role.kubernetes.io/control-plane"
for node in response.items:
    labels = node.metadata.labels or {}
    if ("node-role.kubernetes.io/master" in labels
            or "node-role.kubernetes.io/control-plane" in labels):
        print(f"Master node: {node.metadata.name}")
###Instruction: I have my application https://myapp.com deployed on K8s, with an nginx ingress controller. HTTPS is terminated at nginx.
Now there is a need to expose one service on a specific port, for example https://myapp.com:8888. The idea is to keep https://myapp.com secured inside the private network and expose only port 8888 to the internet for integration.
Is there a way for all traffic to be handled by the ingress controller, including TLS termination, while also exposing port 8888 and mapping it to a service?
Or
do I need another nginx terminating TLS, exposed on a NodePort? I am not sure if I can access services like https://myapp.com:<node_port> over HTTPS.
Is using multiple ingress controllers an option?
What is the best practice to do this in Kubernetes?
### Response: Use the sidecar proxy pattern to add HTTPS support to the application running inside the pod.
Refer to the diagram below as a reference.
Run nginx as a sidecar proxy container fronting the application container inside the same pod. Access the application through port 8888 on the nginx proxy; nginx routes the traffic to the application.
Find below a post showing how it can be implemented:
https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs
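A minimal sketch of what such a pod could look like. All names, images, and ports here are assumptions: the app is presumed to serve plain HTTP on 3000, and the ConfigMap is presumed to hold an nginx server block that terminates TLS on 8888 and proxies to http://127.0.0.1:3000:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: myapp:latest          # assumed app image, plain HTTP on 3000
    - name: tls-proxy
      image: nginx:alpine
      ports:
        - containerPort: 8888      # the Service maps to this port
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
        - name: certs
          mountPath: /etc/nginx/certs
          readOnly: true
  volumes:
    - name: nginx-conf
      configMap:
        name: tls-proxy-conf       # assumed ConfigMap with the proxy server block
    - name: certs
      secret:
        secretName: myapp-tls      # assumed TLS secret (tls.crt / tls.key)
```

Because both containers share the pod's network namespace, nginx reaches the app on localhost, and only port 8888 needs to be exposed outside.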
TLDR
Build multi-arch images supporting multiple architectures
Always ensure that the image you're trying to run has compatible architecture
what "platform/architecture" for docker image does it really means? Like core basics. Links would be appreciated.
It means that some of the compiled binary code within the image contains CPU instructions exclusive to that specific architecture.
If you run that image on the incorrect architecture, it'll either be slower due to the incompatible code needing to run through an emulator, or it might even not work at all.
Some images are "multi-arch", where your Docker installation selects the most suitable architecture of the image to run.
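As a small aside on how that selection works, the platform label is simply the OS plus the CPU instruction-set name that uname -m reports. A sketch of the mapping (the table below is an assumption covering only common values, not an exhaustive list):

```python
import platform

# common machine names mapped to Docker platform strings (illustrative subset)
ARCH_TO_DOCKER = {
    "x86_64": "linux/amd64",
    "amd64": "linux/amd64",
    "aarch64": "linux/arm64",
    "arm64": "linux/arm64",
}

def docker_platform() -> str:
    """Best-effort guess of the Docker platform string for this machine."""
    machine = platform.machine()
    return ARCH_TO_DOCKER.get(machine, f"linux/{machine}")

print(docker_platform())
```

A multi-arch image's manifest list is keyed by exactly these strings, which is how your Docker installation picks the matching variant.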
Will you really get different code for interpreted programming languages?
Different machine code, yes. But it will be functionally equivalent.
It seems to me that if the wrong platform is specified, the containers work very slowly. But how to measure this (script performance, interaction with the host file system, etc.)
I recommend always ensuring you're running images meant for your machine's architecture.
For the sake of science, you could do an experiment.
You can build an image that is meant to run a simple batch job for two different architectures, and then you can try running them both on your machine. Compare the time it takes the containers to finish.
Sources:
https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific#:~:text=Docker%20images%20obviously%20contain%20processor,which%20makes%20them%20architecture%20dependent.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/
| ###Instruction: Intro
There is an option --platform for Docker image to be run and config platform for docker-compose.
Also, almost all official Docker images on hub.docker.com support several architectures under one tag.
Example, Ubuntu official image:
Most of Servers (also in Kubernetes) are linux/amd64.
I updated my MacBook to a new one with Apple's own Silicon chip (M1/M2...) and now Docker Desktop shows me this message:
For official images (you can see them without yellow note) it downloads automatically needed platform (I guess).
But for custom-built images (in private repository like nexus, artifacts) I have no influence. Yes, I can build appropriate images (e.g. with buildx) for different platforms and push them to the private repository, but in companies where repos are managed by DevOps it is tricky to do so. They say that the server architecture is linux/amd64, and if I develop web-oriented software (PHP etc.) on a different platform, even if the version (tag) is the same, then the environment is different and there is no guarantee that it will work on the server.
I assumed that it is only the difference in interpretation of instructions between the software and the hardware.
I would like to understand the subject better. There is a lot of superficial information on the web, no details.
Questions
what "platform/architecture" for Docker image does it really means? Like core basics.
Will you really get different code for interpreted programming languages?
It seems to me that if the wrong platform is specified, the containers work very slowly. But how to measure this (script performance, interaction with the host file system, etc.)
### Response: TLDR
Build multi-arch images supporting multiple architectures
Always ensure that the image you're trying to run has compatible architecture
what "platform/architecture" for docker image does it really means? Like core basics. Links would be appreciated.
It means that some of the compiled binary code within the image contains CPU instructions exclusive to that specific architecture.
If you run that image on the incorrect architecture, it'll either be slower due to the incompatible code needing to run through an emulator, or it might even not work at all.
Some images are "multi-arch", where your Docker installation selects the most suitable architecture of the image to run.
Will you really get different code for interpreted programming languages?
Different machine code, yes. But it will be functionally equivalent.
It seems to me that if the wrong platform is specified, the containers work very slowly. But how to measure this (script performance, interaction with the host file system, etc.)
I recommend always ensuring you're running images meant for your machine's architecture.
For the sake of science, you could do an experiment.
You can build an image that is meant to run a simple batch job for two different architectures, and then you can try running them both on your machine. Compare the time it takes the containers to finish.
Sources:
https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific#:~:text=Docker%20images%20obviously%20contain%20processor,which%20makes%20them%20architecture%20dependent.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/
| Intro
There is an option --platform for Docker image to be run and config platform for docker-compose.
Also, almost all official Docker images on hub.docker.com support several architectures under one tag.
Example, Ubuntu official image:
Most of Servers (also in Kubernetes) are linux/amd64.
I updated my MacBook to a new one with Apple's own Silicon chip (M1/M2...) and now Docker Desktop shows me this message:
For official images (you can see them without yellow note) it downloads automatically needed platform (I guess).
But for custom-built images (in private repository like nexus, artifacts) I have no influence. Yes, I can build appropriate images (e.g. with buildx) for different platforms and push them to the private repository, but in companies where repos are managed by DevOps it is tricky to do so. They say that the server architecture is linux/amd64, and if I develop web-oriented software (PHP etc.) on a different platform, even if the version (tag) is the same, then the environment is different and there is no guarantee that it will work on the server.
I assumed that it is only the difference in interpretation of instructions between the software and the hardware.
I would like to understand the subject better. There is a lot of superficial information on the web, no details.
Questions
what "platform/architecture" for Docker image does it really means? Like core basics.
Will you really get different code for interpreted programming languages?
It seems to me that if the wrong platform is specified, the containers work very slowly. But how to measure this (script performance, interaction with the host file system, etc.)
|
|
I had the same error and here's how I fixed it.
brew info google-cloud-sdk
which produces:
To add gcloud components to your PATH, add this to your profile:
for bash users
source "$(brew --prefix)/share/google-cloud-sdk/path.bash.inc"
for zsh users
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
source "$(brew --prefix)/share/google-cloud-sdk/completion.zsh.inc"
for fish users
source "$(brew --prefix)/share/google-cloud-sdk/path.fish.inc"
Grab the code for your terminal and then run it (e.g., for zsh)
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
Add the above line to your .zshrc profile to make sure that it is loaded every time you open a new terminal.
I had originally installed gcloud sdk with homebrew brew install google-cloud-sdk. At that time, I read the Caveats, which tell you how to add gcloud components to your PATH.
I installed both kubectl and gke-gcloud-auth-plugin, and neither of them could be found from the command line. I got the same error as the OP "command not found"
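The underlying issue is purely PATH lookup: the path.zsh.inc script just prepends the SDK's bin directory to PATH. The mechanism can be simulated with a throwaway directory and a fake binary (the file name and version below are made up, not the real plugin):

```shell
# a binary that exists on disk but is not on PATH is "command not found"
sdk_bin=$(mktemp -d)
printf '#!/bin/sh\necho 1.2.3\n' > "$sdk_bin/fake-auth-plugin"
chmod +x "$sdk_bin/fake-auth-plugin"

# prepending the directory (what path.zsh.inc does for the real SDK bin dir)
# makes the command resolvable
export PATH="$sdk_bin:$PATH"
fake-auth-plugin   # prints 1.2.3
```

This is why adding the source line to .zshrc matters: a plain terminal session never gets the SDK directory onto PATH otherwise.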
| ###Instruction: I try to install gke-gcloud-auth-plugin on a Mac M1 with zsh, following the gcloud docs.
The installation ran without issue and trying to re-run gcloud components install gke-gcloud-auth-plugin I get the All components are up to date. message.
However, gke-gcloud-auth-plugin --version returns zsh: command not found: gke-gcloud-auth-plugin. kubectl, installed the same way, works properly.
I tried to install kubectl using brew, with no more success.
### Response: I had the same error and here's how I fixed it.
brew info google-cloud-sdk
which produces:
To add gcloud components to your PATH, add this to your profile:
for bash users
source "$(brew --prefix)/share/google-cloud-sdk/path.bash.inc"
for zsh users
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
source "$(brew --prefix)/share/google-cloud-sdk/completion.zsh.inc"
for fish users
source "$(brew --prefix)/share/google-cloud-sdk/path.fish.inc"
Grab the code for your terminal and then run it (e.g., for zsh)
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
Add the above line to your .zshrc profile to make sure that it is loaded every time you open a new terminal.
I had originally installed gcloud sdk with homebrew brew install google-cloud-sdk. At that time, I read the Caveats, which tell you how to add gcloud components to your PATH.
I installed both kubectl and gke-gcloud-auth-plugin, and neither of them could be found from the command line. I got the same error as the OP "command not found"
| I try to install gke-gcloud-auth-plugin on a Mac M1 with zsh, following the gcloud docs.
The installation ran without issue and trying to re-run gcloud components install gke-gcloud-auth-plugin I get the All components are up to date. message.
However, gke-gcloud-auth-plugin --version returns zsh: command not found: gke-gcloud-auth-plugin. kubectl, installed the same way, works properly.
I tried to install kubectl using brew, with no more success.
|
|
Your command contains nested double quotes, so use single quotes instead:
kubectl -n myns exec mypod -- /bin/bash -c 'err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )'
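The difference is who performs the expansion: inside double quotes the local shell runs the $( ... ) substitution and expands $err before kubectl is even invoked, while single quotes deliver the string to the remote /bin/bash untouched. A local demonstration of the two behaviours (no cluster needed):

```shell
# double quotes: the command substitution runs in the *local* shell
echo "expanded here: $(echo hi)"    # prints: expanded here: hi

# single quotes: the text is passed through literally, so whoever interprets
# it later (the remote /bin/bash in the kubectl case) does the expansion
echo 'passed through: $(echo hi)'   # prints: passed through: $(echo hi)
```

That local expansion is exactly what produced the mangled "Cowardly: command not found" output: the tar command ran on your machine, and its error text was spliced into the string sent to the pod.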
| ###Instruction: I am trying to run the following command on a pod from my local machine.
kubectl -n myns exec mypod -- /bin/bash -c "err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )"
The command stores the output of the command in a variable and if the output contains a "No such file or directory" it ignores it else exits with the return code.
The command itself: err=$(tar -cvzf /tmp/bludr-logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" ) runs fine when I run this manually inside the pod. But when I try to run this remotely using exec it gives it won't accept *. Here is what I get:
root:~# kubectl -n myns exec mypod -- /bin/bash -c "err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )"
such: Cowardly: command not found
such: -c: line 1: conditional binary operator expected
such: -c: line 1: syntax error near `*No'
such: -c: line 1: `Try 'tar --help' or 'tar --usage' for more information. || ( export ret=2; [[ == *No'
command terminated with exit code 1
I tried to replace * with &2A but that did not work.
### Response: Your command contains nested double quotes, so use single quotes instead:
kubectl -n myns exec mypod -- /bin/bash -c 'err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )'
| I am trying to run the following command on a pod from my local machine.
kubectl -n myns exec mypod -- /bin/bash -c "err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )"
The command stores the output of the command in a variable and if the output contains a "No such file or directory" it ignores it else exits with the return code.
The command itself: err=$(tar -cvzf /tmp/bludr-logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" ) runs fine when I run this manually inside the pod. But when I try to run this remotely using exec it gives it won't accept *. Here is what I get:
root:~# kubectl -n myns exec mypod -- /bin/bash -c "err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )"
such: Cowardly: command not found
such: -c: line 1: conditional binary operator expected
such: -c: line 1: syntax error near `*No'
such: -c: line 1: `Try 'tar --help' or 'tar --usage' for more information. || ( export ret=2; [[ == *No'
command terminated with exit code 1
I tried to replace * with &2A but that did not work.
|
|
You should double-check with Google support (if you have a support plan), but I know from painful experience that if you're running a version of GKE that falls out of support, they may force-upgrade you to keep you within support, even if you have cluster upgrade turned off -- unless you use a maintenance exclusion.
The REGULAR channel release notes are here: https://cloud.google.com/kubernetes-engine/docs/release-notes-regular
The December 5th one is probably the one that affected you.
| ###Instruction: Auto-Scaling is OFF and Cluster-Upgrade is OFF, but I still don't know why the cluster got restarted this morning. All nodes got replaced/changed and all pods got restarted.
The reason is that the nodes got upgraded from version 1.22 to 1.23.
I have the Regular channel in my GKE cluster.
### Response: You should double-check with Google support (if you have a support plan), but I know from painful experience that if you're running a version of GKE that falls out of support, they may force-upgrade you to keep you within support, even if you have cluster upgrade turned off -- unless you use a maintenance exclusion.
The REGULAR channel release notes are here: https://cloud.google.com/kubernetes-engine/docs/release-notes-regular
The December 5th one is probably the one that affected you.
| Auto-Scaling is OFF and Cluster-Upgrade is OFF, but I still don't know why the cluster got restarted this morning. All nodes got replaced/changed and all pods got restarted.
The reason is that the nodes got upgraded from version 1.22 to 1.23.
I have the Regular channel in my GKE cluster.
|
|
The google_container_cluster resource also lets you specify the node boot disk size via node_config. Update the configuration as follows:
resource "google_container_cluster" "primary" {
name = "k8s-cluster"
location = "europe-west9"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
node_config {
disk_size_gb = 50
}
}
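For what it's worth, the 300 GB figure in the error follows from the cluster being regional: GKE replicates the node pool across the region's zones (typically three), so total SSD is zones times node count times disk size. A back-of-the-envelope check (the zone count of 3 is an assumption about europe-west9):

```python
zones = 3              # assumed zone count for a regional cluster
node_count = 1
default_disk_gb = 100  # GKE's default boot disk size

print(zones * node_count * default_disk_gb)  # 300 GB requested, quota is 250
print(zones * node_count * 50)               # 150 GB with disk_size_gb = 50
```

This also explains why setting disk_size_gb only on the separately managed node pool was not enough: the initial default node pool created by google_container_cluster still requested the 100 GB default per zone.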
| ###Instruction: Background
I have been playing around with GCP free plan. I am trying to learn how to apply gitops with IaC. To do so I'm trying to create the infrastructure for a kubernetes cluster using terraform, using google as the cloud provider.
Managed to configure the Github Actions to apply the changes when a push is made. However I get the following error:
│ Error: googleapi: Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB": request requires '300.0' and is short '50.0'. project has a quota of '250.0' with '250.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=swift-casing-370717., forbidden
│
│ with google_container_cluster.primary,
│ on main.tf line 26, in resource "google_container_cluster" "primary":
│ 26: resource "google_container_cluster" "primary" ***
Configuration
The above mentioned terraform configuration file is the following:
# https://registry.terraform.io/providers/hashicorp/google/latest/docs
provider "google" {
project = "redacted"
region = "europe-west9"
}
# https://www.terraform.io/language/settings/backends/gcs
terraform {
backend "gcs" {
bucket = "redacted"
prefix = "terraform/state"
}
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
resource "google_service_account" "default" {
account_id = "service-account-id"
display_name = "We still use master"
}
resource "google_container_cluster" "primary" {
name = "k8s-cluster"
location = "europe-west9"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = "k8s-node-pool"
location = "europe-west9"
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-small"
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.default.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
Problem
It seems that I need to limit the resources so they use up to 250 GB, how can I do so?
What I have tried
Reducing node_pool size.
According to the documentation the default size is 100GB, and I changed it to 50 as follows:
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = "k8s-node-pool"
location = "europe-west9"
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-small"
disk_size_gb = 50
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.default.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
Despite reducing the size the error message did not change at all.
### Response: The google_container_cluster resource also lets you specify the node boot disk size via node_config. Update the configuration as follows:
resource "google_container_cluster" "primary" {
name = "k8s-cluster"
location = "europe-west9"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
node_config {
disk_size_gb = 50
}
}
| Background
I have been playing around with GCP free plan. I am trying to learn how to apply gitops with IaC. To do so I'm trying to create the infrastructure for a kubernetes cluster using terraform, using google as the cloud provider.
Managed to configure the Github Actions to apply the changes when a push is made. However I get the following error:
│ Error: googleapi: Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB": request requires '300.0' and is short '50.0'. project has a quota of '250.0' with '250.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=swift-casing-370717., forbidden
│
│ with google_container_cluster.primary,
│ on main.tf line 26, in resource "google_container_cluster" "primary":
│ 26: resource "google_container_cluster" "primary" ***
Configuration
The above mentioned terraform configuration file is the following:
# https://registry.terraform.io/providers/hashicorp/google/latest/docs
provider "google" {
project = "redacted"
region = "europe-west9"
}
# https://www.terraform.io/language/settings/backends/gcs
terraform {
backend "gcs" {
bucket = "redacted"
prefix = "terraform/state"
}
required_providers {
google = {
source = "hashicorp/google"
version = "~> 4.0"
}
}
}
resource "google_service_account" "default" {
account_id = "service-account-id"
display_name = "We still use master"
}
resource "google_container_cluster" "primary" {
name = "k8s-cluster"
location = "europe-west9"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = "k8s-node-pool"
location = "europe-west9"
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-small"
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.default.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
Problem
It seems that I need to limit the resources so they use up to 250 GB, how can I do so?
What I have tried
Reducing node_pool size.
According to the documentation the default size is 100GB, and I changed it to 50 as follows:
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = "k8s-node-pool"
location = "europe-west9"
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-small"
disk_size_gb = 50
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.default.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
Despite reducing the size the error message did not change at all.
|
|
You can specify a default value for the dict.get method to make it fall back to the value when a given key does not exist.
Assuming you want an empty string when either labels or team doesn't exist, you can change:
data['metadata']['labels']['team']
to:
data['metadata'].get('labels', {}).get('team', '')
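A quick check of the fallback behaviour (the sample dicts below are made up):

```python
# labels missing entirely -> empty-string fallback, no KeyError
meta = {"name": "test-service", "namespace": "test"}
assert meta.get("labels", {}).get("team", "") == ""

# labels present but without a team key -> still the fallback
meta = {"labels": {"app": "test-service"}}
assert meta.get("labels", {}).get("team", "") == ""

# both present -> the actual value
meta = {"labels": {"team": "teamA"}}
assert meta.get("labels", {}).get("team", "") == "teamA"
print("ok")
```

The empty dict passed as the first default is what makes the chain safe: the second .get is always called on a dict, never on a missing key.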
| ###Instruction: I want to write some tests for Kubernetes with python. This is a sample of my deployment file in Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-service
namespace: test
creationTimestamp: '2022-12-08T18:18:49Z'
labels:
app: test-service
team: teamA
.
.
what I want to do is get access to the items inside the deployment file:
some codes here....
result = set()
some codes here.....
with open(deoloyment_file, "r") as stream:
for data in yaml.safe_load_all(stream.read().replace('\t', ' ')):
if data and data['kind'] == 'Deployment':
result.add(f"{data['metadata']['namespace']}:{data['metadata']['name']}:{data['metadata']['labels']['team']}")
This throws an exception because in some deployment files there are no labels or team keys. I'm wondering how I can conditionally access items with Python.
### Response: You can specify a default value for the dict.get method to make it fall back to the value when a given key does not exist.
Assuming you want an empty string when either labels or team doesn't exist, you can change:
data['metadata']['labels']['team']
to:
data['metadata'].get('labels', {}).get('team', '')
| I want to write some tests for Kubernetes with python. This is a sample of my deployment file in Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-service
namespace: test
creationTimestamp: '2022-12-08T18:18:49Z'
labels:
app: test-service
team: teamA
.
.
what I want to do is get access to the items inside the deployment file:
some codes here....
result = set()
some codes here.....
with open(deoloyment_file, "r") as stream:
for data in yaml.safe_load_all(stream.read().replace('\t', ' ')):
if data and data['kind'] == 'Deployment':
result.add(f"{data['metadata']['namespace']}:{data['metadata']['name']}:{data['metadata']['labels']['team']}")
This throws an exception because in some deployment files there are no labels or team keys. I'm wondering how I can conditionally access items with Python.
|
|
I found out what the issue was: I needed to set the ipFamily.ipv6.enabled value to false:
helm upgrade -i my-release harbor/harbor --set ipFamily.ipv6.enabled=false
| ###Instruction: I'm trying to set up a Harbor Registry in Kubernetes cluster locally in Minikube. I am using https://github.com/goharbor/harbor-helm and trying to set it up with the ingress controller. At the moment, I have not changed any defaults (the default is of type 'ingress') but have installed the Nginx Ingress Controller via minikube addons enable ingress.
Right now, the helm chart installs, and I added an entry to my hosts file for 192.168.49.2 core.harbor.domain but page simply returns a 503 error.
When reviewing the pod logs for the harbor portal, I get this: ginx: [emerg] socket() [::]:8080 failed (97: Address family not supported by protocol)
I'm pretty sure that the reason why I'm getting this is because I need some default values set when installing the chart, but I'm not really sure what those may be.
### Response: I found out what the issue was: I needed to set the ipFamily.ipv6.enabled value to false:
helm upgrade -i my-release harbor/harbor --set ipFamily.ipv6.enabled=false
| I'm trying to set up a Harbor Registry in Kubernetes cluster locally in Minikube. I am using https://github.com/goharbor/harbor-helm and trying to set it up with the ingress controller. At the moment, I have not changed any defaults (the default is of type 'ingress') but have installed the Nginx Ingress Controller via minikube addons enable ingress.
Right now, the helm chart installs, and I added an entry to my hosts file for 192.168.49.2 core.harbor.domain but page simply returns a 503 error.
When reviewing the pod logs for the harbor portal, I get this: ginx: [emerg] socket() [::]:8080 failed (97: Address family not supported by protocol)
I'm pretty sure that the reason why I'm getting this is because I need some default values set when installing the chart, but I'm not really sure what those may be.
|
|
If your final goal is control-plane (masters) HA and/or load balancing for user services in an on-prem environment, then you can take a look at these two projects:
Kubevip: can do both (masters HA + LoadBalancer type for user workloads).
Metallb: LoadBalancer type for user workloads.
| ###Instruction: I have a k8s cluster which contains 2 nodes. And in the cluster I deployed 2 pods for the same application. For some reason I have to deploy a service (NodePort IP) for each pod, so in total I have 2 services for the application; for example the service NodePort IPs are 192.142.1.11 and 192.142.1.12. Using these 2 IPs I can access the application from any node.
Now I am going to use keepalived to set up HA for the application. So:
What's the best practice to install the keepalived service? On each k8s node or deploy it as pod?
How to configure the interface in the keepalived.conf file? You know the NodePort ips are configured on kube-ipvs0 interface created by k8s and its status is down. Seems it cannot be used as the interface in keepalived.conf. Should I use the Node external interface if I start keepalived service on each node?
Thanks for your help.
### Response: If your final goal is control-plane (masters) HA and/or load balancing for user services in an on-prem environment, then you can take a look at these two projects:
Kubevip: can do both (masters HA + LoadBalancer type for user workloads).
Metallb: LoadBalancer type for user workloads.
| I have a k8s cluster which contains 2 nodes. And in the cluster I deployed 2 pods for the same application. For some reason I have to deploy a service (NodePort IP) for each pod, so in total I have 2 services for the application; for example the service NodePort IPs are 192.142.1.11 and 192.142.1.12. Using these 2 IPs I can access the application from any node.
Now I am going to use keepalived to set up HA for the application. So:
What's the best practice to install the keepalived service? On each k8s node or deploy it as pod?
How to configure the interface in the keepalived.conf file? You know the NodePort ips are configured on kube-ipvs0 interface created by k8s and its status is down. Seems it cannot be used as the interface in keepalived.conf. Should I use the Node external interface if I start keepalived service on each node?
Thanks for your help.
|
|
The solution was really dead simple - when the object is not ready, requeue it, which for a Job means waiting until it has finished. Still, I don't understand why the controller is not notified about a state change from active=1 to active=0 and from failed=0 to failed=1.
Example:
if healthStatus.Running {
return ctrl.Result{Requeue: true}, nil
}
| ###Instruction: I'm building an app that spawns Jobs (batch/v1), I need to update my Custom Resource status with the Job status.
I setup the controller with the following:
func (r *JobsManagedByRequestedBackupActionObserver) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&riotkitorgv1alpha1.RequestedBackupAction{}).
Owns(&batchv1.Job{}).
Owns(&batchv1.CronJob{}).
WithEventFilter(predicate.Funcs{
DeleteFunc: func(e event.DeleteEvent) bool {
return false
},
}).
Complete(r)
}
During the Reconcile(ctx context.Context, req ctrl.Request) I fetch my RequestedBackupAction object (basing on "req") and then I fetch Jobs from API using a dedicated tracking label.
list, err := kj.client.Jobs(namespace).List(ctx, metav1.ListOptions{LabelSelector: v1alpha1.LabelTrackingId + "=" + trackingId})
When I iterate over objects with:
for _, job := range list.Items {
logrus.Errorf("[++++++++++++] JOB name=%s, failed=%v, active=%v, succeeded=%v", job.Name, job.Status.Failed, job.Status.Active, job.Status.Succeeded)
}
Then I get multiple entries like this:
time="2022-12-12T20:00:55Z" level=error msg="[++++++++++++] JOB name=app1-backup-vmqrp, failed=0, active=1, succeeded=0"
But I don't finally get an entry, where there should be: failed=1, active=0, succeeded=0
even if the Job actually finished - the point is that the controller is not being notified.
That's the final Job status:
status:
conditions:
- lastProbeTime: "2022-12-12T20:00:56Z"
lastTransitionTime: "2022-12-12T20:00:56Z"
message: Job has reached the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
failed: 1
ready: 0
startTime: "2022-12-12T20:00:50Z"
uncountedTerminatedPods: {}
What could be wrong?
### Response: The solution was really dead simple - when the object is not ready, requeue it, which for a Job means waiting until it has finished. Still, I don't understand why the controller is not notified about a state change from active=1 to active=0 and from failed=0 to failed=1.
Example:
if healthStatus.Running {
return ctrl.Result{Requeue: true}, nil
}
|
| ###Instruction: I am trying to pass an entire set of YAML from values.yaml in Helm through to the templates, so that whatever YAML input I put in values.yaml ends up in the rendered template as-is:
For example :
values.yaml
...
...
metallbConfig: |-
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
creationTimestamp: null
name: peer1
namespace: metallb-system
spec:
holdTime: 3s
keepaliveTime: 0s
myASN: 64026
passwordSecret: {}
peerASN: 65227
peerAddress: 10.252.254.194
status: {}
templates/resources.yaml :
{{ toYaml .Values.metallbConfig }}
Essentially, what I want to achieve is for the whole BGPPeer section to be present in resources.yaml when I deploy the chart.
Currently I am getting this error :
# helm template metallbcnf . --output-dir outputs --debug
...
...
Error: YAML parse error on metallb/templates/resources.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
helm.go:84: [debug] error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
Kindly help me resolve the same.
### Response: If you want to embed the yaml entirely, you don't need the |-
For example, I have this in values.yaml
...
probes:
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 15
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 15
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 5
failureThreshold: 3
...
Then use this in my helm deployment:
apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
...
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag}}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- toYaml .Values.probes | nindent 10 }}
...
...
You'll notice I need to be explicit about the indentation using nindent; otherwise Helm just pastes in the YAML exactly as it appears in values.yaml, which breaks the parsing.
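Applied to the metallb example from the question, the same pattern would look roughly like this: drop the |- block marker so metallbConfig is a map rather than a string, then render it with toYaml (a sketch; only a subset of the original fields is shown):

```yaml
# values.yaml -- no |- marker, so metallbConfig is a map, not a string
metallbConfig:
  apiVersion: metallb.io/v1beta2
  kind: BGPPeer
  metadata:
    name: peer1
    namespace: metallb-system
  spec:
    myASN: 64026
    peerASN: 65227
    peerAddress: 10.252.254.194
```

templates/resources.yaml can then stay a single line, {{ toYaml .Values.metallbConfig }}, since toYaml on a map emits a manifest rather than a quoted string.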
|
| ###Instruction: `
rules:
- verbs:
- get
- list
apiGroups:
- ''
resources:
- namespaces
- pods
- pods/log
- verbs:
- get
- list
apiGroups:
- apps
resources:
- deployments
`
I want to know difference between
apiGroups:
''
and
apiGroups:
apps
What's the importance of apiGroups in manifests?
### Response: The apiGroups field identifies which API group a rule targets. This is necessary because different API groups can contain resources with the same names and verbs, and because Kubernetes is highly extensible and allows the addition of new APIs whose verbs and resource names may clash with those of other APIs.
In a manifest, the API group "" (empty string) represents the core Kubernetes API group; pods belong to it, so for pods apiGroups is "". Deployments belong to the "apps" API group (historically "extensions" was also used).
Refer to this API Group official doc
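As an illustration, a single Role can mix rules from the core group and a named group (the metadata here is illustrative, not from the question):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-reader    # illustrative name
  namespace: default
rules:
- apiGroups: [""]         # core API group: pods, services, configmaps, ...
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]     # named API group: deployments, statefulsets, ...
  resources: ["deployments"]
  verbs: ["get", "list"]
```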
|
| ###Instruction: I am using an Ubuntu 22.04 machine to run and test Kubernetes locally. I need some functionality like Docker-Desktop. I mean it seems both master and worker nodes/machines will be installed by Docker-Desktop on the same machine. But when I try to install Kubernetes following instructions like this, at some points it says to run the following commands on the master node:
sudo hostnamectl set-hostname kubernetes-master
Or run the following comands on the worker node machine:
sudo hostnamectl set-hostname kubernetes-worker
I don't know how to specify master/worker nodes if I have only my local Ubuntu machine?
Or should I run the join command after the kubeadm init command? I can't tell whether the commands I run in my terminal will be treated as commands for the master or for the worker machine.
I am a little bit confused about this master/worker node (client/server machine) distinction while I am using just one machine for both.
### Response: Prerequisites for installing Kubernetes in a cluster:
Ubuntu instance with 4 GB RAM - Master Node - (with ports open to all traffic)
Ubuntu instance with at least 2 GB RAM - Worker Node - (with ports open to all traffic)
It means you need to create at least two instances (one master, one worker) from any cloud provider like Google (GCP), Amazon (AWS), Atlantic.Net Cloud Platform, or CloudSigma, as per your convenience.
For creating an instance in GCP follow this guide. If you don't have an account, create a new one; new customers also get $300 in free credits to run, test, and deploy workloads.
After creating the instances you will get their IPs; using them you can SSH into each instance from a terminal on your local machine with the command: ssh root@<ip address>
From there you can follow any guide for installing kubernetes by using worker and master nodes.
example:
sudo hostnamectl set-hostname <host name>
The above should be executed over SSH on the master node; similarly, you need to execute it (with its own hostname) on each worker node.
|
| ###Instruction: In Kubernetes, I want to start a PostgreSQL pod, create a database and a user, and grant all privileges on the database to the user. So I came up with the following command:
kubectl run somepod --rm -i --restart=Never --image postgres:alpine --tty -- psql "postgresql://postgres:somepass@localhost" -c "CREATE DATABASE testdb" -c "CREATE USER myUser WITH PASSWORD 'password'" -c "GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser"
It already works, but unfortunately the pod isn't terminated after the command. In Kubernetes, the pod is still alive and its status is "Succeeded". I already tried adding -c '\q' to the command line because I assumed that psql was still running, but it didn't help. Any ideas how I can remove the pod after the command has succeeded?
### Response: You can try something like this:
kubectl run test4 -i --tty --image=postgres --restart=Never --rm --command -- echo "hello"
hello
pod "test4" deleted
In your example, you are just executing a command in a running container where postgres has already started.
You need to override the command that is executed when the container starts.
Executing your command (the error appears only because I didn't install any postgres):
kubectl run test4 -i --tty --image=postgres --restart=Never --rm --command -- psql "postgresql://postgres:somepass@localhost" -c "CREATE DATABASE testdb" -c "CREATE USER myUser WITH PASSWORD 'password'" -c "GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser"
This will work if your postgres on localhost has the right credentials.
Or it will return like so, because connection couldnt be established:
kubectl run test4 -i --tty --image=postgres --restart=Never --rm --command -- psql "postgresql://postgres:somepass@localhost" -c "CREATE DATABASE testdb" -c "CREATE USER myUser WITH PASSWORD 'password'" -c "GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser"
psql: error: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
pod "test4" deleted
Either way, however, the pod is being deleted.
Hint: to avoid doing this manually every time, you can put an initContainer in your deployment or run a Job, both with the same image and command.
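A sketch of the Job variant of that hint, reusing the image and command from the question (the Job name and the ttlSecondsAfterFinished value are assumptions):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-init          # assumed name
spec:
  backoffLimit: 3
  ttlSecondsAfterFinished: 60  # clean the finished pod up automatically
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: psql
        image: postgres:alpine
        command:
        - psql
        - postgresql://postgres:somepass@localhost
        - -c
        - CREATE DATABASE testdb
```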
|
| ###Instruction: I got a task: to create a Helm chart and use a Kubernetes database operator that automatically creates credentials for the database.
I don't really understand how this is possible.
I found this operator in Operator hub: https://operatorhub.io/operator/postgresql-operator-dev4devs-com
This is an operator for postgreSQL. It is mentioned that :
Allow you setup the operator to get the values required for its enviroment variables (user, password and database name) in an ConfigMap applied in the cluster already. NOTE: It is very useful to centralize and share this information accross the cluster for your solutions. Also, you are able to configure each configMap keys that contains each EnvVar required for the PostgreSQL database image.
If someone has experience with kubernetes operators, could I get a hint, which one should I use? (easiest, it doesn't have to be Postgres)
### Response: I am not much of a fan of operators; I wouldn't write one unless required, and would keep it as a last option.
To understand it easily, consider that there are different types of operators:
Helm-based operators
Ansible-based operators
Go, Python, and other language-based operators
all of which are built on the Operator Framework.
You can create a Helm-based operator that keeps watch on a secret; if the secret is removed or does not exist, the operator will create a new one for the database.
Steps :
Install the Operator SDK first
operator-sdk new db-operator --api-version=harsh.com/v1alpha1 --kind=DbSecret --type=helm --helm-chart=db-secret --helm-chart-repo=<repo url>
Add/edit the Helm chart YAML with the DB secret creation you are planning
Consider the kind: DbSecret spec as the values.yaml of the Helm chart
Build and deploy the operator (operator-sdk build); it will watch for the ConfigMap/Secret and create it if it is not there
you might have to create the necessary service account, role, and role binding.
Ref document to create the helm operator : https://sdk.operatorframework.io/docs/building-operators/helm/tutorial/
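For a Helm-based operator, the link between the custom resource and the chart is declared in a watches.yaml; a sketch matching the kind above (the chart path is an assumption):

```yaml
# watches.yaml -- tells the Helm operator which chart to reconcile per CRD
- group: harsh.com
  version: v1alpha1
  kind: DbSecret
  chart: helm-charts/db-secret   # assumed path inside the operator image
```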
|
| ###Instruction: I have my application https://myapp.com deployed on K8S, with an nginx ingress controller. HTTPS is resolved at nginx.
Now there is a need to expose one service on a specific port for example https://myapp.com:8888. Idea is to keep https://myapp.com secured inside the private network and expose only port number 8888 to the internet for integration.
Is there a way all traffic can be handled by the ingress controller, including tls termination, and it can also expose 8888 port and map it to a service?
Or
I need another nginx terminating tls and exposed on nodeport? I am not sure if I can access services like https://myapp.com:<node_port> with https.
Is using multiple ingress controllers an option?
What is the best practice to do this in Kubernetes?
### Response: It is not a best practice to expose a custom port over the internet.
Instead, create a sub-domain (e.g. https://custom.myapp.com) which points to the internal service on port 8888.
Then create a separate nginx Ingress (not an ingress controller) which routes that "https://custom.myapp.com" sub-domain.
An example manifest file follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp-service
  namespace: abc
spec:
  rules:
  - host: custom.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 8888
Hope this helps.
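If TLS for the sub-domain should also terminate at the ingress, a tls section can be added under spec in the same manifest (the secret name is an assumption; the secret must hold a certificate valid for custom.myapp.com):

```yaml
spec:
  tls:
  - hosts:
    - custom.myapp.com
    secretName: custom-myapp-tls   # assumed TLS secret in namespace abc
```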
|
|
I think I found the solution. I overlooked the following comment: https://github.com/kubernetes-sigs/metrics-server/issues/1061#issuecomment-1239227118
It turns out that Docker-Desktop 4.12 has this fixed. I was still on 4.9 but running a new Kubernetes version (1.24) where Dockershim has been removed. Several bugs have been fixed since 4.9. Updating Docker-Desktop to 4.15 (currently the latest version) fixed the issue for me.
So to summarize:
Update docker desktop to 4.12 or later
Reset the Kubernetes cluster
Check if cluster was upgraded using kubectl version
Apply metrics.yml
This is the final metrics.yml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=40s
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: metrics-server
version: v1beta1
versionPriority: 100
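Once the manifest is applied, the rollout and the metrics pipeline can be checked with something like this (resource names match the manifest above):

```shell
kubectl apply -f metrics.yml
kubectl -n metrics-server rollout status deployment/metrics-server
kubectl top nodes
kubectl top pods -A
```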
| ###Instruction: Problem statement
The kubernetes metrics server starts, but doesn't collect metrics.
When running $ kubectl top pods it returns error: Metrics not available for pod <namespace>/<deployment>.
Artifacts
I'm using the following metrics.yml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
hostNetwork: true
containers:
- args:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: metrics-server
version: v1beta1
versionPriority: 100
Once deployed on my local machine, the deployment looks like this:
$ kubectl describe deployment.apps/metrics-server -n metrics-server
Name: metrics-server
Namespace: metrics-server
CreationTimestamp: Tue, 13 Dec 2022 12:38:39 +0100
Labels: k8s-app=metrics-server
Annotations: deployment.kubernetes.io/revision: 1
Selector: k8s-app=metrics-server
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=metrics-server
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
Port: 4443/TCP
Host Port: 4443/TCP
Args:
/metrics-server
--cert-dir=/tmp
--secure-port=4443
--kubelet-preferred-address-types=InternalIP
--kubelet-use-node-status-port
--kubelet-insecure-tls
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-5988cd75cb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m2s deployment-controller Scaled up replica set metrics-server-5988cd75cb to 1
Findings
This exact same setup (without --kubelet-insecure-tls) is working on another DigitalOcean hosted cluster. It has the following kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:19:57Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
My machine, where the metrics server isn't working has the version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:38:19Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Possible solutions?
Try to install the same Kubernetes version on my local machine? --> Tried this, but Docker Desktop wouldn't let me change the server version.
### Response: I think I found the solution. I overlooked the following comment: https://github.com/kubernetes-sigs/metrics-server/issues/1061#issuecomment-1239227118
It turns out that Docker-Desktop 4.12 has this fixed. I was still on 4.9 but running a new Kubernetes version (1.24) where Dockershim has been removed. Several bugs have been fixed since 4.9. Updating Docker-Desktop to 4.15 (currently the latest version) fixed the issue for me.
So to summarize:
Update docker desktop to 4.12 or later
Reset the Kubernetes cluster
Check if cluster was upgraded using kubectl version
Apply metrics.yml
This is the final metrics.yml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=40s
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: metrics-server
version: v1beta1
versionPriority: 100
Problem statement
The kubernetes metrics server starts, but doesn't collect metrics.
When running $ kubectl top pods it returns error: Metrics not available for pod <namespace>/<deployment>.
Artifacts
I'm using the following metrics.yml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
hostNetwork: true
containers:
- args:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: metrics-server
version: v1beta1
versionPriority: 100
Once deployed on my local machine, the deployment looks like this:
$ kubectl describe deployment.apps/metrics-server -n metrics-server
Name: metrics-server
Namespace: metrics-server
CreationTimestamp: Tue, 13 Dec 2022 12:38:39 +0100
Labels: k8s-app=metrics-server
Annotations: deployment.kubernetes.io/revision: 1
Selector: k8s-app=metrics-server
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=metrics-server
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
Port: 4443/TCP
Host Port: 4443/TCP
Args:
/metrics-server
--cert-dir=/tmp
--secure-port=4443
--kubelet-preferred-address-types=InternalIP
--kubelet-use-node-status-port
--kubelet-insecure-tls
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-5988cd75cb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m2s deployment-controller Scaled up replica set metrics-server-5988cd75cb to 1
Findings
This exact same setup (without --kubelet-insecure-tls) is working on another DigitalOcean hosted cluster. It has the following kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:19:57Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
My machine, where the metrics server isn't working, has the version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:38:19Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Possible solutions?
Try to install the same Kubernetes version on my local machine? --> Tried this, but Docker Desktop wouldn't let me change the server version.
###Instruction: I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access this service on the Kubernetes node.
This application just listens on the HTTP address 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: exporter-test
namespace: datenlord-monitoring
labels:
app: exporter-test
spec:
replicas: 1
selector:
matchLabels:
app: exporter-test
template:
metadata:
labels:
app: exporter-test
spec:
containers:
- name: prometheus
image: 34342/hello_world
ports:
- containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
name: exporter-test-service
namespace: datenlord-monitoring
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9897'
spec:
selector:
app: exporter-test
type: NodePort
ports:
- port: 8080
targetPort: 9897
nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure this pod works, since when I log in to the pod with
kubectl exec -it exporter-test-* -- sh, then just run curl 127.0.0.1:9897, I can get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I can get correct response from localhost:8080. So this application should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or just run curl nodeIp:30001 or curl clusterIp:8080 on a k8s node, I get Connection refused
Anyone had same issue before? Appreciate for any help! Thanks!
### Response: You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces and can accept requests arriving from outside the loopback.
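To illustrate, here is a minimal Python sketch standing in for the hello_world app (whose actual source isn't shown): a socket bound to 127.0.0.1 only accepts loopback connections, which is why curl inside the pod and kubectl port-forward worked, while 0.0.0.0 accepts traffic on any interface, including the pod IP the Service forwards to.

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Bind to all interfaces; port 0 just asks the OS for any free port.
# Inside a pod, this binding is what lets Service-forwarded traffic in.
server = http.server.HTTPServer(("0.0.0.0", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}").read().decode()
print(body)
server.shutdown()
```

With ("127.0.0.1", 9897) instead, the same server would still answer curl from inside the container but refuse connections that arrive on the pod IP.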
###Instruction: From what I understand, with the above configuration, it is possible to have 2 pods that exist in the cluster associated with the deployment. However, the old Pod is guaranteed to be in the 'Terminated' state. An example scenario is updating the image version associated with the deployment.
There should not be any scenario where there are 2 Pods that are associated with the deployment and both are in the 'Running' phase. Is this correct?
In the scenarios I tried (for example, Pod eviction or updating the Pod spec), the existing Pod enters the 'Terminating' state and a new Pod is deployed.
This is what I expected. Just wanted to make sure that all possible scenarios around updating Pod spec or Pod eviction cannot end up with two Pods in the 'Running' state as it would violate the replica count = 1 config.
### Response: It depends on your update strategy. Many times it's desired to have the new pod running and healthy before you shut down the old pod, otherwise you have downtime which may not be acceptable as per business requirements. By default, it's doing rolling updates.
The defaults look like the below, so if you don't specify anything, that's what will be used.
apiVersion: apps/v1
kind: Deployment
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
So usually, you would have a moment where both pods are running. But Kubernetes will terminate the old pod as soon as the new pod becomes ready, so it will be hard, if not impossible, to literally see both in the state ready.
You can read about it in the docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5.
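The "between 3 and 5" figure in that quote can be reproduced from the rounding rules Kubernetes applies: a percentage maxSurge is rounded up, a percentage maxUnavailable is rounded down. A small illustrative sketch (the function name is mine, not a Kubernetes API):

```python
import math

def rollout_pod_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Min/max pod counts during a RollingUpdate.

    Percentage values are resolved the way Kubernetes resolves them:
    maxSurge rounds up, maxUnavailable rounds down.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas - unavailable, replicas + surge

print(rollout_pod_bounds(4))  # (3, 5) -- the 4-replica example above
```

With the replicas: 1 case from the question this yields (1, 2): one extra pod may be created and none may be unavailable, so the old pod is only terminated once the new one is ready.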
This is also explained here: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
Users expect applications to be available all the time and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.
To get the behaviour, described by you, you would set spec.strategy.type to Recreate.
All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.
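For reference, a minimal sketch of that setting, in the same form as the defaults shown above:

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: Recreate   # all old Pods are killed before new ones are created
```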
###Instruction: I'm trying to containerize a Python application that uses the Kubernetes package. My Dockerfile is:
FROM python:3.10.6
ADD App_pod.py .
ADD config.yml ~/.kube/config
RUN pip3 install requests beautifulsoup4
RUN pip3 install kubernetes
RUN apt-get install -y ca-certificates curl
CMD ["python3","./App_pod.py"]
when I enter docker run -i -t run, it shows me the following error:
> Traceback (most recent call last):
> File "//./App_pod.py", line 9, in <module>
> config.load_kube_config()
> File "/usr/local/lib/python3.10/site-packages/kubernetes/config/kube_config.py", line 813, in load_kube_config
> loader = _get_kube_config_loader(
> File "/usr/local/lib/python3.10/site-packages/kubernetes/config/kube_config.py", line 770, in _get_kube_config_loader
> raise ConfigException(
> kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
I tried to change the error raised in the exception, but it still shows me the same error, so I guess it doesn't pick up the changes I make.
I used the ADD command to copy the config file to the specified path, but it still shows the same error. I also tried to mount the volume, but it can't be mounted since there is no container yet, because the config file needs to be created first. Has anyone faced this problem and knows how to fix it?
### Response: A couple of issues:
~ has a special meaning on Linux; it is a shortcut to the user's home directory. I think ~ is not available for the destination path of a Dockerfile ADD (or COPY) command. You should use a specific path instead
It's not advisable to include a file that includes credentials (such as a kubeconfig file) in a container. You should instead mount data such as this.
You're renaming the file: config.yml is the source but config is the destination, which may be problematic.
You don't include the full Dockerfile so it's unclear what your FROM image is and you don't include the source of your App_pod.py file so it's unclear where it looks for the kubeconfig file, but...
Generalize your code's use of the kubeconfig file by providing it as an argument or using an environment variable. This way, the location is dynamic and the code can work when run outside of a container or within
Mount the file into the container when you run the container
Let's assume you change your code to accept a command-line argument for config.yml in the working directory, e.g.:
python3 App_pod.py ~/.kube/config
Then, when you run the container, you will need to mount the config.yml into the container and reference it. In the following, I'm using different source and destination folders to demonstrate the point:
docker run \
--interactive --tty --rm \
--volume=~/.kube/config:/somewhere/.kube/config \
your-container /somewhere/.kube/config
You can use ~ in the --volume flag because ~ is meaningful on your (Linux) host. The file is mapped to /somewhere/.kube/config in the container and so your Python file needs to point to the container location (!) when you invoke it.
Also, so that you may use command-line parameters, I encourage you to use ENTRYPOINT instead of CMD to run your Python program:
...
ENTRYPOINT ["python3","./App_pod.py"]
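Since App_pod.py isn't shown in the question, here is a hedged sketch of what that generalization could look like (the helper name is mine): a CLI argument first, then the standard KUBECONFIG environment variable, then the conventional default path.

```python
import os
import sys

def kubeconfig_path(argv=None):
    """Resolve the kubeconfig location without hard-coding it.

    Priority: explicit CLI argument, then the KUBECONFIG environment
    variable, then the conventional ~/.kube/config default.
    """
    argv = sys.argv if argv is None else argv
    if len(argv) > 1:
        return argv[1]
    return os.environ.get("KUBECONFIG",
                          os.path.expanduser("~/.kube/config"))

# e.g. invoked as: python3 App_pod.py /somewhere/.kube/config
print(kubeconfig_path(["App_pod.py", "/somewhere/.kube/config"]))
```

The code could then call config.load_kube_config(config_file=kubeconfig_path()) and work unchanged whether it runs inside or outside a container.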
###Instruction: I need to execute this operation from my (.NET) application. How do I figure out which k8s API to map this command to? Ideally, how do I use the .NET SDK to make this call?
### Response: You can read the API docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#tokenrequest-v1-authentication-k8s-io
You can also inspect what kubectl is doing by increasing the verbosity level: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging
Kubectl verbosity is controlled with the -v or --v flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described here.
###Instruction: I am following the Kubernetes documentation on Secrets. I have this secret.yaml file:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
val1: YXNkZgo=
stringData:
val1: asdf
and secret-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
name: mysecretpod
spec:
containers:
- name: mypod
image: nginx
volumeMounts:
- name: myval
mountPath: /etc/secret
readOnly: true
volumes:
- name: myval
secret:
secretName: val1
items:
- key: val1
path: myval
I use kubectl apply -f on both of these files. Then using kubectl exec -it mysecretpod -- cat /etc/secret/myval, I can see the value asdf in the file /etc/secret/myval of mysecretpod.
However I want the mounted path to be /etc/myval. Thus I make the following change in secret-pod.yaml:
volumeMounts:
- name: myval
mountPath: /etc
readOnly: true
After using kubectl apply -f on that file again, I check pod creation with kubectl get pods --all-namespaces. This is what I see:
NAMESPACE NAME READY STATUS RESTARTS AGE
default mysecretpod 0/1 CrashLoopBackOff 2 (34s ago) 62s
Looking into that pod using kubectl describe pods mysecretpod, this is what I see:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35s default-scheduler Successfully assigned default/mysecretpod to minikube
Normal Pulled 32s kubelet Successfully pulled image "nginx" in 2.635766453s
Warning Failed 31s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/4aaf54c61f7c80937a8edc094b27d6590538632e0209165e0b8c96e9e779a4b6/merged/etc/resolv.conf: read-only file system: unknown
Normal Pulled 28s kubelet Successfully pulled image "nginx" in 3.313846185s
Warning Failed 28s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/34af5138f14d192ade7e53211476943ea82cd2c8186d69ca79a3adf2abbc0978/merged/etc/resolv.conf: read-only file system: unknown
Warning BackOff 24s kubelet Back-off restarting failed container
Normal Pulling 9s (x3 over 34s) kubelet Pulling image "nginx"
Normal Created 7s (x3 over 32s) kubelet Created container mypod
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 2.73055072s
Warning Failed 6s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/01bfa6b2c35d5eb12ad7ad204a5acc58688c1e04d9b5891382e48c26d2e7077f/merged/etc/resolv.conf: read-only file system: unknown
Why does this fail? Is it possible to have a secret mounted at the /etc level instead of /etc/something level? If yes, how can I achieve that? Thank you so much!
### Response: volumeMounts:
- name: myval
mountPath: /etc
readOnly: true
Mounting the volume at /etc would shadow the whole directory, which breaks files the container runtime needs there (such as /etc/resolv.conf, exactly what the error complains about). Instead of mounting over the /etc directory, mount the secret as a single file:
apiVersion: v1
kind: Secret
metadata:
name: nginx
type: Opaque
stringData:
val1: asdf
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
volumeMounts:
- name: myval
mountPath: /etc/myval
subPath: myval
volumes:
- name: myval
secret:
secretName: nginx
items:
- key: val1
path: myval
...
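As a side note, the question's secret.yaml sets the same key both under data (base64-encoded) and stringData (plain text); when both are present, stringData takes precedence. A quick Python check of the base64 value from the question:

```python
import base64

# `data:` values are base64-encoded; `stringData:` takes plain text and is
# encoded for you at apply time (and overrides `data:` on key collisions).
encoded = "YXNkZgo="  # the data.val1 value from the question's secret.yaml
decoded = base64.b64decode(encoded).decode()
print(repr(decoded))  # 'asdf\n' -- note the trailing newline
```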
|
| ###Instruction: I understand that a service of type ExternalName will point to a specified deployment that is not exposed externally using the specified external name as the DNS name. I am using minikube in my local machine with docker drive. I created a deployment using a custom image. When I created a service with default type (Cluster IP) and Load Balancer for the specific deployment I was able to access it after port forwarding to the local ip address. This was possible for service type ExternalName also but accessible using the ip address and not the specified external name.
According to my understanding service of type ExternalName should be accessed when using the specified external name. But I wasn't able to do it. Can anyone say how to access an external name service and whether my understanding is correct?
This is the externalName.yaml file I used.
apiVersion: v1
kind: Service
metadata:
name: k8s-hello-test
spec:
selector:
app: k8s-yaml-hello
ports:
- port: 3000
targetPort: 3000
type: ExternalName
externalName: k8s-hello-test.com
After port forwarding using kubectl port-forward service/k8s-hello-test 3000:3000, the particular deployment was accessible using http://127.0.0.1:3000
But even after adding it to etc/hosts file, cannot be accessed using http://k8s-hello-test.com
### Response:
According to my understanding service of type ExternalName should be
accessed when using the specified external name. But I wasn't able to
do it. Can anyone say how to access it using its external name?
That understanding is not quite right: an ExternalName Service is for making connections to external endpoints from inside the cluster. Suppose you are using a third-party geolocation API like https://findmeip.com; you can leverage an ExternalName Service to reach it.
An ExternalName Service is a special case of Service that does not
have selectors and uses DNS names instead. For more information, see
the ExternalName section later in this document.
For example
apiVersion: v1
kind: Service
metadata:
name: geolocation-service
spec:
type: ExternalName
externalName: api.findmeip.com
So your application can connect to geolocation-service, which resolves (via a DNS CNAME record) to the external name specified in the service.
As an ExternalName Service has no selectors, you cannot use port-forwarding with it, since port-forwarding connects to a Pod and forwards the request there.
Read more at : https://kubernetes.io/docs/concepts/services-networking/service/#externalname
|
| ###Instruction: edited
I find myself frequently looking to see if something has stopped happening. To do this, it helps to see events in chronological order...
This solution seems to work, but the formatting still drives me insane...
The solution I have "sort of" working -
kubectl get events |
sed -E '/^[6789][0-9]s/{h; s/^(.).*/\1/; y/6789/0123/; s/^(.)/01m\1/;
x; s/^.(.*)/\1/; H;
x; s/\n//; };
s/^10([0-9]s)/01m4\1/; s/^11([0-9]s)/01m5\1/; s/^([0-9]s)/00m0\1/; s/^([0-9]+s)/00m\1/;
s/^([0-9]m)/0\1/; s/^([0-9]+m)([0-9]s)/\10\2/;
s/^L/_L/;' | sort -r
...this seems a bit like overkill to me.
The whitespace-delimited left-justified fields have no leading zeroes, report only seconds up to 2m as [0-9]+s, then report as [0-9]+m[0-9]+s up to 5m, after which it seems to report only [0-9]+m.
Anyone have a short, maybe even simple-ish, easier to read solution that works?
No preference of tool (sed, awk, perl, native bash, etc), as long as it works and is likely to be already installed anywhere I need to work.
It's not a high priority, but seemed like a fun little challenge I thought I'd share.
My test data:
$: cat sample
LAST ...
28s ...
2m22s ...
46m ...
7s ...
75s ...
119s ...
Result with desired output -
$: sed -E '/^[6789][0-9]s/{h; s/^(.).*/\1/; y/6789/0123/; s/^(.)/01m\1/;
x; s/^.(.*)/\1/; H;
x; s/\n//; };
s/^10([0-9]s)/01m4\1/; s/^11([0-9]s)/01m5\1/; s/^([0-9]s)/00m0\1/; s/^([0-9]+s)/00m\1/;
s/^([0-9]m)/0\1/; s/^([0-9]+m)([0-9]s)/\10\2/;
s/^L/_L/;' sample | sort -r
_LAST ...
46m ...
02m22s ...
01m59s ...
01m15s ...
00m28s ...
00m07s ...
I've arbitrarily converted to a standardized version of the existing general output format just to keep it easily transferable to other members of the team. Either way, it's only being used for "eyeballing" the data, so other formats are not a problem as long as it's easy to read.
While ages could theoretically include hours and days, such old events are usually not reported by this tool and are out of scope for this problem; if needed I can likely extrapolate whatever solutions are presented. Since I can get the order from this approach, I'm really only looking for elegant formatting options.
A clumsy adaptation of Daweo's awk solution with formatting -
$: awk '/^[0-9]/{ if($1!~/m/){$1="0m" $1}; split($1,arr,/m/);
t=arr[1]*60+arr[2]; m=(t-(t%60))/60; s=t-(m*60);
m=sprintf("%02dm",m); if(s){ s=sprintf("%02ds",s) } else s="";
$1=sprintf("%s%s",m,s); print; } /^L/{print "_"$0}' sample |
sort -r
_LAST ...
46m ...
02m22s ...
01m59s ...
01m15s ...
00m28s ...
00m07s ...
Others still appreciated.
### Response: I'd convert everything to seconds first and then print it as HH:MM:SS, e.g.:
$ cat tst.awk
BEGIN {
split("h m s",denoms)
fmts["s"] = fmts["m"] = fmts["h"] = "%02d"
mults["s"] = 1
mults["m"] = 60
mults["h"] = 60 * 60
}
sub(/^L/,"_L") {
print
next
}
{
time = $1
secs = 0
while ( match(time,/[0-9]+./) ) {
value = substr(time,1,RLENGTH-1)
denom = substr(time,RLENGTH)
time = substr(time,RLENGTH+1)
secs += value * mults[denom]
}
for ( i=1; i in denoms; i++ ) {
denom = denoms[i]
out = (i>1 ? out ":" : "") sprintf(fmts[denom],int(secs/mults[denom]))
secs %= mults[denom]
}
$1 = out
print | "sort -r"
}
$ awk -f tst.awk sample
_LAST ...
00:46:00 ...
00:02:22 ...
00:01:59 ...
00:01:15 ...
00:00:28 ...
00:00:07 ...
Obviously add the definitions for "d" in the BEGIN section if you want to include days and similarly for other longer durations.
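The same convert-to-seconds-then-format idea, as a rough Python sketch for anyone who prefers it over awk (handles the h/m/s units discussed above):

```python
import re

MULTS = {"s": 1, "m": 60, "h": 3600}

def to_seconds(age: str) -> int:
    """Parse kubectl-style ages like '2m22s', '75s', '46m' into seconds."""
    return sum(int(n) * MULTS[u] for n, u in re.findall(r"(\d+)([hms])", age))

def normalize(age: str) -> str:
    """Render as zero-padded HH:MM:SS so a plain string sort is chronological."""
    secs = to_seconds(age)
    return "%02d:%02d:%02d" % (secs // 3600, secs % 3600 // 60, secs % 60)

ages = ["28s", "2m22s", "46m", "7s", "75s", "119s"]  # the sample data above
for a in sorted(ages, key=to_seconds, reverse=True):
    print(normalize(a), a)
```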
|
| ###Instruction: I am using an Ubuntu 22.04 machine to run and test Kubernetes locally. I need some functionality like Docker-Desktop. I mean it seems both master and worker nodes/machines will be installed by Docker-Desktop on the same machine. But when I try to install Kubernetes and following the instructions like this, at some points it says run the following codes on master node:
sudo hostnamectl set-hostname kubernetes-master
Or run the following commands on the worker node machine:
sudo hostnamectl set-hostname kubernetes-worker
I don't know how to specify master/worker nodes if I have only my local Ubuntu machine?
Or should I run join command after kubeadm init command? Because I can't understand the commands I run in my terminal will be considered as a command for which master or worker machine?
I am a little bit confused about this master/worker nodes or client/server machine stuff while I am just using one machine for both client and server machines.
### Response: The hostname has nothing to do with node roles.
If you do kubeadm init, the node will be a master node (currently called control plane).
This node can also be used as a worker node (currently called just a node), but by default, Pods cannot be scheduled on the control plane node.
You can turn off this restriction by removing its taints with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
and then you can use this node as both control-plane and node.
But lightweight Kubernetes distributions like k0s, k3s, and microk8s are probably better options for this single-machine use case than kubeadm.
|
| ###Instruction: My team and I are new to Kubernetes and are experimenting with running a few applications on it.
For proof of concept, we have a running Lightweight Kubernetes (K3s) install, which presumably does not have the full range of CRDs available on a standard Kubernetes. While trying to install Envoy proxy via Helm Chart, we ran into the below error:
# helm install my-envoy cloudnativeapp/envoy --version 1.5.0
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "my-envoy" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
Presumably the message ensure CRDs are installed first refers to components that are missing in K3s. Is there a way to get these components installed (via Helm or some other methods)?
### Response: Most likely the problem is not related to missing CRDs but to the Kubernetes version. I assume you are using the latest K3S version, which is v1.25.4. The policy/v1beta1 API for PodDisruptionBudget was removed in v1.25 (policy/v1 has been available since v1.21). As the Envoy helm chart that you are using does not seem to be actively maintained, you will probably have to downgrade K3S or find a different chart.
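You can confirm which API versions the cluster actually serves with kubectl api-versions | grep policy. The version gate itself is trivial; a sketch (the helper name is made up):

```python
def pdb_api_version(minor: int) -> str:
    """Choose a servable PodDisruptionBudget apiVersion on a 1.x cluster.

    policy/v1 was added in 1.21; policy/v1beta1 was removed in 1.25.
    """
    return "policy/v1" if minor >= 21 else "policy/v1beta1"

print(pdb_api_version(25))  # policy/v1 -- the chart's policy/v1beta1 fails here
print(pdb_api_version(20))  # policy/v1beta1
```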
|
| ###Instruction: I deployed the consul client on my k8s cluster using helm .
sudo helm install hi-consul hashicorp/consul -n consul-client -f config.yaml
One of the pods is not working now. Is there a way to restart that pod in the helm?
Thanks
### Response: You can delete the Pod with kubectl delete pod <pod-name> -n <namespace>; the controller that owns it (created by the chart) will recreate it.
If you want all Pods of a workload restarted, you can use kubectl rollout restart deployment <deployment-name> -n <namespace> (rollout restart also works on daemonset and statefulset, which the Consul chart uses).
|
| ###Instruction: I'm using the GKE cluster with the release version set to the Regular channel. I noticed that the pods are restarting due to GKE's auto-upgrade feature. In my cluster I have a deployment of NiFi which couldn't bear continuous reboots because of some internal functionality. I know that's not the expected behavior of containers, but we are working on that. Until that's done, I would like my cluster not to reboot nodes or restart pods. Is there a way to do that? I found that using the 'Stable' version instead of the 'Regular' release channel would help, but in the documentation I noticed that:
Even if the cluster is not enrolled in a release channel, GKE will
still regularly upgrade the cluster's control plane to newer versions.
GKE will upgrade control planes to the next minor version on its
scheduled upgrades start date.
from this documentation link
Is there a way, if I want to stop/pause the auto-upgrade (security/patch update) in the GKE cluster?
I also noticed 'Maintenance exclusions'. If I exclude upgrades for a couple of weeks, does that works?
### Response: Yes. Maintenance exclusions will stop upgrades to your cluster, depending on what you specify. If you specify NO upgrades allowed during the interval, then you are only allowed to set a 30-day window max (the window can be longer if you specify certain parts can be upgraded -- e.g. allow patch upgrades, allow minor upgrades, etc.)
The window can be moved but you will need to make sure you're still covering the current day or you may get upgraded.
|
| ###Instruction: I am trying to enable kubernetes for Docker Desktop. Kubernetes is however failing to start.
My log file shows:
cannot get lease for master node: Get "https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop": x509: certificate signed by unknown authority: Get "https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop": x509: certificate signed by unknown authority
I have NO_PROXY env var set already, and my hosts file has
127.0.0.1 kubernetes.docker.internal at the end, as was suggested here
I appreciate any help
### Response: The workarounds below can help you resolve this issue.
You can solve this by
Open ~\.kube\config in a text editor
Replace https://kubernetes.docker.internal:6443 to https://localhost:6443
Try connecting again
From this issue
Reset Docker to factory settings
Quit Docker
Set the KUBECONFIG environment variable to %USERPROFILE%.kube\config
Restart Docker and enable Kubernetes (still took a few minutes to start)
Attaching troubleshooting blog1, blog2 for your reference.
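Step 2 of the first workaround is a plain string substitution in the kubeconfig; a minimal sketch (assumes the default single-cluster layout Docker Desktop writes — back the file up before touching it):

```python
def patch_kubeconfig(text: str) -> str:
    """Point the cluster's server URL at localhost instead of the
    kubernetes.docker.internal alias (step 2 of the first workaround)."""
    return text.replace("https://kubernetes.docker.internal:6443",
                        "https://localhost:6443")

sample = "server: https://kubernetes.docker.internal:6443\n"
print(patch_kubeconfig(sample))  # server: https://localhost:6443
# To apply for real (back up the file first!):
#   from pathlib import Path
#   cfg = Path.home() / ".kube" / "config"
#   cfg.write_text(patch_kubeconfig(cfg.read_text()))
```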
|
| ###Instruction: We want to get all the pods in the cluster so we are using something like following:
pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
This code will receive all the pods in the cluster.
My question is: is there code or a lib which will bring all the Pods with their owner references?
i.e. if a Pod is owned by a Deployment or StatefulSet etc., you get the whole hierarchy; the trick here is that I need to go additional levels up with some recursion, for example a StatefulSet which is owned by a controller of some custom kind.
### Response: As @CoolNetworking suggested, there is no single lib or built-in call that will get you all the Pods together with their owner hierarchy, but you can use the k8s API to retrieve the owner references for each Pod.
The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP. It supports retrieving, creating, updating, and deleting primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET).
Most Kubernetes API resource types are objects: they represent a concrete instance of a concept on the cluster, like a pod or namespace.
Refer the document on kubernetes API for more information
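The recursion the question asks about can be sketched without a cluster. Below, the lookup dict stands in for per-object API GETs (in Go that would be a dynamic-client Get keyed on the ownerReference's apiVersion/kind/name, which also covers custom kinds), and the dicts mirror the metadata shape the API returns:

```python
def owner_chain(obj: dict, lookup: dict) -> list:
    """Follow metadata.ownerReferences upward until an object has no owner.

    `lookup` maps (kind, name) -> object dict, standing in for API GETs."""
    chain = []
    while True:
        refs = obj.get("metadata", {}).get("ownerReferences", [])
        if not refs:
            return chain
        ref = refs[0]  # Pods/ReplicaSets normally have one controller owner
        obj = lookup[(ref["kind"], ref["name"])]
        chain.append((ref["kind"], ref["name"]))

# Pod -> ReplicaSet -> Deployment, shaped like real API metadata:
objs = {
    ("Deployment", "web"): {"metadata": {}},
    ("ReplicaSet", "web-abc"): {"metadata": {"ownerReferences": [
        {"kind": "Deployment", "name": "web"}]}},
    ("Pod", "web-abc-xyz"): {"metadata": {"ownerReferences": [
        {"kind": "ReplicaSet", "name": "web-abc"}]}},
}
print(owner_chain(objs[("Pod", "web-abc-xyz")], objs))
# [('ReplicaSet', 'web-abc'), ('Deployment', 'web')]
```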
|
The logic is not so great, but this works for me in a non-EKS Self Hosted Kubernetes Cluster on AWS EC2s.
(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0
The above query fetches the currently up nodes and multiplies them by 0,
or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1
Here, it adds all nodes that existed in the last 1 minute through the delta() function. The default value of the nodes in the delta() function output will be 1, but the existing nodes will be overridden by the value 0, because of the OR precedence. So finally, only the newly provisioned node(s) will have the value 1, and they will get filtered by the equality condition. You can also extract whether the new node is master/worker by the kubernetes_io_role label
Full Query:
(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0 or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1
You can reverse this query for downscaling of nodes, although that will collide with cases in which your Kubernetes node shuts down abruptly for reasons other than autoscaling
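If you also want to evaluate the PromQL above from a script rather than only through Alertmanager, a minimal stdlib-only Python sketch against the standard Prometheus HTTP API could look like this (the Prometheus base URL is an assumption; point it at your own server):

```python
# Sketch: run the new-node PromQL above as an instant query against the
# Prometheus HTTP API (/api/v1/query is the standard endpoint).
import json
from urllib.request import urlopen
from urllib.parse import urlencode

NEW_NODE_QUERY = (
    "(group by (kubernetes_io_hostname, kubernetes_io_role) "
    "(container_memory_working_set_bytes) * 0 "
    "or group by (kubernetes_io_hostname, kubernetes_io_role) "
    "(delta(container_memory_working_set_bytes[1m]))) == 1"
)

def build_query_url(base_url, promql):
    """Return the instant-query URL for a PromQL expression."""
    return base_url.rstrip("/") + "/api/v1/query?" + urlencode({"query": promql})

def new_nodes(base_url):
    """Return hostnames the query reports as newly provisioned."""
    with urlopen(build_query_url(base_url, NEW_NODE_QUERY)) as resp:
        body = json.load(resp)
    # Standard response shape: {"status": ..., "data": {"result": [...]}}
    return [r["metric"].get("kubernetes_io_hostname", "?")
            for r in body["data"]["result"]]
```

An empty result means no node matched the "newly provisioned" condition in the last minute.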
| ###Instruction: I have a kubernetes cluster in Amazon EKS, Autoscaling is set. So when there is load increase a new node spin-up in the cluster and spin-down with respect to load-running. We are monitoring it with Prometheus and send desired alerts with Alertmanager.
So help me with a query that will send alerts whenever Autoscaling is performed in my Cluster.
### Response: The logic is not so great, but this works for me in a non-EKS Self Hosted Kubernetes Cluster on AWS EC2s.
(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0
The above query fetches the currently up nodes and multiplies them by 0,
or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1
Here, it adds all nodes that existed in the last 1 minute through the delta() function. The default value of the nodes in the delta() function output will be 1, but the existing nodes will be overridden by the value 0, because of the OR precedence. So finally, only the newly provisioned node(s) will have the value 1, and they will get filtered by the equality condition. You can also extract whether the new node is master/worker by the kubernetes_io_role label
Full Query:
(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0 or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1
You can reverse this query for downscaling of nodes, although that will collide with cases in which your Kubernetes node shuts down abruptly for reasons other than autoscaling
| I have a kubernetes cluster in Amazon EKS, Autoscaling is set. So when there is load increase a new node spin-up in the cluster and spin-down with respect to load-running. We are monitoring it with Prometheus and send desired alerts with Alertmanager.
So help me with a query that will send alerts whenever Autoscaling is performed in my Cluster.
|
|
I think the simple solution for this is to use one ingress resource with this domain (app.my-domain.com) and reference the two services with their FQDNs: first-service.my-namespace-1.svc.cluster.local and second-service.my-namespace-2.svc.cluster.local.
I would also be very interested in another solution.
| ###Instruction: Since 3 days, I have a problem with a deployment in kubernetes. I would like to deploy two react-js application : the first in http://app.my-domain.com and the other in http://app.my-domain.com/shop
NB: the two url is fake, it's just for example.
The first react application was very well deployed in http://app.my-domain.com . Now when i want to put the second react application in http://app.my-domain.com/shop, it's not working.
NB: The two react applications are in different namespaces and exposed with two ingress configurations.
Here are my two ingress configuration which expose my clusterIP services relative to the two react-js applications
:
spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /
backend:
serviceName: first-service
servicePort: 80
spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /shop
backend:
serviceName: second-service
servicePort: 80
So I have a blank page with the second application.
Please I would like your help for deploying this two react-js applications in this same domain.
### Response: I think the simple solution for this is to use one ingress resource with this domain (app.my-domain.com) and reference the two services with their FQDNs: first-service.my-namespace-1.svc.cluster.local and second-service.my-namespace-2.svc.cluster.local.
I would also be very interested in another solution.
| Since 3 days, I have a problem with a deployment in kubernetes. I would like to deploy two react-js application : the first in http://app.my-domain.com and the other in http://app.my-domain.com/shop
NB: the two url is fake, it's just for example.
The first react application was very well deployed in http://app.my-domain.com . Now when i want to put the second react application in http://app.my-domain.com/shop, it's not working.
NB: The two react applications are in different namespaces and exposed with two ingress configurations.
Here are my two ingress configuration which expose my clusterIP services relative to the two react-js applications
:
spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /
backend:
serviceName: first-service
servicePort: 80
spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /shop
backend:
serviceName: second-service
servicePort: 80
So I have a blank page with the second application.
Please I would like your help for deploying this two react-js applications in this same domain.
|
|
If your app works and can read the yaml as in your example, you can create a configMap from your yaml file like so:
kubectl create configmap myconfigmap --from-file=myfile.yaml --dry-run=client -o yaml > output-configmap-file.yaml
where myfile.yaml is your yaml file.
this will create locally the output-configmap-file.yaml in the right format for Kubernetes.
You can edit this, add namespaces, and whatever else you need.
Running kubectl apply -f output-configmap-file.yaml will create your configmap in the cluster.
Inside your deployment of the app, create a volume with that configMap and mount it inside your container specs.
Like so:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mydeployment
namespace: mynamespace
labels:
app: api
spec:
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
volumeMounts:
- name: myyaml
mountPath: /path/to-your-yaml/myyaml.yaml
subPath: myyaml.yaml
image: myimage
.........
........
volumes:
- name: myyaml
configMap:
name: myconfigMap
This will take the configMap and mount it as your yaml file inside the container.
So you don't even need to put the file in your image.
Hint: Passwords don't belong inside the configMap. Secrets are a better place.
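If the app consuming this file happens to be written in Python, a minimal sketch of reading the mounted file could look like this (PyYAML and the mount path are assumptions; adapt them to your app):

```python
# Sketch: read the YAML mounted from the ConfigMap. Assumes PyYAML is
# installed and the file sits at the subPath used in the deployment above.
import yaml

def load_accounts(path="/path/to-your-yaml/myyaml.yaml"):
    """Parse the accounts file and return the list under accounts.service."""
    with open(path) as f:
        cfg = yaml.safe_load(f)
    return cfg["accounts"]["service"]
```

Because the whole file is replaced by the ConfigMap, the list can be any length; the app just iterates over whatever `accounts.service` contains.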
| ###Instruction: I have yaml with lines:
accounts:
service:
- id: account1
username: foo
password: bar
- id: account2
username: foo1
password: bar1
It's default values for my app. But I want to change it in ConfigMap. I do something like that:
accounts.service[0].id: id0
accounts.service[0].username: username0
accounts.service[0].password: password0
accounts.service[1].id: id1
accounts.service[1].username: username1
accounts.service[1].password: password1
accounts.service[2].id: id2
accounts.service[2].username: username2
accounts.service[2].password: password2
It doesn't work. How can I do it correctly? The list of objects can be of arbitrary size.
And, of course, I'd like to store the passwords in Secrets.
### Response: If your app works and can read the yaml as in your example, you can create a configMap from your yaml file like so:
kubectl create configmap myconfigmap --from-file=myfile.yaml --dry-run=client -o yaml > output-configmap-file.yaml
where myfile.yaml is your yaml file.
this will create locally the output-configmap-file.yaml in the right format for Kubernetes.
You can edit this, add namespaces, and whatever else you need.
Running kubectl apply -f output-configmap-file.yaml will create your configmap in the cluster.
Inside your deployment of the app, create a volume with that configMap and mount it inside your container specs.
Like so:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mydeployment
namespace: mynamespace
labels:
app: api
spec:
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
volumeMounts:
- name: myyaml
mountPath: /path/to-your-yaml/myyaml.yaml
subPath: myyaml.yaml
image: myimage
.........
........
volumes:
- name: myyaml
configMap:
name: myconfigMap
This will take the configMap and mount it as your yaml file inside the container.
So you don't even need to put the file in your image.
Hint: Passwords don't belong inside the configMap. Secrets are a better place.
| I have yaml with lines:
accounts:
service:
- id: account1
username: foo
password: bar
- id: account2
username: foo1
password: bar1
It's default values for my app. But I want to change it in ConfigMap. I do something like that:
accounts.service[0].id: id0
accounts.service[0].username: username0
accounts.service[0].password: password0
accounts.service[1].id: id1
accounts.service[1].username: username1
accounts.service[1].password: password1
accounts.service[2].id: id2
accounts.service[2].username: username2
accounts.service[2].password: password2
It doesn't work. How can I do it correctly? The list of objects can be of arbitrary size.
And, of course, I'd like to store the passwords in Secrets.
|
|
Your EKSAuth class works. Just checked it with my cluster.
Here is a working (simpler) snippet instead of the second one.
import base64
import tempfile
import kubernetes
import boto3
from auth import EKSAuth
cluster_name = "my-cluster"
# Details from EKS
eks_client = boto3.client('eks')
eks_details = eks_client.describe_cluster(name=cluster_name)['cluster']
# Saving the CA cert to a temp file (working around the Kubernetes client limitations)
fp = tempfile.NamedTemporaryFile(delete=False)
ca_filename = fp.name
cert_bs = base64.urlsafe_b64decode(eks_details['certificateAuthority']['data'].encode('utf-8'))
fp.write(cert_bs)
fp.close()
# Token for the EKS cluster
eks_auth = EKSAuth(cluster_name)
token = eks_auth.get_token()
# Kubernetes client config
conf = kubernetes.client.Configuration()
conf.host = eks_details['endpoint']
conf.api_key['authorization'] = token
conf.api_key_prefix['authorization'] = 'Bearer'
conf.ssl_ca_cert = ca_filename
k8s_client = kubernetes.client.ApiClient(conf)
# Doing something with the client
v1 = kubernetes.client.CoreV1Api(k8s_client)
print(v1.list_pod_for_all_namespaces())
* Most of the code is taken from here
And you also have to make sure you've granted permission for the IAM role your Lambda runs with in the EKS cluster.
For that run:
kubectl edit -n kube-system configmap/aws-auth
Add these lines under mapRoles. rolearn is the arn of your role. username is the name you want to give to that role inside the k8s cluster.
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
# Add this #######################################
- rolearn: arn:aws:iam::111122223333:role/myLambda-role-z71amo5y
username: my-lambda-mapped-user
####################################################
And create a clusterrolebinding or rolebinding to grant this user permissions inside the cluster.
kubectl create clusterrolebinding --clusterrole cluster-admin --user my-lambda-mapped-user my-clusterrolebinding
| ###Instruction: We are using the following to authenticate:
import base64
import boto3
import string
import random
from botocore.signers import RequestSigner
class EKSAuth(object):
METHOD = 'GET'
EXPIRES = 60
EKS_HEADER = 'x-k8s-aws-id'
EKS_PREFIX = 'k8s-aws-v1.'
STS_URL = 'sts.amazonaws.com'
STS_ACTION = 'Action=GetCallerIdentity&Version=2011-06-15'
def __init__(self, cluster_id, region='us-east-1'):
self.cluster_id = cluster_id
self.region = region
def get_token(self):
"""
Return bearer token
"""
session = boto3.session.Session()
# Get ServiceID required by class RequestSigner
client = session.client("sts", region_name=self.region)
service_id = client.meta.service_model.service_id
signer = RequestSigner(
service_id,
session.region_name,
'sts',
'v4',
session.get_credentials(),
session.events
)
params = {
'method': self.METHOD,
'url': 'https://' + self.STS_URL + '/?' + self.STS_ACTION,
'body': {},
'headers': {
self.EKS_HEADER: self.cluster_id
},
'context': {}
}
signed_url = signer.generate_presigned_url(
params,
region_name=session.region_name,
expires_in=self.EXPIRES,
operation_name=''
)
return (
self.EKS_PREFIX +
base64.urlsafe_b64encode(
signed_url.encode('utf-8')
).decode('utf-8')
)
And then we call this by
KUBE_FILEPATH = '/tmp/kubeconfig'
CLUSTER_NAME = 'cluster'
REGION = 'us-east-2'
if not os.path.exists(KUBE_FILEPATH):
kube_content = dict()
# Get data from EKS API
eks_api = boto3.client('eks', region_name=REGION)
cluster_info = eks_api.describe_cluster(name=CLUSTER_NAME)
certificate = cluster_info['cluster']['certificateAuthority']['data']
endpoint = cluster_info['cluster']['endpoint']
kube_content = dict()
kube_content['apiVersion'] = 'v1'
kube_content['clusters'] = [
{
'cluster':
{
'server': endpoint,
'certificate-authority-data': certificate
},
'name': 'kubernetes'
}]
kube_content['contexts'] = [
{
'context':
{
'cluster': 'kubernetes',
'user': 'aws'
},
'name': 'aws'
}]
kube_content['current-context'] = 'aws'
kube_content['Kind'] = 'config'
kube_content['users'] = [
{
'name': 'aws',
'user': 'lambda'
}]
# Write kubeconfig
with open(KUBE_FILEPATH, 'w') as outfile:
yaml.dump(kube_content, outfile, default_flow_style=False)
# Get Token
eks = auth.EKSAuth(CLUSTER_NAME)
token = eks.get_token()
print("Token here:")
print(token)
# Configure
config.load_kube_config(KUBE_FILEPATH)
configuration = client.Configuration()
configuration.api_key['authorization'] = token
configuration.api_key_prefix['authorization'] = 'Bearer'
# API
api = client.ApiClient(configuration)
v1 = client.CoreV1Api(api)
print("THIS IS GETTING 401!!")
ret = v1.list_namespaced_pod(namespace='default')
However, this is getting the error in the Lambda:
[ERROR] ApiException: (401) Reason: Unauthorized
Is there some type of way I have to generate the ~/.aws/credentials or config? I believe this might be why it is not able to authenticate?
### Response: Your EKSAuth class works. Just checked it with my cluster.
Here is a working (simpler) snippet instead of the second one.
import base64
import tempfile
import kubernetes
import boto3
from auth import EKSAuth
cluster_name = "my-cluster"
# Details from EKS
eks_client = boto3.client('eks')
eks_details = eks_client.describe_cluster(name=cluster_name)['cluster']
# Saving the CA cert to a temp file (working around the Kubernetes client limitations)
fp = tempfile.NamedTemporaryFile(delete=False)
ca_filename = fp.name
cert_bs = base64.urlsafe_b64decode(eks_details['certificateAuthority']['data'].encode('utf-8'))
fp.write(cert_bs)
fp.close()
# Token for the EKS cluster
eks_auth = EKSAuth(cluster_name)
token = eks_auth.get_token()
# Kubernetes client config
conf = kubernetes.client.Configuration()
conf.host = eks_details['endpoint']
conf.api_key['authorization'] = token
conf.api_key_prefix['authorization'] = 'Bearer'
conf.ssl_ca_cert = ca_filename
k8s_client = kubernetes.client.ApiClient(conf)
# Doing something with the client
v1 = kubernetes.client.CoreV1Api(k8s_client)
print(v1.list_pod_for_all_namespaces())
* Most of the code is taken from here
And you also have to make sure you've granted permission for the IAM role your Lambda runs with in the EKS cluster.
For that run:
kubectl edit -n kube-system configmap/aws-auth
Add these lines under mapRoles. rolearn is the arn of your role. username is the name you want to give to that role inside the k8s cluster.
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
# Add this #######################################
- rolearn: arn:aws:iam::111122223333:role/myLambda-role-z71amo5y
username: my-lambda-mapped-user
####################################################
And create a clusterrolebinding or rolebinding to grant this user permissions inside the cluster.
kubectl create clusterrolebinding --clusterrole cluster-admin --user my-lambda-mapped-user my-clusterrolebinding
| We are using the following to authenticate:
import base64
import boto3
import string
import random
from botocore.signers import RequestSigner
class EKSAuth(object):
METHOD = 'GET'
EXPIRES = 60
EKS_HEADER = 'x-k8s-aws-id'
EKS_PREFIX = 'k8s-aws-v1.'
STS_URL = 'sts.amazonaws.com'
STS_ACTION = 'Action=GetCallerIdentity&Version=2011-06-15'
def __init__(self, cluster_id, region='us-east-1'):
self.cluster_id = cluster_id
self.region = region
def get_token(self):
"""
Return bearer token
"""
session = boto3.session.Session()
# Get ServiceID required by class RequestSigner
client = session.client("sts", region_name=self.region)
service_id = client.meta.service_model.service_id
signer = RequestSigner(
service_id,
session.region_name,
'sts',
'v4',
session.get_credentials(),
session.events
)
params = {
'method': self.METHOD,
'url': 'https://' + self.STS_URL + '/?' + self.STS_ACTION,
'body': {},
'headers': {
self.EKS_HEADER: self.cluster_id
},
'context': {}
}
signed_url = signer.generate_presigned_url(
params,
region_name=session.region_name,
expires_in=self.EXPIRES,
operation_name=''
)
return (
self.EKS_PREFIX +
base64.urlsafe_b64encode(
signed_url.encode('utf-8')
).decode('utf-8')
)
And then we call this by
KUBE_FILEPATH = '/tmp/kubeconfig'
CLUSTER_NAME = 'cluster'
REGION = 'us-east-2'
if not os.path.exists(KUBE_FILEPATH):
kube_content = dict()
# Get data from EKS API
eks_api = boto3.client('eks', region_name=REGION)
cluster_info = eks_api.describe_cluster(name=CLUSTER_NAME)
certificate = cluster_info['cluster']['certificateAuthority']['data']
endpoint = cluster_info['cluster']['endpoint']
kube_content = dict()
kube_content['apiVersion'] = 'v1'
kube_content['clusters'] = [
{
'cluster':
{
'server': endpoint,
'certificate-authority-data': certificate
},
'name': 'kubernetes'
}]
kube_content['contexts'] = [
{
'context':
{
'cluster': 'kubernetes',
'user': 'aws'
},
'name': 'aws'
}]
kube_content['current-context'] = 'aws'
kube_content['Kind'] = 'config'
kube_content['users'] = [
{
'name': 'aws',
'user': 'lambda'
}]
# Write kubeconfig
with open(KUBE_FILEPATH, 'w') as outfile:
yaml.dump(kube_content, outfile, default_flow_style=False)
# Get Token
eks = auth.EKSAuth(CLUSTER_NAME)
token = eks.get_token()
print("Token here:")
print(token)
# Configure
config.load_kube_config(KUBE_FILEPATH)
configuration = client.Configuration()
configuration.api_key['authorization'] = token
configuration.api_key_prefix['authorization'] = 'Bearer'
# API
api = client.ApiClient(configuration)
v1 = client.CoreV1Api(api)
print("THIS IS GETTING 401!!")
ret = v1.list_namespaced_pod(namespace='default')
However, this is getting the error in the Lambda:
[ERROR] ApiException: (401) Reason: Unauthorized
Is there some type of way I have to generate the ~/.aws/credentials or config? I believe this might be why it is not able to authenticate?
|
|
Cluster API
Cluster API is one of the CNCF projects that you can use to create and manage Kubernetes clusters. It is more like kubeadm, kind, k3s, or minikube (the latter not for prod use cases), driven by YAML config and a CLI.
You can create the YAML file, provision the Kubernetes cluster, and manage it with that. There are different providers (AWS, AKS, GCP) available, so you can use a provider service account or access key secret, and once you apply the YAML, Cluster API will create a K8s cluster based on the config and provider.
So with this, you can manage multiple Kubernetes clusters.
With Cluster API YAML config you can create/manage K8s clusters on AWS, GCP, on-prem, etc.
Crossplane
Consider Crossplane as a deployment or service you are running on Kubernetes first.
You pass YAML config to the Crossplane service and, based on rules, it will create/manage resources outside the cluster on cloud providers. It will also create/manage RDS and CloudSQL instances, Kubernetes clusters, and other resources that the cloud provider supports.
It also has the concept of providers (AWS, GCP, AKS).
| ###Instruction: Can somebody explain the difference in tooling between Crossplane and Cluster API from the perspective of a managed kubernetes platform provision ?
https://github.com/kubernetes-sigs/cluster-api
I was using crossplane for sometime to create k8s clusters and recently got to know that Cluster API is also having the same capability to provision k8s clusters.
### Response: Cluster API
Cluster API is one of the CNCF projects that you can use to create and manage Kubernetes clusters. It is more like kubeadm, kind, k3s, or minikube (the latter not for prod use cases), driven by YAML config and a CLI.
You can create the YAML file, provision the Kubernetes cluster, and manage it with that. There are different providers (AWS, AKS, GCP) available, so you can use a provider service account or access key secret, and once you apply the YAML, Cluster API will create a K8s cluster based on the config and provider.
So with this, you can manage multiple Kubernetes clusters.
With Cluster API YAML config you can create/manage K8s clusters on AWS, GCP, on-prem, etc.
Crossplane
Consider Crossplane as a deployment or service you are running on Kubernetes first.
You pass YAML config to the Crossplane service and, based on rules, it will create/manage resources outside the cluster on cloud providers. It will also create/manage RDS and CloudSQL instances, Kubernetes clusters, and other resources that the cloud provider supports.
It also has the concept of providers (AWS, GCP, AKS).
| Can somebody explain the difference in tooling between Crossplane and Cluster API from the perspective of a managed kubernetes platform provision ?
https://github.com/kubernetes-sigs/cluster-api
I was using crossplane for sometime to create k8s clusters and recently got to know that Cluster API is also having the same capability to provision k8s clusters.
|
|
By default, an EKS managed node group will have the default cluster security group attached, which is created by AWS. Even if you provide an additional security group to the EKS cluster during creation, that additional security group will not be attached to the compute instances. So, to get this working, you have to use Launch Templates.
| ###Instruction: I need a help with EKS managed node group.
I've created a cluster with one additional SG. Inside this cluster I've created a managed node group. All code is stored in Terraform. Once the managed node group creates a new instance, only one security group is attached (the SG created by AWS). Is there a way to attach an additional security group to the instances as well?
Thanks in advance for help!
### Response: By default, an EKS managed node group will have the default cluster security group attached, which is created by AWS. Even if you provide an additional security group to the EKS cluster during creation, that additional security group will not be attached to the compute instances. So, to get this working, you have to use Launch Templates.
| I need a help with EKS managed node group.
I've created a cluster with one additional SG. Inside this cluster I've created a managed node group. All code is stored in Terraform. Once the managed node group creates a new instance, only one security group is attached (the SG created by AWS). Is there a way to attach an additional security group to the instances as well?
Thanks in advance for help!
|
|
You will need to set the env variable to use the new plugin before doing the get-credentials:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
--region $REGION \
--project $PROJECT \
--internal-ip
I would not have expected the env variable to still be required (now that the gcp auth plugin is completely deprecated) - but it seems it still is.
Your kubeconfig will end up looking like this if the new auth provider is in use.
...
- name: $NAME
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: gke-gcloud-auth-plugin
installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
provideClusterInfo: true
| ###Instruction: In CI, with gcp auth plugin I was using gcloud auth activate-service-account ***@developer.gserviceaccount.com --key-file ***.json prior to execute kubectl commands.
Now with gke-gcloud-auth-plugin I can’t find any equivalent to use a gcp service account key file.
I've installed gke-gcloud-auth-plugin and gke-gcloud-auth-plugin --version is giving me Kubernetes v1.25.2-alpha+ae91c1fc0c443c464a4c878ffa2a4544483c6d1f
Would you know if there’s a way?
I tried to add this command:
kubectl config set-credentials my-user --auth-provider=gcp
But I still get:
error: The gcp auth plugin has been removed. Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
### Response: You will need to set the env variable to use the new plugin before doing the get-credentials:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
--region $REGION \
--project $PROJECT \
--internal-ip
I would not have expected the env variable to still be required (now that the gcp auth plugin is completely deprecated) - but it seems it still is.
Your kubeconfig will end up looking like this if the new auth provider is in use.
...
- name: $NAME
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: gke-gcloud-auth-plugin
installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
provideClusterInfo: true
| In CI, with gcp auth plugin I was using gcloud auth activate-service-account ***@developer.gserviceaccount.com --key-file ***.json prior to execute kubectl commands.
Now with gke-gcloud-auth-plugin I can’t find any equivalent to use a gcp service account key file.
I've installed gke-gcloud-auth-plugin and gke-gcloud-auth-plugin --version is giving me Kubernetes v1.25.2-alpha+ae91c1fc0c443c464a4c878ffa2a4544483c6d1f
Would you know if there’s a way?
I tried to add this command:
kubectl config set-credentials my-user --auth-provider=gcp
But I still get:
error: The gcp auth plugin has been removed. Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
|
|
I received the answer to my question on this link from the above comment. I'll just explain the scenario for others to benefit.
I created a cluster on GKE with the release channel set to "Regular". Because of that choice, GKE will automatically upgrade the cluster and nodes whenever a new security or patch update is released.
You can find the upgrade status by using this command
gcloud container operations list --filter="TYPE:UPGRADE_MASTER"
| ###Instruction: I've a GCP GKE cluster with version '1.22.12-gke.300'. There are around 20 nodes distributed in 4 node pools. The cluster was created a month before and last day I noticed that all of my nodes are rebooted/restarted. When I see the details of the node using command kubectl get nodes, I'm seeing this result (below). The age of all the nodes are 16 or 17 hours.
gke-company-name-gke-clust-company-name-default-n-97c8e50a-d63m Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-default-n-97c8e50a-l8zw Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-2uou Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-3mj1 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-doml Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-7mwd Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-gzxy Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-lgvo Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-27gf Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-7r4q Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-e680 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-m2vf Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-mtvg Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-mwiy Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-rwf9 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-tqe0 Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-weai Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-xmss Ready <none> 16h v1.22.12-gke.300
Is there any way to identify the reason behing this reboot/restart. Because this unexpected reboot caused some issues to my system. I just wanted to identify the reason of these reboots so that I can know that these types of reboots are expected in future.
Any help is appreciated.
### Response: I received the answer to my question on this link from the above comment. I'll just explain the scenario for others to benefit.
I created a cluster on GKE with the release channel set to "Regular". Because of that choice, GKE will automatically upgrade the cluster and nodes whenever a new security or patch update is released.
You can find the upgrade status by using this command
gcloud container operations list --filter="TYPE:UPGRADE_MASTER"
| I've a GCP GKE cluster with version '1.22.12-gke.300'. There are around 20 nodes distributed in 4 node pools. The cluster was created a month before and last day I noticed that all of my nodes are rebooted/restarted. When I see the details of the node using command kubectl get nodes, I'm seeing this result (below). The age of all the nodes are 16 or 17 hours.
gke-company-name-gke-clust-company-name-default-n-97c8e50a-d63m Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-default-n-97c8e50a-l8zw Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-2uou Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-3mj1 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-doml Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-7mwd Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-gzxy Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-lgvo Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-27gf Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-7r4q Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-e680 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-m2vf Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-mtvg Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-mwiy Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-rwf9 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-tqe0 Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-weai Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-xmss Ready <none> 16h v1.22.12-gke.300
Is there any way to identify the reason behing this reboot/restart. Because this unexpected reboot caused some issues to my system. I just wanted to identify the reason of these reboots so that I can know that these types of reboots are expected in future.
Any help is appreciated.
|
|
As per the latest release notes, for Kibana 8.x authentication and TLS are enabled by default. In my case I already had Elasticsearch installed serving on an http port, so we have to enable TLS in Elasticsearch first, since Kibana-to-Elasticsearch communication must be TLS enabled as of the latest Kibana release.
For testing purpose(we are not working on Production anyways ) we are okay with accessing kibana using port-forward too so i have installed 7.XX version helm chart and moved ahead.
https://github.com/elastic/helm-charts/blob/7.17/kibana/README.md
If you want to enable TLS for Elasticsearch and Kibana, I found the links below helpful.
https://github.com/elastic/helm-charts/tree/main/kibana/examples/security (this example is from kibana GIT repo itself)
https://www.lisenet.com/2022/deploy-elasticsearch-and-kibana-on-kubernetes-with-helm/
https://thomasdecaux.medium.com/setup-https-tls-for-kibana-with-helm-cert-manager-io-fd1a326085fe
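As a minimal sketch of the port-forward access mentioned above (the namespace and service name are assumptions based on installing the chart with release name "kibana" into the "kibana" namespace; adjust them to your setup):

```shell
# Forward local port 5601 to the Kibana service created by the chart.
# "kibana-kibana" is the service name the chart creates by default for a
# release named "kibana"; verify with: kubectl get svc -n kibana
kubectl port-forward -n kibana svc/kibana-kibana 5601:5601
# While this runs, Kibana is reachable at http://localhost:5601
```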
| ###Instruction: I am not able to install Kibana with a simple helm command, which used to work earlier.
"helm install kibana elastic/kibana -n kibana"
Are there any recent changes in the Kibana Helm chart? Do we need to create the elasticsearch-master-certs and elasticsearch-credentials secrets prior to the Kibana install now?
https://artifacthub.io/packages/helm/elastic/kibana
### Response: As per the latest release notes, authentication and TLS are enabled by default for Kibana 8.x. In my case Elasticsearch was already installed and serving over plain HTTP, so TLS has to be enabled in Elasticsearch first, because Kibana-Elasticsearch communication must be TLS-enabled as of the latest Kibana release.
For testing purposes (we are not working on production anyway) we are okay with accessing Kibana using port-forward, so I installed the 7.x version of the Helm chart and moved ahead.
https://github.com/elastic/helm-charts/blob/7.17/kibana/README.md
If you want to enable TLS for Elasticsearch and Kibana, I found the links below helpful.
https://github.com/elastic/helm-charts/tree/main/kibana/examples/security (this example is from kibana GIT repo itself)
https://www.lisenet.com/2022/deploy-elasticsearch-and-kibana-on-kubernetes-with-helm/
https://thomasdecaux.medium.com/setup-https-tls-for-kibana-with-helm-cert-manager-io-fd1a326085fe
| I am not able to install Kibana with a simple helm command, which used to work earlier.
"helm install kibana elastic/kibana -n kibana"
Are there any recent changes in the Kibana Helm chart? Do we need to create the elasticsearch-master-certs and elasticsearch-credentials secrets prior to the Kibana install now?
https://artifacthub.io/packages/helm/elastic/kibana
|
|
As mentioned in the official documentation:
The following manifest defines a PodMonitoring resource, prom-example, in the NAMESPACE_NAME namespace. The resource uses a Kubernetes label selector to find all pods in the namespace that have the label app with the value prom-example. The matching pods are scraped on a port named metrics, every 30 seconds, on the /metrics HTTP path.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
name: prom-example
spec:
selector:
matchLabels:
app: prom-example
endpoints:
- port: metrics
interval: 30s
To apply this resource, run the following command:
kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.5.0/examples/pod-monitoring.yaml
Also check the document on Observing your GKE clusters.
UPDATE:
After applying the manifests, the managed collection will be running but no metrics will be generated. You must deploy a PodMonitoring resource that scrapes a valid metrics endpoint to see any data in the Query UI.
Check the logs by running the below commands:
kubectl logs -f -ngmp-system -lapp.kubernetes.io/part-of=gmp
kubectl logs -f -ngmp-system -lapp.kubernetes.io/name=collector -c prometheus
If you see any errors, follow this link to troubleshoot.
| ###Instruction: I'm trying to follow their docs and create this pod monitoring.
I apply it and I see nothing in metrics.
What am I doing wrong?
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: monitoring
spec:
selector:
matchLabels:
app: blah
namespaceSelector:
any: true
endpoints:
- port: metrics
interval: 30s
### Response: As mentioned in the official documentation:
The following manifest defines a PodMonitoring resource, prom-example, in the NAMESPACE_NAME namespace. The resource uses a Kubernetes label selector to find all pods in the namespace that have the label app with the value prom-example. The matching pods are scraped on a port named metrics, every 30 seconds, on the /metrics HTTP path.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
name: prom-example
spec:
selector:
matchLabels:
app: prom-example
endpoints:
- port: metrics
interval: 30s
To apply this resource, run the following command:
kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.5.0/examples/pod-monitoring.yaml
Also check the document on Observing your GKE clusters.
UPDATE:
After applying the manifests, the managed collection will be running but no metrics will be generated. You must deploy a PodMonitoring resource that scrapes a valid metrics endpoint to see any data in the Query UI.
Check the logs by running the below commands:
kubectl logs -f -ngmp-system -lapp.kubernetes.io/part-of=gmp
kubectl logs -f -ngmp-system -lapp.kubernetes.io/name=collector -c prometheus
If you see any errors, follow this link to troubleshoot.
| I'm trying to follow their docs and create this pod monitoring.
I apply it and I see nothing in metrics.
What am I doing wrong?
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: monitoring
spec:
selector:
matchLabels:
app: blah
namespaceSelector:
any: true
endpoints:
- port: metrics
interval: 30s
|
|
Kubernetes is already a distributed system, by design.
It is more common to run a Kubernetes cluster by using 3 data centers - since it is built upon consensus algorithms like raft. This replaces older ways to run systems by using 2 data centers in an active-passive fashion.
| ###Instruction: I'm still a Kubernetes newbie but I am already confronted with what I think is a mammoth task. My company is hosting a Kubernetes cluster. Because of internal policies, we are obliged to have everything georedundant, which means we are supposed to build a second cluster (identical to the already existing one) in a different data center. Only ever one of them is active and the remaining one acts as a backup.
My task is to come up with different approaches on how we can synchronize those two clusters in real time, so that in case of failover operations can continue without much interruption.
From what I've heard such a replication can happen on different levels (hypervisor, storage, application...). I want to find out more about these different approaches but I lack the proper lingo to find any good online sources for my problem.
Do you guys know any sources who cover my question? Is there any literature on this? Or any personal experience you might want to share? Thanks in advance!
### Response: Kubernetes is already a distributed system, by design.
It is more common to run a Kubernetes cluster by using 3 data centers - since it is built upon consensus algorithms like raft. This replaces older ways to run systems by using 2 data centers in an active-passive fashion.
| I'm still a Kubernetes newbie but I am already confronted with what I think is a mammoth task. My company is hosting a Kubernetes cluster. Because of internal policies, we are obliged to have everything georedundant, which means we are supposed to build a second cluster (identical to the already existing one) in a different data center. Only ever one of them is active and the remaining one acts as a backup.
My task is to come up with different approaches on how we can synchronize those two clusters in real time, so that in case of failover operations can continue without much interruption.
From what I've heard such a replication can happen on different levels (hypervisor, storage, application...). I want to find out more about these different approaches but I lack the proper lingo to find any good online sources for my problem.
Do you guys know any sources who cover my question? Is there any literature on this? Or any personal experience you might want to share? Thanks in advance!
|
|
Here is an example how you can extract the data of interest to the dataframe. The output is only an example (as you didn't specify the required output in the question):
import json
import pandas as pd
# open the Json data from file (or use os.popen):
with open("data.json", "r") as f_in:
data = json.load(f_in)
df = pd.DataFrame(data["items"])
# metadata:
df = pd.concat(
[df, df.pop("metadata").apply(pd.Series).add_prefix("meta_")], axis=1
)
# spec:
df = pd.concat(
[df, df.pop("spec").apply(pd.Series).add_prefix("spec_")], axis=1
)
# status:
df = pd.concat(
[df, df.pop("status").apply(pd.Series).add_prefix("status_")], axis=1
)
# keep only columns of interests:
df = df[["meta_name", "meta_namespace", "status_phase", "spec_containers"]]
# explode spec_containers column
df = df.explode("spec_containers")
df = pd.concat(
[
df,
df.pop("spec_containers")
.apply(pd.Series)
.add_prefix("spec_")[["spec_image", "spec_name"]],
],
axis=1,
)
print(df)
Prints:
meta_name meta_namespace status_phase spec_image spec_name
0 apache-lb-648c5cb8cb-mw5zh default Running httpd apache
0 apache-lb-648c5cb8cb-mw5zh default Running index.docker.io/istio/proxyv2:1.13.4 istio-proxy
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/sig-storage/csi-node-driver-registrar:v2.0.1 driver-registrar
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/cephcsi/cephcsi:v3.3.1 csi-cephfsplugin
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/cephcsi/cephcsi:v3.3.1 liveness-prometheus
...and so on.
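Since the question also asks for resource requests and limits, here is a small extension sketch. It assumes the kubectl JSON layout, where each entry of spec.containers may carry a "resources" dict with optional "requests" and "limits" sub-dicts; the example container below is fabricated for illustration:

```python
import pandas as pd

# Sketch: pull each container's resource requests/limits out of a
# spec.containers entry; containers without resources yield empty dicts.
def container_resources(container: dict) -> pd.Series:
    resources = container.get("resources") or {}
    return pd.Series(
        {
            "requests": resources.get("requests", {}),
            "limits": resources.get("limits", {}),
        }
    )

# Fabricated example container, as it would appear inside spec.containers:
apache = {
    "name": "apache",
    "image": "httpd",
    "resources": {
        "requests": {"cpu": "100m", "memory": "128Mi"},
        "limits": {"cpu": "500m", "memory": "256Mi"},
    },
}
print(container_resources(apache)["requests"]["cpu"])  # 100m
```

On the exploded dataframe above, you could apply this before popping the column, e.g. `df[["requests", "limits"]] = df["spec_containers"].apply(container_resources)` (a sketch; exact integration depends on where you pop "spec_containers").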
| ###Instruction: I'd like to import the output of:
kubectl get pods -o json
into a Python pandas dataframe. It should also contain all containers and their resource requests and limits.
My code starts as follows:
import json
import numpy as np
import pandas as pd
import os
pods_raw = os.popen('kubectl get pods -o json').read()
pods_json = json.loads(pods_raw)['items']
From here on I struggle to get the data into the dataframe correctly; in particular, 'spec.containers' should be split up when multiple containers exist.
### Response: Here is an example how you can extract the data of interest to the dataframe. The output is only an example (as you didn't specify the required output in the question):
import json
import pandas as pd
# open the Json data from file (or use os.popen):
with open("data.json", "r") as f_in:
data = json.load(f_in)
df = pd.DataFrame(data["items"])
# metadata:
df = pd.concat(
[df, df.pop("metadata").apply(pd.Series).add_prefix("meta_")], axis=1
)
# spec:
df = pd.concat(
[df, df.pop("spec").apply(pd.Series).add_prefix("spec_")], axis=1
)
# status:
df = pd.concat(
[df, df.pop("status").apply(pd.Series).add_prefix("status_")], axis=1
)
# keep only columns of interests:
df = df[["meta_name", "meta_namespace", "status_phase", "spec_containers"]]
# explode spec_containers column
df = df.explode("spec_containers")
df = pd.concat(
[
df,
df.pop("spec_containers")
.apply(pd.Series)
.add_prefix("spec_")[["spec_image", "spec_name"]],
],
axis=1,
)
print(df)
Prints:
meta_name meta_namespace status_phase spec_image spec_name
0 apache-lb-648c5cb8cb-mw5zh default Running httpd apache
0 apache-lb-648c5cb8cb-mw5zh default Running index.docker.io/istio/proxyv2:1.13.4 istio-proxy
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/sig-storage/csi-node-driver-registrar:v2.0.1 driver-registrar
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/cephcsi/cephcsi:v3.3.1 csi-cephfsplugin
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/cephcsi/cephcsi:v3.3.1 liveness-prometheus
...and so on.
| I'd like to import the output of:
kubectl get pods -o json
into a Python pandas dataframe. It should also contain all containers and their resource requests and limits.
My code starts as follows:
import json
import numpy as np
import pandas as pd
import os
pods_raw = os.popen('kubectl get pods -o json').read()
pods_json = json.loads(pods_raw)['items']
From here on I struggle to get the data into the dataframe correctly; in particular, 'spec.containers' should be split up when multiple containers exist.
|
|
From your Mac, test whether the port is open, e.g. nc -zv 192.168.yourwindowsIp 6443. If it doesn't report the port as open, you have a network problem.
Try this.
clusters:
- cluster:
server: https://192.168.1.XXX:6443
name: windows-docker-desktop
insecure-skip-tls-verify: true
directly in the config file.
You don't need to specify the context with set-context, as you have only one.
To be sure it is not your firewall, disable it just for a very short period, only to test the connection.
Last thing: it seems you are using Kubernetes in Docker Desktop. If not, and you have a local cluster with more than one node, you need to install a network fabric in your cluster, such as Flannel or Calico.
https://projectcalico.docs.tigera.io/about/about-calico
https://github.com/flannel-io/flannel
| ###Instruction: I'm trying to connect to a kubernetes cluster running on my Windows PC from my Mac. This is so I can continue to develop from my Mac but run everything on a machine with more resources. I know that to do this I need to change the kubectl context on my Mac to point towards my Windows PC but don't know how to manually do this.
When I've connected to a cluster before on AKS, I would use az aks get-credentials and this would correctly add an entry to .kube/config and change the context to it. I'm basically trying to do this but on a local network.
I've tried to add an entry into kubeconfig but get The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?. I've also checked my antivirus on the Windows computer and no requests are getting blocked.
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {CERT}
server: https://192.168.1.XXX:6443
name: windows-docker-desktop
current-context: windows-docker-desktop
kind: Config
preferences: {}
users:
- name: windows-docker-desktop
user:
client-certificate-data: {CERT}
client-key-data: {KEY}
I've also tried using kubectl --insecure-skip-tls-verify --context=windows-docker-desktop get pods which results in the same error: The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?.
Many thanks.
### Response: From your Mac, test whether the port is open, e.g. nc -zv 192.168.yourwindowsIp 6443. If it doesn't report the port as open, you have a network problem.
Try this.
clusters:
- cluster:
server: https://192.168.1.XXX:6443
name: windows-docker-desktop
insecure-skip-tls-verify: true
directly in the config file.
You don't need to specify the context with set-context, as you have only one.
To be sure it is not your firewall, disable it just for a very short period, only to test the connection.
Last thing: it seems you are using Kubernetes in Docker Desktop. If not, and you have a local cluster with more than one node, you need to install a network fabric in your cluster, such as Flannel or Calico.
https://projectcalico.docs.tigera.io/about/about-calico
https://github.com/flannel-io/flannel
| I'm trying to connect to a kubernetes cluster running on my Windows PC from my Mac. This is so I can continue to develop from my Mac but run everything on a machine with more resources. I know that to do this I need to change the kubectl context on my Mac to point towards my Windows PC but don't know how to manually do this.
When I've connected to a cluster before on AKS, I would use az aks get-credentials and this would correctly add an entry to .kube/config and change the context to it. I'm basically trying to do this but on a local network.
I've tried to add an entry into kubeconfig but get The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?. I've also checked my antivirus on the Windows computer and no requests are getting blocked.
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {CERT}
server: https://192.168.1.XXX:6443
name: windows-docker-desktop
current-context: windows-docker-desktop
kind: Config
preferences: {}
users:
- name: windows-docker-desktop
user:
client-certificate-data: {CERT}
client-key-data: {KEY}
I've also tried using kubectl --insecure-skip-tls-verify --context=windows-docker-desktop get pods which results in the same error: The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?.
Many thanks.
|
|
There is also a quick way to troubleshoot such issues: you could use the "AWSSupport-TroubleshootEKSWorkerNode" Runbook. This Runbook is designed to help troubleshoot an EKS worker node that has failed to join an EKS cluster.
You need to go to AWS Systems Manager -> Automation -> Select the Runbook -> Execute the Runbook with ClusterName and Instance-id.
This is pretty helpful in troubleshooting and provides a nice summary at the end of the execution.
You could also refer to the documentation here.
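For reference, the console steps above can also be launched from the AWS CLI. This is a hedged sketch: the parameter names (ClusterName, WorkerID) are assumptions taken from the runbook's input list, so verify them against your runbook version; the example values come from the question:

```shell
# Start the automation document against the failing node; inspect the
# execution afterwards to read the generated troubleshooting summary.
aws ssm start-automation-execution \
  --document-name "AWSSupport-TroubleshootEKSWorkerNode" \
  --parameters "ClusterName=UNIR-API-REST-CLUSTER-DEV,WorkerID=i-05ed58f8101240dc8"
# Then check progress/results with: aws ssm describe-automation-executions
```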
| ###Instruction: I have a problem deploying a node group in an EKS cluster with Terraform. The error looks like one plugin is having problems, but I do not know how to resolve it.
In the EC2 section of the AWS console (web), I can see the cluster's instance, but I still get this error in the cluster.
The error was shown in my pipeline:
Error: waiting for EKS Node Group (UNIR-API-REST-CLUSTER-DEV:node_sping_boot) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-05ed58f8101240dc8]
on EKS.tf line 17, in resource "aws_eks_node_group" "nodes":
17: resource "aws_eks_node_group" "nodes"
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin process exited: path=/home/ubuntu/.jenkins/workspace/shop_infraestucture_generator_pipline/shop-proyect-dev/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.64.0_x4 pid=13475
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin exited
And the error is printed in AWS console:
Link
This is the code in Terraform I use to create my project:
EKS.tf for creating the cluster and the nodes
resource "aws_eks_cluster" "CLUSTER" {
name = "UNIR-API-REST-CLUSTER-${var.SUFFIX}"
role_arn = "${aws_iam_role.eks_cluster_role.arn}"
vpc_config {
subnet_ids = [
"${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
]
}
depends_on = [
"aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
"aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
"aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
]
}
resource "aws_eks_node_group" "nodes" {
cluster_name = "${aws_eks_cluster.CLUSTER.name}"
node_group_name = "node_sping_boot"
node_role_arn = "${aws_iam_role.eks_nodes_role.arn}"
subnet_ids = [
"${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
]
scaling_config {
desired_size = 1
max_size = 5
min_size = 1
}
# instance_types is mediumt3 by default
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
"aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
"aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
"aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
]
}
output "eks_cluster_endpoint" {
value = "${aws_eks_cluster.CLUSTER.endpoint}"
}
output "eks_cluster_certificat_authority" {
value = "${aws_eks_cluster.CLUSTER.certificate_authority}"
}
securityAndGroups.tf
resource "aws_iam_role" "eks_cluster_role" {
name = "eks-cluster-${var.SUFFIX}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role" "eks_nodes_role" {
name = "eks-node-${var.SUFFIX}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.eks_cluster_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.eks_cluster_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = "${aws_iam_role.eks_nodes_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = "${aws_iam_role.eks_nodes_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = "${aws_iam_role.eks_nodes_role.name}"
}
VPCAndRouting.tf to create my routing, VPC, and Subnets
resource "aws_vpc" "unir_shop_vpc_dev" {
cidr_block = "${var.NET_CIDR_BLOCK}"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "UNIR-VPC-SHOP-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
resource "aws_route_table" "route" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.unir_gat_shop_dev.id}"
}
tags = {
Name = "UNIR-RoutePublic-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_subnet" "unir_subnet_aplications" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_APLICATIONS}"
availability_zone = "${var.ZONE_SUB}"
depends_on = ["aws_internet_gateway.unir_gat_shop_dev"]
map_public_ip_on_launch = true
tags = {
Name = "UNIR-SUBNET-APLICATIONS-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
resource "aws_subnet" "unir_subnet_cluster_1" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
map_public_ip_on_launch = true
availability_zone = "${var.ZONE_SUB_CLUSTER_2}"
tags = {
"kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
}
}
resource "aws_subnet" "unir_subnet_cluster_2" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_CLUSTER_2}"
availability_zone = "${var.ZONE_SUB_CLUSTER_1}"
map_public_ip_on_launch = true
tags = {
"kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
}
}
resource "aws_internet_gateway" "unir_gat_shop_dev" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
tags = {
Environment = "${var.SUFFIX}"
Name = "UNIR-publicGateway-${var.SUFFIX}"
}
}
My variables:
SUFFIX="DEV"
ZONE="eu-west-1"
TERRAFORM_USER_ID=
TERRAFORM_USER_PASS=
ZONE_SUB="eu-west-1b"
ZONE_SUB_CLUSTER_1="eu-west-1a"
ZONE_SUB_CLUSTER_2="eu-west-1c"
NET_CIDR_BLOCK="172.15.0.0/24"
SUBNET_CIDR_APLICATIONS="172.15.0.0/27"
SUBNET_CIDR_CLUSTER_1="172.15.0.32/27"
SUBNET_CIDR_CLUSTER_2="172.15.0.64/27"
SUBNET_CIDR_CLUSTER_3="172.15.0.128/27"
SUBNET_CIDR_CLUSTER_4="172.15.0.160/27"
SUBNET_CIDR_CLUSTER_5="172.15.0.192/27"
SUBNET_CIDR_CLUSTER_6="172.15.0.224/27"
MONGO_SSH_KEY=
KIBANA_SSH_KEY=
CLUSTER_SSH_KEY=
Would more logs be necessary?
### Response: There is also a quick way to troubleshoot such issues: you could use the "AWSSupport-TroubleshootEKSWorkerNode" Runbook. This Runbook is designed to help troubleshoot an EKS worker node that has failed to join an EKS cluster.
You need to go to AWS Systems Manager -> Automation -> Select the Runbook -> Execute the Runbook with ClusterName and Instance-id.
This is pretty helpful in troubleshooting and provides a nice summary at the end of the execution.
You could also refer to the documentation here.
| I have a problem deploying a node group in an EKS cluster with Terraform. The error looks like one plugin is having problems, but I do not know how to resolve it.
In the EC2 section of the AWS console (web), I can see the cluster's instance, but I still get this error in the cluster.
The error was shown in my pipeline:
Error: waiting for EKS Node Group (UNIR-API-REST-CLUSTER-DEV:node_sping_boot) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-05ed58f8101240dc8]
on EKS.tf line 17, in resource "aws_eks_node_group" "nodes":
17: resource "aws_eks_node_group" "nodes"
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin process exited: path=/home/ubuntu/.jenkins/workspace/shop_infraestucture_generator_pipline/shop-proyect-dev/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.64.0_x4 pid=13475
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin exited
And the error is printed in AWS console:
Link
This is the code in Terraform I use to create my project:
EKS.tf for creating the cluster and the nodes
resource "aws_eks_cluster" "CLUSTER" {
name = "UNIR-API-REST-CLUSTER-${var.SUFFIX}"
role_arn = "${aws_iam_role.eks_cluster_role.arn}"
vpc_config {
subnet_ids = [
"${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
]
}
depends_on = [
"aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
"aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
"aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
]
}
resource "aws_eks_node_group" "nodes" {
cluster_name = "${aws_eks_cluster.CLUSTER.name}"
node_group_name = "node_sping_boot"
node_role_arn = "${aws_iam_role.eks_nodes_role.arn}"
subnet_ids = [
"${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
]
scaling_config {
desired_size = 1
max_size = 5
min_size = 1
}
# instance_types is mediumt3 by default
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
"aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
"aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
"aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
]
}
output "eks_cluster_endpoint" {
value = "${aws_eks_cluster.CLUSTER.endpoint}"
}
output "eks_cluster_certificat_authority" {
value = "${aws_eks_cluster.CLUSTER.certificate_authority}"
}
securityAndGroups.tf
resource "aws_iam_role" "eks_cluster_role" {
name = "eks-cluster-${var.SUFFIX}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role" "eks_nodes_role" {
name = "eks-node-${var.SUFFIX}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.eks_cluster_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.eks_cluster_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = "${aws_iam_role.eks_nodes_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = "${aws_iam_role.eks_nodes_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = "${aws_iam_role.eks_nodes_role.name}"
}
VPCAndRouting.tf to create my routing, VPC, and Subnets
resource "aws_vpc" "unir_shop_vpc_dev" {
cidr_block = "${var.NET_CIDR_BLOCK}"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "UNIR-VPC-SHOP-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
resource "aws_route_table" "route" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.unir_gat_shop_dev.id}"
}
tags = {
Name = "UNIR-RoutePublic-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_subnet" "unir_subnet_aplications" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_APLICATIONS}"
availability_zone = "${var.ZONE_SUB}"
depends_on = ["aws_internet_gateway.unir_gat_shop_dev"]
map_public_ip_on_launch = true
tags = {
Name = "UNIR-SUBNET-APLICATIONS-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
resource "aws_subnet" "unir_subnet_cluster_1" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
map_public_ip_on_launch = true
availability_zone = "${var.ZONE_SUB_CLUSTER_2}"
tags = {
"kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
}
}
resource "aws_subnet" "unir_subnet_cluster_2" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_CLUSTER_2}"
availability_zone = "${var.ZONE_SUB_CLUSTER_1}"
map_public_ip_on_launch = true
tags = {
"kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
}
}
resource "aws_internet_gateway" "unir_gat_shop_dev" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
tags = {
Environment = "${var.SUFFIX}"
Name = "UNIR-publicGateway-${var.SUFFIX}"
}
}
My variables:
SUFFIX="DEV"
ZONE="eu-west-1"
TERRAFORM_USER_ID=
TERRAFORM_USER_PASS=
ZONE_SUB="eu-west-1b"
ZONE_SUB_CLUSTER_1="eu-west-1a"
ZONE_SUB_CLUSTER_2="eu-west-1c"
NET_CIDR_BLOCK="172.15.0.0/24"
SUBNET_CIDR_APLICATIONS="172.15.0.0/27"
SUBNET_CIDR_CLUSTER_1="172.15.0.32/27"
SUBNET_CIDR_CLUSTER_2="172.15.0.64/27"
SUBNET_CIDR_CLUSTER_3="172.15.0.128/27"
SUBNET_CIDR_CLUSTER_4="172.15.0.160/27"
SUBNET_CIDR_CLUSTER_5="172.15.0.192/27"
SUBNET_CIDR_CLUSTER_6="172.15.0.224/27"
MONGO_SSH_KEY=
KIBANA_SSH_KEY=
CLUSTER_SSH_KEY=
Would more logs be necessary?
|
|
Try the following Dockerfile. It performs a multi-stage build: in the first stage, your code is compiled into an executable artifact. In the next stage, only the parts necessary for running your application are copied, so your Docker image is smaller than if you copied all your uncompiled and compiled code into it. For more information about multi-stage builds, consult the official documentation or one of the countless internet tutorials.
FROM maven:3.8-openjdk-17 as maven
WORKDIR /app
COPY ./pom.xml .
RUN mvn dependency:go-offline
COPY ./src ./src
RUN mvn package -DskipTests=true
WORKDIR /app/target/dependency
RUN jar -xf ../employeemanagementservice.jar
FROM ibm-semeru-runtimes:open-17-jre-centos7
ARG DEPENDENCY=/app/target/dependency
COPY --from=maven ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=maven ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=maven ${DEPENDENCY}/BOOT-INF/classes /app
CMD java -server -Xmx1024m -Xss1024k -XX:MaxMetaspaceSize=135m -XX:CompressedClassSpaceSize=28m -XX:ReservedCodeCacheSize=13m -XX:+IdleTuningGcOnIdle -Xtune:virtualized -cp app:app/lib/* com.greatlearning.employeemanagementservice.Application
I made the following assumptions:
Your build artifact is named employeemanagementservice.jar. Check your target directory after your maven build to verify. I always configure it like this in the pom.xml
<build>
<finalName>${project.artifactId}</finalName>
</build>
You run your tests in a CI/CD pipeline and don't want to run them in the Docker image build step. If you wanted to run your tests, you'd have to remove -DskipTests=true from the RUN mvn package -DskipTests=true command.
Your main class is called Application and it resides in the com.greatlearning.employeemanagementservice package. If not, change it in the Docker CMD command.
You want your service to use at most 1GB of RAM. If not, change the -Xmx1024m to your desired amount. The other java arguments are for optimization purposes, you can look them up online.
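A possible way to build and run the image above (the tag and port are assumptions: Spring Boot serves on 8080 by default, and the container still needs a reachable MySQL instance configured through your application properties):

```shell
# Build the image from the project root (next to pom.xml and the Dockerfile):
docker build -t employeemanagementservice:local .
# Run it, exposing the default Spring Boot port:
docker run --rm -p 8080:8080 employeemanagementservice:local
```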
| ###Instruction: Hello Fellow Developers,
I am new to software development and looking forward to transitioning into software development.
I am learning new skills and tools to scale up, and I came across Docker and Kubernetes.
I have completed my backend with Spring Boot, Java and MySQL. I just wanted to know how to build a Docker image for the Spring Boot application in an industrial, real-world manner, just to get the hang of it.
Things to image
Environment: JAVA 17
Dependencies: POM.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.5</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.greatlearning.employeemanagementservice</groupId>
<artifactId>employeemanagementservice</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>employeemanagementservice</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>17</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.19</version>
</dependency>
<!-- spring security -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>
Application-properties:
server.port=8082
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/empdb
spring.datasource.username=root
spring.datasource.password=*s12
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.main.allow-bean-definition-overriding=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
This is an employee management project with simple CRUD operation I have done. It would be helpful if someone can guide me through Docker image settings.
Pardon me if it is inappropriate to post such questions on Stack Overflow.
### Response: Try the following Dockerfile. It performs a multi-stage build where in the first stage, your code is compiled to an executable. Then in the next stage, only the parts necessary for running your application are copied, so that your Docker image is smaller than if you copied all your uncompiled and compiled code into the Docker image. For more information about multi-stage builds, consult the official documentation or one of the countless internet tutorials.
FROM maven:3.8-openjdk-17 as maven
WORKDIR /app
COPY ./pom.xml .
RUN mvn dependency:go-offline
COPY ./src ./src
RUN mvn package -DskipTests=true
WORKDIR /app/target/dependency
RUN jar -xf ../employeemanagementservice.jar
FROM ibm-semeru-runtimes:open-17-jre-centos7
ARG DEPENDENCY=/app/target/dependency
COPY --from=maven ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=maven ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=maven ${DEPENDENCY}/BOOT-INF/classes /app
CMD java -server -Xmx1024m -Xss1024k -XX:MaxMetaspaceSize=135m -XX:CompressedClassSpaceSize=28m -XX:ReservedCodeCacheSize=13m -XX:+IdleTuningGcOnIdle -Xtune:virtualized -cp app:app/lib/* com.greatlearning.employeemanagementservice.Application
I made the following assumptions:
Your build artifact is named employeemanagementservice.jar. Check your target directory after your maven build to verify. I always configure it like this in the pom.xml
<build>
<finalName>${project.artifactId}</finalName>
</build>
You run your tests in a CI/CD pipeline and don't want to run them in the Docker image build step. If you wanted to run your tests you'd have to remove the -DskipTests=true from the RUN mvn package -DskipTests=true command.
Your main class is called Application and it resides in the com.greatlearning.employeemanagementservice package. If not, change it in the Docker CMD command.
You want your service to use at most 1GB of RAM. If not, change the -Xmx1024m to your desired amount. The other java arguments are for optimization purposes, you can look them up online.
| Hello Fellow Developers,
I am new to this field and looking forward to transitioning into software development.
I am learning new skills and tools to scale up and I came across DOCKER and KUBERNETES
I have completed my backend with spring boot, java and MYSQL I just wanted to know how to dockerize an image for the spring boot application in an industrial real-time manner just to get a hang of it.
Things to image
Environment: JAVA 17
Dependencies: POM.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.5</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.greatlearning.employeemanagementservice</groupId>
<artifactId>employeemanagementservice</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>employeemanagementservice</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>17</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.19</version>
</dependency>
<!-- spring security -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>
Application-properties:
server.port=8082
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/empdb
spring.datasource.username=root
spring.datasource.password=*s12
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.main.allow-bean-definition-overriding=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
This is an employee management project with simple CRUD operation I have done. It would be helpful if someone can guide me through Docker image settings.
Pardon me if it is inappropriate to post such questions on Stack Overflow.
|
|
You can read properties from the environment, but that could require a bit of coding & refactoring (depending on the rest of your code) to import and use such values, e.g. into a Properties object.
If you want to use an application.properties file directly you're better off mounting the single file into the app's container by using SubPath.
Make sure you mount the file exactly as specified in the linked official docs or you are likely to get errors related to the container's FS permissions
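As an illustration only (resource and container names are taken from the question; the /config mount path is an assumption), a subPath mount could look like this:

```yaml
# Sketch: carry the whole application.properties file in the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    com.example.dockerkubernetes.property.value=12345789abde
---
# Relevant fragment of the Deployment's pod template:
spec:
  containers:
  - name: app
    image: kubernetes-cluster-test-image
    volumeMounts:
    - name: app-config
      mountPath: /config/application.properties
      subPath: application.properties   # mount only this one file, not a directory
  volumes:
  - name: app-config
    configMap:
      name: app-config
```

Spring Boot can then pick the file up via --spring.config.additional-location=/config/application.properties, or you can mount it wherever your app already looks for it.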
| ###Instruction: I'm configuring an app to work in a Kubernetes Google Cloud cluster. I'd like to pass parameters to application.properties in a Spring Boot application via a configMap. I'm trying to pass the value by environment variable.
I've created config map in google cloud Kubernetes cluster in namespace default like below:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
config-key: 12345789abde
kubectl create -f app-config.yaml -n default
configmap/app-config created
I'm checking if configMap has been created:
key value pair looks fine:
I'm trying to deploy the Spring Boot app using cloudbuild.yaml (it works when I do not use the configMap). The content is:
substitutions:
_CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
_CLOUDSDK_CONTAINER_CLUSTER: kubernetes-cluster-test # default value
steps:
- id: 'Build docker image'
name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA', '.']
- id: 'Push image to Container Registry'
name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA']
- id: 'Set image in yamls'
name: 'ubuntu'
args: ['bash','-c','sed -i "s,${_TECH_CONTAINER_IMAGE},gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA," deployment.yaml']
- id: 'Create or update cluster based on last docker image'
name: 'gcr.io/cloud-builders/kubectl'
args: ['apply', '-f', 'deployment.yaml']
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
- id: 'Expose service to outside world via load balancer'
name: 'gcr.io/cloud-builders/kubectl'
args: [ 'apply', '-f', 'service-load-balancer.yaml' ]
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
options:
logging: CLOUD_LOGGING_ONLY
deployment.yaml with reference to config map (container is also in default namespace) is:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "java-kubernetes-clusters-test"
namespace: "default"
labels:
app: "java-kubernetes-clusters-test"
spec:
replicas: 3
selector:
matchLabels:
app: "java-kubernetes-clusters-test"
template:
metadata:
labels:
app: "java-kubernetes-clusters-test"
spec:
containers:
- name: "app"
image: "kubernetes-cluster-test-image"
env:
- name: CONFIG_KEY
valueFrom:
configMapKeyRef:
name: app-config
key: config-key
Spring Boot is not able to read the placeholder, and I'm obtaining an error when attempting to deploy the app to Google Cloud, like below:
I'm trying to reference the environment variable named CONFIG_KEY in application.properties:
com.example.dockerkubernetes.property.value=${CONFIG_KEY}
and then in Spring Boot controller:
@RestController
@RequiredArgsConstructor
public class MyController {
@Value("${com.example.dockerkubernetes.property.value}")
private String testValue;
public String getConfigMapTestKey() {
return this.testValue;
}
}
Does anyone have any idea why it doesn't work? Maybe some permissions are missing?
I would be grateful for help.
### Response: You can read properties from the environment, but that could require a bit of coding & refactoring (depending on the rest of your code) to import and use such values, e.g. into a Properties object.
If you want to use an application.properties file directly you're better off mounting the single file into the app's container by using SubPath.
Make sure you mount the file exactly as specified in the linked official docs or you are likely to get errors related to the container's FS permissions
| I'm configuring an app to work in a Kubernetes Google Cloud cluster. I'd like to pass parameters to application.properties in a Spring Boot application via a configMap. I'm trying to pass the value by environment variable.
I've created config map in google cloud Kubernetes cluster in namespace default like below:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
config-key: 12345789abde
kubectl create -f app-config.yaml -n default
configmap/app-config created
I'm checking if configMap has been created:
key value pair looks fine:
I'm trying to deploy the Spring Boot app using cloudbuild.yaml (it works when I do not use the configMap). The content is:
substitutions:
_CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
_CLOUDSDK_CONTAINER_CLUSTER: kubernetes-cluster-test # default value
steps:
- id: 'Build docker image'
name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA', '.']
- id: 'Push image to Container Registry'
name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA']
- id: 'Set image in yamls'
name: 'ubuntu'
args: ['bash','-c','sed -i "s,${_TECH_CONTAINER_IMAGE},gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA," deployment.yaml']
- id: 'Create or update cluster based on last docker image'
name: 'gcr.io/cloud-builders/kubectl'
args: ['apply', '-f', 'deployment.yaml']
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
- id: 'Expose service to outside world via load balancer'
name: 'gcr.io/cloud-builders/kubectl'
args: [ 'apply', '-f', 'service-load-balancer.yaml' ]
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
options:
logging: CLOUD_LOGGING_ONLY
deployment.yaml with reference to config map (container is also in default namespace) is:
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "java-kubernetes-clusters-test"
namespace: "default"
labels:
app: "java-kubernetes-clusters-test"
spec:
replicas: 3
selector:
matchLabels:
app: "java-kubernetes-clusters-test"
template:
metadata:
labels:
app: "java-kubernetes-clusters-test"
spec:
containers:
- name: "app"
image: "kubernetes-cluster-test-image"
env:
- name: CONFIG_KEY
valueFrom:
configMapKeyRef:
name: app-config
key: config-key
Spring Boot is not able to read the placeholder, and I'm obtaining an error when attempting to deploy the app to Google Cloud, like below:
I'm trying to reference the environment variable named CONFIG_KEY in application.properties:
com.example.dockerkubernetes.property.value=${CONFIG_KEY}
and then in Spring Boot controller:
@RestController
@RequiredArgsConstructor
public class MyController {
@Value("${com.example.dockerkubernetes.property.value}")
private String testValue;
public String getConfigMapTestKey() {
return this.testValue;
}
}
Does anyone have any idea why it doesn't work? Maybe some permissions are missing?
I would be grateful for help.
|
|
Disk pressure is a condition indicating that a node is using too much disk space or is using disk space too fast, according to the thresholds you have set in your Kubernetes configuration.
DaemonSet can deploy apps to multiple nodes in a single step. Like deployments, DaemonSets must be applied using kubectl before they can take effect.
Since Kubernetes runs on Linux, this is easily done by running the du command. You can either manually SSH into each Kubernetes node, or use a DaemonSet as follows:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: disk-checker
labels:
app: disk-checker
spec:
selector:
matchLabels:
app: disk-checker
template:
metadata:
labels:
app: disk-checker
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
- resources:
requests:
cpu: 0.15
securityContext:
privileged: true
image: busybox
imagePullPolicy: IfNotPresent
name: disk-checked
command: ["/bin/sh"]
args: ["-c", "du -a /host | sort -n -r | head -n 20"]
volumeMounts:
- name: host
mountPath: "/host"
volumes:
- name: host
hostPath:
path: "/"
Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold, check complete Node Conditions for more details.
Ways to set Kubelet options :
1)Command line options like --eviction-hard.
2)Config file.
3)More recent is dynamic configuration.
When you run into a node disk-pressure issue, your first suspects should be garbage-collection errors and log files. Of course, the better answer here is to clean up unused files (free up some disk space).
So Monitor your clusters and get notified of any node disks approaching pressure, and get the issue resolved before it starts killing other pods inside the cluster.
Edit :
AFAIK there is no magic trick to know the exact threshold of disk pressure. You need to start with reasonable values (limits & requests) and refine using trial and error.
Refer to this SO for more information on how to set the threshold of diskpressure.
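To make the thresholds explicit rather than relying on defaults, a kubelet config file can set them directly. A sketch (the values shown are the documented Linux defaults; adjust as needed):

```yaml
# KubeletConfiguration fragment: hard eviction thresholds.
# DiskPressure is reported once nodefs/imagefs cross these limits.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"      # free disk on the node's root filesystem
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"     # free disk on the image filesystem
```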
| ###Instruction: I know that the kubelet reports that the node is in diskpressure if there is not enough space on the node.
But I want to know the exact threshold of diskpressure.
Please point me to the kubelet source code related to this issue if you can.
I would also really appreciate any pointers to the official Kubernetes documentation or anything else.
Thanks again!!
### Response: Disk pressure is a condition indicating that a node is using too much disk space or is using disk space too fast, according to the thresholds you have set in your Kubernetes configuration.
DaemonSet can deploy apps to multiple nodes in a single step. Like deployments, DaemonSets must be applied using kubectl before they can take effect.
Since Kubernetes runs on Linux, this is easily done by running the du command. You can either manually SSH into each Kubernetes node, or use a DaemonSet as follows:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: disk-checker
labels:
app: disk-checker
spec:
selector:
matchLabels:
app: disk-checker
template:
metadata:
labels:
app: disk-checker
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
- resources:
requests:
cpu: 0.15
securityContext:
privileged: true
image: busybox
imagePullPolicy: IfNotPresent
name: disk-checked
command: ["/bin/sh"]
args: ["-c", "du -a /host | sort -n -r | head -n 20"]
volumeMounts:
- name: host
mountPath: "/host"
volumes:
- name: host
hostPath:
path: "/"
Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold, check complete Node Conditions for more details.
Ways to set Kubelet options :
1)Command line options like --eviction-hard.
2)Config file.
3)More recent is dynamic configuration.
When you run into a node disk-pressure issue, your first suspects should be garbage-collection errors and log files. Of course, the better answer here is to clean up unused files (free up some disk space).
So Monitor your clusters and get notified of any node disks approaching pressure, and get the issue resolved before it starts killing other pods inside the cluster.
Edit :
AFAIK there is no magic trick to know the exact threshold of disk pressure. You need to start with reasonable values (limits & requests) and refine using trial and error.
Refer to this SO for more information on how to set the threshold of diskpressure.
| I know that the kubelet reports that the node is in diskpressure if there is not enough space on the node.
But I want to know the exact threshold of diskpressure.
Please point me to the kubelet source code related to this issue if you can.
I would also really appreciate any pointers to the official Kubernetes documentation or anything else.
Thanks again!!
|
|
As @karthikeayan said, Delete and Terminate are the same. And yes, your pods got deleted because activeDeadlineSeconds was exceeded.
If your job hits an error and its restartPolicy is not Never, the pods the job created will be deleted.
restartPolicy can be set to OnFailure: this indicates that Kubernetes will restart the failing pod in place until the Job completes successfully, so the number of failed pods does not necessarily rise with each retry. To keep a failing loop from running forever, you can set activeDeadlineSeconds to a value.
You have already researched and gathered pretty good information. To find the logs of a deleted pod, follow this stack link; otherwise, the best approach is to have your logs centralized via logging agents or pushed directly into an external service, as suggested by @Jonas
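To tie the fields together, here is a hedged sketch of a Job spec using the settings discussed above (the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  activeDeadlineSeconds: 600     # running pods are terminated once this is exceeded
  backoffLimit: 4                # failed retries before the Job is marked Failed
  ttlSecondsAfterFinished: 3600  # optional: auto-clean the finished Job and its pods
  template:
    spec:
      restartPolicy: Never       # each failure creates a new pod, keeping old pods/logs around
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]
```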
| ###Instruction: I'm trying to troubleshoot an issue I'm having in kubernetes where after a job fails, the associated pod seemingly disappears and I can't view the logs. The job still exists though.
But that's not what my question is about. In reading through the documentation, it seemingly uses the terms "terminated" and "deleted" interchangably. This is leading me to be very confused. I would assume that terminated pods are not necessarily deleted, but the way the documentation is written, it implies that a terminated pod and a deleted pod are the same thing.
Example 1: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup
When a Job completes, no more Pods are created, but the Pods are usually not deleted either
"usually" then links to https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy which then describes the logic by which pods will be terminated. So here, a link to a section which purports to describe the logic by which pods will be deleted, instead describes the logic by which pods will be terminated, implying they are one and the same.
Example 2: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced
This section is titled "Forced Pod termination" and proceeds to explain what happens when you attempt to force delete a pod. Again implying that terminating and deleting a pod are one-and-the-same.
Example 3:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
This section, titled "Termination of Pods", describes what happens when the user requests deletion of a pod.
The job in question is failing due to DeadlineExceeded. The documentation states "Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded." If terminated and deleted mean the same thing, then that would explain why my pods are gone. I find that a strange design choice but it at least would explain my problem.
The kubernetes documentation asked me if the documentation was helpful, I said "no" and it told me to create a question on stack overflow, so that's what I'm doing :)
### Response: As @karthikeayan said, Delete and Terminate are the same. And yes, your pods got deleted because activeDeadlineSeconds was exceeded.
If your job hits an error and its restartPolicy is not Never, the pods the job created will be deleted.
restartPolicy can be set to OnFailure: this indicates that Kubernetes will restart the failing pod in place until the Job completes successfully, so the number of failed pods does not necessarily rise with each retry. To keep a failing loop from running forever, you can set activeDeadlineSeconds to a value.
You have already researched and gathered pretty good information. To find the logs of a deleted pod, follow this stack link; otherwise, the best approach is to have your logs centralized via logging agents or pushed directly into an external service, as suggested by @Jonas
| I'm trying to troubleshoot an issue I'm having in kubernetes where after a job fails, the associated pod seemingly disappears and I can't view the logs. The job still exists though.
But that's not what my question is about. In reading through the documentation, it seemingly uses the terms "terminated" and "deleted" interchangably. This is leading me to be very confused. I would assume that terminated pods are not necessarily deleted, but the way the documentation is written, it implies that a terminated pod and a deleted pod are the same thing.
Example 1: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup
When a Job completes, no more Pods are created, but the Pods are usually not deleted either
"usually" then links to https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy which then describes the logic by which pods will be terminated. So here, a link to a section which purports to describe the logic by which pods will be deleted, instead describes the logic by which pods will be terminated, implying they are one and the same.
Example 2: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced
This section is titled "Forced Pod termination" and proceeds to explain what happens when you attempt to force delete a pod. Again implying that terminating and deleting a pod are one-and-the-same.
Example 3:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
This section, titled "Termination of Pods", describes what happens when the user requests deletion of a pod.
The job in question is failing due to DeadlineExceeded. The documentation states "Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded." If terminated and deleted mean the same thing, then that would explain why my pods are gone. I find that a strange design choice but it at least would explain my problem.
The kubernetes documentation asked me if the documentation was helpful, I said "no" and it told me to create a question on stack overflow, so that's what I'm doing :)
|
|
Use another IP-range (private) for your cluster, like 10.10.x.x.
Do not use the network on which your nodes are running.
--pod-network-cidr=10.10.10.0/24
This creates a range of IP addresses (254 usable in a /24) that can be assigned to pods when they start, the so-called cluster IPs.
You also need a network fabric like flannel or calico.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml
Otherwise you won't be able to connect to your cluster.
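One caveat worth noting: the stock kube-flannel.yml assumes the pod network 10.244.0.0/16 in its net-conf.json, so if you init with a different --pod-network-cidr you need to edit that ConfigMap to match. A sketch (10.10.0.0/16 is an example value):

```yaml
# Fragment of the kube-flannel ConfigMap, adjusted to the CIDR
# that was passed to kubeadm init.
net-conf.json: |
  {
    "Network": "10.10.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```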
| ###Instruction: I am learning kubernetes in my home network. which is configured like this:
-192.168.1.1(router)
-192.168.1.30(ubuntu machine 1, master node)
-192.168.1.71(ubuntu machine 2, worker node)
My router ip 192.168.1.1 can not be modified. When I execute sudo kubeadm init --pod-network-cidr=192.168.1.0/24 on the master node, the master node's iptables adds cni0 like below, and other nodes inside the network (like ubuntu machine 2) become unreachable
361: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 32:61:29:d2:1d:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::3061:29ff:fed2:1d16/64 scope link
valid_lft forever preferred_lft forever
what could be the problem? thanks in advance
### Response: Use another IP-range (private) for your cluster, like 10.10.x.x.
Do not use the network on which your nodes are running.
--pod-network-cidr=10.10.10.0/24
This creates a range of IP addresses (254 usable in a /24) that can be assigned to pods when they start, the so-called cluster IPs.
You also need a network fabric like flannel or calico.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml
Otherwise you won't be able to connect to your cluster.
| I am learning kubernetes in my home network. which is configured like this:
-192.168.1.1(router)
-192.168.1.30(ubuntu machine 1, master node)
-192.168.1.71(ubuntu machine 2, worker node)
My router ip 192.168.1.1 can not be modified. When I execute sudo kubeadm init --pod-network-cidr=192.168.1.0/24 on the master node, the master node's iptables adds cni0 like below, and other nodes inside the network (like ubuntu machine 2) become unreachable
361: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 32:61:29:d2:1d:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::3061:29ff:fed2:1d16/64 scope link
valid_lft forever preferred_lft forever
what could be the problem? thanks in advance
|
|
I also made a github issue with the same question here:
https://github.com/kubernetes/ingress-nginx/issues/9402
And the discussion on the Kubernetes Slack took me to the following link: https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies
where he says that: "if you have a service receiving external traffic from an LB (using NodePorts), you almost always want to use externalTrafficPolicy: Local (with pod anti-affinity to alleviate imbalanced traffic)."
I didn't find the way how to preserve client IP without external LB or Proxy so this looks like the only possible solution. If you find any better, please comment.
For more info see the discussion on the Kubernetes Slack: https://kubernetes.slack.com/archives/CANQGM8BA/p1670925074384859
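For reference, with the ingress-nginx Helm chart this is a one-line values override. A sketch (the value name follows the chart's values.yaml):

```yaml
# helm values for the ingress-nginx chart
controller:
  service:
    externalTrafficPolicy: Local   # preserve the real client source IP
```

Pair this with pod anti-affinity on the controller pods, as the quoted article suggests, to spread replicas across nodes and reduce traffic imbalance.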
| ###Instruction: I have AKS with nginx load balancer (ingress controller) installed with helm as the entry point to my cluster.
I want to preserve the real IP addresses from clients for further whitelisting with nginx.ingress.kubernetes.io/whitelist-source-range on certain paths of specific services.
I found the solution here:
nginx ingress controller forward source ip
which works, but also I found in certain blogs and in the Kubernetes documentation, that setting externalTrafficPolicy=Local is not good for proper load balancing.
I tried to set different variations of settings to my nginx configmap, but with no luck.
I also found, that it can be done somehow with X-Forwarded-For header, but I couldn't resolve it.
### Response: I also made a github issue with the same question here:
https://github.com/kubernetes/ingress-nginx/issues/9402
And the discussion on the Kubernetes Slack took me to the following link: https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies
where he says that: "if you have a service receiving external traffic from an LB (using NodePorts), you almost always want to use externalTrafficPolicy: Local (with pod anti-affinity to alleviate imbalanced traffic)."
I didn't find the way how to preserve client IP without external LB or Proxy so this looks like the only possible solution. If you find any better, please comment.
For more info see the discussion on the Kubernetes Slack: https://kubernetes.slack.com/archives/CANQGM8BA/p1670925074384859
| I have AKS with nginx load balancer (ingress controller) installed with helm as the entry point to my cluster.
I want to preserve the real IP addresses from clients for further whitelisting with nginx.ingress.kubernetes.io/whitelist-source-range on certain paths of specific services.
I found the solution here:
nginx ingress controller forward source ip
which works, but also I found in certain blogs and in the Kubernetes documentation, that setting externalTrafficPolicy=Local is not good for proper load balancing.
I tried to set different variations of settings to my nginx configmap, but with no luck.
I also found, that it can be done somehow with X-Forwarded-For header, but I couldn't resolve it.
|
|
OP asks for a way for all pods to run on a pre-defined set of nodes. So, adding to @yee379, you may define a patch to be applied to multiple generic resources at once like this:
patches:
- patch: |-
kind: not-important
metadata:
name: not-important
spec:
template:
spec:
nodeSelector:
key: value
target:
kind: (StatefulSet|Deployment|Job)
# ... Include more resources
# If you'd like to include Pods specifically:
- patch: |-
kind: not-important
metadata:
name: not-important
spec:
nodeSelector:
key: value
target:
kind: Pod
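For context, a complete kustomization.yaml embedding such a patch could look like the sketch below; the resource file names and the disktype label are assumptions.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - statefulset.yaml
patches:
  - patch: |-
      kind: not-important
      metadata:
        name: not-important
      spec:
        template:
          spec:
            nodeSelector:
              disktype: ssd   # any label carried by your target nodes
    target:
      kind: (StatefulSet|Deployment)
```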
| ###Instruction: It would be useful for me to be able to define at a higher level a way to determine a set of nodes that a set of pods should run ontop of.
Is there a way to use kustomize so that i can specify what nodeSelectors a deployment should have?
### Response: The OP asks for a way to make all pods run on a pre-defined set of nodes. So, adding to @yee379's answer, you may define a patch that applies to multiple resource kinds at once, like this:
patches:
- patch: |-
kind: not-important
metadata:
name: not-important
spec:
template:
spec:
nodeSelector:
key: value
target:
kind: (StatefulSet|Deployment|Job)
# ... Include more resources
# If you'd like to include Pods specifically:
- patch: |-
kind: not-important
metadata:
name: not-important
spec:
nodeSelector:
key: value
target:
kind: Pod
| It would be useful for me to be able to define at a higher level a way to determine a set of nodes that a set of pods should run ontop of.
Is there a way to use kustomize so that i can specify what nodeSelectors a deployment should have?
|
|
Yes, Grafana supports MySQL and PostgreSQL as its backend database.
https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#database
You can set the value at grafana.ini
https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L682
grafana.ini:
database:
type: mysql
host: ${host}
user: ${username}
password: ${password}
If you prefer to set environment variables instead:
GF_DATABASE_TYPE=mysql
GF_DATABASE_HOST=host
GF_DATABASE_USER=username
GF_DATABASE_PASSWORD=password
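Putting it together in the chart's values, a hedged sketch might look like this; the grafana-db Secret name is invented here, and the chart's envFromSecret key injects all of that Secret's entries as environment variables.

```yaml
# values.yaml sketch — host and names are placeholders
grafana.ini:
  database:
    type: mysql
    host: mysql.default.svc.cluster.local:3306
    name: grafana
# Credentials come from a pre-created Secret holding the
# GF_DATABASE_USER and GF_DATABASE_PASSWORD keys.
envFromSecret: grafana-db
```

This keeps the password out of the committed values file.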
| ###Instruction: I am trying to setup grafana on kubernetes and need to use mysql as its backend to store dashboards and its config instead of persistent volume.
Is there any way around to achieve it?
Tried grafana helm charts expecting any variable to use mysql as backend but could not find any.
### Response: Yes, Grafana supports MySQL and PostgreSQL as its backend database.
https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#database
You can set the value at grafana.ini
https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L682
grafana.ini:
database:
type: mysql
host: ${host}
user: ${username}
password: ${password}
If you prefer to set environment variables instead:
GF_DATABASE_TYPE=mysql
GF_DATABASE_HOST=host
GF_DATABASE_USER=username
GF_DATABASE_PASSWORD=password
| I am trying to setup grafana on kubernetes and need to use mysql as its backend to store dashboards and its config instead of persistent volume.
Is there any way around to achieve it?
Tried grafana helm charts expecting any variable to use mysql as backend but could not find any.
|
|
Adding the following to the EKS module will fix the issue:
node_security_group_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = null
}
Ref:
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1986
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1810
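In context, the setting goes inside the module block; the module source and version below are illustrative.

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"   # version pinned here is an assumption

  cluster_name = var.cluster_name
  # ... other settings ...

  # Setting the tag value to null keeps the kubernetes.io/cluster/<name>
  # tag off the node security group, so the AWS cloud provider finds
  # exactly one tagged security group per instance.
  node_security_group_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = null
  }
}
```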
| ###Instruction: Warning SyncLoadBalancerFailed 54s (x4 over 91s) service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-05f3a11329a20bb93; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
Normal EnsuringLoadBalancer 14s (x5 over 94s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 13s service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-046c2cc46714af250; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
I created the cluster with this code: eks using terraform
### Response: Adding the following to the EKS module will fix the issue:
node_security_group_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = null
}
Ref:
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1986
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1810
| Warning SyncLoadBalancerFailed 54s (x4 over 91s) service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-05f3a11329a20bb93; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
Normal EnsuringLoadBalancer 14s (x5 over 94s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 13s service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-046c2cc46714af250; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
I created the cluster with this code: eks using terraform
|
|
I found that when using a StatefulSet instead of a Deployment, it's possible to inject the deterministic Pod name as an environment variable.
...
containers:
...
env:
- name: "POD_NAME"
valueFrom:
fieldRef:
fieldPath: metadata.name
The POD_NAME variable then has a value like pod-0, pod-1, and so on,
so we can derive the instance number from that ordinal.
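Inside the app, a small sketch like the following (the podOrdinal helper is invented here) can derive the ordinal and gate the scheduler so that only replica 0 runs the cron jobs:

```javascript
// Extract the ordinal from a StatefulSet pod name such as "myapp-1".
function podOrdinal(podName) {
  const match = /-(\d+)$/.exec(podName || "");
  return match ? Number(match[1]) : null;
}

const ordinal = podOrdinal(process.env.POD_NAME);
if (ordinal === 0) {
  // Only the first replica schedules jobs, e.g. with node-cron:
  // cron.schedule("*/5 * * * *", runJob);
}
```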
| ###Instruction: When we use PM2 in cluster mode, we can find out the instance number inside node app using process.env.NODE_APP_INSTANCE, but how we can find that inside a kubernetes cluster without pm2. I'm looking for similar like find replica instance number or etc.
Imagine a node app with 2 or more replicas and we need to run node-cron scheduler only inside one of pods.
### Response: I found that when using a StatefulSet instead of a Deployment, it's possible to inject the deterministic Pod name as an environment variable.
...
containers:
...
env:
- name: "POD_NAME"
valueFrom:
fieldRef:
fieldPath: metadata.name
The POD_NAME variable then has a value like pod-0, pod-1, and so on,
so we can derive the instance number from that ordinal.
| When we use PM2 in cluster mode, we can find out the instance number inside node app using process.env.NODE_APP_INSTANCE, but how we can find that inside a kubernetes cluster without pm2. I'm looking for similar like find replica instance number or etc.
Imagine a node app with 2 or more replicas and we need to run node-cron scheduler only inside one of pods.
|
|
I found https://github.com/istio/istio/issues/22290#issuecomment-1317595537 which shows how to write a custom EnvoyFilter to do path rewriting and adapted it to my needs. I'm not at all sure if the directives to specify how and where the filter should be applied are correct, but it at least does the prefix rewrite as an inbound rule.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: myapp-rewrite-filter
spec:
workloadSelector:
labels:
app: myapp
configPatches:
# The first patch adds the lua filter to the listener/http connection manager
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value: # lua filter specification
name: envoy.filters.http.lua
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
inlineCode: |
function remove_prefix(s, prefix)
return (s:sub(0, #prefix) == prefix) and s:sub(#prefix+1) or s
end
function envoy_on_request(request_handle)
local filter_name = "SIMPLE_PATH_REWRITER"
request_handle:logDebug(filter_name .. ": Starting")
local prefix = "/foo"
local path = request_handle:headers():get(":path")
local unprefixed = remove_prefix(path, prefix)
request_handle:headers():replace(":path", unprefixed)
local msg = filter_name .. ": Path " .. path .. " rewritten to " .. unprefixed
request_handle:logDebug(msg)
end
| ###Instruction: I need the URI of requests that reach my myapp pods to be rewritten to remove the prefix /foo from the path. For example, a URI /foo/bar should be received as /bar. I am using a GCP load balancer that routes traffic directly to pods. I am not using Istio ingress, so Istio has no control over the load balancer's behavior.
I tried creating a VirtualService to handle the path rewrite:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: myapp-route
spec:
hosts:
- myapp
http:
- match:
- uri:
prefix: "/foo/"
rewrite:
uri: "/"
route:
- destination:
host: myapp
(This may not be exactly correct as I adapted/simplified what I tried for the question.)
This works when sending requests from a pod with an Istio sidecar to the myapp service, but not from the load balancer. I can see the URI is being rewritten as it goes out from any other pod, not when it's coming into a myapp pod.
How can I get URI rewriting as an incoming rule?
### Response: I found https://github.com/istio/istio/issues/22290#issuecomment-1317595537 which shows how to write a custom EnvoyFilter to do path rewriting and adapted it to my needs. I'm not at all sure if the directives to specify how and where the filter should be applied are correct, but it at least does the prefix rewrite as an inbound rule.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: myapp-rewrite-filter
spec:
workloadSelector:
labels:
app: myapp
configPatches:
# The first patch adds the lua filter to the listener/http connection manager
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value: # lua filter specification
name: envoy.filters.http.lua
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
inlineCode: |
function remove_prefix(s, prefix)
return (s:sub(0, #prefix) == prefix) and s:sub(#prefix+1) or s
end
function envoy_on_request(request_handle)
local filter_name = "SIMPLE_PATH_REWRITER"
request_handle:logDebug(filter_name .. ": Starting")
local prefix = "/foo"
local path = request_handle:headers():get(":path")
local unprefixed = remove_prefix(path, prefix)
request_handle:headers():replace(":path", unprefixed)
local msg = filter_name .. ": Path " .. path .. " rewritten to " .. unprefixed
request_handle:logDebug(msg)
end
| I need the URI of requests that reach my myapp pods to be rewritten to remove the prefix /foo from the path. For example, a URI /foo/bar should be received as /bar. I am using a GCP load balancer that routes traffic directly to pods. I am not using Istio ingress, so Istio has no control over the load balancer's behavior.
I tried creating a VirtualService to handle the path rewrite:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: myapp-route
spec:
hosts:
- myapp
http:
- match:
- uri:
prefix: "/foo/"
rewrite:
uri: "/"
route:
- destination:
host: myapp
(This may not be exactly correct as I adapted/simplified what I tried for the question.)
This works when sending requests from a pod with an Istio sidecar to the myapp service, but not from the load balancer. I can see the URI is being rewritten as it goes out from any other pod, not when it's coming into a myapp pod.
How can I get URI rewriting as an incoming rule?
|
|
Specify a single IP using CIDR notation: your-ip/32 (for example, 192.168.10.0/32).
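Assuming MetalLB v0.13+ with its CRD-based configuration, a single-address pool could be sketched like this; the IP is a placeholder for your node's public address:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: single-ip
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32   # exactly one usable address
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: single-ip-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - single-ip
```

Any Service of type LoadBalancer will then be assigned that one address.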
| ###Instruction: I am following this guide to expose a service running on my bare metal k8s cluster to the world.
The guide suggests using metallb for giving external access. The problem is, during the setup process of metallb, I am asked to give a range of available IP addresses.
The hosting provider I am using is very basic, and all I have is the IP address of the Linux instance that is running my K8s node. So my question is, how can I provision an IP address for assigning to my application? Is this possible with a single IP?
Alternatively I'd love get this done with a NodePort, but I need to support HTTPS traffic and I am not sure its possible if I go that way.
### Response: Specify a single IP using CIDR notation: your-ip/32 (for example, 192.168.10.0/32).
| I am following this guide to expose a service running on my bare metal k8s cluster to the world.
The guide suggests using metallb for giving external access. The problem is, during the setup process of metallb, I am asked to give a range of available IP addresses.
The hosting provider I am using is very basic, and all I have is the IP address of the Linux instance that is running my K8s node. So my question is, how can I provision an IP address for assigning to my application? Is this possible with a single IP?
Alternatively I'd love get this done with a NodePort, but I need to support HTTPS traffic and I am not sure its possible if I go that way.
|
|
Let's simplify labels, selectors and template labels first.
The Labels in the metadata section are assigned to the deployment itself.
The Labels in the .spec.template section are assigned to the pods created by the deployment. These are actually called PodTemplate labels.
The selectors provide uniqueness to your resource. It is used to identify the resources that match the labels in .spec.selector.matchLabels section.
Now, it is not mandatory for matchLabels to list all the PodTemplate labels. A pod can have many labels; it only needs to carry the ones listed in matchLabels for the selector to identify it. Here's a use case to understand why it is used:
"Let’s say you have deployment X which creates two pods with label nginx-pods and image nginx and another deployment Y which applies to the pods with the same label nginx-pods but uses images nginx:alpine. If deployment X is running and you run deployment Y after, it will not create new pods, but instead, it will replace the existing pods with nginx:alpine image. Both deployment will identify the pods as the labels in the pods matches the labels in both of the deployments .spec.selector.matchLabels"
| ###Instruction: I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
I might be missing some key understanding of k8s or yamls.
Having the template with no label, and it seems to work, but I don't understand why. Kubernetes could be auto-magically inserting the labels.
### Response: Let's simplify labels, selectors and template labels first.
The Labels in the metadata section are assigned to the deployment itself.
The Labels in the .spec.template section are assigned to the pods created by the deployment. These are actually called PodTemplate labels.
The selectors provide uniqueness to your resource. It is used to identify the resources that match the labels in .spec.selector.matchLabels section.
Now, it is not mandatory for matchLabels to list all the PodTemplate labels. A pod can have many labels; it only needs to carry the ones listed in matchLabels for the selector to identify it. Here's a use case to understand why it is used:
"Let’s say you have deployment X which creates two pods with label nginx-pods and image nginx and another deployment Y which applies to the pods with the same label nginx-pods but uses images nginx:alpine. If deployment X is running and you run deployment Y after, it will not create new pods, but instead, it will replace the existing pods with nginx:alpine image. Both deployment will identify the pods as the labels in the pods matches the labels in both of the deployments .spec.selector.matchLabels"
| I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
I might be missing some key understanding of k8s or yamls.
Having the template with no label, and it seems to work, but I don't understand why. Kubernetes could be auto-magically inserting the labels.
|
|
This error occurs because you don't have a PersistentVolume.
A PVC needs a PV with at least the same capacity as the PVC.
This can be created manually or by a volume provisioner.
The easiest way, some would say, is to use the local storageClass, which uses disk space from the node where the pod is scheduled and adds an affinity so that the pod always starts on the same node and can use the volume on that disk. In your case you are using 3 replicas. Although it is possible to start all 3 instances on the same node, this is most likely not what you want to achieve with Kubernetes: if that node breaks, you won't have any instance running on another node.
First you need to think about the infrastructure of your cluster. Where should the data of the volumes be stored?
A Network File System (NFS) might be a good solution.
In this case you have an NFS somewhere in your infrastructure and all the nodes can reach it,
so you can create a PV which is accessible from all your nodes.
To avoid always allocating PVs manually, you can install a volume provisioner inside your cluster.
I use this one in some clusters:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
As I said, you must already have an NFS and configure the provisioner's deployment with its path.
It looks like this:
# patch_nfs_details.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nfs-client-provisioner
name: nfs-client-provisioner
spec:
template:
spec:
containers:
- name: nfs-client-provisioner
env:
- name: NFS_SERVER
value: <YOUR_NFS_SERVER_IP>
- name: NFS_PATH
value: <YOUR_NFS_SERVER_SHARE>
volumes:
- name: nfs-client-root
nfs:
server: <YOUR_NFS_SERVER_IP>
path: <YOUR_NFS_SERVER_SHARE>
If you use an NFS without a provisioner, you need to define a StorageClass which is linked to your NFS.
There are a lot of solutions for holding persistent volumes.
Here you can find a list of StorageClasses:
https://kubernetes.io/docs/concepts/storage/storage-classes/
In the end it also depends on where your cluster is provisioned, if you are not managing it yourself.
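If you go the manual route instead of installing a provisioner, a hand-created NFS-backed PV for one etcd replica might look like this sketch; the server address and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: milvus-etcd-pv-0
spec:
  capacity:
    storage: 10Gi               # at least the size the PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5            # your NFS server
    path: /exports/milvus/etcd-0
```

You would need one such PV per replica of the StatefulSet.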
| ###Instruction: etcd:
  enabled: true
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.0-r7"
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380
  auth:
    rbac:
      enabled: false
  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 10Gi
  # Enable auto compaction, compaction by every 1000 revisions
  autoCompactionMode: revision
  autoCompactionRetention: "1000"
  # Increase default quota to 4G
  extraEnvVars:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "4294967296"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "500"
    - name: ETCD_ELECTION_TIMEOUT
      value: "2500"

# Configuration values for the pulsar dependency
# ref: https://github.com/apache/pulsar-helm-chart
I am trying to run the Milvus cluster using Kubernetes on an Ubuntu server.
I used the Helm manifest https://milvus-io.github.io/milvus-helm/
Values.yaml
https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml
I checked the PersistentVolumeClaim and there was an error:
no persistent volumes available for this claim and no storage class is set
### Response: This error occurs because you don't have a PersistentVolume.
A PVC needs a PV with at least the same capacity as the PVC.
This can be created manually or by a volume provisioner.
The easiest way, some would say, is to use the local storageClass, which uses disk space from the node where the pod is scheduled and adds an affinity so that the pod always starts on the same node and can use the volume on that disk. In your case you are using 3 replicas. Although it is possible to start all 3 instances on the same node, this is most likely not what you want to achieve with Kubernetes: if that node breaks, you won't have any instance running on another node.
First you need to think about the infrastructure of your cluster. Where should the data of the volumes be stored?
A Network File System (NFS) might be a good solution.
In this case you have an NFS somewhere in your infrastructure and all the nodes can reach it,
so you can create a PV which is accessible from all your nodes.
To avoid always allocating PVs manually, you can install a volume provisioner inside your cluster.
I use this one in some clusters:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
As I said, you must already have an NFS and configure the provisioner's deployment with its path.
It looks like this:
# patch_nfs_details.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nfs-client-provisioner
name: nfs-client-provisioner
spec:
template:
spec:
containers:
- name: nfs-client-provisioner
env:
- name: NFS_SERVER
value: <YOUR_NFS_SERVER_IP>
- name: NFS_PATH
value: <YOUR_NFS_SERVER_SHARE>
volumes:
- name: nfs-client-root
nfs:
server: <YOUR_NFS_SERVER_IP>
path: <YOUR_NFS_SERVER_SHARE>
If you use an NFS without a provisioner, you need to define a StorageClass which is linked to your NFS.
There are a lot of solutions for holding persistent volumes.
Here you can find a list of StorageClasses:
https://kubernetes.io/docs/concepts/storage/storage-classes/
In the end it also depends on where your cluster is provisioned, if you are not managing it yourself.
| etcd:
  enabled: true
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.0-r7"
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380
  auth:
    rbac:
      enabled: false
  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 10Gi
  # Enable auto compaction, compaction by every 1000 revisions
  autoCompactionMode: revision
  autoCompactionRetention: "1000"
  # Increase default quota to 4G
  extraEnvVars:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "4294967296"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "500"
    - name: ETCD_ELECTION_TIMEOUT
      value: "2500"

# Configuration values for the pulsar dependency
# ref: https://github.com/apache/pulsar-helm-chart
I am trying to run the Milvus cluster using Kubernetes on an Ubuntu server.
I used the Helm manifest https://milvus-io.github.io/milvus-helm/
Values.yaml
https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml
I checked the PersistentVolumeClaim and there was an error:
no persistent volumes available for this claim and no storage class is set
|
|
Technically, the parameter matchLabels decides on which Pods belongs to the given Deployment (and the underlying ReplicaSet). In practice, I have never seen a Deployment with different labels than matchLabels. So, the reason might be the uniformity between other Kubernetes resources (like Service where the matchLabels makes more sense).
I recommend reading the blog post matchLabels, labels, and selectors explained in detail, for beginners.
| ###Instruction: I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
I might be missing some key understanding of k8s or yamls.
Having the template with no label, and it seems to work, but I don't understand why. Kubernetes could be auto-magically inserting the labels.
### Response: Technically, the parameter matchLabels decides on which Pods belongs to the given Deployment (and the underlying ReplicaSet). In practice, I have never seen a Deployment with different labels than matchLabels. So, the reason might be the uniformity between other Kubernetes resources (like Service where the matchLabels makes more sense).
I recommend reading the blog post matchLabels, labels, and selectors explained in detail, for beginners.
| I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
I might be missing some key understanding of k8s or yamls.
Having the template with no label, and it seems to work, but I don't understand why. Kubernetes could be auto-magically inserting the labels.
|
|
There is no way to promote it. You can change labels and such, but instead of that you can create a new Deployment and delete the existing Pod.
As an easy first step, dump the spec of the existing running Pod to a YAML file:
kubectl get pod <POD name> -o yaml > pod-spec.yaml
Create deployment spec YAML file now
kubectl create deployment deploymentname --image=imagename --dry-run=client -o yaml > deployment-spec.yaml
Edit the deployment-spec.yaml file and, with pod-spec.yaml open in another tab, copy the spec part from the Pod file into the new Deployment file.
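Schematically, the merged deployment-spec.yaml ends up looking like this sketch, using the names from the book's example; the container spec is the part pasted from pod-spec.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpaca-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: alpaca
  template:
    metadata:
      labels:
        app: alpaca          # keep these in sync with any Service selector
        ver: "1"
        env: prod
    spec:
      containers:            # <- copied from pod-spec.yaml
        - name: alpaca-prod
          image: gcr.io/kuar-demo/kuard-amd64:blue
          ports:
            - containerPort: 8080
```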
Once deployment-spec.yaml is ready, you can apply it. If a Service targets these pods, make sure its selector labels match the Deployment's pod labels:
kubectl apply -f deployment-spec.yaml
Delete the single running POD
kubectl delete pod <POD name>
| ###Instruction: I'm running the example in chapter "Service Discovery" of the book "Kubernetes up and running". The original command to run a deployment is kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod", however in K8s version 1.25, the --replicate parameter in run command is not supported any more. I planned to run without replica and then use "kubectl scale" to scale the deployment later. Problem is the run command only creates a pod, not a deployment (the scale command expects a deployment). So how do i promote my pod to a deployment, my kubernetes verion is 1.25?
### Response: There is no way to promote it. You can change labels and such, but instead of that you can create a new Deployment and delete the existing Pod.
As an easy first step, dump the spec of the existing running Pod to a YAML file:
kubectl get pod <POD name> -o yaml > pod-spec.yaml
Create deployment spec YAML file now
kubectl create deployment deploymentname --image=imagename --dry-run=client -o yaml > deployment-spec.yaml
Edit the deployment-spec.yaml file and, with pod-spec.yaml open in another tab, copy the spec part from the Pod file into the new Deployment file.
Once deployment-spec.yaml is ready, you can apply it. If a Service targets these pods, make sure its selector labels match the Deployment's pod labels:
kubectl apply -f deployment-spec.yaml
Delete the single running POD
kubectl delete pod <POD name>
| I'm running the example in chapter "Service Discovery" of the book "Kubernetes up and running". The original command to run a deployment is kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod", however in K8s version 1.25, the --replicate parameter in run command is not supported any more. I planned to run without replica and then use "kubectl scale" to scale the deployment later. Problem is the run command only creates a pod, not a deployment (the scale command expects a deployment). So how do i promote my pod to a deployment, my kubernetes verion is 1.25?
|
|
GitLab 15.7 (December 2022) suggests an alternative approach, which does not involve creating a new agent per project
Share CI/CD access to the agent within a personal namespace
The GitLab agent for Kubernetes provides a more secure solution for managing your clusters with GitLab CI/CD.
You can use a single agent with multiple projects and groups by sharing access
to the agent connection. In previous releases, you could not share access with personal
namespaces. This release adds support for CI/CD connection sharing to personal namespaces.
You can now use a single agent from any of the projects under your personal namespace.
See Documentation and Issue.
That means from a single agent, you should be able to access an agent connection from multiple projects (personal namespace or not).
| ###Instruction: I have Gitlab Kubernetes integration in my Project 1 and I am able to access the kube-context within that project's pipelines without any issues.
I have another project, Project 2 where I want to use the same cluster that I integrated in my Project 1.
This i my agent config file:
# .gitlab/agents/my-agent/config.yaml
ci_access:
projects:
- id: group/project-2
When I try to add a Kubernetes cluster in my Project 2, I am expecting to see the cluster name that I set up for Project 1 in the dropdown, but I don't see it:
### Response: GitLab 15.7 (December 2022) suggests an alternative approach, which does not involve creating a new agent per project
Share CI/CD access to the agent within a personal namespace
The GitLab agent for Kubernetes provides a more secure solution for managing your clusters with GitLab CI/CD.
You can use a single agent with multiple projects and groups by sharing access
to the agent connection. In previous releases, you could not share access with personal
namespaces. This release adds support for CI/CD connection sharing to personal namespaces.
You can now use a single agent from any of the projects under your personal namespace.
See Documentation and Issue.
That means from a single agent, you should be able to access an agent connection from multiple projects (personal namespace or not).
| I have Gitlab Kubernetes integration in my Project 1 and I am able to access the kube-context within that project's pipelines without any issues.
I have another project, Project 2 where I want to use the same cluster that I integrated in my Project 1.
This i my agent config file:
# .gitlab/agents/my-agent/config.yaml
ci_access:
projects:
- id: group/project-2
When I try to add a Kubernetes cluster in my Project 2, I am expecting to see the cluster name that I set up for Project 1 in the dropdown, but I don't see it:
|
|
The NodePort is exposed on your Kubernetes node,
so accessing it from your local machine via localhost won't work.
Use http://masternodeipaddress:nodeport.
You can add a host entry in your local hosts file like:
masternodeipaddress mytestdomain.local
then access http://mytestdomain.local:nodeport.
Update
With Service type LoadBalancer, get the IP from your master node.
I tried microk8s enable host-access, but had issues, so I put the node's IP address in externalIPs, as below.
From the browser: http://masternodeipaddress.
deployment.yaml
---
apiVersion: v1
kind: Service
metadata:
name: app
spec:
ports:
- name: tcp
port: 80
protocol: TCP
targetPort: 80
externalIPs:
- 10.165.39.165 # master node ip
selector:
app: app
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app
name: app
spec:
replicas: 1
selector:
matchLabels:
app: app
strategy:
type: Recreate
template:
metadata:
labels:
app: app
spec:
containers:
- image: nginx
name: app
ports:
- containerPort: 80
protocol: TCP
restartPolicy: Always
with nodePort
apiVersion: v1
kind: Service
metadata:
name: app
spec:
ports:
- name: tcp
port: 80
protocol: TCP
targetPort: 80
nodePort: 31000 # optional, use kubectl get svc -o wide if you leave it blank
externalIPs:
- 10.165.39.165 # master node ip
selector:
app: app
type: NodePort
To set a domain local:
sudo vi /etc/hosts
add a line with masternode ip like:
10.165.39.165 mydomain.local
Now you can access in your browser with
http://mydomain.local
Note:
I have tested this with:
Ubuntu 22.04
multipass 3 nodes with microk8s 1.24
in microk8s dns ingress enabled.
| ###Instruction: I am trying to deploy a pod in kubernetes and link a service to it so I can access it via web browser.
The pod specification is the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: visualizer-nautic
labels:
app: visualizer
spec:
replicas: 1
selector:
matchLabels:
app: visualizer
template:
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: zenoh-network
labels:
app: visualizer
spec:
containers:
- name: api
image: hielito/visualizer:latest
ports:
- containerPort: 8100
It deploys a web app, its frontend is exposed at port 8100, so then I add a NodePort service to make it visible in my network:
apiVersion: v1
kind: Service
metadata:
name: visualizer-service
spec:
type: NodePort
selector:
app: visualizer
ports:
- port: 8100
targetPort: 8100
nodePort: 30012
Once I have this service deployed, I try to access localhost:30012, but the web browser just gets stuck loading and eventually throws a timeout error.
If anyone is wondering, the app deployed is working fine, if I execute a port-forward (microk8s kubectl port-forward pods/visualizer-nautic-77957b94c9-mww6p 8100:8100 ) I can then access the app working perfectly at localhost:8100.
EDIT: I forgot to add that I am using multus CNI add-on and have a MACVLAN network interface added to the pod. When I remove the additional CNI it works fine again, so the MACVLAN network might be interfering with the service.
EDIT2: As requested, I leave specs and versions here:
OS - Ubuntu 20.04 LTS
Kubernetes - Microk8s 1.23
### Response: The NodePort is opened on your Kubernetes node.
Accessing it via localhost from your local machine won't work.
Use http://masternodeipaddress:nodeport instead.
You can add a hostentry in your local hosts file like:
masternodeipaddress mytestdomain.local
then access http://mytestdomain.local:nodeport.
Update
With service type LoadBalancer.
Get the IP from your master node.
I tried microk8s enable host-access, but had issues, so I put the node's IP address in externalIPs, as below.
From the browser, use http://masternodeipaddress.
deployment.yaml
---
apiVersion: v1
kind: Service
metadata:
name: app
spec:
ports:
- name: tcp
port: 80
protocol: TCP
targetPort: 80
externalIPs:
- 10.165.39.165 # master node ip
selector:
app: app
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app
name: app
spec:
replicas: 1
selector:
matchLabels:
app: app
strategy:
type: Recreate
template:
metadata:
labels:
app: app
spec:
containers:
- image: nginx
name: app
ports:
- containerPort: 80
protocol: TCP
restartPolicy: Always
with nodePort
apiVersion: v1
kind: Service
metadata:
name: app
spec:
ports:
- name: tcp
port: 80
protocol: TCP
targetPort: 80
nodePort: 31000 # optional, use kubectl get svc -o wide if you leave it blank
externalIPs:
- 10.165.39.165 # master node ip
selector:
app: app
type: NodePort
To set a domain local:
sudo vi /etc/hosts
add a line with masternode ip like:
10.165.39.165 mydomain.local
Now you can access in your browser with
http://mydomain.local
Note:
I have tested this with:
Ubuntu 22.04
multipass 3 nodes with microk8s 1.24
in microk8s dns ingress enabled.
| I am trying to deploy a pod in kubernetes and link a service to it so I can access it via web browser.
The pod specification is the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: visualizer-nautic
labels:
app: visualizer
spec:
replicas: 1
selector:
matchLabels:
app: visualizer
template:
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: zenoh-network
labels:
app: visualizer
spec:
containers:
- name: api
image: hielito/visualizer:latest
ports:
- containerPort: 8100
It deploys a web app, its frontend is exposed at port 8100, so then I add a NodePort service to make it visible in my network:
apiVersion: v1
kind: Service
metadata:
name: visualizer-service
spec:
type: NodePort
selector:
app: visualizer
ports:
- port: 8100
targetPort: 8100
nodePort: 30012
Once I have this service deployed, I try to access localhost:30012, but the web browser just gets stuck loading and eventually throws a timeout error.
If anyone is wondering, the app deployed is working fine, if I execute a port-forward (microk8s kubectl port-forward pods/visualizer-nautic-77957b94c9-mww6p 8100:8100 ) I can then access the app working perfectly at localhost:8100.
EDIT: I forgot to add that I am using multus CNI add-on and have a MACVLAN network interface added to the pod. When I remove the additional CNI it works fine again, so the MACVLAN network might be interfering with the service.
EDIT2: As requested, I leave specs and versions here:
OS - Ubuntu 20.04 LTS
Kubernetes - Microk8s 1.23
|
|
Narain's answer works when searching in a list of resources, but when looking at YAML the interaction is slightly different:
Press /
Type search term, press enter
Press n to go to next, shift-n to go to previous match
Edit: I realize now that this is shown in the header. I suppose this explanation is good for other blind bats like myself.
| ###Instruction: When there's more than one search result from / filter command, how do you navigate to next item? Basically I'm looking for F3 (next search result) equivalent in k9s. Commands listed here does not seem to include what I'm looking for...
### Response: Narain's answer works when searching in a list of resources but when looking at yaml the interaction is slightly different:
Press /
Type search term, press enter
Press n to go to next, shift-n to go to previous match
Edit: I realize now that this is shown in the header. I suppose this explanation is good for other blind bats like myself.
| When there's more than one search result from / filter command, how do you navigate to next item? Basically I'm looking for F3 (next search result) equivalent in k9s. Commands listed here does not seem to include what I'm looking for...
|
|
One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
name: example
spec:
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- args:
- --port
- "8000"
image: docker.io/alpinelinux/darkhttpd
name: web
ports:
- containerPort: 8000
name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
Alternately, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
| ###Instruction: I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawened for each user.
Main services are two Python applications MothershipService and Ship. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. Ship is running some (untrusted) user code.
MothershipService Ship-user1
| | ---------- | |---vol1
|..............| -----. |--------|
\
\ Ship-user2
'- | |---vol2
|--------|
I can manage fine to get up the ship service
> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
My question is how do I go about testing this via curl or a browser? These are all backend services so NodePort seems not the right approach since none of this should be accessible to the public. Eventually I will build a test-suite for all this and deploy on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
name: ship
spec:
ports:
- port: 8000
name: ship
clusterIP: None # headless service
..
---
kind: StatefulSet
metadata:
name: ship
spec:
serviceName: "ship"
replicas: 1
template:
spec:
containers:
- name: ship
image: ship
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
name: ship
..
### Response: One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
name: example
spec:
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- args:
- --port
- "8000"
image: docker.io/alpinelinux/darkhttpd
name: web
ports:
- containerPort: 8000
name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
Alternately, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
| I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawened for each user.
Main services are two Python applications MothershipService and Ship. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. Ship is running some (untrusted) user code.
MothershipService Ship-user1
| | ---------- | |---vol1
|..............| -----. |--------|
\
\ Ship-user2
'- | |---vol2
|--------|
I can manage fine to get up the ship service
> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
My question is how do I go about testing this via curl or a browser? These are all backend services so NodePort seems not the right approach since none of this should be accessible to the public. Eventually I will build a test-suite for all this and deploy on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
name: ship
spec:
ports:
- port: 8000
name: ship
clusterIP: None # headless service
..
---
kind: StatefulSet
metadata:
name: ship
spec:
serviceName: "ship"
replicas: 1
template:
spec:
containers:
- name: ship
image: ship
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
name: ship
..
|
|
CloudSQL Proxy listens on localhost by default. If you want to expose it via a service, you'll want to add --address 0.0.0.0 to your cloud_sql_proxy command options.
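For the v1 proxy image used in the question (gce-proxy), note one wrinkle: the listen address is typically embedded in the -instances flag, while a standalone --address 0.0.0.0 flag is the syntax of the v2 cloud-sql-proxy binary. A sketch of the v1-style change, keeping the question's placeholders:

```yaml
command:
  - "/cloud_sql_proxy"
  - "-enable_iam_login"
  - "-ip_address_types=PRIVATE"
  # bind to 0.0.0.0 so traffic arriving via the ClusterIP service is accepted
  - "-instances=PROJECT_ID_HERE:REGION_HERE:INSTANCE_CONNECTION_HERE=tcp:0.0.0.0:5432"
```

After redeploying, other pods should be able to reach the proxy through the service on port 5432.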
| ###Instruction: I have deployed a Cloud SQL proxy and a service attached to it in Kubernetes. Both the proxy and the service are listening on port 5432. Other pods are not able to establish a connection to the Cloud SQL database through the proxy, but when I port forward to the pod of the proxy or to the service, I am able to connect to the database without any issue from my localhost.
The Kubernetes cluster and the Cloud SQL instance are both private. I have checked the service and deployment labels, the service and pod network configuration, the firewall rules, and the network configuration, but I am still unable to resolve the issue.
All pods are in the same namespace, logs of the proxy show no error, when I run nc -v $service_name $port in other pods, it yields no error and it doesn't show any sign of malfunctioning, it doesn't even print that the connection was successful. The problem is that these pods are not being able to establish a TCP connection to the service,
Here is an example of an error message:
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Connection to airbyte-db-svc:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
If needed here is the manifest that deploys the service and the proxy:
apiVersion: v1
kind: Service
metadata:
name: airbyte-db-svc
spec:
type: ClusterIP
ports:
- port: 5432
targetPort: 5432
protocol: TCP
selector:
airbyte: db
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: airbyte-db-proxy
spec:
replicas: 1
selector:
matchLabels:
airbyte: db
template:
metadata:
labels:
airbyte: db
spec:
serviceAccountName: airbyte-admin
containers:
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:latest
command:
- "/cloud_sql_proxy"
- "-enable_iam_login"
- "-ip_address_types=PRIVATE"
- "-instances=PROJECT_ID_HERE:REGION_HERE:INSTANCE_CONNECTION_HERE=tcp:5432"
ports:
- containerPort: 5432
securityContext:
runAsNonRoot: true
The serviceAccount airbyte-admin has the correct 'Cloud SQL Client' role and the workloadIdentity configured in the GCP project.
What could be the problem and how can I fix it?
### Response: CloudSQL Proxy listens on localhost by default. If you want to expose it via a service, you'll want to add --address 0.0.0.0 to your cloud_sql_proxy command options.
| I have deployed a Cloud SQL proxy and a service attached to it in Kubernetes. Both the proxy and the service are listening on port 5432. Other pods are not able to establish a connection to the Cloud SQL database through the proxy, but when I port forward to the pod of the proxy or to the service, I am able to connect to the database without any issue from my localhost.
The Kubernetes cluster and the Cloud SQL instance are both private. I have checked the service and deployment labels, the service and pod network configuration, the firewall rules, and the network configuration, but I am still unable to resolve the issue.
All pods are in the same namespace, logs of the proxy show no error, when I run nc -v $service_name $port in other pods, it yields no error and it doesn't show any sign of malfunctioning, it doesn't even print that the connection was successful. The problem is that these pods are not being able to establish a TCP connection to the service,
Here is an example of an error message:
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Connection to airbyte-db-svc:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
If needed here is the manifest that deploys the service and the proxy:
apiVersion: v1
kind: Service
metadata:
name: airbyte-db-svc
spec:
type: ClusterIP
ports:
- port: 5432
targetPort: 5432
protocol: TCP
selector:
airbyte: db
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: airbyte-db-proxy
spec:
replicas: 1
selector:
matchLabels:
airbyte: db
template:
metadata:
labels:
airbyte: db
spec:
serviceAccountName: airbyte-admin
containers:
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:latest
command:
- "/cloud_sql_proxy"
- "-enable_iam_login"
- "-ip_address_types=PRIVATE"
- "-instances=PROJECT_ID_HERE:REGION_HERE:INSTANCE_CONNECTION_HERE=tcp:5432"
ports:
- containerPort: 5432
securityContext:
runAsNonRoot: true
The serviceAccount airbyte-admin has the correct 'Cloud SQL Client' role and the workloadIdentity configured in the GCP project.
What could be the problem and how can I fix it?
|
|
To fix your issue, make sure the Kustomize version integrated into your kubectl is recent enough, as described below:
As @Ralle commented, check your versions (Kustomize v2.1.0 vs v3.0.0+); Kustomize is included in kubectl 1.21+. For more information, please look at the issue 'Kustomize doesn't work with CRDs when specifying images'.
Update: The kustomize build flow at v2.0.3 was added to kubectl v1.14. The kustomize flow in kubectl remained frozen at v2.0.3 until kubectl v1.21, which updated it to v4.0.5. It will be updated on a regular basis going forward; check your versions & updates in the Kubernetes release notes.
For examples and guides for using the kubectl integration please see the kubernetes documentation.
Also check Kubernetes 1.22: Server Side Apply moves to GA for more information.
| ###Instruction: I am trying to install PGO (Postgres Operator) in k8s. I am following this documentation.
At the 2nd step when I run following command
kubectl apply --server-side -k kustomize/install/default
I see error
master-k8s@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default
error: containers path is not of type []interface{} but map[string]interface {}
System Specifications:
I have k8s 2 node cluster with one master node.
All running Ubuntu 20.4
What I have try:
I downloaded the repository again (without cloning) and uploaded the directory to the master node
I tried providing the full path, and this time I received the same error
I checked the default directory; there are 2 files present
I tried to run this command inside the directory.
What do I need?
I am looking for a solution, as I am not able to follow the 2nd step of the document.
Please help me find what I missed or am doing wrong.
I would be really thankful.
Update Question:
I updated the version of k8s and kustomize and still see the same issue.
master-k8s@masterk8s-virtual-machine:~/test/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default
error: containers path is not of type []interface{} but map[string]interface {}
Kustomize version:
{Version:kustomize/v4.5.7 GitCommit:56d82a8xxxxxxxxxxxxx BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
### Response: To fix your issue, make sure the Kustomize version integrated into your kubectl is recent enough, as described below:
As @Ralle commented, check your versions (Kustomize v2.1.0 vs v3.0.0+); Kustomize is included in kubectl 1.21+. For more information, please look at the issue 'Kustomize doesn't work with CRDs when specifying images'.
Update: The kustomize build flow at v2.0.3 was added to kubectl v1.14. The kustomize flow in kubectl remained frozen at v2.0.3 until kubectl v1.21, which updated it to v4.0.5. It will be updated on a regular basis going forward; check your versions & updates in the Kubernetes release notes.
For examples and guides for using the kubectl integration please see the kubernetes documentation.
Also check Kubernetes 1.22: Server Side Apply moves to GA for more information.
| I am trying to install PGO (Postgres Operator) in k8s. I am following this documentation.
At the 2nd step when I run following command
kubectl apply --server-side -k kustomize/install/default
I see error
master-k8s@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default
error: containers path is not of type []interface{} but map[string]interface {}
System Specifications:
I have k8s 2 node cluster with one master node.
All running Ubuntu 20.4
What I have try:
I downloaded the repository again (without cloning) and uploaded the directory to the master node
I tried providing the full path, and this time I received the same error
I checked the default directory; there are 2 files present
I tried to run this command inside the directory.
What do I need?
I am looking for a solution, as I am not able to follow the 2nd step of the document.
Please help me find what I missed or am doing wrong.
I would be really thankful.
Update Question:
I updated the version of k8s and kustomize and still see the same issue.
master-k8s@masterk8s-virtual-machine:~/test/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default
error: containers path is not of type []interface{} but map[string]interface {}
Kustomize version:
{Version:kustomize/v4.5.7 GitCommit:56d82a8xxxxxxxxxxxxx BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
|
|
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time: it checks how many Pods it needs to maintain and, based on that, creates or deletes Pods as needed to reach the desired number. ReplicaSets can be used independently: you define the number of replicas you want to run for a particular service, and that many replicas are kept running.
A Deployment, in contrast, is the advancement of ReplicaSets. When you use Deployments you don't have to worry about managing the ReplicaSets that they create; Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets. While a ReplicaSet only takes care of creating and deleting Pods, a Deployment is the recommended choice for application services.
With a Deployment you can also do rolling upgrades or rollbacks, for example updating images from v1 to v2.
Refer to this SO1 , SO2 and official documentation of Replicasets and Deployment.
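As a sketch, a standalone ReplicaSet looks almost identical to a Deployment's spec; what it lacks is rollout management, so changing the image in a live ReplicaSet would not replace the Pods already running:

```yaml
apiVersion: apps/v1
kind: ReplicaSet            # no revision history, no rolling-update strategy
metadata:
  name: frontend
spec:
  replicas: 3               # the only guarantee: keep 3 matching Pods running
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: app
          image: nginx:1.25   # editing this tag does NOT roll existing Pods
```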
| ###Instruction: I know that deployment uses replicaset underneath it, has revision control, creates another replicaset during rolling upgrade/downgrade.
I want to know what is the scenario in which only replicaset can be used and deployment can't be used.
### Response: A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time: it checks how many Pods it needs to maintain and, based on that, creates or deletes Pods as needed to reach the desired number. ReplicaSets can be used independently: you define the number of replicas you want to run for a particular service, and that many replicas are kept running.
A Deployment, in contrast, is the advancement of ReplicaSets. When you use Deployments you don't have to worry about managing the ReplicaSets that they create; Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets. While a ReplicaSet only takes care of creating and deleting Pods, a Deployment is the recommended choice for application services.
With a Deployment you can also do rolling upgrades or rollbacks, for example updating images from v1 to v2.
Refer to this SO1 , SO2 and official documentation of Replicasets and Deployment.
| I know that deployment uses replicaset underneath it, has revision control, creates another replicaset during rolling upgrade/downgrade.
I want to know what is the scenario in which only replicaset can be used and deployment can't be used.
|
|
I cleared kubectl from /usr/local/bin
and also from /home/$USER/.local/bin
And ran the commands below:
curl -LO "https://dl.k8s.io/release/v1.24.7/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/v1.24.7/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
sudo install -o root -g root -m 0755 kubectl /home/$USER/.local/bin/kubectl
| ###Instruction: kubectl version
/home/lenovo/.local/bin/kubectl: line 1: syntax error near unexpected token <' /home/lenovo/.local/bin/kubectl: line 1: NoSuchKeyThe specified key does not exist.No such object: kubernetes-release/release//bin/linux/amd64/kubectl'
### Response: I cleared kubectl from /usr/local/bin
and also from /home/$USER/.local/bin
And ran the commands below:
curl -LO "https://dl.k8s.io/release/v1.24.7/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/v1.24.7/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
sudo install -o root -g root -m 0755 kubectl /home/$USER/.local/bin/kubectl
| kubectl version
/home/lenovo/.local/bin/kubectl: line 1: syntax error near unexpected token <' /home/lenovo/.local/bin/kubectl: line 1: NoSuchKeyThe specified key does not exist.No such object: kubernetes-release/release//bin/linux/amd64/kubectl'
|
|
Stateful apps' Helm charts assume volumes were provisioned beforehand using static storage provisioning, or on demand using dynamic provisioning.
Storage provisioning includes storage allocation, creation of PV object and binding it to the storage. This is usually done manually by storage admins for static provisioning or automatically by default storage class for dynamic provisioning.
If you install a Helm chart in a managed Kubernetes cluster in the cloud, or even in Minikube, there's a default storage class which will dynamically provision the storage based on the Helm chart's PVC and bind the provisioned PV to it (the request for storage).
If you prefer the chart to use statically provisioned PVs, you should disable using default storage class in chart's values.yaml or even specify the volume to be used there (the exact syntax and options may change from chart to chart, but the idea is the same).
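The key names vary from chart to chart, but the values.yaml knobs usually look something like this (the keys below are illustrative; check the specific chart's documentation):

```yaml
persistence:
  enabled: true
  storageClass: ""              # empty string opts out of the default storage class
  existingClaim: my-static-pvc  # or bind the chart to a PVC created beforehand
  size: 8Gi
```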
Using dynamic provisioning has many advantages over static provisioning which I summarized in my blog post on Dynamic Provisioning of Kubernetes Storage
| ###Instruction: One of the things I noticed is that Helm charts that require persistence (e.g. Postgres) contain a persistent volume claim but not a default persistent volume.
It seems that it is assumed that this should not be part of the Helm chart and should be provisioned separately.
Is there some best practice or idea behind the lack of a pv (maybe with templated options) or at least a default one? This would allow running the chart out of the box.
### Response: Stateful apps' Helm charts assume volumes were provisioned beforehand using static storage provisioning, or on demand using dynamic provisioning.
Storage provisioning includes storage allocation, creation of PV object and binding it to the storage. This is usually done manually by storage admins for static provisioning or automatically by default storage class for dynamic provisioning.
If you install a Helm chart in a managed Kubernetes cluster in the cloud, or even in Minikube, there's a default storage class which will dynamically provision the storage based on the Helm chart's PVC and bind the provisioned PV to it (the request for storage).
If you prefer the chart to use statically provisioned PVs, you should disable using default storage class in chart's values.yaml or even specify the volume to be used there (the exact syntax and options may change from chart to chart, but the idea is the same).
Using dynamic provisioning has many advantages over static provisioning which I summarized in my blog post on Dynamic Provisioning of Kubernetes Storage
| One of the things I noticed is that Helm charts that require persistence (e.g. Postgres) contain a persistent volume claim but not a default persistent volume.
It seems that it is assumed that this should not be part of the Helm chart and should be provisioned separately.
Is there some best practice or idea behind the lack of a pv (maybe with templated options) or at least a default one? This would allow running the chart out of the box.
|
|
TLS certificate errors
The following error indicates a possible certificate mismatch.
# kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The base64 --decode command can be used to decode the certificate and openssl x509 -text -noout can be used for viewing the certificate information.
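As a self-contained sketch of that decode-and-inspect pipeline (using a throwaway certificate in place of real cluster data):

```shell
# Create a throwaway certificate, store it base64-encoded the way a
# kubeconfig does, then decode it and read the subject back with openssl.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=kubernetes" 2>/dev/null
encoded=$(base64 < /tmp/demo-cert.pem | tr -d '\n')   # as stored in kubeconfig
echo "$encoded" | base64 --decode | openssl x509 -noout -subject -enddate
```

Run against a real kubeconfig, the same pipeline (with the certificate-authority-data field as input) shows which CA your client actually trusts.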
Unset the KUBECONFIG environment variable using:
unset KUBECONFIG
Or set it to the default KUBECONFIG location:
export KUBECONFIG=/etc/kubernetes/admin.conf
Another workaround is to overwrite the existing kubeconfig for the "admin" user:
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This error should only come up if you have no contexts configured in your client. Also make sure you are executing the command as the right user. If you run kubectl config view and you get something like this:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Refer to github link & Kubernetes community Forums for more information to resolve your issue.
| ###Instruction: I have been trying to install kubernetes in Oracle cloud server.
I followed the kubernetes instruction.
Output of kubectl config view :
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.208:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
Error when executing sudo kubectl apply -f:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
In addition, when I wanted to see the pods, another error message came out too.
Error when executing kubectl get pods :
E1228 00:53:51.858542 44216 memcache.go:238] couldn't get current server API group list: Get "https://10.0.0.208:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
### Response: TLS certificate errors
The following error indicates a possible certificate mismatch.
# kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The base64 --decode command can be used to decode the certificate and openssl x509 -text -noout can be used for viewing the certificate information.
Unset the KUBECONFIG environment variable using:
unset KUBECONFIG
Or set it to the default KUBECONFIG location:
export KUBECONFIG=/etc/kubernetes/admin.conf
Another workaround is to overwrite the existing kubeconfig for the "admin" user:
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This error should only come up if you have no contexts configured in your client. Also make sure you are executing the command as the right user. If you run kubectl config view and you get something like this:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Refer to github link & Kubernetes community Forums for more information to resolve your issue.
| I have been trying to install kubernetes in Oracle cloud server.
I followed the kubernetes instructions.
Output of kubectl config view :
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.208:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
Error when executing sudo kubectl apply -f :
The connection to the server localhost:8080 was refused - did you specify the right host or port?
In addition, when I wanted to see the pods, another error message came out too.
Error when executing kubectl get pods :
E1228 00:53:51.858542 44216 memcache.go:238] couldn't get current server API group list: Get "https://10.0.0.208:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
|
|
hostpath is, as the name suggests, reading and writing from a place on the host where the pod is running. If the host goes down, or the pod gets evicted or otherwise removed from the node, that data is (normally) lost. This is why the "binding" is mentioned -- the pod must stay on that same node otherwise it will lose that data.
Using a volume type and having volumes provisioned is better as the disk and the pod can be reattached together on another node and you will not lose the data.
In terms of I/O, there would indeed be a minuscule difference, since you're no longer talking to the node's local disk but to a mounted disk.
hostPath volumes are generally used for temporary files or storage that can be lost without impact to the pod, in much the same way you would use /tmp on a desktop machine.
| ###Instruction: I'm a beginner in kubernetes, and when I was reading the book, I found that it is not recommended to use hostpath as the volume type for a production environment, because it leads to binding between pod and node. But if you use other volume types instead of hostpath, will reading and writing files lead to extra network IO, and will performance suffer? Will this have an additional performance impact?
### Response: hostpath is, as the name suggests, reading and writing from a place on the host where the pod is running. If the host goes down, or the pod gets evicted or otherwise removed from the node, that data is (normally) lost. This is why the "binding" is mentioned -- the pod must stay on that same node otherwise it will lose that data.
Using a volume type and having volumes provisioned is better as the disk and the pod can be reattached together on another node and you will not lose the data.
In terms of I/O, there would indeed be a minuscule difference, since you're no longer talking to the node's local disk but to a mounted disk.
hostPath volumes are generally used for temporary files or storage that can be lost without impact to the pod, in much the same way you would use /tmp on a desktop machine.
| I'm a beginner in kubernetes, and when I was reading the book, I found that it is not recommended to use hostpath as the volume type for a production environment, because it leads to binding between pod and node. But if you use other volume types instead of hostpath, will reading and writing files lead to extra network IO, and will performance suffer? Will this have an additional performance impact?
|
|
There are two things happening here...
Each partition in your topic is assigned to a sub-task. This is done round-robin, so if you have 5 topic partitions and 5 sub-tasks (your parallelism) then every sub-task is reading from a single different topic partition.
Records being read from the topic are keyed and distributed (what Flink calls partitioning). If you only have one unique key, then every record is sent to the same sub-task, and thus only one sub-task is getting any records. Any time you have low key cardinality relative to the number of sub-tasks, you can get a skewed distribution of data.
Usually in Statefun you'd scale up processing by having more parallel functions, versus scaling up the number of task managers that are running.
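The effect of low key cardinality can be sketched with a toy model (this uses crc32 purely as a deterministic stand-in for Flink's real key-group hashing, which is murmur-based — the routing numbers are illustrative only):

```python
import zlib

PARALLELISM = 5  # matches the 5 sub-tasks in the question

def subtask_for(key: str) -> int:
    # Toy stand-in for keyBy routing: deterministic hash of the key,
    # modulo the number of parallel sub-tasks.
    return zlib.crc32(key.encode()) % PARALLELISM

# One unique key: every record is routed to the same sub-task,
# so only one remote-function worker ever sees traffic.
single_key = {subtask_for("key1") for _ in range(100)}
print(len(single_key))  # 1

# Ten keys: records spread across sub-tasks, but several keys can
# still collide on the same sub-task, which is why utilization
# looks uneven ("skew") with low key cardinality.
ten_keys = {subtask_for(f"key{i}") for i in range(10)}
print(sorted(ten_keys))
```

With only 10 keys over 5 sub-tasks, some sub-tasks can easily end up with two or three keys while others get none — the "2 busy, 3 idle" pattern from the observations.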
| ###Instruction:
Figure out what is the correct way to scale up the remote function.
Figure out the scaling relations between replicas of the remote function, the Flink parallelism.default configuration, and ingress topic partition counts together with message partition keys. What are the design intentions behind this topic?
As the docs suggest, one of the benefits of flink statefun remote functions is that the remote function can scale differently from the flink workers and task parallelism. To understand more about how these messages are sent to the remote function processes, I have tried the following scenarios.
Preparation
Use https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s this for my experiment.
Modify the https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/03-functions/functions.py to the following to check the logs how things are parallelized in practice
...
functions = StatefulFunctions()
@functions.bind(typename="example/hello")
async def hello(context: Context, message: Message):
arg = message.raw_value().decode('utf-8')
hostname = os.getenv('HOSTNAME')
for _ in range(10):
print(f"{datetime.utcnow()} {hostname}: Hello from {context.address.id}: you wrote {arg}!", flush=True)
time.sleep(1)
...
Play around the parallelism.default in the flink.conf, replicas count in the functions deployment configuration as well different partitioning configurations in the ingress topic: names
Observations
When sending messages with the same partition key, everything seems to be running sequentially. Meaning, if I send 5 messages like "key1:message1", "key1:message2", "key1:message3", "key1:message4", "key1:message5", I can see that only one of the pods is getting requests even though I have more replicas (configured 5 replicas) of the remote function in the deployment. Regardless of how I configure the parallelism or increase the ingress topic partition count, the behavior always stays the same.
When sending messages with 10 partition keys (the topic is configured with 5 partitions, parallelism is configured to 5, and the replicas of the remote function are configured to 5), how the remote function replicas receive the requests seems to be random. Sometimes 5 of them receive requests at the same time so that 5 of them can run some tasks together, but sometimes only 2 of them are utilized and the other 3 are just waiting there.
It seems parallelism determines the number of consumers in the same consumer group that subscribe to the ingress topic. I suspect that if I have configured more parallelism than the number of partitions in the ingress topic, the extra parallelism will just stay idle.
My Expectations
What I really expect how this SHOULD work is that 5 of the replica remote functions should always be fully utilized if there is still backlogs in the ingress topic.
When the ingress topic is configured with multiple partitions, each partitions should be batched separately and multiplex with other parallelism (or consumers) batches to utilize all of the processes in the remote functions.
Can some Flink expert help me understand the above behavior and design intentions more?
### Response: There are two things happening here...
Each partition in your topic is assigned to a sub-task. This is done round-robin, so if you have 5 topic partitions and 5 sub-tasks (your parallelism) then every sub-task is reading from a single different topic partition.
Records being read from the topic are keyed and distributed (what Flink calls partitioning). If you only have one unique key, then every record is sent to the same sub-task, and thus only one sub-task is getting any records. Any time you have low key cardinality relative to the number of sub-tasks, you can get a skewed distribution of data.
Usually in Statefun you'd scale up processing by having more parallel functions, versus scaling up the number of task managers that are running.
|
Figure out what is the correct way to scale up the remote function.
Figure out the scaling relations between replicas of the remote function, the Flink parallelism.default configuration, and ingress topic partition counts together with message partition keys. What are the design intentions behind this topic?
As the docs suggest, one of the benefits of flink statefun remote functions is that the remote function can scale differently from the flink workers and task parallelism. To understand more about how these messages are sent to the remote function processes, I have tried the following scenarios.
Preparation
Use https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s this for my experiment.
Modify the https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/03-functions/functions.py to the following to check the logs how things are parallelized in practice
...
functions = StatefulFunctions()
@functions.bind(typename="example/hello")
async def hello(context: Context, message: Message):
arg = message.raw_value().decode('utf-8')
hostname = os.getenv('HOSTNAME')
for _ in range(10):
print(f"{datetime.utcnow()} {hostname}: Hello from {context.address.id}: you wrote {arg}!", flush=True)
time.sleep(1)
...
Play around the parallelism.default in the flink.conf, replicas count in the functions deployment configuration as well different partitioning configurations in the ingress topic: names
Observations
When sending messages with the same partition key, everything seems to be running sequentially. Meaning, if I send 5 messages like "key1:message1", "key1:message2", "key1:message3", "key1:message4", "key1:message5", I can see that only one of the pods is getting requests even though I have more replicas (configured 5 replicas) of the remote function in the deployment. Regardless of how I configure the parallelism or increase the ingress topic partition count, the behavior always stays the same.
When sending messages with 10 partition keys (the topic is configured with 5 partitions, parallelism is configured to 5, and the replicas of the remote function are configured to 5), how the remote function replicas receive the requests seems to be random. Sometimes 5 of them receive requests at the same time so that 5 of them can run some tasks together, but sometimes only 2 of them are utilized and the other 3 are just waiting there.
It seems parallelism determines the number of consumers in the same consumer group that subscribe to the ingress topic. I suspect that if I have configured more parallelism than the number of partitions in the ingress topic, the extra parallelism will just stay idle.
My Expectations
What I really expect how this SHOULD work is that 5 of the replica remote functions should always be fully utilized if there is still backlogs in the ingress topic.
When the ingress topic is configured with multiple partitions, each partitions should be batched separately and multiplex with other parallelism (or consumers) batches to utilize all of the processes in the remote functions.
Can some Flink expert help me understand the above behavior and design intentions more?
|
|
A better approach for achieving this would be deploying a fluentd daemonset in your cluster and use the azure storage plugin to upload logs to a storage account.
This tool was built for this specific purpose and will probably serve you better.
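As a minimal sketch, the DaemonSet shape looks roughly like this (the image tag, namespace, and mounts are assumptions — the fluentd kubernetes-daemonset images ship one variant per output plugin, and a real manifest also needs RBAC plus the Azure plugin's storage-account credentials):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # placeholder tag -- pick the variant built for your output plugin
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-forward
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because it is a DaemonSet, every node gets one collector pod that tails the container logs under /var/log, so no per-pod cronjob scripting is needed.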
| ###Instruction: I have a task to automate the uploading of AKS logs (control plane and workload) to the Azure storage account so that they can be viewed later or may be an alert notification to the email/teams channel in case of any failure. It would have been an easy task if the log analytics workspace would have been used however, to save the cost we have kept it disabled.
I have tried using the below cronjob which would upload the pod logs to storage account on a regular basis, but it is throwing me the below errors[1]
apiVersion: batch/v1
kind: CronJob
metadata:
name: log-uploader
spec:
schedule: "0 0 * * *" # Run every day at midnight
jobTemplate:
spec:
template:
spec:
containers:
- name: log-uploader
image: mcr.microsoft.com/azure-cli:latest
command:
- bash
- "-c"
- |
az aks install-cli
# Set environment variables for Azure Storage Account and Container
export AZURE_STORAGE_ACCOUNT=test-101
export AZURE_STORAGE_CONTAINER=logs-101
# Iterate over all pods in the cluster and upload their logs to Azure Blob Storage
for pod in $(kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.namespace}{"\n"}{end}'); do
namespace=$(echo $pod | awk '{print $2}')
pod_name=$(echo $pod | awk '{print $1}')
# Use the Kubernetes logs API to retrieve the logs for the pod
logs=$(kubectl logs -n $namespace $pod_name)
# Use the Azure CLI to upload the logs to Azure Blob Storage
echo $logs | az storage blob upload --file - --account-name $AZURE_STORAGE_ACCOUNT --container-name $AZURE_STORAGE_CONTAINER --name "$namespace/$pod_name_`date`.log"
done
restartPolicy: OnFailure
Errors[1]
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
The same commands are running fine outside the container.
Any thoughts/suggestions would be highly appreciated.
Regards,
Piyush
### Response: A better approach for achieving this would be deploying a fluentd daemonset in your cluster and use the azure storage plugin to upload logs to a storage account.
This tool was built for this specific purpose and will probably serve you better.
| I have a task to automate the uploading of AKS logs (control plane and workload) to the Azure storage account so that they can be viewed later or may be an alert notification to the email/teams channel in case of any failure. It would have been an easy task if the log analytics workspace would have been used however, to save the cost we have kept it disabled.
I have tried using the below cronjob which would upload the pod logs to storage account on a regular basis, but it is throwing me the below errors[1]
apiVersion: batch/v1
kind: CronJob
metadata:
name: log-uploader
spec:
schedule: "0 0 * * *" # Run every day at midnight
jobTemplate:
spec:
template:
spec:
containers:
- name: log-uploader
image: mcr.microsoft.com/azure-cli:latest
command:
- bash
- "-c"
- |
az aks install-cli
# Set environment variables for Azure Storage Account and Container
export AZURE_STORAGE_ACCOUNT=test-101
export AZURE_STORAGE_CONTAINER=logs-101
# Iterate over all pods in the cluster and upload their logs to Azure Blob Storage
for pod in $(kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.namespace}{"\n"}{end}'); do
namespace=$(echo $pod | awk '{print $2}')
pod_name=$(echo $pod | awk '{print $1}')
# Use the Kubernetes logs API to retrieve the logs for the pod
logs=$(kubectl logs -n $namespace $pod_name)
# Use the Azure CLI to upload the logs to Azure Blob Storage
echo $logs | az storage blob upload --file - --account-name $AZURE_STORAGE_ACCOUNT --container-name $AZURE_STORAGE_CONTAINER --name "$namespace/$pod_name_`date`.log"
done
restartPolicy: OnFailure
Errors[1]
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
The same commands are running fine outside the container.
Any thoughts/suggestions would be highly appreciated.
Regards,
Piyush
|
|
Use entrypoint for that:
services:
app:
build: .
entrypoint: ./entrypoint.sh
command: bundle exec rails s -p 3000 -b 0
ports:
- 3000:3000
# entrypoint.sh
#!/bin/bash
set -e
# is this still an issue?
rm -f /myapp/tmp/pids/server.pid
# do what you have to do
bin/rails assets:precompile
# pass the torch to `command:`
exec "$@"
Also the lazy way:
command: bash -c "bin/rails assets:precompile && bin/rails s -p 3000 -b 0"
You can also use ENTRYPOINT in Dockerfile and build it into the image:
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
| ###Instruction: I have a problem. In my kubernetes cluster, I am trying to run my Rails application. I got the image loaded, but now I want to write a custom command. The default command in my Dockerfile is:
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
But I want to also run rails assets:precompile at startup for production. I tried those commands using this config:
command: ["bundle", "exec", "rails", "assets:precompile", "&&", "bundle", "exec", "rails", "server"]
But after the first command has been executed, I get the error:
rails aborted!
Don't know how to build task '&&' (See the list of available tasks with `rails --tasks`)
I also tried the following with args:
command: ["/bin/sh -c"]
args:
- bundle exec rails assets:precompile;
bundle exec rails server;
But that results in a very long error which basically says that the format of args is incorrect. Can someone explain to me how I can run both commands at startup?
### Response: Use entrypoint for that:
services:
app:
build: .
entrypoint: ./entrypoint.sh
command: bundle exec rails s -p 3000 -b 0
ports:
- 3000:3000
# entrypoint.sh
#!/bin/bash
set -e
# is this still an issue?
rm -f /myapp/tmp/pids/server.pid
# do what you have to do
bin/rails assets:precompile
# pass the torch to `command:`
exec "$@"
Also the lazy way:
command: bash -c "bin/rails assets:precompile && bin/rails s -p 3000 -b 0"
You can also use ENTRYPOINT in Dockerfile and build it into the image:
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
| I have a problem. In my kubernetes cluster, I am trying to run my Rails application. I got the image loaded, but now I want to write a custom command. The default command in my Dockerfile is:
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
But I want to also run rails assets:precompile at startup for production. I tried those commands using this config:
command: ["bundle", "exec", "rails", "assets:precompile", "&&", "bundle", "exec", "rails", "server"]
But after the first command has been executed, I get the error:
rails aborted!
Don't know how to build task '&&' (See the list of available tasks with `rails --tasks`)
I also tried the following with args:
command: ["/bin/sh -c"]
args:
- bundle exec rails assets:precompile;
bundle exec rails server;
But that results in a very long error which basically says that the format of args is incorrect. Can someone explain to me how I can run both commands at startup?
|
|
Even if you run Jenkins inside a Docker container, you must install the Docker CLI inside that container as well.
The best way is to create a new Dockerfile and install Docker inside it.
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
apt-get update && \
apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
&& chmod +x /usr/local/bin/docker-compose \
&& ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
&& curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
&& apt-get -y update \
&& apt install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
For the sake of clarity: if you deploy to Kubernetes or use docker-compose, you will also need kubectl and docker-compose inside your container. If not, you can remove those installations from the Dockerfile above.
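As a usage sketch (the image name and port mappings are placeholders), you would build this image and run it with the host's Docker socket mounted, so the CLI installed above can reach a running daemon:

```shell
docker build -t my-jenkins-docker .          # placeholder image name
docker run -d \
  -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-docker
```

In the Kubernetes podTemplate from the question, the equivalent is pointing the containerTemplate image at this custom image while keeping the docker.sock hostPathVolume mount.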
| ###Instruction: In Jenkins, I want to understand why "docker ps" is not running inside my container even though I wrote my Jenkinsfile like this:
podTemplate(serviceAccount: 'jenkins', containers: [
containerTemplate(
name: 'docker',
image: 'docker',
resourceRequestCpu: '100m',
resourceLimitCpu: '300m',
resourceRequestMemory: '300Mi',
resourceLimitMemory: '500Mi',
command: 'cat',
privileged: true,
ttyEnabled: true
)
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]
) {
node(POD_LABEL) {
stage('Check running containers') {
container('docker') {
sh 'hostname'
sh 'hostname -i'
sh 'docker --version'
sh 'docker ps'
}
}
I always get this kind of message after I start my pipeline:
unix:///var/run/docker.sock.
Is the docker daemon running?
Thanks
It should output something like the result of running docker ps.
### Response: Even if you run Jenkins inside a Docker container, you must install the Docker CLI inside that container as well.
The best way is to create a new Dockerfile and install Docker inside it.
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
apt-get update && \
apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
&& chmod +x /usr/local/bin/docker-compose \
&& ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
&& curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
&& apt-get -y update \
&& apt install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
For the sake of clarity: if you deploy to Kubernetes or use docker-compose, you will also need kubectl and docker-compose inside your container. If not, you can remove those installations from the Dockerfile above.
| In Jenkins, I want to understand why "docker ps" is not running inside my container even though I wrote my Jenkinsfile like this:
podTemplate(serviceAccount: 'jenkins', containers: [
containerTemplate(
name: 'docker',
image: 'docker',
resourceRequestCpu: '100m',
resourceLimitCpu: '300m',
resourceRequestMemory: '300Mi',
resourceLimitMemory: '500Mi',
command: 'cat',
privileged: true,
ttyEnabled: true
)
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]
) {
node(POD_LABEL) {
stage('Check running containers') {
container('docker') {
sh 'hostname'
sh 'hostname -i'
sh 'docker --version'
sh 'docker ps'
}
}
I always get this kind of message after I start my pipeline:
unix:///var/run/docker.sock.
Is the docker daemon running?
Thanks
It should output something like the result of running docker ps.
|
|
The best approach would be to use a post sync hook.
Why is it necessary to run the tests in a GitHub actions workflow? Since the application is already deployed to the cluster, wouldn't it make more sense to run the tests directly on the cluster, instead of going through the trouble of communicating with Github?
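For illustration, a PostSync resource hook looks roughly like this (the image and command are placeholders) — Argo CD creates the Job after a successful sync, which is effectively the "deployment succeeded" trigger being asked for:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-deploy-tests
  annotations:
    argocd.argoproj.io/hook: PostSync
    # clean the Job up once it has passed
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: tests
        image: registry.example.com/e2e-tests:latest  # placeholder image
        command: ["./run-tests.sh"]                   # placeholder entrypoint
```

Because the hook runs inside the cluster and only after the sync has completed, no callback from Argo CD to GitHub is required at all.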
| ###Instruction: I'm working in a company that uses GitHub Actions and Argo CD (using the argocd helm chart).
Needless to say, the GitHub repo is private and Argo CD is on an internal network that is used by the company only.
The flow of what we want to do is that when we deploy the app and the deployment succeeded - Trigger another workflow that will run tests on the deployed environment.
Basically, the deployment will be the trigger for another workflow.
I have been trying to configure webhook from argocd to github but with no success.
What is the best approach to this situation, will be happy to provide more context if needed.
Edit:
The test workflow i'm trying to use workflow_dispatch.
name: workflow_02
on:
push:
branches: [ argo-github-trigger ]
workflow_dispatch:
jobs:
log-the-inputs:
runs-on: ubuntu-latest
steps:
- run: |
echo "Worked"
I'm expecting to see a "Run workflow" button on GitHub but it doesn't appear. On another repo, where I have admin privileges and can work on the main branch, I tried the same workflow and it worked.
### Response: The best approach would be to use a post sync hook.
Why is it necessary to run the tests in a GitHub actions workflow? Since the application is already deployed to the cluster, wouldn't it make more sense to run the tests directly on the cluster, instead of going through the trouble of communicating with Github?
| I'm working in a company that uses GitHub Actions and Argo CD (using the argocd helm chart).
Needless to say, the GitHub repo is private and Argo CD is on an internal network that is used by the company only.
The flow of what we want to do is that when we deploy the app and the deployment succeeded - Trigger another workflow that will run tests on the deployed environment.
Basically, the deployment will be the trigger for another workflow.
I have been trying to configure webhook from argocd to github but with no success.
What is the best approach to this situation, will be happy to provide more context if needed.
Edit:
The test workflow i'm trying to use workflow_dispatch.
name: workflow_02
on:
push:
branches: [ argo-github-trigger ]
workflow_dispatch:
jobs:
log-the-inputs:
runs-on: ubuntu-latest
steps:
- run: |
echo "Worked"
I'm expecting to see a "Run workflow" button on GitHub but it doesn't appear. On another repo, where I have admin privileges and can work on the main branch, I tried the same workflow and it worked.
|
|
You can use add_listener() and can listen to only particular types of events by giving the appropriate mask argument to add_listener(), OR’ing the different constants together. The listener callable is called with one argument, the event object.
Example:
from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED

def my_listener(event):
    if event.exception:
        print('The job crashed :(')
    else:
        print('The job worked :)')

scheduler.add_listener(my_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
Refer to this document for more information.
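Building on that, one way to turn listener events into an actual health check is to record the last successful run and compare it against a freshness window. The JobHealth class below is a sketch of mine, not an APScheduler API; the wiring into the listener is shown as comments:

```python
import time

class JobHealth:
    """Tracks job outcomes so a health endpoint can report status."""

    def __init__(self, max_silence_seconds: float):
        self.max_silence = max_silence_seconds
        self.last_success = time.monotonic()
        self.last_error = None

    def on_success(self):
        self.last_success = time.monotonic()
        self.last_error = None

    def on_error(self, exc):
        self.last_error = exc

    def healthy(self) -> bool:
        # Unhealthy if the last run failed, or nothing has completed
        # within the allowed window (e.g. jobs stopped being picked up).
        if self.last_error is not None:
            return False
        return time.monotonic() - self.last_success < self.max_silence

health = JobHealth(max_silence_seconds=120)

# Hypothetical wiring into the listener shown above:
# def my_listener(event):
#     if event.exception:
#         health.on_error(event.exception)
#     else:
#         health.on_success()
```

A liveness/readiness probe (or any HTTP health endpoint) can then just return health.healthy(), which also catches the "scheduler.running is true but nothing runs" case, because silence past the window flips the check to unhealthy.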
| ###Instruction: We need to configure health check for our service which uses apscheduler to schedule and run the jobs. The idea is to check if apscheduler is running jobs fine at the specified time or working as expected.
We tried scheduler.running, but it shows true when it's not able to pick next jobs. Any suggestions here?
### Response: You can use add_listener() and can listen to only particular types of events by giving the appropriate mask argument to add_listener(), OR’ing the different constants together. The listener callable is called with one argument, the event object.
Example:
from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED

def my_listener(event):
    if event.exception:
        print('The job crashed :(')
    else:
        print('The job worked :)')

scheduler.add_listener(my_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
Refer to this document for more information.
| We need to configure health check for our service which uses apscheduler to schedule and run the jobs. The idea is to check if apscheduler is running jobs fine at the specified time or working as expected.
We tried scheduler.running, but it shows true when it's not able to pick next jobs. Any suggestions here?
|
|
Is it possible not to have separate copies of the filesystem of the
container.
Yes, you can do that.
You can use a ReadOnlyMany or ReadWriteMany volume, so the same data is shared across pods instead of being copied per container.
Hostpath
You can also leverage a hostPath volume to save the files on the node; when a POD is scheduled on that node, it will use the files from there.
Keep in mind that if the node goes down you might lose the files saved on that node, so using the solution above would be safer.
EFS or NFS
If you are on GCP or AWS, you can use a managed file-system solution to mount the same file system into multiple PODs.
You can also create your own NFS file system and save files into it; when a POD gets scheduled, it will start using the files from the shared NFS file system.
GCP Filestore
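As a sketch, sharing one copy of the libraries across pods could use an RWX claim like this (the storageClassName is an assumption — use whichever ReadWriteMany-capable class your NFS/EFS/Filestore setup provides):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-libs
spec:
  accessModes:
    - ReadWriteMany          # or ReadOnlyMany once the data is in place
  storageClassName: nfs-client   # assumption: any RWX-capable class
  resources:
    requests:
      storage: 5Gi
```

Each of the 20 pods then mounts this single claim instead of carrying its own copy of the data.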
| ###Instruction: I have a container with pytorch. The container is 3GB. When i run 20 instances of this container it takes 60GB of disk space. I basically have 20 copies of the same library. Is it possible not to have separate copies of the filesystem of the container. If the container fs is set to readonly would that help? Im running in k8s.
### Response:
Is it possible not to have separate copies of the filesystem of the
container.
Yes, you can do that.
You can use a ReadOnlyMany or ReadWriteMany volume, so the same data is shared across pods instead of being copied per container.
Hostpath
You can also leverage a hostPath volume to save the files on the node; when a POD is scheduled on that node, it will use the files from there.
Keep in mind that if the node goes down you might lose the files saved on that node, so using the solution above would be safer.
EFS or NFS
If you are on GCP or AWS, you can use a managed file-system solution to mount the same file system into multiple PODs.
You can also create your own NFS file system and save files into it; when a POD gets scheduled, it will start using the files from the shared NFS file system.
GCP Filestore
| I have a container with pytorch. The container is 3GB. When i run 20 instances of this container it takes 60GB of disk space. I basically have 20 copies of the same library. Is it possible not to have separate copies of the filesystem of the container. If the container fs is set to readonly would that help? Im running in k8s.
|
|
From minikube documentation: minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes
If you want to learn more about Kubernetes, I suggest reading and implementing this repository.
In the end, if you want to use Kubernetes in production, please forget minikube and run Kubernetes.
| ###Instruction: I am trying to familiarise myself with kubernetes, and want to run a k8s stack on some low spec hw (think raspberry pi).
I found what appears to be a great guide to set up kubernetes on Ubuntu but I ran into issues, causing me to reinstall the OS several times to ensure that I had not made a fundamental mistake which was poisoning my attempt.
Getting fed up with waiting for the basics, I tried to set it up in a vagrant environment, which does allow me to skip some of the time-consuming and tedious steps regarding reinstalls, but still seems like a fragile process. Looking at udemy and youtube, as well as getting started articles, a lot of focus appears to be in minikube... as I read it, that is essentially a vm with a ready to go kubernetes set up already.
My question is: is the overhead using minikube small enough to use on servers with minimal resources? Or is it only usable for testing and demonstration? Since I have issues getting a working cluster, I can't test and verify it myself...
### Response: From minikube documentation: minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes
If you want to learn more about Kubernetes, I suggest reading and implementing this repository.
In the end, if you want to use Kubernetes in production, please forget minikube and run Kubernetes.
| I am trying to familiarise myself with kubernetes, and want to run a k8s stack on some low spec hw (think raspberry pi).
I found what appears to be a great guide to set up kubernetes on Ubuntu but I ran into issues, causing me to reinstall the OS several times to ensure that I had not made a fundamental mistake which was poisoning my attempt.
Getting fed up with waiting for the basics, I tried to set it up in a vagrant environment, which does allow me to skip some of the time-consuming and tedious steps regarding reinstalls, but still seems like a fragile process. Looking at udemy and youtube, as well as getting started articles, a lot of focus appears to be in minikube... as I read it, that is essentially a vm with a ready to go kubernetes set up already.
My question is: is the overhead using minikube small enough to use on servers with minimal resources? Or is it only usable for testing and demonstration? Since I have issues getting a working cluster, I can't test and verify it myself...
|
|
###Instruction: I am trying to add ELK to my project which is running on Kubernetes. I want the flow to go through Filebeat -> Logstash and then Elasticsearch. I prepared my filebeat.yml file, and in my company Filebeat is configured as an agent in the cluster, which I don't really know what it means. I want to know how to configure Filebeat in this case: do I just add the file to the project so it is taken into consideration once the pod starts, or how does it work?
### Response: You can configure Filebeat in a few ways.
1 - You can run it as a DaemonSet, meaning each node of your Kubernetes cluster will run one Filebeat pod. In this architecture you usually need only one filebeat.yml configuration file, where you set the inputs, filters, outputs (to Logstash, Elasticsearch, etc.), and so on. In this case Filebeat needs root access inside your cluster so it can read the container log files on each node.
2 - Run Filebeat as a sidecar of your application's k8s resource. You can configure an emptyDir volume in the Deployment/StatefulSet, share it with the Filebeat sidecar, and point Filebeat at that directory.
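As a rough sketch of option 1 (assuming Filebeat 7.x+ and that Logstash is reachable in-cluster as `logstash:5044` — both assumptions to adjust to your setup), the filebeat.yml mounted into the DaemonSet could look like:

```yaml
filebeat.inputs:
  # the "container" input tails the node's container log files
  - type: container
    paths:
      - /var/log/containers/*.log

# ship everything to Logstash, which then forwards to Elasticsearch
output.logstash:
  hosts: ["logstash:5044"]
```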
###Instruction: I have a Kubernetes cluster, with Artifactory as my internal registry and proxy to pull images from external private registries. How do I authenticate to these external private registries when I want to pull an image from my Kubernetes cluster? Normally in Kubernetes this is done using image pull secrets; however, it is not clear whether Artifactory is able to handle the secret to authenticate to the external registry. What alternatives do I have?
### Response: Taking the comment by @Ivonet as the base - you configure the auth against the remote repository source in Artifactory itself. You can see the docs here.
Once Artifactory is set up, you set your imagePullSecret to auth against Artifactory itself. See some examples in this knowledge-base article
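On the cluster side, the pod then references that secret like any other pull secret — a minimal sketch (the secret name, registry host, and repository path here are placeholders, not your actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  imagePullSecrets:
    # a kubernetes.io/dockerconfigjson secret holding the Artifactory credentials
    - name: artifactory-pull-secret
  containers:
    - name: my-app
      # the image is pulled through Artifactory's remote (proxy) repository
      image: artifactory.example.com/docker-remote/nginx:latest
```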
###Instruction: I am trying to dynamically look up the available namespaces and create secrets in them using the helm chart below.
templates/secrets.yaml
{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
apiVersion: v1
kind: Secret
metadata:
name: myregcred
namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
{{- end}}
values.yaml
imageCredentials:
registry: quay.io
username: someone
password: sillyness
email: someone@host.com
_helpers.tpl
{{- define "imagePullSecret" }}
{{- with .Values.imageCredentials }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
When i run this helm chart, i get below error
Error: INSTALLATION FAILED: template: secrets/templates/_helpers.tpl:2:16: executing "imagePullSecret" at <.Values.imageCredentials>: nil pointer evaluating interface {}.imageCredentials
I don't know what I am doing wrong here.
### Response: When you reference the named template "imagePullSecret" inside the range, the context "." you are providing refers to the current loop element, which does not have the "Values" attribute.
Try providing the root context instead:
{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
apiVersion: v1
kind: Secret
metadata:
name: myregcred
namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" $ }}
---
{{- end}}
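This is standard Go text/template behaviour that Helm inherits: inside `range`, "." is rebound to the current element, while "$" always points at the root context. A small standalone sketch (plain text/template, not Helm itself) makes the difference visible:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render shows that inside {{range}}, "." is the current element,
// while "$" still refers to the root data passed to Execute.
func render() string {
	t := template.Must(template.New("demo").Parse(
		`{{range .Items}}{{.}} sees root={{$.Name}}; {{end}}`))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]interface{}{
		"Name":  "root",
		"Items": []string{"a", "b"},
	}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render()) // a sees root=root; b sees root=root;
}
```

Passing `$` to the named template therefore hands it the root context, which does have `.Values`.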
###Instruction: I want to create a service account in Kubernetes v1.25.5 using this tutorial https://devopscube.com/kubernetes-api-access-service-account/
Full log:
root@node1:~# kubectl create namespace devops-tools
namespace/devops-tools created
root@node1:~# kubectl create serviceaccount api-service-account -n devops-tools
serviceaccount/api-service-account created
root@node1:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: api-service-account
namespace: devops-tools
EOF
Warning: resource serviceaccounts/api-service-account is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/api-service-account configured
root@node1:~# cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: api-cluster-role
namespace: devops-tools
rules:
- apiGroups:
- ""
- apps
- autoscaling
- batch
- extensions
- policy
- rbac.authorization.k8s.io
resources:
- pods
- componentstatuses
- configmaps
- daemonsets
- deployments
- events
- endpoints
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF
clusterrole.rbac.authorization.k8s.io/api-cluster-role created
root@node1:~# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
bgpconfigurations crd.projectcalico.org/v1 false BGPConfiguration
bgppeers crd.projectcalico.org/v1 false BGPPeer
blockaffinities crd.projectcalico.org/v1 false BlockAffinity
caliconodestatuses crd.projectcalico.org/v1 false CalicoNodeStatus
clusterinformations crd.projectcalico.org/v1 false ClusterInformation
felixconfigurations crd.projectcalico.org/v1 false FelixConfiguration
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
globalnetworksets crd.projectcalico.org/v1 false GlobalNetworkSet
hostendpoints crd.projectcalico.org/v1 false HostEndpoint
ipamblocks crd.projectcalico.org/v1 false IPAMBlock
ipamconfigs crd.projectcalico.org/v1 false IPAMConfig
ipamhandles crd.projectcalico.org/v1 false IPAMHandle
ippools crd.projectcalico.org/v1 false IPPool
ipreservations crd.projectcalico.org/v1 false IPReservation
kubecontrollersconfigurations crd.projectcalico.org/v1 false KubeControllersConfiguration
networkpolicies crd.projectcalico.org/v1 true NetworkPolicy
networksets crd.projectcalico.org/v1 true NetworkSet
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
flowschemas flowcontrol.apiserver.k8s.io/v1beta2 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta2 false PriorityLevelConfiguration
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
root@node1:~# cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: api-cluster-role-binding
subjects:
- namespace: devops-tools
kind: ServiceAccount
name: api-service-account
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: api-cluster-role
EOF
clusterrolebinding.rbac.authorization.k8s.io/api-cluster-role-binding created
root@node1:~# kubectl auth can-i get pods --as=system:serviceaccount:devops-tools:api-service-account
yes
root@node1:~# kubectl auth can-i delete deployments --as=system:serviceaccount:devops-tools:api-service-account
yes
root@node1:~# kubectl get serviceaccount api-service-account -o=jsonpath='{.secrets[0].name}' -n devops-tools
root@node1:~#
root@node1:~# kubectl get secrets api-service-account-token-pgtrr -o=jsonpath='{.data.token}' -n devops-tools | base64 -D
base64: invalid option -- 'D'
Try 'base64 --help' for more information.
Error from server (NotFound): secrets "api-service-account-token-pgtrr" not found
root@node1:~# kubectl get endpoints | grep kubernetes
kubernetes 192.168.1.24:6443 3d22h
As you can see I should get the token secret but I get empty result. Do you know what should be the correct step?
### Response: The blog you are following is from June 5, 2021 and targets an older Kubernetes release, while you are running 1.25. Since the 1.24 release, Kubernetes no longer auto-creates a token Secret when you create a ServiceAccount, which is why your lookup returns an empty result.
Steps to follow
If the ServiceAccount does not exist yet, create it with
kubectl create sa <Service account name>
Then create a YAML file like the one below, updating the Secret name and the ServiceAccount name
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: token-secret
annotations:
kubernetes.io/service-account.name: "<Service Account name>"
kubectl apply -f <File created above>.yaml
Extra option
You can also use kubectl directly if you just want to create a token
kubectl create token jwt-tk --duration=999999h
A few references:
Release note & Pull request
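As an aside, the `base64: invalid option -- 'D'` in your log is just a flag difference: GNU coreutils on Linux decodes with lowercase `-d`, while `-D` is the macOS spelling. A quick round-trip check (the `my-token` value is only an example string):

```shell
# encode then decode again; GNU base64 decodes with -d, not -D
echo -n 'my-token' | base64 | base64 -d
```

Once the token Secret above exists, the same pipeline with `kubectl get secrets ... | base64 -d` will print the token.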
###Instruction: My query is pretty much what the title says: I have a local file, say file.txt, and I want to copy it into pod1's container container1.
If I were to do it using kubectl, the appropriate command would be:
kubectl cp file.txt pod1:file.txt -c container1
However, how do I do it using the Go client of kubectl?
I tried 2 ways but none of them worked :
import (
"fmt"
"context"
"log"
"os"
"path/filepath"
g "github.com/sdslabs/katana/configs"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
//"k8s.io/kubectl/pkg/cmd/exec"
)
func CopyIntoPod(namespace string, podName string, containerName string, srcPath string, dstPath string) {
// Create a Kubernetes client
config, err := GetKubeConfig()
if err != nil {
log.Fatal(err)
}
client, err := kubernetes.NewForConfig(config)
if err != nil {
log.Fatal(err)
}
// Build the command to execute
cmd := []string{"cp", srcPath, dstPath}
// Use the PodExecOptions struct to specify the options for the exec request
options := v1.PodExecOptions{
Container: containerName,
Command: cmd,
Stdin: false,
Stdout: true,
Stderr: true,
TTY: false,
}
log.Println("Options set!")
// Use the CoreV1Api.Exec method to execute the command inside the container
req := client.CoreV1().RESTClient().Post().
Namespace(namespace).
Name(podName).
Resource("pods").
SubResource("exec").
VersionedParams(&options, metav1.ParameterCodec)
log.Println("Request generated")
exec, err := req.Stream(context.TODO())
if err != nil {
log.Fatal(err)
}
defer exec.Close()
// Read the response from the exec command
var result []byte
if _, err := exec.Read(result); err != nil {
log.Fatal(err)
}
fmt.Println("File copied successfully!")
}
This gave me the error message :
no kind is registered for the type v1.PodExecOptions in scheme "pkg/runtime/scheme.go:100"
I couldn't figure it out, so I tried another way :
type PodExec struct {
RestConfig *rest.Config
*kubernetes.Clientset
}
func NewPodExec(config *rest.Config, clientset *kubernetes.Clientset) *PodExec {
config.APIPath = "/api" // Make sure we target /api and not just /
config.GroupVersion = &schema.GroupVersion{Version: "v1"} // this targets the core api groups so the url path will be /api/v1
config.NegotiatedSerializer = serializer.WithoutConversionCodecFactory{CodecFactory: scheme.Codecs}
return &PodExec{
RestConfig: config,
Clientset: clientset,
}
}
func (p *PodExec) PodCopyFile(src string, dst string, containername string, podNamespace string) (*bytes.Buffer, *bytes.Buffer, *bytes.Buffer, error) {
ioStreams, in, out, errOut := genericclioptions.NewTestIOStreams()
copyOptions := cp.NewCopyOptions(ioStreams)
copyOptions.Clientset = p.Clientset
copyOptions.ClientConfig = p.RestConfig
copyOptions.Container = containername
copyOptions.Namespace = podNamespace
err := copyOptions.Run()
if err != nil {
return nil, nil, nil, fmt.Errorf("could not run copy operation: %v", err)
}
return in, out, errOut, nil
}
However, there were some issues with the copyOptions.Run() command, it tried to look for o.args[0] and o.args[0] inside copyOptions but o is not imported so it couldn't be modified.
Context : https://pkg.go.dev/k8s.io/kubectl/pkg/cmd/cp#CopyOptions.Run
So, now I'm really lost and confused. Any help would be appreciated. Thanks.
Edit : I did think of a viable method where we can just call cmd.exec() and run the kubectl cp command directly but it seems kinda hacky and I'm not sure whether it would work, any thoughts?
### Response: Here's how I finally managed to do it:
package main
import (
"context"
"fmt"
"os"
"path/filepath"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/remotecommand"
)
func CopyIntoPod(podName string, namespace string, containerName string, srcPath string, dstPath string) {
// Get the default kubeconfig file
kubeConfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
// Create a config object using the kubeconfig file
config, err := clientcmd.BuildConfigFromFlags("", kubeConfig)
if err != nil {
fmt.Printf("Error creating config: %s\n", err)
return
}
// Create a Kubernetes client
client, err := kubernetes.NewForConfig(config)
if err != nil {
fmt.Printf("Error creating client: %s\n", err)
return
}
// Open the file to copy
localFile, err := os.Open(srcPath)
if err != nil {
fmt.Printf("Error opening local file: %s\n", err)
return
}
defer localFile.Close()
pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
if err != nil {
fmt.Printf("Error getting pod: %s\n", err)
return
}
// Find the container in the pod
var container *corev1.Container
for _, c := range pod.Spec.Containers {
if c.Name == containerName {
container = &c
break
}
}
if container == nil {
fmt.Printf("Container not found in pod\n")
return
}
// Create a stream to the container
req := client.CoreV1().RESTClient().Post().
Resource("pods").
Name(podName).
Namespace(namespace).
SubResource("exec").
Param("container", containerName)
req.VersionedParams(&corev1.PodExecOptions{
Container: containerName,
Command: []string{"bash", "-c", "cat > " + dstPath},
Stdin: true,
Stdout: true,
Stderr: true,
}, scheme.ParameterCodec)
exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
if err != nil {
fmt.Printf("Error creating executor: %s\n", err)
return
}
// Create a stream to the container
err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
Stdin: localFile,
Stdout: os.Stdout,
Stderr: os.Stderr,
Tty: false,
})
if err != nil {
fmt.Printf("Error streaming: %s\n", err)
return
}
fmt.Println("File copied successfully")
}
| My query is pretty much what the title says, I have a local file say file.txt and I want to copy it into pod1's container container1.
If I was to do it using kubectl, the appropriate command would be :
kubectl cp file.txt pod1:file.txt -c container1
However, how do I do it using the Go client of kubectl?
I tried 2 ways but none of them worked :
import (
	"fmt"
	"context"
	"log"
	"os"
	"path/filepath"
	g "github.com/sdslabs/katana/configs"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	//"k8s.io/kubectl/pkg/cmd/exec"
)
func CopyIntoPod(namespace string, podName string, containerName string, srcPath string, dstPath string) {
	// Create a Kubernetes client
	config, err := GetKubeConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Build the command to execute
	cmd := []string{"cp", srcPath, dstPath}
	// Use the PodExecOptions struct to specify the options for the exec request
	options := v1.PodExecOptions{
		Container: containerName,
		Command:   cmd,
		Stdin:     false,
		Stdout:    true,
		Stderr:    true,
		TTY:       false,
	}
	log.Println("Options set!")
	// Use the CoreV1Api.Exec method to execute the command inside the container
	req := client.CoreV1().RESTClient().Post().
		Namespace(namespace).
		Name(podName).
		Resource("pods").
		SubResource("exec").
		VersionedParams(&options, metav1.ParameterCodec)
	log.Println("Request generated")
	exec, err := req.Stream(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	defer exec.Close()
	// Read the response from the exec command
	var result []byte
	if _, err := exec.Read(result); err != nil {
		log.Fatal(err)
	}
	fmt.Println("File copied successfully!")
}
This gave me the error message:
no kind is registered for the type v1.PodExecOptions in scheme "pkg/runtime/scheme.go:100"
I couldn't figure it out, so I tried another way:
type PodExec struct {
	RestConfig *rest.Config
	*kubernetes.Clientset
}
func NewPodExec(config *rest.Config, clientset *kubernetes.Clientset) *PodExec {
	config.APIPath = "/api" // Make sure we target /api and not just /
	config.GroupVersion = &schema.GroupVersion{Version: "v1"} // this targets the core api groups so the url path will be /api/v1
	config.NegotiatedSerializer = serializer.WithoutConversionCodecFactory{CodecFactory: scheme.Codecs}
	return &PodExec{
		RestConfig: config,
		Clientset:  clientset,
	}
}
func (p *PodExec) PodCopyFile(src string, dst string, containername string, podNamespace string) (*bytes.Buffer, *bytes.Buffer, *bytes.Buffer, error) {
	ioStreams, in, out, errOut := genericclioptions.NewTestIOStreams()
	copyOptions := cp.NewCopyOptions(ioStreams)
	copyOptions.Clientset = p.Clientset
	copyOptions.ClientConfig = p.RestConfig
	copyOptions.Container = containername
	copyOptions.Namespace = podNamespace
	err := copyOptions.Run()
	if err != nil {
		return nil, nil, nil, fmt.Errorf("could not run copy operation: %v", err)
	}
	return in, out, errOut, nil
}
However, there were some issues with the copyOptions.Run() command: internally it reads o.args[0] and o.args[1], but that args field is unexported, so it can't be populated from outside the package.
Context: https://pkg.go.dev/k8s.io/kubectl/pkg/cmd/cp#CopyOptions.Run
So, now I'm really lost and confused. Any help would be appreciated. Thanks.
Edit: I did think of a viable workaround where we just shell out via os/exec and run the kubectl cp command directly, but it seems kinda hacky and I'm not sure whether it would work. Any thoughts?
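The shell-out idea from the edit above can be sketched as follows. This is a hedged fallback, not a client-go solution: it assumes the kubectl binary is on PATH and already configured for the target cluster, and the function names are made up for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubectlCpArgs builds the argument list for the "shell out to kubectl"
// fallback: kubectl cp <src> <pod>:<dst> -c <container>.
func kubectlCpArgs(srcPath, pod, dstPath, container string) []string {
	return []string{"cp", srcPath, pod + ":" + dstPath, "-c", container}
}

// kubectlCp runs the kubectl binary with those arguments and surfaces
// its combined output on failure.
func kubectlCp(srcPath, pod, dstPath, container string) error {
	cmd := exec.Command("kubectl", kubectlCpArgs(srcPath, pod, dstPath, container)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl cp failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := kubectlCp("file.txt", "pod1", "file.txt", "container1"); err != nil {
		fmt.Println(err) // expected to fail when no kubectl/cluster is available
	}
}
```

It is hacky in the sense that it adds a runtime dependency on the kubectl binary, but it reuses kubectl's own tar-based copy logic rather than reimplementing it.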
|
|
It is possible by using Podman. Although Podman itself is daemonless, its system service exposes a Docker-compatible REST API. On a local machine this can be started via podman system service --time 0 tcp://0.0.0.0:2375.
When running in Kubernetes (or generally in a container) you can use the container image from Quay: quay.io/containers/podman. Start the service in the background and then run your build. Something like this should work:
build:
  image: my-java-builder
  services:
    - name: quay.io/containers/podman:v4.2.1
      alias: docker
      command: ["podman", "system", "service", "--time=0", "tcp://0.0.0.0:2375"]
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    - ./gradlew bootBuildImage
###Instruction: SpringBoot 2.3 introduced a feature to create OCI/Docker images by running ./gradlew bootBuildImage instead of having a Dockerfile and executing docker build .
When building on a Gitlab build server that is running inside a Kubernetes cluster, there is no Docker daemon available though (no docker-in-docker service for security reasons). Instead, images have to be built and uploaded using Google's "Kaniko" tool.
Is it possible to combine both somehow, i.e. use "bootBuildImage" without a running Docker daemon?
|
The reason is somewhat hidden in the ContainerLastStatus field (JSON) of the KubePodInventory table. A query to get all pods killed with reason OOMKilled could be:
KubePodInventory
| where PodStatus != "running"
| extend ContainerLastStatusJSON = parse_json(ContainerLastStatus)
| extend FinishedAt = todatetime(ContainerLastStatusJSON.finishedAt)
| where ContainerLastStatusJSON.reason == "OOMKilled"
| distinct PodUid, ControllerName, ContainerLastStatus, FinishedAt
| order by FinishedAt asc
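Building on the answer above, and assuming the same KubePodInventory schema, a follow-up query can aggregate the kills per controller to spot the worst offenders (the Namespace column name is an assumption here; adjust to your workspace's schema):

```kusto
KubePodInventory
| where PodStatus != "running"
| extend ContainerLastStatusJSON = parse_json(ContainerLastStatus)
| where ContainerLastStatusJSON.reason == "OOMKilled"
| summarize OOMKills = dcount(PodUid) by ControllerName, Namespace
| order by OOMKills desc
```

A controller that appears repeatedly here is a good candidate for raising its memory limits or investigating a leak.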
| ###Instruction: I have a kubernetes cluster in Azure (AKS) with the log analytics enabled. I can see that a lot of pods are being killed by OOMKilled message but I want to troubleshoot this with the log analytics from Azure. My question is how can I track or query, from the log analytics, all the pods that are killed by the OOMKilled reason?
Thanks!
|