###Instruction: I am tasked with building a New Relic chart to show GitLab Runner job count over time.
I am trying to determine what type of object a GitLab Runner job is. Is it a Deployment, a Pod, or a StatefulSet?
Any assistance with visualizing GitLab Runner pods in New Relic would be appreciated.
### Response: As mentioned in "Kubernetes executor for GitLab Runner":
The Kubernetes executor, when used with GitLab CI, connects to the Kubernetes API in the cluster creating a Pod for each GitLab CI Job.
This Pod is made up of, at the very least:
a build container,
a helper container, and
an additional container for each service defined in the .gitlab-ci.yml or config.toml files.
Since those are Pods, you should see them in the New Relic Kubernetes cluster explorer.
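As a starting point for the chart itself, here is a hedged NRQL sketch. It assumes New Relic's Kubernetes integration is reporting K8sPodSample events and that the runner job pods can be identified by their namespace; the namespace name is an assumption, so adjust the WHERE clause to however your runner pods are actually named or labelled:
SELECT uniqueCount(podName) FROM K8sPodSample WHERE namespaceName = 'gitlab-runner' TIMESERIES AUTO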
###Instruction: I have installed Prometheus-adapter along with the default metrics-server that comes with k3s securely on port 443.
Unfortunately, I get no resources when I query custom.metrics.k8s.io
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": []
}
When I look at the logs of Prometheus-adapter I get unable to update list of all metrics: unable to fetch metrics for query ...: x509: certificate is valid for localhost, localhost, not metrics-server.kube-system
How can I resolve this issue?
### Response: To solve this issue, I had to create a separate certificate for both metrics-server and the adapter. The adapter also has an open issue about adding the capability to ignore cert validation, which wasn't merged.
For the two certificate requests (prometheus-adapter and metrics-server) I used the following:
{
  "hosts": [
    "prometheus-adapter",
    "prometheus-adapter.monitoring",
    "prometheus-adapter.monitoring.svc",
    "prometheus-adapter.monitoring.pod",
    "prometheus-adapter.monitoring.svc.cluster.local",
    "prometheus-adapter.monitoring.pod.cluster.local",
    "<pod ip>",
    "<service ip>"
  ],
  "CN": "prometheus-adapter.monitoring.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
{
  "hosts": [
    "metrics-server",
    "metrics-server.kube-system",
    "metrics-server.kube-system.svc",
    "metrics-server.kube-system.pod",
    "metrics-server.kube-system.svc.cluster.local",
    "metrics-server.kube-system.pod.cluster.local",
    "<service ip>",
    "<pod ip>"
  ],
  "CN": "metrics-server.kube-system",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
For the CA, you can create your own certificate authority or use the Kubernetes signers as indicated here.
The only point worth noting is that if you use either of the signers, you have to mount the CA bundle into your deployments yourself.
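If you go the self-managed CA route, here is a hedged cfssl sketch for issuing the two serving certificates from the CSR JSONs above. All file, secret and namespace names are illustrative, and depending on your cfssl setup you may need a -config/-profile that enables the server auth usage:
# create the CA once
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# issue a cert/key pair per component from its CSR JSON
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem metrics-server-csr.json | cfssljson -bare metrics-server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem prometheus-adapter-csr.json | cfssljson -bare prometheus-adapter
# store each pair in a Secret that the deployment mounts at /var/run/serving-cert
kubectl -n kube-system create secret tls metrics-server-tls --cert=metrics-server.pem --key=metrics-server-key.pem
kubectl -n monitoring create secret tls prometheus-adapter-tls --cert=prometheus-adapter.pem --key=prometheus-adapter-key.pem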
Finally, mount the TLS keys and CA bundle into your deployment:
extraArguments:
- --tls-cert-file=/var/run/serving-cert/tls.crt
- --tls-private-key-file=/var/run/serving-cert/tls.key
- --client-ca-file=/etc/ssl/certs/ca.crt
###Instruction: I have an Apple MacBook Pro with an M1 chip, where I have a local Kubernetes cluster running through Kind. The thing is, I don't understand how Kind deals with Docker images from different platforms/architectures. I have an application installed via Helm that points to some Docker images with linux/amd64 architecture, and when I install it through Helm (helm install -n [namespace] [repo] [app]), it works like a charm (I can actually look at the downloaded images and see that their architecture is amd64). However, when I download the same image to my local Docker registry with docker pull [my-image], and then try to load it into the Kind cluster with kind load docker-image [my-image] [my-cluster], it doesn't work and I get the following error:
Image: "[my-image]" with ID "sha256:d3a8..." not yet present on node "[my-cluster]-control-plane", loading...
ERROR: failed to load image: command "docker exec --privileged -i [my-cluster]-control-plane ctr --namespace=k8s.io images import --digests --snapshotter=overlayfs -" failed with error: exit status 1
Command Output: ctr: image might be filtered out
And after googling the error a little bit, I could see that it is due to the mismatch of architectures between the image and the kind cluster.
Could someone explain to me the reason for these different behaviors?
### Response: After asking in the Kind Slack channel in the Kubernetes workspace I could finally find the answer to my question: whole thread here.
TL;DR: Kind was unable to load images whose architecture doesn't match the host architecture because it lacked a required --all-platforms argument in the call to the ctr tool that kind load docker-image uses to import Docker images into the cluster. A PR to fix the issue was filed, and it will be fixed in future releases of Kind.
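Until a Kind release with that fix is available, one possible workaround (a sketch, assuming the image is published as a multi-arch manifest) is to pull the variant that matches the Kind node's architecture before loading it:
# on an M1 host the Kind node runs as linux/arm64, so pull that variant explicitly
docker pull --platform=linux/arm64 [my-image]
kind load docker-image [my-image] --name [my-cluster]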
###Instruction: I am looking for the Python Kubernetes client equivalent of
kubectl get deploy -n app -l team=blue
I'm aware of the function deployment = api.read_namespaced_deployment(name='foo', namespace='bar'), but how can I do the same thing with a label?
### Response: read_namespaced_deployment expects a deployment name, so a label selector does not make sense there; you should use list_namespaced_deployment instead.
You can use label_selector:
:param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything.
from kubernetes import client, config

config.load_kube_config()
kube_client = client.AppsV1Api()
resp = kube_client.list_namespaced_deployment(namespace="app", label_selector="team=blue")
for deployment in resp.items:
    print(deployment)
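If you only want output closer to kubectl get deploy (names rather than whole objects), a small sketch:
for deployment in resp.items:
    print(deployment.metadata.name, deployment.status.ready_replicas)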
To see the labels on all deployments:
kubectl get deployment --show-labels
###Instruction: Forgive me for asking a stupid question but I can't seem to find anywhere in the Kubernetes API reference how to query logs via the REST API if there's more than one container running inside the pod?
cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/log
Returns:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"a
container name must be specified for pod my-app-1,
choose one of: [nginx php-fpm]","reason":"BadRequest","code":400}
I tried:
cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/nginx/log
and it results in an error that the resource can't be found.
How do I specify the container name when making an HTTP request to the API?
### Response: Figured it out - I needed to specify the container using a query parameter:
?container=nginx
Working example:
curl -k -H "Authorization: Bearer my-super-secret-token" "https://kubernetes/api/v1/namespaces/default/pods/my-app-1/log?container=nginx"
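The log endpoint also accepts other query parameters alongside container, such as tailLines, follow and previous. If kubectl is available, the same request can be made through its authenticated transport; for example:
kubectl get --raw "/api/v1/namespaces/default/pods/my-app-1/log?container=nginx&tailLines=100"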
###Instruction: I was wondering if it is really relevant to set “requests” (CPU/MEM) values if I’m not using HPA?
If those values are not used to scale pods up or down, what is the point?
### Response: It's fine and it will work if you don't set requests (CPU/MEM) on your workloads.
But consider this scenario: suppose you have 1-2 nodes with a capacity of 1 GB each and you have not set any requests.
An application that is already running uses about half of a node, around 0.5 GB. Your new app now needs 1 GB to start, but K8s may still schedule its Pods onto that node because, without requests, the scheduler is not aware of the minimum the application needs to start.
After that, whatever happens, we call it a crash.
If you have extra resources in the cluster, set affinity, and have confidence in the application code, you can go without setting requests (not best practice).
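For reference, a minimal sketch of what setting requests looks like on a container; the values are only examples, and requests inform scheduling whether or not an HPA is involved:
resources:
  requests:
    cpu: 250m
    memory: 256Mi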
###Instruction: I have a .NET Core Web API hosted in Kubernetes as a Pod. It is also exposed as a Service.
I have created a Dev SSL certificate and it produced an aspnetapp.pfx file.
Here is a snippet of my Docker file:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 443
ENV ASPNETCORE_URLS=https://+:443
ENV ASPNETCORE_HTTPS_PORT=443
ENV ASPNETCORE_Kestrel__Certificates__Default__Password={password}
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=APIGateway/Certificates/aspnetapp.pfx
When I run the app in Kubernetes I receive an error in the container logs, and the container is failing to start:
error:2006D002:BIO routines:BIO_new_file:system lib
I know it's able to find the SSL certificate but it's throwing the above error.
Please help!:)
### Response: I just ran into this same problem, and even though things were working fine previously, something was updated (possibly .NET 6.0.402) which caused it.
What I noticed is that my exported dev cert .pfx in the Docker container had its permissions set to:
-rw------- 1 root root 2383 Oct 18 14:40 cert.pfx
In my Dockerfile, I export the dotnet dev cert and run a chmod to add read permissions for everyone:
RUN dotnet dev-certs https --clean && dotnet dev-certs https --export-path /app/publish/cert.pfx -p {password}
RUN chmod 644 /app/publish/cert.pfx
This resulted in permissions which were the same as my appsettings files:
-rw-r--r-- 1 root root 535 Oct 18 14:11 appsettings.Development.json
-rw-r--r-- 1 root root 331 Sep 27 18:13 appsettings.json
-rw-r--r-- 1 root root 2383 Oct 18 14:40 cert.pfx
That fixed the error for me.
###Instruction: I want to access a value whose key name contains a dot with Helm, to use it in a ConfigMap.
The value is something like this:
valuenum:
joji.json: zok
I want to use it in the ConfigMap with Helm like this:
{{ toYaml .Values.valuenum.joji.json }}
It returns a syntax error.
I could not find a fix for it.
### Response: I found the answer myself: when using index, we can look up nested keys that contain dots by quoting them.
{{ index .Values.valuenum "joji.json" }}
See the Helm docs on index and the other template functions for more.
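Putting it together, a hedged sketch of how the lookup might sit inside a ConfigMap template - the ConfigMap name is made up, while the key and value come from the question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  joji.json: {{ index .Values.valuenum "joji.json" | quote }}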
###Instruction: I'm developing a Kubernetes operator with a very simple API and a controller.
I would like to cap the number of CustomResources that can belong to the CustomResourceDefinition the operator defines. (Specifically, I would like to allow just one CR; if one is already defined, the operator should log an error message and skip reconciling any others.)
If I generate the API, a KindList struct is generated by default, and if I understand correctly, it should keep track of the CRs already defined for my CRD. It is also added to the scheme by default. See the example from the kubebuilder documentation:
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// CronJob is the Schema for the cronjobs API
type CronJob struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   CronJobSpec   `json:"spec,omitempty"`
    Status CronJobStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// CronJobList contains a list of CronJob
type CronJobList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []CronJob `json:"items"`
}

func init() {
    SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
Unfortunately, I cannot find out how to access this list from the controller. I have tried the following, but r.Get cannot accept the list:
cronjob/cronjob_controller.go

package controllers

import (
    "context"

    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    cronjobv1alpha1 "github.com/example/cronjob-operator/api/v1alpha1"
)

func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    // TODO(user): your logic here
    cronjob := cronjobv1alpha1.Memcached{}
    if err := r.Get(ctx, req.NamespacedName, &cronjob); err != nil {
        return ctrl.Result{}, err
    }

    cronjobList := cachev1alpha1.MemcachedList{}
    if err := r.Get(ctx, req.NamespacedName, &cronjobList); err != nil {
        return ctrl.Result{}, err
    }

    return ctrl.Result{}, nil
}
If I get the list, I can validate its length and either run or skip the reconcile.
Is this even a correct approach? Is there a better way to achieve my goal? Should I create a webhook instead?
### Response: Assuming you are using the default sigs.k8s.io/controller-runtime/pkg/client's client.Client, you get access to the List() function.
In your case r.List(...).
Usage:
case 1: list by label
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    cronjobList := cronjobv1alpha1.CronJobList{}
    err := r.List(ctx, &cronjobList, client.MatchingLabels{"foo": "bar"})
    if err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}
case 2: list all in namespace
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    cronjobList := cronjobv1alpha1.CronJobList{}
    err := r.List(ctx, &cronjobList, client.InNamespace("default"))
    if err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}
case 3: list by field i.e. metadata.name
// in your Reconciler Setup function create an index
func SetupWithManager(mgr ctrl.Manager) error {
    r := &CronJobReconciler{
        Client: mgr.GetClient(),
    }
    if err := mgr.GetFieldIndexer().IndexField(context.TODO(), &cronjobv1alpha1.CronJob{}, "metadata.name", NameIndexer); err != nil {
        return err
    }
    return ctrl.NewControllerManagedBy(mgr).
        For(&cronjobv1alpha1.CronJob{}).
        Complete(r)
}

func NameIndexer(o client.Object) []string {
    m := o.(*cronjobv1alpha1.CronJob)
    return []string{m.ObjectMeta.Name}
}

func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    cronjobList := cronjobv1alpha1.CronJobList{}
    err := r.List(ctx, &cronjobList, client.MatchingFields{"metadata.name": "test"})
    if err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil
}
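Tying this back to the original goal of allowing only one CR, here is a hedged sketch of the check inside Reconcile, built on the listing shown above; the log message and the decision to silently skip are illustrative choices, not the only way to handle it:
if err := r.List(ctx, &cronjobList); err != nil {
    return ctrl.Result{}, err
}
if len(cronjobList.Items) > 1 {
    // more than one CR exists for this CRD: report it and skip reconciling
    log.FromContext(ctx).Info("more than one CronJob custom resource found; skipping reconcile", "count", len(cronjobList.Items))
    return ctrl.Result{}, nil
}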
###Instruction: My ASP.NET Core web application has redirection applied in Startup.cs as:
app.UseHttpsRedirection();
The problem arises when I host it on Kubernetes, where this redirection does not happen. The app opens on both the HTTP and HTTPS URLs.
I am hosting the app on a local Kubernetes cluster on my PC running Kali Linux.
This is the Ingress which I am using:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: first-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: ssl
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: first-service
port:
number: 8080
The Ingress only opens 2 ports, which are 80 and 443.
How should the redirection be done in such a case?
The deployment is:
apiVersion: apps/v1
kind: Deployment
metadata:
name: first-dep
labels:
app: aspnet-core-app
spec:
replicas: 1
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: csimpleweb
image: simpleweb
imagePullPolicy: Never
ports:
- containerPort: 80
The Service is:
apiVersion: v1
kind: Service
metadata:
name: first-service
spec:
type: NodePort
selector:
component: web
ports:
- port: 8080
targetPort: 80
So in summary I want to ask the following:
How does redirection work in k8s + ASP.NET Core? The redirection does not work in my case, where I have applied it in the app.
Should the redirection be done on the Ingress instead, and if so, how? In that case I think we have to remove app.UseHttpsRedirection() from the app.
Which Docker container ports, which the pods will expose, do we have to open, and how?
Thank you
### Response: According to my research I found the following way.
Remove the redirection from the ASP.NET Core app when hosting it on Kubernetes.
Apply the redirect on the Ingress itself by adding the following 2 annotations to it:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
So my ingress code becomes:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - secretName: ssl
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-service
                port:
                  number: 8080
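A quick way to confirm the redirect works is to hit the plain HTTP endpoint and look for a redirect response; the address is a placeholder, and the exact status code depends on the ingress-nginx version (typically 308):
curl -I http://<ingress-address>/
# expect a permanent redirect with a Location: https://... header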
Let me know your thoughts. If you have a better answer then kindly add a new answer to the question.
###Instruction: I'm using fluent-bit to collect logs and pass them to fluentd for processing in a Kubernetes environment. Fluent-bit instances are controlled by a DaemonSet and read logs from Docker containers.
[INPUT]
Name tail
Path /var/log/containers/*.log
Parser docker
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
There is a fluent-bit service also running
Name: monitoring-fluent-bit-dips
Namespace: dips
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=fluent-bit-dips
app.kubernetes.io/version=1.8.10
helm.sh/chart=fluent-bit-0.19.6
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: dips
Selector: app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=fluent-bit-dips
Type: ClusterIP
IP Families: <none>
IP: 10.43.72.32
IPs: <none>
Port: http 2020/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.144:2020,10.42.1.155:2020,10.42.2.186:2020 + 1 more...
Session Affinity: None
Events: <none>
Fluentd service description is as below
Name: monitoring-logservice
Namespace: dips
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=logservice
app.kubernetes.io/version=1.9
helm.sh/chart=logservice-0.1.2
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: dips
Selector: app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=logservice
Type: ClusterIP
IP Families: <none>
IP: 10.43.44.254
IPs: <none>
Port: http 24224/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.143:24224
Session Affinity: None
Events: <none>
But fluent-bit logs don't reach fluentd, and I get the following error:
[error] [upstream] connection #81 to monitoring-fluent-bit-dips:24224 timed out after 10 seconds
I tried several things like;
re-deploying fluent-bit pods
re-deploy fluentd pod
Upgrade fluent-bit version from 1.7.3 to 1.8.10
This is a Kubernetes environment where fluent-bit was able to communicate with fluentd at a very early stage of the deployment. Apart from that, the same fluent versions work when I deploy locally with a docker-desktop environment.
My guesses are
fluent-bit cannot manage the amount of log process
fluent services are unable to communicate once the services are restarted
Does anyone have experience with this, or any idea how to debug this issue more deeply?
Update: the following is the description of the running fluentd pod.
Name: monitoring-logservice-5b8864ffd8-gfpzc
Namespace: dips
Priority: 0
Node: sl-sy-k3s-01/10.16.1.99
Start Time: Mon, 29 Nov 2021 13:09:13 +0530
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/name=logservice
pod-template-hash=5b8864ffd8
Annotations: kubectl.kubernetes.io/restartedAt: 2021-11-29T12:37:23+05:30
Status: Running
IP: 10.42.0.143
IPs:
IP: 10.42.0.143
Controlled By: ReplicaSet/monitoring-logservice-5b8864ffd8
Containers:
logservice:
Container ID: containerd://102483a7647fd2f10bead187eddf69aa4fad72051d6602dd171e1a373d4209d7
Image: our.private.repo/dips/logservice/splunk:1.9
Image ID: our.private.repo/dips/logservice/splunk@sha256:531f15f523a251b93dc8a25056f05c0c7bb428241531485a22b94896974e17e8
Ports: 24231/TCP, 24224/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 29 Nov 2021 13:09:14 +0530
Ready: True
Restart Count: 0
Liveness: exec [/bin/healthcheck.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [/bin/healthcheck.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SOME_ENV_VARS
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from monitoring-logservice-token-g9kwt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
monitoring-logservice-token-g9kwt:
Type: Secret (a volume populated by a Secret)
SecretName: monitoring-logservice-token-g9kwt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
### Response: https://docs.fluentbit.io/manual/pipeline/filters/kubernetes
filters: |
  [FILTER]
      Name        kubernetes
      Match       kube.*
      Kube_URL    https://kubernetes.default:443
      tls.verify  Off
In my case, the issue was a Kubernetes API server SSL error.
###Instruction: I'm using HashiCorp Vault in Kubernetes. I'm trying to mount a secret file into the main folder where my application resides. It would look like this: /usr/share/nginx/html/.env, while the application files are in /usr/share/nginx/html. But the container is not starting because of that. I suspect that /usr/share/nginx/html was overwritten by Vault (annotation: vault.hashicorp.com/secret-volume-path). How can I mount only the file /usr/share/nginx/html/.env?
My annotations:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: configs/data/app/dev
vault.hashicorp.com/agent-inject-template-.env: |
  {{- with secret (print "configs/data/app/dev") -}}{{- range $k, $v := .Data.data -}}
  {{ $k }}={{ $v }}
  {{ end }}{{- end -}}
vault.hashicorp.com/role: app
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html
### Response: I tried to replicate the use case, but I got an error:
2022/10/21 06:42:12 [error] 29#29: *9 directory index of "/usr/share/nginx/html/" is forbidden, client: 20.1.48.169, server: localhost, request: "GET / HTTP/1.1", host: "20.1.55.62:80"
So it seems like Vault changed the directory permissions as well when it created .env in that path. Here is the config I used:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: kv/develop/us-west-2/app1-secrets
vault.hashicorp.com/agent-inject-template-.env: |
  "{{ with secret "kv/develop/us-west-2/app1-secrets" }}
  {{ range $k, $v := .Data.data }}
  {{ $k }} = "{{ $v }}"
  {{ end }}
  {{ end }} "
vault.hashicorp.com/agent-limits-ephemeral: ""
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html/
vault.hashicorp.com/agent-inject-file-.env: .env
vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
vault.hashicorp.com/role: rolename
The workaround was to override the command of the target container; for this use case, I used nginx:
command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
Here is the complete example, with dummy values for my-app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debug-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        vault.hashicorp.com/agent-init-first: "true"
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-.env: kv/my-app/develop/us-west-2/develop-my-app
        vault.hashicorp.com/agent-inject-template-.env: |
          "{{ with secret "kv/my-app/develop/us-west-2/develop-my-app" }}
          {{ range $k, $v := .Data.data }}
          {{ $k }} = "{{ $v }}"
          {{ end }}
          {{ end }} "
        vault.hashicorp.com/agent-limits-ephemeral: ""
        vault.hashicorp.com/secret-volume-path: /vault/secret/
        vault.hashicorp.com/agent-inject-file-.env: .env
        vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
        vault.hashicorp.com/role: my-app-develop-my-app
    spec:
      serviceAccountName: develop-my-app
      containers:
        - name: debug
          image: nginx
          command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
###Instruction: The Keycloak operator uses Quarkus: https://github.com/keycloak/keycloak/tree/main/operator
In application.properties (https://github.com/keycloak/keycloak/blob/main/operator/src/main/resources/application.properties) we can set environment variables:
https://quarkus.io/guides/deploying-to-kubernetes#environment-variables-from-keyvalue-pairs
For example:
quarkus.kubernetes.env.vars.kc-hostname=localhost
quarkus.kubernetes.env.vars.kc-proxy=edge
quarkus.kubernetes.env.vars.proxy-address-forwarding=true
In the Kubernetes manifests which are generated, these environment variables appear in the operator container:
spec:
containers:
- env:
...
- name: KC_HOSTNAME
value: localhost
- name: PROXY_ADDRESS_FORWARDING
value: "true"
...
- name: KC_PROXY
value: edge
image: keycloak/keycloak-operator:19.0.2
imagePullPolicy: Always
However, I need them to be set in the application container, instead.
Here is another verification of this. The running operator container:
$ kubectl describe pod keycloak-operator --namespace=keycloak
Name: keycloak-operator-6479dbc544-2wl4d
...
Controlled By: ReplicaSet/keycloak-operator-6479dbc544
Containers:
keycloak-operator:
Image: keycloak/keycloak-operator:19.0.2
...
Environment:
KUBERNETES_NAMESPACE: keycloak (v1:metadata.namespace)
KC_HOSTNAME: localhost
PROXY_ADDRESS_FORWARDING: true
OPERATOR_KEYCLOAK_IMAGE: quay.io/keycloak/keycloak:nightly
KC_PROXY: edge
and using the application manifest provided here (https://www.keycloak.org/operator/basic-deployment#_deploying_keycloak) here is the running application pod:
$ kubectl describe pod example-kc --namespace=keycloak
Name: example-kc-0
Namespace: keycloak
...
Containers:
keycloak:
Container ID:
Image: quay.io/keycloak/keycloak:nightly
...
Environment:
KC_CACHE_STACK: kubernetes
KC_HEALTH_ENABLED: true
KC_CACHE: ispn
KC_DB: postgres
KC_DB_URL_HOST: postgres-db
KC_DB_USERNAME: <set to the key 'username' in secret 'keycloak-db-secret'> Optional: false
KC_DB_PASSWORD: <set to the key 'password' in secret 'keycloak-db-secret'> Optional: false
KEYCLOAK_ADMIN: <set to the key 'username' in secret 'example-kc-initial-admin'> Optional: false
KEYCLOAK_ADMIN_PASSWORD: <set to the key 'password' in secret 'example-kc-initial-admin'> Optional: false
jgroups.dns.query: example-kc-discovery.keycloak
KC_HOSTNAME: test.keycloak.org
KC_HTTPS_CERTIFICATE_FILE: /mnt/certificates/tls.crt
KC_HTTPS_CERTIFICATE_KEY_FILE: /mnt/certificates/tls.key
KC_PROXY: passthrough
Modifying the manifest (https://www.keycloak.org/operator/basic-deployment#_deploying_keycloak) to replace hostname: test.keycloak.org with hostname: localhost does work. But how about for the other environment variables KC_PROXY and PROXY_ADDRESS_FORWARDING? Thank you
### Response: You should use the unsupported resource property. For instance:
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  ...
spec:
  unsupported:
    podTemplate:
      spec:
        containers:
          - name: keycloak
            env:
              - name: PROXY_ADDRESS_FORWARDING
                value: "true"
See kubectl explain Keycloak.spec.unsupported.podTemplate
KIND: Keycloak
VERSION: k8s.keycloak.org/v2alpha1
RESOURCE: podTemplate <Object>
DESCRIPTION:
You can configure that will be merged with the one configured by default by the operator.
...
FIELDS:
metadata <Object>
spec <Object>
###Instruction: Using the SeldonDeployment CRD I want to configure an HPA, following this example.
The Seldon deployment config is:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model
spec:
name: test-deployment
predictors:
- componentSpecs:
- hpaSpec:
maxReplicas: 3
metrics:
- resource:
name: cpu
targetAverageUtilization: 10
type: Resource
minReplicas: 1
spec:
containers:
- image: seldonio/mock_classifier:1.5.0-dev
imagePullPolicy: IfNotPresent
name: classifier
resources:
requests:
cpu: '0.5'
terminationGracePeriodSeconds: 1
graph:
children: []
name: classifier
type: MODEL
name: example
When the above is applied, it generates a Deployment and a declarative HPA (along with other components). The deployment config is identical for the QA, Alpha and Prod environments, leading to the same max/min replica counts everywhere. 1. Can the number of replicas be made configurable per environment, i.e. different max/min replica values for different envs (in the SeldonDeployment CRD config file)?
Note: for other (non-Seldon) deployments, we use declarative HPA for each environment. But this is not possible in this case since the Seldon controller manager reverts any diff between the applied CRD and current (generated from the CRD) deployment.
2. More generalized question: how would we inject dynamic variables in CRD-based deployments?
### Response: I don't think that's a Seldon-specific question.
I think you should define 3 copies of this config, one each for QA, Alpha and Prod. Each copy has different values, and that's it!
The catch is that this solution duplicates the code - your static config is copied 3 times with just the HPA params changed. One of the solutions for that is using Helm charts: you write a chart template that has placeholders for the HPA values, and you write 3 values files. That way you have one chart and 3 values files that dynamically set the values for the HPA.
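A hedged sketch of how that could look with Helm; the value names and file names are made up for illustration, with the chart template referencing values and each environment getting its own values file:
# templates/seldondeployment.yaml (fragment)
hpaSpec:
  minReplicas: {{ .Values.hpa.minReplicas }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}

# values-prod.yaml
hpa:
  minReplicas: 2
  maxReplicas: 6

# install with the per-environment values file:
# helm install seldon-model ./chart -f values-prod.yaml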
You need to first check if the volume is attached to a resource using kubectl get volumeattachments. If your volume is in the list, it means you have a resource, i.e. a pod or deployment, that is attached to that volume. The reason why it's not terminating is because the PVC and PV metadata finalizers are set to kubernetes.io/pvc-protection and kubernetes.io/pv-protection respectively.
Solution 1:
Delete the resources that are attached to/using the volume, i.e. pods, deployments, statefulsets, etc. After you delete them, the stuck PV and PVC will terminate.
Solution 2
If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata finalizers to null as follows:
a) Edit the PV and PVC and delete or set to null the finalizers in the metadata
kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}
b) Simply patch the PV and PVC as shown below:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
Hope it helps.
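If you want to see what is still holding the volume before touching any finalizers, a quick check along these lines usually narrows it down (the PVC name is the one from the question; jq is only used for filtering):
kubectl get volumeattachments | grep {PV_NAME}
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.volumes[]?.persistentVolumeClaim.claimName=="spoc-volume-spoc-ihm-kube-test") | .metadata.namespace + "/" + .metadata.name'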
| ###Instruction: Basically, I have a problem deleting my spoc-volume-spoc-ihm-kube-test PVC I tried with:
kubectl delete -f file.yml
kubectl delete PVC
but I get every time the same Terminating Status. Also, when I delete the PVC the console is stuck in the deleting process.
Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX
Here is the status in my terminal:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
spoc-volume-spoc-ihm-kube-test Terminating pvc-- 10Gi RWX rook-cephfs 3d19h
Thank You for your answers,
Stack Community :)
### Response: You need to first check if the volume is attached to a resource using kubectl get volumeattachments. If your volume is in the list, it means you have a resource, i.e. a pod or deployment, that is attached to that volume. The reason why it's not terminating is because the PVC and PV metadata finalizers are set to kubernetes.io/pvc-protection and kubernetes.io/pv-protection respectively.
Solution 1:
Delete the resources that are attached to/using the volume, i.e. pods, deployments, statefulsets, etc. After you delete them, the stuck PV and PVC will terminate.
Solution 2
If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata finalizers to null as follows:
a) Edit the PV and PVC and delete or set to null the finalizers in the metadata
kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}
b) Simply patch the PV and PVC as shown below:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
Hope it helps.
| Basically, I have a problem deleting my spoc-volume-spoc-ihm-kube-test PVC I tried with:
kubectl delete -f file.yml
kubectl delete PVC
but I get every time the same Terminating Status. Also, when I delete the PVC the console is stuck in the deleting process.
Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX
Here is the status in my terminal:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
spoc-volume-spoc-ihm-kube-test Terminating pvc-- 10Gi RWX rook-cephfs 3d19h
Thank You for your answers,
Stack Community :)
|
|
Your interceptor usage is likely preventing execution of the RPC.
// When a unary interceptor(s) is set on a ClientConn, gRPC
// delegates all unary RPC invocations to the interceptor, and it is the
// responsibility of the interceptor to call invoker to complete the processing
// of the RPC.
https://github.com/grpc/grpc-go/blob/9127159caf5a3879dad56b795938fde3bc0a7eaa/interceptor.go#L31-L34
So your interceptor function should instead look like:
grpc.WithUnaryInterceptor(func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
fmt.Println("Calling GRPC:", method, req, reply, invoker)
return invoker(ctx, method, req, reply, cc, opts...)
}))
| ###Instruction: I am trying to make GRPC work in a digitalocean Kubernetes cluster - but am still to succeed.
PS: Apologies for the long content
I have found some content regarding this but those revolve around some ingress. For me, these are internal services.
I have a .proto defined as such:
syntax = "proto3";
package imgproto;
option go_package = ".;imgproto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
service ImaginaryServer {
rpc Ping(PingPayload) returns (PongPayload) {}
}
message PingPayload {
google.protobuf.Timestamp ts = 1;
}
message PongPayload {
google.protobuf.Timestamp ts = 1;
}
After running proto-gen-go, I populate the implementation with:
type ImaginaryServerImpl struct {
imgproto.UnimplementedImaginaryServer
}
func (s *ImaginaryServerImpl) Ping(_ context.Context, in *imgproto.PingPayload) (*imgproto.PongPayload, error) {
fmt.Printf("ImaginaryServerImpl.Ping: %v\n", in)
return &imgproto.PongPayload{
Ts: timestamppb.New(time.Now()),
}, nil
}
Create and register it to a GRPC server:
grpcServer := grpc.NewServer()
imgproto.RegisterImaginaryServer(grpcServer, &ImaginaryServerImpl{})
And start the server:
grpcListener, err := net.Listen("tcp", fmt.Sprintf(":%d", constants.PORT_GRPC))
if err != nil {
return err
}
go func() {
if err := grpcServer.Serve(grpcListener); err != nil {
fmt.Println("GRPC Server startup failed with", err)
}
}()
<-ctx.Done()
grpcServer.GracefulStop()
I wrote up the client as:
grpcConnection, err := grpc.Dial(
endpoint,
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithUnaryInterceptor(func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
fmt.Println("Calling GRPC:", method, req, reply, invoker)
return nil
}))
if err != nil {
return ctx, err
}
client := pmqproto.NewImaginaryClient(grpcConnection)
fmt.Println(" >>>>>>>>>>> PING:")
pong, pingErr := client.Ping(ctx, &imgproto.PingPayload{Ts: timestamppb.Now()}, grpc.WaitForReady(false))
if pingErr != nil {
fmt.Println(pingErr)
}
fmt.Println(" >>>>>>>>>>> PONG: ", pong.Ts.AsTime().String())
But it looks like the client is returning without actually invoking the RPC.
Log that I am seeing in the client:
>>>>>>>>>>> PING:
Calling GRPC: /imgproto.ImaginaryServer/Ping ts:{seconds:1666113879 nanos:778900352} 0x127aa60
>>>>>>>>>>> PONG: 1970-01-01 00:00:00 +0000 UTC
There are no logs in the server.
My k8s yaml is as such:
apiVersion: v1
kind: Service
metadata:
name: reality
namespace: reality-ns
spec:
type: ClusterIP
selector:
app: reality
ports:
- name: grpc
protocol: TCP
port: 6772
targetPort: 6772
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reality
namespace: reality-ns
labels:
app: reality
spec:
selector:
matchLabels:
app: reality
template:
metadata:
labels:
app: reality
spec:
containers:
- name: reality
image: registry.example.com/binaek/reality
imagePullPolicy: Always
command: ["cmd"]
ports:
- name: grpc
containerPort: 6772
I am not able to locate what I am doing wrong. Desperately need help at this point.
The image uses gcr.io/distroless/base as its base.
### Response: Your interceptor usage is likely preventing execution of the RPC.
// When a unary interceptor(s) is set on a ClientConn, gRPC
// delegates all unary RPC invocations to the interceptor, and it is the
// responsibility of the interceptor to call invoker to complete the processing
// of the RPC.
https://github.com/grpc/grpc-go/blob/9127159caf5a3879dad56b795938fde3bc0a7eaa/interceptor.go#L31-L34
So your interceptor function should instead look like:
grpc.WithUnaryInterceptor(func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
fmt.Println("Calling GRPC:", method, req, reply, invoker)
return invoker(ctx, method, req, reply, cc, opts...)
}))
| I am trying to make GRPC work in a digitalocean Kubernetes cluster - but am still to succeed.
PS: Apologies for the long content
I have found some content regarding this but those revolve around some ingress. For me, these are internal services.
I have a .proto defined as such:
syntax = "proto3";
package imgproto;
option go_package = ".;imgproto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
service ImaginaryServer {
rpc Ping(PingPayload) returns (PongPayload) {}
}
message PingPayload {
google.protobuf.Timestamp ts = 1;
}
message PongPayload {
google.protobuf.Timestamp ts = 1;
}
After running proto-gen-go, I populate the implementation with:
type ImaginaryServerImpl struct {
imgproto.UnimplementedImaginaryServer
}
func (s *ImaginaryServerImpl) Ping(_ context.Context, in *imgproto.PingPayload) (*imgproto.PongPayload, error) {
fmt.Printf("ImaginaryServerImpl.Ping: %v\n", in)
return &imgproto.PongPayload{
Ts: timestamppb.New(time.Now()),
}, nil
}
Create and register it to a GRPC server:
grpcServer := grpc.NewServer()
imgproto.RegisterImaginaryServer(grpcServer, &ImaginaryServerImpl{})
And start the server:
grpcListener, err := net.Listen("tcp", fmt.Sprintf(":%d", constants.PORT_GRPC))
if err != nil {
return err
}
go func() {
if err := grpcServer.Serve(grpcListener); err != nil {
fmt.Println("GRPC Server startup failed with", err)
}
}()
<-ctx.Done()
grpcServer.GracefulStop()
I wrote up the client as:
grpcConnection, err := grpc.Dial(
endpoint,
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithUnaryInterceptor(func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
fmt.Println("Calling GRPC:", method, req, reply, invoker)
return nil
}))
if err != nil {
return ctx, err
}
client := pmqproto.NewImaginaryClient(grpcConnection)
fmt.Println(" >>>>>>>>>>> PING:")
pong, pingErr := client.Ping(ctx, &imgproto.PingPayload{Ts: timestamppb.Now()}, grpc.WaitForReady(false))
if pingErr != nil {
fmt.Println(pingErr)
}
fmt.Println(" >>>>>>>>>>> PONG: ", pong.Ts.AsTime().String())
But it looks like the client is returning without actually invoking the RPC.
Log that I am seeing in the client:
>>>>>>>>>>> PING:
Calling GRPC: /imgproto.ImaginaryServer/Ping ts:{seconds:1666113879 nanos:778900352} 0x127aa60
>>>>>>>>>>> PONG: 1970-01-01 00:00:00 +0000 UTC
There are no logs in the server.
My k8s yaml is as such:
apiVersion: v1
kind: Service
metadata:
name: reality
namespace: reality-ns
spec:
type: ClusterIP
selector:
app: reality
ports:
- name: grpc
protocol: TCP
port: 6772
targetPort: 6772
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reality
namespace: reality-ns
labels:
app: reality
spec:
selector:
matchLabels:
app: reality
template:
metadata:
labels:
app: reality
spec:
containers:
- name: reality
image: registry.example.com/binaek/reality
imagePullPolicy: Always
command: ["cmd"]
ports:
- name: grpc
containerPort: 6772
I am not able to locate what I am doing wrong. Desperately need help at this point.
The image uses gcr.io/distroless/base as its base.
|
|
It is possible from Kubernetes 1.25: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-failure-policy
It only exists for Kubernetes jobs.
In the case of your initial question, the syntax would be:
apiVersion: batch/v1
kind: Job
spec:
podFailurePolicy:
rules:
- action: Count
onExitCodes:
operator: In
values: [1]
- action: FailJob
onExitCodes:
operator: In
values: [2]
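One caveat worth adding: a podFailurePolicy only takes effect when the Job's pod template uses restartPolicy: Never, so a fuller sketch of the Job (name and image are placeholders) would look roughly like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: Count      # exit code 1: counts as an ordinary retryable failure
      onExitCodes:
        operator: In
        values: [1]
    - action: FailJob    # exit code 2: fail the whole Job immediately
      onExitCodes:
        operator: In
        values: [2]
  template:
    spec:
      restartPolicy: Never   # required for podFailurePolicy
      containers:
      - name: main
        image: my-image:latest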
| ###Instruction: Is it possible to apply restartPolicy for the pod based on docker exit code? What I mean is that if a container will exit with code 1 then the pod should be restarted (restartPolicy = Always), but when container will exit with code 2 then pod shouldn't be restarted (restartPolicy = Never)?
I have a program in the container which has 2 exit codes:
exit 1 - where a failure is recoverable (e.g. disconnection from a server as server can be down/restarted),
exit 2 - where a failure is not recoverable (passed wrong argument).
I want Kubernetes to restart the pod when the exit code is 1 but not when it is 2, as there is no chance of recovery.
I think I can trick kubernetes by setting exit code for non-recoverable failure to 0 (normal termination) and exit code for recoverable failure to 1. Then set restartPolicy to onFailure but I don't like this solution.
### Response: It is possible from Kubernetes 1.25: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-failure-policy
It only exists for Kubernetes jobs.
In the case of your initial question, the syntax would be:
apiVersion: batch/v1
kind: Job
spec:
podFailurePolicy:
rules:
- action: Count
onExitCodes:
operator: In
values: [1]
- action: FailJob
onExitCodes:
operator: In
values: [2]
| Is it possible to apply restartPolicy for the pod based on docker exit code? What I mean is that if a container will exit with code 1 then the pod should be restarted (restartPolicy = Always), but when container will exit with code 2 then pod shouldn't be restarted (restartPolicy = Never)?
I have a program in the container which has 2 exit codes:
exit 1 - where a failure is recoverable (e.g. disconnection from a server as server can be down/restarted),
exit 2 - where a failure is not recoverable (passed wrong argument).
I want Kubernetes to restart the pod when the exit code is 1 but not when it is 2, as there is no chance of recovery.
I think I can trick kubernetes by setting exit code for non-recoverable failure to 0 (normal termination) and exit code for recoverable failure to 1. Then set restartPolicy to onFailure but I don't like this solution.
|
|
This is not an anti-pattern. It is just difficult to implement and manage.
Point 1
In a self-hosted cluster you can also have persistent volume storage provisioned through GlusterFS or Ceph. So, you don't always have to use ephemeral storage. So, Point 1 is not fully valid.
DBs are generally created as StatefulSets, where every instance gets its own copy of the data.
Point 2
When your DB cluster horizontally scales, the 'init' container of the new DB pod or a CRD provided by the DB needs to register the 'secondary' DB pod so it becomes part of your DB cluster.
A StatefulSet also needs to run behind a headless service so the IP of each endpoint is known at all times, for cluster health checks, for primary->secondary data sync, and to elect a new primary in case the current primary node goes down.
So, as long as the new pods register themselves with the DB cluster, you will be okay running your DB workload inside a Kubernetes cluster.
Further reading: https://devopscube.com/deploy-postgresql-statefulset/
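To make the headless-service point concrete, a minimal sketch looks like the following (names, image and sizes are placeholders, not a production-ready PostgreSQL setup):
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None                    # headless: DNS resolves to the individual pod IPs
  selector:
    app: postgres
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-headless     # gives pods stable names like postgres-0.postgres-headless
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:              # every replica gets its own PersistentVolume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi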
| ###Instruction: Let's say we are running some services in a Kubernetes cluster and one of them requires a PostgreSQL instance, expected to persist data reliably. Should the DB live in the cluster or be configured separately?
Imagine that the DB is deployed in the cluster. This probably means one of the following:
We need a process for migrating the data to another node in case the current one goes down. This sounds like a non-trivial task. Or:
The node where the DB lives has to be treated in a special way. Horizontal scaling must be constrained to the other nodes and the cluster ceases to be homogeneous. This might be seen as a design flaw, going against the spirit of maintaining disposable, replaceable containers.
Point (1) applies only to self-managed clusters where all the storage we have at our disposal is tied to machines where the nodes run. If we are using a managed cloud, we can use persistent volume claims and a new instance can pick up the data automatically. Still, this means that if the node with the DB is removed, we will suffer a database downtime until a new instance comes up. So point (2) remains valid also for managed K8s offerings.
Therefore I can well understand the argument for keeping the DB outside of Kubernetes. What would some counterarguments look like? There are a lot of official helm charts for various DBs which suggests that people keep their DBs in Kubernetes clusters after all.
Happy to learn some critical thoughts!
### Response: This is not an anti-pattern. It is just difficult to implement and manage.
Point 1
In a self-hosted cluster you can also have persistent volume storage provisioned through GlusterFS or Ceph. So, you don't always have to use ephemeral storage. So, Point 1 is not fully valid.
DBs are generally created as StatefulSets, where every instance gets its own copy of the data.
Point 2
When your DB cluster horizontally scales, the 'init' container of the new DB pod or a CRD provided by the DB needs to register the 'secondary' DB pod so it becomes part of your DB cluster.
A StatefulSet also needs to run behind a headless service so the IP of each endpoint is known at all times, for cluster health checks, for primary->secondary data sync, and to elect a new primary in case the current primary node goes down.
So, as long as the new pods register themselves with the DB cluster, you will be okay running your DB workload inside a Kubernetes cluster.
Further reading: https://devopscube.com/deploy-postgresql-statefulset/
| Let's say we are running some services in a Kubernetes cluster and one of them requires a PostgreSQL instance, expected to persist data reliably. Should the DB live in the cluster or be configured separately?
Imagine that the DB is deployed in the cluster. This probably means one of the following:
We need a process for migrating the data to another node in case the current one goes down. This sounds like a non-trivial task. Or:
The node where the DB lives has to be treated in a special way. Horizontal scaling must be constrained to the other nodes and the cluster ceases to be homogeneous. This might be seen as a design flaw, going against the spirit of maintaining disposable, replaceable containers.
Point (1) applies only to self-managed clusters where all the storage we have at our disposal is tied to machines where the nodes run. If we are using a managed cloud, we can use persistent volume claims and a new instance can pick up the data automatically. Still, this means that if the node with the DB is removed, we will suffer a database downtime until a new instance comes up. So point (2) remains valid also for managed K8s offerings.
Therefore I can well understand the argument for keeping the DB outside of Kubernetes. What would some counterarguments look like? There are a lot of official helm charts for various DBs which suggests that people keep their DBs in Kubernetes clusters after all.
Happy to learn some critical thoughts!
|
|
I am not aware of anything that returns you an example directly. But you can use kubectl explain to get information about the fields of the resource's schema:
e.g.
$ kubectl explain secretproviderclasses.secrets-store.csi.x-k8s.io
To get the fields listed in a recursive way, use:
kubectl explain secretproviderclasses.secrets-store.csi.x-k8s.io --recursive
Some operators also provide examples of their provided CRDs in the alm-examples annotation of their ClusterServiceVersion. So maybe this could be another way for you.
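If you specifically want the schema as JSON rather than the describe output, you can also pull the openAPIV3Schema straight out of the CRD object (jq is only used here for pretty-printing):
kubectl get crd secretproviderclasses.secrets-store.csi.x-k8s.io -o json | jq '.spec.versions[].schema.openAPIV3Schema'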
| ###Instruction: A way to get the schema using as an example secretproviderclasses.secrets-store.csi.x-k8s.io i would use the command kubectl describe crd secretproviderclasses.secrets-store.csi.x-k8s.io and get as a result:
Name: secretproviderclasses.secrets-store.csi.x-k8s.io
Namespace:
Labels: <none>
Annotations: controller-gen.kubebuilder.io/version: v0.9.0
helm.sh/resource-policy: keep
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
Metadata:
Creation Timestamp: 2022-10-11T15:27:15Z
Generation: 1
Managed Fields:
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:controller-gen.kubebuilder.io/version:
f:spec:
f:conversion:
.:
f:strategy:
f:group:
f:names:
f:kind:
f:listKind:
f:plural:
f:singular:
f:scope:
f:versions:
Manager: helm
Operation: Update
Time: 2022-10-11T15:27:15Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:acceptedNames:
f:kind:
f:listKind:
f:plural:
f:singular:
f:conditions:
k:{"type":"Established"}:
.:
f:lastTransitionTime:
f:message:
f:reason:
f:status:
f:type:
k:{"type":"NamesAccepted"}:
.:
f:lastTransitionTime:
f:message:
f:reason:
f:status:
f:type:
Manager: kube-apiserver
Operation: Update
Subresource: status
Time: 2022-10-11T15:27:15Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-10-11T15:27:38Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:helm.sh/resource-policy:
Manager: kubectl-patch
Operation: Update
Time: 2022-10-12T16:02:53Z
Resource Version: 123907610
UID: 4a251e0a-97fc-4369-903f-9aa9a13469c1
Spec:
Conversion:
Strategy: None
Group: secrets-store.csi.x-k8s.io
Names:
Kind: SecretProviderClass
List Kind: SecretProviderClassList
Plural: secretproviderclasses
Singular: secretproviderclass
Scope: Namespaced
Versions:
Name: v1
Schema:
openAPIV3Schema:
Description: SecretProviderClass is the Schema for the secretproviderclasses API
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: SecretProviderClassSpec defines the desired state of SecretProviderClass
Properties:
Parameters:
Additional Properties:
Type: string
Description: Configuration for specific provider
Type: object
Provider:
Description: Configuration for provider name
Type: string
Secret Objects:
Items:
Description: SecretObject defines the desired state of synced K8s secret objects
Properties:
Annotations:
Additional Properties:
Type: string
Description: annotations of k8s secret object
Type: object
Data:
Items:
Description: SecretObjectData defines the desired state of synced K8s secret object data
Properties:
Key:
Description: data field to populate
Type: string
Object Name:
Description: name of the object to sync
Type: string
Type: object
Type: array
Labels:
Additional Properties:
Type: string
Description: labels of K8s secret object
Type: object
Secret Name:
Description: name of the K8s secret object
Type: string
Type:
Description: type of K8s secret object
Type: string
Type: object
Type: array
Type: object
Status:
Description: SecretProviderClassStatus defines the observed state of SecretProviderClass
Properties:
By Pod:
Items:
Description: ByPodStatus defines the state of SecretProviderClass as seen by an individual controller
Properties:
Id:
Description: id of the pod that wrote the status
Type: string
Namespace:
Description: namespace of the pod that wrote the status
Type: string
Type: object
Type: array
Type: object
Type: object
Served: true
Storage: true
Deprecated: true
Deprecation Warning: secrets-store.csi.x-k8s.io/v1alpha1 is deprecated. Use secrets-store.csi.x-k8s.io/v1 instead.
Name: v1alpha1
Schema:
openAPIV3Schema:
Description: SecretProviderClass is the Schema for the secretproviderclasses API
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: SecretProviderClassSpec defines the desired state of SecretProviderClass
Properties:
Parameters:
Additional Properties:
Type: string
Description: Configuration for specific provider
Type: object
Provider:
Description: Configuration for provider name
Type: string
Secret Objects:
Items:
Description: SecretObject defines the desired state of synced K8s secret objects
Properties:
Annotations:
Additional Properties:
Type: string
Description: annotations of k8s secret object
Type: object
Data:
Items:
Description: SecretObjectData defines the desired state of synced K8s secret object data
Properties:
Key:
Description: data field to populate
Type: string
Object Name:
Description: name of the object to sync
Type: string
Type: object
Type: array
Labels:
Additional Properties:
Type: string
Description: labels of K8s secret object
Type: object
Secret Name:
Description: name of the K8s secret object
Type: string
Type:
Description: type of K8s secret object
Type: string
Type: object
Type: array
Type: object
Status:
Description: SecretProviderClassStatus defines the observed state of SecretProviderClass
Properties:
By Pod:
Items:
Description: ByPodStatus defines the state of SecretProviderClass as seen by an individual controller
Properties:
Id:
Description: id of the pod that wrote the status
Type: string
Namespace:
Description: namespace of the pod that wrote the status
Type: string
Type: object
Type: array
Type: object
Type: object
Served: true
Storage: false
Status:
Accepted Names:
Kind: SecretProviderClass
List Kind: SecretProviderClassList
Plural: secretproviderclasses
Singular: secretproviderclass
Conditions:
Last Transition Time: 2022-10-11T15:27:15Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: 2022-10-11T15:27:15Z
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1
Events:
Which is really verbose and hard to read, so is there some command that returns a json schema with examples?
### Response: I am not aware of anything that returns you an example directly. But you can use kubectl explain to get information about the fields of the resource's schema:
e.g.
$ kubectl explain secretproviderclasses.secrets-store.csi.x-k8s.io
To get the fields listed in a recursive way, use:
kubectl explain secretproviderclasses.secrets-store.csi.x-k8s.io --recursive
Some operators also provide examples of their provided CRDs in the alm-examples annotation of their ClusterServiceVersion. So maybe this could be another way for you.
| A way to get the schema using as an example secretproviderclasses.secrets-store.csi.x-k8s.io i would use the command kubectl describe crd secretproviderclasses.secrets-store.csi.x-k8s.io and get as a result:
Name: secretproviderclasses.secrets-store.csi.x-k8s.io
Namespace:
Labels: <none>
Annotations: controller-gen.kubebuilder.io/version: v0.9.0
helm.sh/resource-policy: keep
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
Metadata:
Creation Timestamp: 2022-10-11T15:27:15Z
Generation: 1
Managed Fields:
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:controller-gen.kubebuilder.io/version:
f:spec:
f:conversion:
.:
f:strategy:
f:group:
f:names:
f:kind:
f:listKind:
f:plural:
f:singular:
f:scope:
f:versions:
Manager: helm
Operation: Update
Time: 2022-10-11T15:27:15Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:acceptedNames:
f:kind:
f:listKind:
f:plural:
f:singular:
f:conditions:
k:{"type":"Established"}:
.:
f:lastTransitionTime:
f:message:
f:reason:
f:status:
f:type:
k:{"type":"NamesAccepted"}:
.:
f:lastTransitionTime:
f:message:
f:reason:
f:status:
f:type:
Manager: kube-apiserver
Operation: Update
Subresource: status
Time: 2022-10-11T15:27:15Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-10-11T15:27:38Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:helm.sh/resource-policy:
Manager: kubectl-patch
Operation: Update
Time: 2022-10-12T16:02:53Z
Resource Version: 123907610
UID: 4a251e0a-97fc-4369-903f-9aa9a13469c1
Spec:
Conversion:
Strategy: None
Group: secrets-store.csi.x-k8s.io
Names:
Kind: SecretProviderClass
List Kind: SecretProviderClassList
Plural: secretproviderclasses
Singular: secretproviderclass
Scope: Namespaced
Versions:
Name: v1
Schema:
openAPIV3Schema:
Description: SecretProviderClass is the Schema for the secretproviderclasses API
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: SecretProviderClassSpec defines the desired state of SecretProviderClass
Properties:
Parameters:
Additional Properties:
Type: string
Description: Configuration for specific provider
Type: object
Provider:
Description: Configuration for provider name
Type: string
Secret Objects:
Items:
Description: SecretObject defines the desired state of synced K8s secret objects
Properties:
Annotations:
Additional Properties:
Type: string
Description: annotations of k8s secret object
Type: object
Data:
Items:
Description: SecretObjectData defines the desired state of synced K8s secret object data
Properties:
Key:
Description: data field to populate
Type: string
Object Name:
Description: name of the object to sync
Type: string
Type: object
Type: array
Labels:
Additional Properties:
Type: string
Description: labels of K8s secret object
Type: object
Secret Name:
Description: name of the K8s secret object
Type: string
Type:
Description: type of K8s secret object
Type: string
Type: object
Type: array
Type: object
Status:
Description: SecretProviderClassStatus defines the observed state of SecretProviderClass
Properties:
By Pod:
Items:
Description: ByPodStatus defines the state of SecretProviderClass as seen by an individual controller
Properties:
Id:
Description: id of the pod that wrote the status
Type: string
Namespace:
Description: namespace of the pod that wrote the status
Type: string
Type: object
Type: array
Type: object
Type: object
Served: true
Storage: true
Deprecated: true
Deprecation Warning: secrets-store.csi.x-k8s.io/v1alpha1 is deprecated. Use secrets-store.csi.x-k8s.io/v1 instead.
Name: v1alpha1
Schema:
openAPIV3Schema:
Description: SecretProviderClass is the Schema for the secretproviderclasses API
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: SecretProviderClassSpec defines the desired state of SecretProviderClass
Properties:
Parameters:
Additional Properties:
Type: string
Description: Configuration for specific provider
Type: object
Provider:
Description: Configuration for provider name
Type: string
Secret Objects:
Items:
Description: SecretObject defines the desired state of synced K8s secret objects
Properties:
Annotations:
Additional Properties:
Type: string
Description: annotations of k8s secret object
Type: object
Data:
Items:
Description: SecretObjectData defines the desired state of synced K8s secret object data
Properties:
Key:
Description: data field to populate
Type: string
Object Name:
Description: name of the object to sync
Type: string
Type: object
Type: array
Labels:
Additional Properties:
Type: string
Description: labels of K8s secret object
Type: object
Secret Name:
Description: name of the K8s secret object
Type: string
Type:
Description: type of K8s secret object
Type: string
Type: object
Type: array
Type: object
Status:
Description: SecretProviderClassStatus defines the observed state of SecretProviderClass
Properties:
By Pod:
Items:
Description: ByPodStatus defines the state of SecretProviderClass as seen by an individual controller
Properties:
Id:
Description: id of the pod that wrote the status
Type: string
Namespace:
Description: namespace of the pod that wrote the status
Type: string
Type: object
Type: array
Type: object
Type: object
Served: true
Storage: false
Status:
Accepted Names:
Kind: SecretProviderClass
List Kind: SecretProviderClassList
Plural: secretproviderclasses
Singular: secretproviderclass
Conditions:
Last Transition Time: 2022-10-11T15:27:15Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: 2022-10-11T15:27:15Z
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1
Events:
Which is really verbose and hard to read, so is there some command that returns a json schema with examples?
|
|
A standard Kubernetes Service provides basic load-balancing. Even for a ClusterIP-type Service, the Service has its own cluster-internal IP address and DNS name, and forwards requests to the collection of Pods specified by its selector:.
In normal use, it is enough to create a multiple-replica Deployment, set a Service to point at its Pods, and send requests only to the Service. All of the replicas will receive requests.
The documentation discusses the implementation of internal load balancing in more detail than an application developer normally needs. Unless your cluster administrator has done extra setup, you'll probably get round-robin request routing – the first Pod will receive the first request, the second Pod the second, and so on.
... the official Cilium documentation states ...
This is almost certainly a statement about external load balancing. As a cluster administrator (not a programmer) a "plain" Kubernetes installation doesn't include an external load-balancer implementation, and a LoadBalancer-type Service behaves identically to a NodePort-type Service.
There are obvious deficiencies to round-robin scheduling, most notably if you do wind up having individual network requests that take a long time and a lot of resource to service. As an application developer the best way to address this is to make these very-long-running requests run asynchronously; return something like an HTTP 201 Created status with a unique per-job URL, and do the actual work in a separate queue-backed worker.
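As a minimal illustration of that "Deployment plus Service" setup (names and image are arbitrary), any pod in the cluster that calls the Service name has its requests spread across the ready replicas:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: web              # other pods simply call http://web
spec:
  selector:
    app: web             # forwards to every ready pod carrying this label
  ports:
  - port: 80
    targetPort: 80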
| ###Instruction: I have read this question which is very similar to what I am asking, but still wanted to write a new question since the accepted answer there seems very incomplete and also potentially wrong.
Basically, it seems like there is some missing or contradictory information regarding built in load-balancing for regular Kubernetes Services (I am not talking about LoadBalancer services). For example, the official Cilium documentation states that "Kubernetes doesn't come with an implementation of Load Balancing". In addition, I couldn't find any information in the official Kubernetes documentation about load balancing for internal services (there was only a section discussing this under ingresses).
So my question is - how does load balancing or distribution of requests work when we make a request from within a Kubernetes cluster to the internal address of a Kubernetes service?
I know there's a Kubernetes proxy on each node that creates the DNS records for such services, but what about services that span multiple pods and nodes? There's got to be some form of request distribution or load-balancing, or else this just wouldn't work at all, no?
### Response: A standard Kubernetes Service provides basic load-balancing. Even for a ClusterIP-type Service, the Service has its own cluster-internal IP address and DNS name, and forwards requests to the collection of Pods specified by its selector:.
In normal use, it is enough to create a multiple-replica Deployment, set a Service to point at its Pods, and send requests only to the Service. All of the replicas will receive requests.
The documentation discusses the implementation of internal load balancing in more detail than an application developer normally needs. Unless your cluster administrator has done extra setup, you'll probably get round-robin request routing – the first Pod will receive the first request, the second Pod the second, and so on.
... the official Cilium documentation states ...
This is almost certainly a statement about external load balancing. As a cluster administrator (not a programmer) a "plain" Kubernetes installation doesn't include an external load-balancer implementation, and a LoadBalancer-type Service behaves identically to a NodePort-type Service.
There are obvious deficiencies to round-robin scheduling, most notably if you do wind up having individual network requests that take a long time and a lot of resource to service. As an application developer the best way to address this is to make these very-long-running requests run asynchronously; return something like an HTTP 201 Created status with a unique per-job URL, and do the actual work in a separate queue-backed worker.
| I have read this question which is very similar to what I am asking, but still wanted to write a new question since the accepted answer there seems very incomplete and also potentially wrong.
Basically, it seems like there is some missing or contradictory information regarding built in load-balancing for regular Kubernetes Services (I am not talking about LoadBalancer services). For example, the official Cilium documentation states that "Kubernetes doesn't come with an implementation of Load Balancing". In addition, I couldn't find any information in the official Kubernetes documentation about load balancing for internal services (there was only a section discussing this under ingresses).
So my question is - how does load balancing or distribution of requests work when we make a request from within a Kubernetes cluster to the internal address of a Kubernetes service?
I know there's a Kubernetes proxy on each node that creates the DNS records for such services, but what about services that span multiple pods and nodes? There's got to be some form of request distribution or load-balancing, or else this just wouldn't work at all, no?
|
|
You cannot have cloud-agnostic storage by using the CSI drivers or the native VolumeClaims in Kubernetes. That's because these APIs are the upstream way of provisioning storage, which each cloud provider has to integrate with to translate them into the cloud-specific API (PD for Google, EBS for AWS...).
Unless you have self-managed storage that you can access via an NFS driver or a dedicated driver for that storage system. And even then, the self-managed storage solution is going to be backed by a cloud-provider-specific volume, so you are just shifting the issue to a different place.
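What you can do is confine the provider-specific part to a single StorageClass, so workload manifests reference only the class name and stay unchanged between clouds; only the StorageClass definition (shown here for GKE as an example) is swapped per provider:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwo               # the only name your workloads need to know
provisioner: pd.csi.storage.gke.io # swap this (and parameters) per cloud
parameters:
  type: pd-balanced
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: standard-rwo   # cloud-agnostic reference
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi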
| ###Instruction: Currently, my Kubernetes cluster is provisioned via GKE.
I use GCE Persistent Disks to persist my data.
In GCE, persistent storage is provided via GCE Persistent Disks. Kubernetes supports adding them to Pods or PersistenVolumes or StorageClasses via the gcePersistentDisk volume/provisioner type.
What if I would like to transfer my cluster from Google to, lets say, Azure or AWS?
Then I would have to change value of volume type to azureFile or awsElasticBlockStore respectively in all occurrences in the manifest files.
I hope CSI driver will solve that problem, unfortunately, they also use a different type of volume for each provider cloud provider, for example pd.csi.storage.gke.io for GCP or disk.csi.azure.com for Azure.
Is there any convenient way to make the Kubernetes volumes to be cloud agnostic? In which I wouldn't have to make any changes in manifest files before K8s cluster migration.
### Response: You cannot have cloud-agnostic storage by using the CSI drivers or the native VolumeClaims in Kubernetes. That's because these APIs are the upstream way of provisioning storage, which each cloud provider has to integrate with to translate them into the cloud-specific API (PD for Google, EBS for AWS...).
Unless you have self-managed storage that you can access via an NFS driver or a dedicated driver for that storage system. And even then, the self-managed storage solution is going to be backed by a cloud-provider-specific volume, so you are just shifting the issue to a different place.
| Currently, my Kubernetes cluster is provisioned via GKE.
I use GCE Persistent Disks to persist my data.
In GCE, persistent storage is provided via GCE Persistent Disks. Kubernetes supports adding them to Pods or PersistenVolumes or StorageClasses via the gcePersistentDisk volume/provisioner type.
What if I would like to transfer my cluster from Google to, lets say, Azure or AWS?
Then I would have to change value of volume type to azureFile or awsElasticBlockStore respectively in all occurrences in the manifest files.
I hope CSI driver will solve that problem, unfortunately, they also use a different type of volume for each provider cloud provider, for example pd.csi.storage.gke.io for GCP or disk.csi.azure.com for Azure.
Is there any convenient way to make the Kubernetes volumes to be cloud agnostic? In which I wouldn't have to make any changes in manifest files before K8s cluster migration.
|
|
Add "ec2:DescribeInstanceTypes" permission to your <AmazonEKSClusterAutoscalerPolicy> IAM policy.
Then restart the Autoscaler deployment.
kubectl -n kube-system rollout restart deployment.apps/cluster-autoscaler
This worked for me.
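For reference, the autoscaler policy then ends up looking roughly like this (the usual set of actions plus the missing one; trim it to your needs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}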
| ###Instruction: Auto scaler failing with the below error.
Procedure followed from here
aws_cloud_provider.go:369] Failed to generate AWS EC2 Instance Types: WebIdentityErr: failed to retrieve credentials
cluster autoscaler failed to generate aws ec2 instance types: unable to load ec2 instance type list
### Response: Add "ec2:DescribeInstanceTypes" permission to your <AmazonEKSClusterAutoscalerPolicy> IAM policy.
Then restart the Autoscaler deployment.
kubectl -n kube-system rollout restart deployment.apps/cluster-autoscaler
This worked for me.
| Auto scaler failing with the below error.
Procedure followed from here
aws_cloud_provider.go:369] Failed to generate AWS EC2 Instance Types: WebIdentityErr: failed to retrieve credentials
cluster autoscaler failed to generate aws ec2 instance types: unable to load ec2 instance type list
|
|
You don't need to open a bash in the container first. You can run the stolonctl command directly:
$ kubectl exec -it pod -c container ${cluster_name} -- stolonctl --cluster-name [cluster_name] --store-backend [store_backend] --store-endpoints kubernetes status
| ###Instruction: I am able to exec into a pod , and in that pod i will also like to run a stolonctl command at the same time into the pod i just exec into. Here is an example of what i will like to achieve. I tried using the first command first and then tried to see if i write the second command if it will work based on the first but it didnt.
Special execution command ${cluster_name} kubectl exec -it pod -c container ${cluster_name} -- /bin/bash
then in the bash i want to also run this
stolonctl --cluster-name [cluster_name] --store-backend [store_backend] --store-endpoints kubernetes status
i want to be able to achieve something like this in robot. be able to do something similar to ls in the pod
controlplane $ kubectl run --image=nginx web --restart=Never
pod/web created
controlplane $ kubectl get po
NAME READY STATUS RESTARTS AGE
web 0/1 ContainerCreating 0 4s
controlplane $ kubectl exec -it web -- /bin/bash
root@web:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
### Response: You don't need to open a bash in the container first. You can run the stolonctl command directly:
$ kubectl exec -it pod -c container ${cluster_name} -- stolonctl --cluster-name [cluster_name] --store-backend [store_backend] --store-endpoints kubernetes status
| I am able to exec into a pod , and in that pod i will also like to run a stolonctl command at the same time into the pod i just exec into. Here is an example of what i will like to achieve. I tried using the first command first and then tried to see if i write the second command if it will work based on the first but it didnt.
Special execution command ${cluster_name} kubectl exec -it pod -c container ${cluster_name} -- /bin/bash
then in the bash i want to also run this
stolonctl --cluster-name [cluster_name] --store-backend [store_backend] --store-endpoints kubernetes status
i want to be able to achieve something like this in robot. be able to do something similar to ls in the pod
controlplane $ kubectl run --image=nginx web --restart=Never
pod/web created
controlplane $ kubectl get po
NAME READY STATUS RESTARTS AGE
web 0/1 ContainerCreating 0 4s
controlplane $ kubectl exec -it web -- /bin/bash
root@web:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
|
|
It is possible to do a raw request using the AbsPath() method.
path := fmt.Sprintf("/apis/operators.coreos.com/v1alpha1/namespaces/%s/clusterserviceversions", namespace)
data, err := clientset.RESTClient().Get().
AbsPath(path).
DoRaw(ctx)
Also notice that if you want to define clientset using the interface (kubernetes.Interface) instead of the concrete type (*kubernetes.Clientset) the method clientset.RESTClient() is not directly accessible, but you can use the following one:
clientset.Discovery().RESTClient()
| ###Instruction: I'm trying to use the K8S Go client to list the ClusterServiceVersions.
It could be enough to have the raw response body.
I've tried this:
data, err := clientset.RESTClient().Get().Namespace(namespace).
Resource("ClusterServiceVersions").
DoRaw(context.TODO())
if err != nil {
panic(err.Error())
}
fmt.Printf("%v", string(data))
But it returns the following error:
panic: the server could not find the requested resource (get ClusterServiceVersions.meta.k8s.io)
How do I specify to use the operators.coreos.com group?
Looking at some existing code I've also tried to add
VersionedParams(&v1.ListOptions{}, scheme.ParameterCodec)
But it result in this other error:
panic: v1.ListOptions is not suitable for converting to "meta.k8s.io/v1" in scheme "pkg/runtime/scheme.go:100"
### Response: It is possible to do a raw request using the AbsPath() method.
path := fmt.Sprintf("/apis/operators.coreos.com/v1alpha1/namespaces/%s/clusterserviceversions", namespace)
data, err := clientset.RESTClient().Get().
AbsPath(path).
DoRaw(ctx)
Also notice that if you want to define clientset using the interface (kubernetes.Interface) instead of the concrete type (*kubernetes.Clientset) the method clientset.RESTClient() is not directly accessible, but you can use the following one:
clientset.Discovery().RESTClient()
| I'm trying to use the K8S Go client to list the ClusterServiceVersions.
It could be enough to have the raw response body.
I've tried this:
data, err := clientset.RESTClient().Get().Namespace(namespace).
Resource("ClusterServiceVersions").
DoRaw(context.TODO())
if err != nil {
panic(err.Error())
}
fmt.Printf("%v", string(data))
But it returns the following error:
panic: the server could not find the requested resource (get ClusterServiceVersions.meta.k8s.io)
How do I specify to use the operators.coreos.com group?
Looking at some existing code I've also tried to add
VersionedParams(&v1.ListOptions{}, scheme.ParameterCodec)
But it result in this other error:
panic: v1.ListOptions is not suitable for converting to "meta.k8s.io/v1" in scheme "pkg/runtime/scheme.go:100"
|
|
You can do it with three dashes in your YAML file
like this
apiVersion: v1
kind: Service
metadata:
name: mock
spec:
...
---
apiVersion: v1
kind: ReplicationController
metadata:
name: mock
spec:
Source : https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a
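For the specific combination in the question, one file might look roughly like this (names, host and image are placeholders), applied with kubectl apply -f name_file.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80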
| ###Instruction: I want to make a YAML file with Deployment, Ingress, and Service (maybe with clusterissuer, issuer and cert) on one file, how can I do that? I tried
kubectl apply -f (name_file.yaml)
### Response: You can do it with three dashes in your YAML file
like this
apiVersion: v1
kind: Service
metadata:
name: mock
spec:
...
---
apiVersion: v1
kind: ReplicationController
metadata:
name: mock
spec:
Source : https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a
| I want to make a YAML file with Deployment, Ingress, and Service (maybe with clusterissuer, issuer and cert) on one file, how can I do that? I tried
kubectl apply -f (name_file.yaml)
|
|
There is a solution.
First I want to thank @yair-elmaliah for providing the solution in a different channel.
https://stackoverflow.com/users/16521581/yair-elmaliah
So, my adjusted setup is as follows.
The nodes in the Kubernetes cluster have the following labels:
Node Name Node Labels
worker-zone-a-1 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-2 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-3 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-4 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-b-1 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
worker-zone-b-2 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
worker-zone-b-3 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
And this is the adjusted deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
nodeSelector:
node-role.kubernetes.io/data: data
topologySpreadConstraints:
- topologyKey: topology.kubernetes.io/zone
maxSkew: 1
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: nginx
- topologyKey: topology.kubernetes.io/zone-a
maxSkew: 1
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app: nginx
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
The first constraint (topologyKey: topology.kubernetes.io/zone) will distribute the 5 pods between zone a and zone b using a 3/2 or 2/3 ratio. This is good, but we cannot control where the 3 pods will be allocated.
The second constraint (topologyKey: topology.kubernetes.io/zone-a) will try to schedule one of the pods on a node that has the zone-a label.
Since all constraints are evaluated and combined together, this setup tells the scheduler to spread 4 pods across the zones by zone name, and to steer the fifth pod toward zone a via the extra zone-a label.
| ###Instruction: I have the following setup in a kubernetes cluster:
Node Name Node Label
worker-zone-a-1 topology.kubernetes.io/zone=a
worker-zone-a-2 topology.kubernetes.io/zone=a
worker-zone-a-3 topology.kubernetes.io/zone=a
worker-zone-a-4 topology.kubernetes.io/zone=a
worker-zone-b-1 topology.kubernetes.io/zone=b
worker-zone-b-2 topology.kubernetes.io/zone=b
worker-zone-b-3 topology.kubernetes.io/zone=b
And I have the following deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: nginx
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
The purpose of using topologySpreadConstraints is to distribute the Deployment in a way that zone-a will run 3 Pods and zone-b will run 2 Pods.
When applying the Deployment several times, sometimes the pods are distributed 3 for zone-a and 2 for zone-b and sometimes the pods are distributed 2 for zone-a and 3 for zone-b.
Is there a way to set the Pods distribution of the Deployment so it will always run 3 Pods in zone-a and 2 pods in zone-b?
Thanks :-)
### Response: There is a solution.
First I want to thank @yair-elmaliah for providing the solution in a different channel.
https://stackoverflow.com/users/16521581/yair-elmaliah
So, my adjusted setup is as follows.
The nodes in the Kubernetes cluster have the following labels:
Node Name Node Labels
worker-zone-a-1 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-2 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-3 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-4 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-b-1 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
worker-zone-b-2 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
worker-zone-b-3 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
And this is the adjusted deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
nodeSelector:
node-role.kubernetes.io/data: data
topologySpreadConstraints:
- topologyKey: topology.kubernetes.io/zone
maxSkew: 1
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: nginx
- topologyKey: topology.kubernetes.io/zone-a
maxSkew: 1
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app: nginx
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
The first constraint (topologyKey: topology.kubernetes.io/zone) will distribute the 5 pods between zone a and zone b using a 3/2 or 2/3 ratio. This is good, but we cannot control where the 3 pods will be allocated.
The second constraint (topologyKey: topology.kubernetes.io/zone-a) will try to schedule one of the pods on a node that has the zone-a label.
Since all constraints are evaluated and combined together, this setup tells the scheduler to spread 4 pods across the zones by zone name, and to steer the fifth pod toward zone a via the extra zone-a label.
| I have the following setup in a kubernetes cluster:
Node Name Node Label
worker-zone-a-1 topology.kubernetes.io/zone=a
worker-zone-a-2 topology.kubernetes.io/zone=a
worker-zone-a-3 topology.kubernetes.io/zone=a
worker-zone-a-4 topology.kubernetes.io/zone=a
worker-zone-b-1 topology.kubernetes.io/zone=b
worker-zone-b-2 topology.kubernetes.io/zone=b
worker-zone-b-3 topology.kubernetes.io/zone=b
And I have the following deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: nginx
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
The purpose of using topologySpreadConstraints is to distribute the Deployment in a way that zone-a will run 3 Pods and zone-b will run 2 Pods.
When applying the Deployment several times, sometimes the pods are distributed 3 for zone-a and 2 for zone-b and sometimes the pods are distributed 2 for zone-a and 3 for zone-b.
Is there a way to set the Pods distribution of the Deployment so it will always run 3 Pods in zone-a and 2 pods in zone-b?
Thanks :-)
|
|
Add an annotation to that particular object and it should work:
apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "myapp.labels" . | nindent 4 }}
annotations:
helm.sh/hook: pre-install,post-delete
argocd.argoproj.io/sync-options: Prune=false
Some sync options can be defined as annotations on a specific resource. Most of the sync options are configured in the Application resource's spec.syncPolicy.syncOptions attribute. Multiple sync options configured with the argocd.argoproj.io/sync-options annotation can be concatenated with a , in the annotation value; white spaces will be trimmed.
no-prune-resources
Or if you don’t want to apply auto sync on this object then you can try ApplyOutOfSyncOnly=false
selective-sync
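For reference, a minimal sketch of the Application-level variant (the application name, repo URL and paths are placeholders; note that the selective-sync docs enable the option with =true so that only out-of-sync resources get applied):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/charts.git
    path: charts/myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    syncOptions:
    - ApplyOutOfSyncOnly=true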
| ###Instruction: I have an ArgoCD App which is generating secrets
apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "myapp.labels" . | nindent 4 }}
annotations:
helm.sh/hook: pre-install,post-delete
name: {{ include "myapp.fullname" . }}
type: Opaque
data:
{{- if .Values.password }}
password: {{ .Values.password | b64enc | quote }}
{{- else }}
password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
A second service adds additional values to the secrets later; I don't want these secrets managed in my chart.
Now, whenever something is changed on the app, the secrets are recreated.
How can I change this behaviour in ArgoCD?
### Response: Add an annotation to that particular object and it should work:
apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "myapp.labels" . | nindent 4 }}
annotations:
helm.sh/hook: pre-install,post-delete
argocd.argoproj.io/sync-options: Prune=false
Some sync options can be defined as annotations on a specific resource. Most of the sync options are configured in the Application resource's spec.syncPolicy.syncOptions attribute. Multiple sync options configured with the argocd.argoproj.io/sync-options annotation can be concatenated with a , in the annotation value; white spaces will be trimmed.
no-prune-resources
Or if you don’t want to apply auto sync on this object then you can try ApplyOutOfSyncOnly=false
selective-sync
| I have an ArgoCD App which is generating secrets
apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "myapp.labels" . | nindent 4 }}
annotations:
helm.sh/hook: pre-install,post-delete
name: {{ include "myapp.fullname" . }}
type: Opaque
data:
{{- if .Values.password }}
password: {{ .Values.password | b64enc | quote }}
{{- else }}
password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
A second service adds additional values to the secrets later; I don't want these secrets managed in my chart.
Now, whenever something is changed on the app, the secrets are recreated.
How can I change this behaviour in ArgoCD?
|
|
It may be caused by the scope.
$ represents the root of values.
Inside a range loop (or a with block, etc.), . is rebound to the current element, so use $ whenever you need the root context.
{{- define "app.userSessionSelector" -}}
workload.user.cattle.io/workloadselector: deployment-{{ .Release.Namespace }}-{{ include "app.fullname" $ }}-usersession
{{- end }}
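For example, if the helper is included from inside a with or range block, pass $ so the root stays available (the .Values.usersession block here is a made-up value, just for illustration):
{{- with .Values.usersession }}
metadata:
  labels:
    {{- include "app.userSessionSelector" $ | nindent 4 }}
{{- end }}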
| ###Instruction: I'm having a problem with a simple template-in-template case and I cannot get a working solution. Here's the roundup:
This is a standard template created by Helm itself:
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
Later I'd like to re-use app.fullname in this template:
{{- define "app.userSessionSelector" -}}
workload.user.cattle.io/workloadselector: deployment-{{ .Release.Namespace }}-{{ include "app.fullname" . }}-usersession
{{- end }}
When I test this configuration I get:
Error: Failed to render chart: exit status 1: install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /home/user/helm/app
Error: template: app/templates/_helpers.tpl:70:81: executing "app.userSessionSelector" at <include "app.fullname" .>: error calling include: template: app/templates/_helpers.tpl:18:16: executing "app.fullname" at <$name>: invalid value; expected string
helm.go:84: [debug] template: app/templates/_helpers.tpl:70:81: executing "app.userSessionSelector" at <include "app.fullname" .>: error calling include: template: app/templates/_helpers.tpl:18:16: executing "app.fullname" at <$name>: invalid value; expected string
What puzzles me is why there's a problem with $name while app.fullname template evaluates and is used in various places in the project. I bet this is explained in the documentation somewhere [or it's a bug] but I was unable to find the cause of this.
BTW: I'm using Helm v3.9.2.
### Response: It may be caused by the scope.
$ represents the root of values.
Inside a range loop (or a with block, etc.), . is rebound to the current element, so use $ whenever you need the root context.
{{- define "app.userSessionSelector" -}}
workload.user.cattle.io/workloadselector: deployment-{{ .Release.Namespace }}-{{ include "app.fullname" $ }}-usersession
{{- end }}
| I'm having a problem with a simple template-in-template case and I cannot get a working solution. Here's the roundup:
This is a standard template created by Helm itself:
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
Later I'd like to re-use app.fullname in this template:
{{- define "app.userSessionSelector" -}}
workload.user.cattle.io/workloadselector: deployment-{{ .Release.Namespace }}-{{ include "app.fullname" . }}-usersession
{{- end }}
When I test this configuration I get:
Error: Failed to render chart: exit status 1: install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /home/user/helm/app
Error: template: app/templates/_helpers.tpl:70:81: executing "app.userSessionSelector" at <include "app.fullname" .>: error calling include: template: app/templates/_helpers.tpl:18:16: executing "app.fullname" at <$name>: invalid value; expected string
helm.go:84: [debug] template: app/templates/_helpers.tpl:70:81: executing "app.userSessionSelector" at <include "app.fullname" .>: error calling include: template: app/templates/_helpers.tpl:18:16: executing "app.fullname" at <$name>: invalid value; expected string
What puzzles me is why there's a problem with $name while app.fullname template evaluates and is used in various places in the project. I bet this is explained in the documentation somewhere [or it's a bug] but I was unable to find the cause of this.
BTW: I'm using Helm v3.9.2.
|
|
I think I found it; the blog and the update post by AWS are very cryptic.
According to the docs and the console:
All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS KMS.
Using KMS with EKS is additional encryption or a better way of envelope encryption. It allows deploying a defense-in-depth strategy for Kubernetes applications by encrypting Kubernetes secrets with a KMS key that you define and manage.
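If you do want that extra envelope-encryption layer on an existing cluster, one way to enable it is with eksctl (the cluster name, key ARN and region below are placeholders):
eksctl utils enable-secrets-encryption \
  --cluster my-cluster \
  --key-arn arn:aws:kms:us-east-1:111122223333:key/<key-id> \
  --region us-east-1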
| ###Instruction: I was going by this update for EKS https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/ and this blog from AWS https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/.
This is a very cryptic line which never confirms whether EKS encrypts secrets or not by default
In EKS, we operate the etcd volumes encrypted at disk-level using AWS-managed encryption keys.
I did understand that:-
KMS with EKS will provide envelope encryption,like encrypting the DEK using CMK.
But it never mentioned that if I don't use this feature ( of course KMS will cost ), does EKS encrypts data by default?
Because Kubernetes by default does not encrypt data . Source
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
### Response: I think I found it; the blog and the update post by AWS are very cryptic.
According to the docs and the console:
All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS KMS.
Using KMS with EKS is additional encryption or a better way of envelope encryption. It allows deploying a defense-in-depth strategy for Kubernetes applications by encrypting Kubernetes secrets with a KMS key that you define and manage.
| I was going by this update for EKS https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/ and this blog from AWS https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/.
This is a very cryptic line which never confirms whether EKS encrypts secrets or not by default
In EKS, we operate the etcd volumes encrypted at disk-level using AWS-managed encryption keys.
I did understand that:-
KMS with EKS will provide envelope encryption,like encrypting the DEK using CMK.
But it never mentioned that if I don't use this feature ( of course KMS will cost ), does EKS encrypts data by default?
Because Kubernetes by default does not encrypt data . Source
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
|
|
Thanks to the docs link I've stumbled upon today (I don't know how I've missed it when asking the question because I've spent quite some time browsing through the docs...) I found the information I was looking for. I feel like providing this answer because I find v1d3rm3's answer incomplete and not fully accurate.
The kubernetes docs confirm v1d3rm3's claim (which is btw the key to answering my question):
The created token is a signed JSON Web Token (JWT).
Since the token is a JWT, the server can verify whether it has signed it, hence no need to store it. The JWT's expiry time is set not because the token is not associated with an object (it actually is, as we'll see below) but because the server has no way of invalidating a token (it would need to keep track of invalidated tokens, since tokens aren't stored anywhere and any token with a good signature is valid). To reduce the damage if a token gets stolen, there is an expiry time.
A signed JWT contains all the necessary information inside of it.
The decoded token (created with kubectl create token test-sa, where test-sa is the service account name) looks like this:
{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1666712616,
"iat": 1666709016,
"iss": "https://kubernetes.default.svc.cluster.local",
"kubernetes.io": {
"namespace": "default",
"serviceaccount": {
"name": "test-sa",
"uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
}
},
"nbf": 1666709016,
"sub": "system:serviceaccount:default:test-sa"
}
Contrary to v1d3rm3's answer, this token IS associated with a service account automatically, as the Kubernetes docs link confirms and as we can also see from the token content above.
Suppose I have a secret I want to bind my token to (for example kubectl create token test-sa --bound-object-kind Secret --bound-object-name my-secret, where test-sa is the service account name and my-secret is the secret I'm binding the token to); the decoded token will look like this:
{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1666712848,
"iat": 1666709248,
"iss": "https://kubernetes.default.svc.cluster.local",
"kubernetes.io": {
"namespace": "default",
"secret": {
"name": "my-secret",
"uid": "2a44872f-1c1c-4f18-8214-884db5f351f2"
},
"serviceaccount": {
"name": "test-sa",
"uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
}
},
"nbf": 1666709248,
"sub": "system:serviceaccount:default:test-sa"
}
Notice that binding happens inside the token, under kubernetes.io key and if you describe my-secret you will still not see the token. So the --bound-... flags weren't visibly (from secret object) doing anything because binding happens inside the token itself...
Instead of decoding JWT tokens, we can also see details in TokenRequest object with
kubectl create token test-sa -o yaml
| ###Instruction: This question concerns kubernetes v1.24 and up
So I can create tokens for service accounts with
kubectl create token myserviceaccount
The created token works and serves the purpose, but what I find confusing is that when I kubectl get sa SECRETS field of myserviceaccount is still 0. The token doesn't appear in kubectl get secrets either.
I've also seen that I can pass --bound-object-kind and --bound-object-name to kubectl create token but this doesn't seem to do anything (visible) either...
Is there a way to see created token? And what is the purpose of --bound.. flags?
### Response: Thanks to the docs link I've stumbled upon today (I don't know how I've missed it when asking the question because I've spent quite some time browsing through the docs...) I found the information I was looking for. I feel like providing this answer because I find v1d3rm3's answer incomplete and not fully accurate.
The kubernetes docs confirm v1d3rm3's claim (which is btw the key to answering my question):
The created token is a signed JSON Web Token (JWT).
Since the token is a JWT, the server can verify whether it has signed it, hence no need to store it. The JWT's expiry time is set not because the token is not associated with an object (it actually is, as we'll see below) but because the server has no way of invalidating a token (it would need to keep track of invalidated tokens, since tokens aren't stored anywhere and any token with a good signature is valid). To reduce the damage if a token gets stolen, there is an expiry time.
A signed JWT contains all the necessary information inside of it.
The decoded token (created with kubectl create token test-sa, where test-sa is the service account name) looks like this:
{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1666712616,
"iat": 1666709016,
"iss": "https://kubernetes.default.svc.cluster.local",
"kubernetes.io": {
"namespace": "default",
"serviceaccount": {
"name": "test-sa",
"uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
}
},
"nbf": 1666709016,
"sub": "system:serviceaccount:default:test-sa"
}
Contrary to v1d3rm3's answer, this token IS associated with a service account automatically, as the Kubernetes docs link confirms and as we can also see from the token content above.
Suppose I have a secret I want to bind my token to (for example kubectl create token test-sa --bound-object-kind Secret --bound-object-name my-secret, where test-sa is the service account name and my-secret is the secret I'm binding the token to); the decoded token will look like this:
{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1666712848,
"iat": 1666709248,
"iss": "https://kubernetes.default.svc.cluster.local",
"kubernetes.io": {
"namespace": "default",
"secret": {
"name": "my-secret",
"uid": "2a44872f-1c1c-4f18-8214-884db5f351f2"
},
"serviceaccount": {
"name": "test-sa",
"uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
}
},
"nbf": 1666709248,
"sub": "system:serviceaccount:default:test-sa"
}
Notice that binding happens inside the token, under kubernetes.io key and if you describe my-secret you will still not see the token. So the --bound-... flags weren't visibly (from secret object) doing anything because binding happens inside the token itself...
Instead of decoding JWT tokens, we can also see details in TokenRequest object with
kubectl create token test-sa -o yaml
| This question concerns kubernetes v1.24 and up
So I can create tokens for service accounts with
kubectl create token myserviceaccount
The created token works and serves the purpose, but what I find confusing is that when I kubectl get sa SECRETS field of myserviceaccount is still 0. The token doesn't appear in kubectl get secrets either.
I've also seen that I can pass --bound-object-kind and --bound-object-name to kubectl create token but this doesn't seem to do anything (visible) either...
Is there a way to see created token? And what is the purpose of --bound.. flags?
|
|
Unfortunately, once a PVC is created and a PV is provisioned for it, the only thing you can change without creating a new one is the volume's size
The only straightforward way I could think of without leveraging CSI snapshots/clones, which you might not have access to (depends on how you created PVCs/PVs AFAIK), would be to create a new PVC and mount both volumes on a Deployment whose Pod has root access and the rsync command.
Running rsync -a /old/volume/mount/path /new/volume/mount/path on such a Pod should get you what you want.
However, you should make sure that you do so BEFORE deleting PVCs or any other resource using your PVs. By default, most of the default storage classes create volumes with reclaim policies that immediately delete the PV as soon as all resources using it are gone, so there's a small risk of data loss
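A rough sketch of such a one-off copy Pod (the claim names, image and mount paths are placeholders, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: pv-migration
spec:
  restartPolicy: Never
  containers:
  - name: migrate
    image: alpine:3.18
    # install rsync, copy everything from the old volume to the new one, then idle so the result can be inspected
    command: ["sh", "-c", "apk add --no-cache rsync && rsync -a /old/ /new/ && sleep 3600"]
    volumeMounts:
    - name: old-volume
      mountPath: /old
    - name: new-volume
      mountPath: /new
  volumes:
  - name: old-volume
    persistentVolumeClaim:
      claimName: es-data-premium      # existing Premium SSD PVC (placeholder name)
  - name: new-volume
    persistentVolumeClaim:
      claimName: es-data-standard     # new Standard SSD PVC (placeholder name)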
| ###Instruction: I have Elasticsearch Data pods that are currently running on an AKS and are connected to Persistent Volumes that is using a Premium SSD Managed Disk Storage Class and I want to downgrade it to Standard SSD Managed Disk without losing the data I have on the currently used Persistent Volume.
I've created a new Storage Class that is defined with Standard SSD Managed Disk, but if I create a new PV from that it obviously doesn't keep the old data and I need to copy it somehow, so I was wondering what would be best practice switching PV's Storage Class.
### Response: Unfortunately, once a PVC is created and a PV is provisioned for it, the only thing you can change without creating a new one is the volume's size
The only straightforward way I could think of without leveraging CSI snapshots/clones, which you might not have access to (depends on how you created PVCs/PVs AFAIK), would be to create a new PVC and mount both volumes on a Deployment whose Pod has root access and the rsync command.
Running rsync -a /old/volume/mount/path /new/volume/mount/path on such a Pod should get you what you want.
However, you should make sure that you do so BEFORE deleting PVCs or any other resource using your PVs. By default, most of the default storage classes create volumes with reclaim policies that immediately delete the PV as soon as all resources using it are gone, so there's a small risk of data loss
| I have Elasticsearch Data pods that are currently running on an AKS and are connected to Persistent Volumes that is using a Premium SSD Managed Disk Storage Class and I want to downgrade it to Standard SSD Managed Disk without losing the data I have on the currently used Persistent Volume.
I've created a new Storage Class that is defined with Standard SSD Managed Disk, but if I create a new PV from that it obviously doesn't keep the old data and I need to copy it somehow, so I was wondering what would be best practice switching PV's Storage Class.
|
|
Yes, you can do it using the CoreDNS rewrite plugin. This is the official documentation and I'll give you an example of how to implement it.
Edit the CoreDNS ConfigMap
kubectl edit cm -n kube-system coredns
Add this line inside the config:
rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
Your cm is going to look like:
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
Save the edit and delete your CoreDNS pods
kubectl delete pod -n kube-system --selector k8s-app=kube-dns
And test it inside a dummy pod or query directly to the CoreDNS
# dig app1.yourdomain.com
; <<>> DiG 9.16.33-Debian <<>> app1.yourdomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51020
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: c06f814fbf04a827 (echoed)
;; QUESTION SECTION:
;app1.yourdomain.com. IN A
;; ANSWER SECTION:
app1.default.svc.cluster.local. 30 IN A 10.110.113.195
;; Query time: 5 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Oct 26 04:49:47 UTC 2022
;; MSG SIZE rcvd: 107
| ###Instruction: The following chain describes how Pods that define an API are reached from the outside.
Client -> Route53 (.example.com)
-> LoadBalancer -> Nginx
-> Service -> Pod
Some pods, in addition to defining an API, communicate and use the API of others in the same cluster k8s. To allow communication between pods I can do it using the internal dns: eg. api1.ns.svc.cluster.local or using the Route53 api1.example.com domain.
The first case is more efficient but on the other hand I need to keep a list of the necessary services and namespaces for each pod.
The second case is easier to manage. I know that each API responds to * .example.com so I only need to know the subdomain to call. This approach is extremely inefficient:
Pod1 -> Route53 (api2.example.com)
-> LoadBalancer -> Nginx
-> Service -> Pod2
In this scenario I would like to know if there are known solutions for which a pod to communicate with another can use the same domain managed by Route53 but without leaving the cluster and maintaining internal traffic.
I know I can use a core dns rewrite but in that case I should still keep an updated list, also Route53 holds subdomains pointing to services outside the cluster, e.g. db.example.com
So the idea is an autodiscovery of the ingress and keep internal traffic if possible:
Pod1 -> k8sdns with api2.example.com ingress
-> Nginx -> Service
-> Pod2
Or
Pod1 -> k8sdns without db.example.com ingress
-> Route53 -> LoadBalancer
-> DB
Thanks
### Response: Yes, you can do it using the CoreDNS rewrite plugin. This is the official documentation and I'll give you an example of how to implement it.
Edit the CoreDNS ConfigMap
kubectl edit cm -n kube-system coredns
Add this line inside the config:
rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
Your cm is going to look like:
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
Save the edit and delete your CoreDNS pods
kubectl delete pod -n kube-system --selector k8s-app=kube-dns
And test it inside a dummy pod or query directly to the CoreDNS
# dig app1.yourdomain.com
; <<>> DiG 9.16.33-Debian <<>> app1.yourdomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51020
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: c06f814fbf04a827 (echoed)
;; QUESTION SECTION:
;app1.yourdomain.com. IN A
;; ANSWER SECTION:
app1.default.svc.cluster.local. 30 IN A 10.110.113.195
;; Query time: 5 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Oct 26 04:49:47 UTC 2022
;; MSG SIZE rcvd: 107
| The following chain describes how Pods that define an API are reached from the outside.
Client -> Route53 (.example.com)
-> LoadBalancer -> Nginx
-> Service -> Pod
Some pods, in addition to defining an API, communicate and use the API of others in the same cluster k8s. To allow communication between pods I can do it using the internal dns: eg. api1.ns.svc.cluster.local or using the Route53 api1.example.com domain.
The first case is more efficient but on the other hand I need to keep a list of the necessary services and namespaces for each pod.
The second case is easier to manage. I know that each API responds to * .example.com so I only need to know the subdomain to call. This approach is extremely inefficient:
Pod1 -> Route53 (api2.example.com)
-> LoadBalancer -> Nginx
-> Service -> Pod2
In this scenario I would like to know if there are known solutions for which a pod to communicate with another can use the same domain managed by Route53 but without leaving the cluster and maintaining internal traffic.
I know I can use a core dns rewrite but in that case I should still keep an updated list, also Route53 holds subdomains pointing to services outside the cluster, e.g. db.example.com
So the idea is an autodiscovery of the ingress and keep internal traffic if possible:
Pod1 -> k8sdns with api2.example.com ingress
-> Nginx -> Service
-> Pod2
Or
Pod1 -> k8sdns without db.example.com ingress
-> Route53 -> LoadBalancer
-> DB
Thanks
|
|
As a workaround (mentioned in the GitHub issue you linked) you can add "value: null" to the configuration:
env:
- name: DATABASE_URL
value: null
valueFrom:
secretKeyRef:
name: server-db-url
key: dburl
| ###Instruction: I am getting the error
dry-run failed, reason: Invalid, error: Deployment.apps "server" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value:
"": may not be specified when `value` is not empty
when trying to override a value in flux Kustomization using patchesStrategicMerge like:
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: server-db-url
key: dburl
The secret server-db-url exists and contains the key dburl
I am not sure if this has something to do with https://github.com/kubernetes/kubernetes/issues/46861
### Response: As a workaround (mentioned in the GitHub issue you linked) you can add "value: null" to the configuration:
env:
- name: DATABASE_URL
value: null
valueFrom:
secretKeyRef:
name: server-db-url
key: dburl
| I am getting the error
dry-run failed, reason: Invalid, error: Deployment.apps "server" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value:
"": may not be specified when `value` is not empty
when trying to override a value in flux Kustomization using patchesStrategicMerge like:
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: server-db-url
key: dburl
The secret server-db-url exists and contains the key dburl
I am not sure if this has something to do with https://github.com/kubernetes/kubernetes/issues/46861
|
|
As @ Achraf Bentabib said
Kubernetes creates a directory structure to help you find logs based on Pods, so you can find the container logs for each Pod running on a node at
/var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/
Identify the node on which the Pod is running:
kubectl get pod pod-name -owide
SSH on that node, you can check which logging driver is being used by the node with:
If you are using docker then:
docker info | grep -i logging
Or, if you have a node-shell style plugin such as kubectl ssh installed, you can get onto the node with:
kubectl ssh node NODE_NAME
If the logging driver writes to a file, you can check the current output for a specific Pod once you know the container id of that Pod; to find it, run this on a control-plane node:
kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}'
Example:
/var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log -> /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log
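For example, once you are on the node, something like this follows the live output (the exact path depends on your namespace, pod name, pod UID and container name):
sudo tail -f /var/log/pods/staging_<pod-name>_<pod-uid>/<container-name>/0.log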
| ###Instruction: I have my app deployed to Kubernetes and it's producing some logs. I can see the logs by running kubectl logs -f <pod-id> -n staging, but I can't find where the logs are physically located on the pod. The /var/log/ folder is empty, and I can't find the logs anywhere else on the pod either.
Why is this happening, and where should the logs be?
### Response: As @ Achraf Bentabib said
Kubernetes creates a directory structure to help you find logs based on Pods, so you can find the container logs for each Pod running on a node at
/var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/
Identify the node on which the Pod is running:
kubectl get pod pod-name -owide
SSH on that node, you can check which logging driver is being used by the node with:
If you are using docker then:
docker info | grep -i logging
Or, if you have a node-shell style plugin such as kubectl ssh installed, you can get onto the node with:
kubectl ssh node NODE_NAME
If the logging driver writes to a file, you can check the current output for a specific Pod once you know the container id of that Pod; to find it, run this on a control-plane node:
kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}'
Example:
/var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log -> /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log
| I have my app deployed to Kubernetes and it's producing some logs. I can see the logs by running kubectl logs -f <pod-id> -n staging, but I can't find where the logs are physically located on the pod. The /var/log/ folder is empty, and I can't find the logs anywhere else on the pod either.
Why is this happening, and where should the logs be?
|
|
The solution was to set the variable DOCKER_HOST before launching the skaffold dev command:
DOCKER_HOST="unix:///Users/<you>/.docker/run/docker.sock" skaffold dev
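If you'd rather not prefix every invocation, exporting the variable in your shell (or shell profile) works as well. The socket path is the one Docker Desktop uses on recent macOS versions, so adjust it if yours differs:
export DOCKER_HOST="unix:///Users/<you>/.docker/run/docker.sock"
skaffold dev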
| ###Instruction: I get the following message when I run the skaffold dev command:
Build Failed. Cannot connect to the Docker daemon at unix:
///var/run/docker.sock. Check if docker is running.
Tools versions:
MacOS Desktop Docker: 4.13.0 (89412)
Kubernetes: v1.25.2
Skaffold: v2.0.0
Docker runs correctly in fact I can create resources on the cluster and create containers with the docker-cli commands. I successfully launch both docker info and docker version.
The command /Applications/Docker.app/Contents/MacOS/com.docker.diagnose check
reports
"No fatal errors detected."
(all tests pass).
I also tried setting the DOCKER_HOST variable:
DOCKER_HOST = /Users/<my folder>/.docker/run/docker.sock skaffold dev
Result:
invalid skaffold config: error getting docker client: unable to parse docker host `/Users/<my folder>/.docker/run/docker.sock`
My Skaffold.yaml file
apiVersion: skaffold/v3
kind: Config
metadata:
name: test
build:
local:
push: false
artifacts:
- image: <myimage>
context: <folder>
docker:
dockerfile: Dockerfile
manifests:
rawYaml:
- infra/k8s/deployment.yaml
How can I solve?
### Response: The solution was to set the variable DOCKER_HOST before launching the skaffold dev command:
DOCKER_HOST="unix:///Users/<you>/.docker/run/docker.sock" skaffold dev
| I get the following message when I run the skaffold dev command:
Build Failed. Cannot connect to the Docker daemon at unix:
///var/run/docker.sock. Check if docker is running.
Tools versions:
MacOS Desktop Docker: 4.13.0 (89412)
Kubernetes: v1.25.2
Skaffold: v2.0.0
Docker runs correctly in fact I can create resources on the cluster and create containers with the docker-cli commands. I successfully launch both docker info and docker version.
The command /Applications/Docker.app/Contents/MacOS/com.docker.diagnose check
reports
"No fatal errors detected."
(all tests pass).
I also tried setting the DOCKER_HOST variable:
DOCKER_HOST = /Users/<my folder>/.docker/run/docker.sock skaffold dev
Result:
invalid skaffold config: error getting docker client: unable to parse docker host `/Users/<my folder>/.docker/run/docker.sock`
My Skaffold.yaml file
apiVersion: skaffold/v3
kind: Config
metadata:
name: test
build:
local:
push: false
artifacts:
- image: <myimage>
context: <folder>
docker:
dockerfile: Dockerfile
manifests:
rawYaml:
- infra/k8s/deployment.yaml
How can I solve?
|
|
Pods aren't OOM killed at all. OOMKilled is a status ultimately caused by a kernel process (OOM Killer) that kills processes (containers are processes), which is then recognised by the kubelet which sets the status on the container. If the main container in a pod is killed then by default the pod will be restarted by the kubelet. A pod cannot be terminated, because a pod is a data structure rather than a process. Similarly, it cannot have a memory (or CPU) limit itself, rather it is limited by the sum of its component parts.
The article you reference uses imprecise language and I think this is causing some confusion. There is a better, shorter, article on medium that covers this more accurately, and a longer and much more in depth article here.
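If you want to confirm which container was OOM killed for a given pod, the terminated state in the container status carries the reason (the jsonpath below assumes a single-container pod):
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
kubectl describe pod <pod-name> | grep -A 5 'Last State'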
| ###Instruction: If I understand correctly the conditions for Kubernetes to OOM kill a pod or container (from komodor.com):
If a container uses more memory than its memory limit, it is terminated with an OOMKilled status. Similarly, if overall memory usage on all containers, or all pods on the node, exceeds the defined limit, one or more pods may be terminated.
This means that if a container in the pod exceeds the total memory it will be killed (the container) but not the pod itself. Similarly, if there are multiple containers in a pod and the pod itself exceeds its memory limitation, which is the sum of memory limits of all the containers in that pod - the pod will be OOM killed. However, the latter only seems possible if one of the containers exceeds its memory allowance. In this case - wouldn't the container be killed first?
I'm trying to understand the actual conditions in which a pod is OOM killed instead of a container.
I've also noticed that when there is one container in the pod and that container is exceeding its memory allowance repeatedly - the pod and container are killed intermittently. I observed this - the container would restart, which would be observable by watching the logs from the pod, and every second time - the pod is killed and restarted, incrementing its restart count.
If it helps to understand the behavior - the QOS class of the pod is Burstable.
### Response: Pods aren't OOM killed at all. OOMKilled is a status ultimately caused by a kernel process (OOM Killer) that kills processes (containers are processes), which is then recognised by the kubelet which sets the status on the container. If the main container in a pod is killed then by default the pod will be restarted by the kubelet. A pod cannot be terminated, because a pod is a data structure rather than a process. Similarly, it cannot have a memory (or CPU) limit itself, rather it is limited by the sum of its component parts.
The article you reference uses imprecise language and I think this is causing some confusion. There is a better, shorter, article on medium that covers this more accurately, and a longer and much more in depth article here.
| If I understand correctly the conditions for Kubernetes to OOM kill a pod or container (from komodor.com):
If a container uses more memory than its memory limit, it is terminated with an OOMKilled status. Similarly, if overall memory usage on all containers, or all pods on the node, exceeds the defined limit, one or more pods may be terminated.
This means that if a container in the pod exceeds the total memory it will be killed (the container) but not the pod itself. Similarly, if there are multiple containers in a pod and the pod itself exceeds its memory limitation, which is the sum of memory limits of all the containers in that pod - the pod will be OOM killed. However, the latter only seems possible if one of the containers exceeds its memory allowance. In this case - wouldn't the container be killed first?
I'm trying to understand the actual conditions in which a pod is OOM killed instead of a container.
I've also noticed that when there is one container in the pod and that container is exceeding its memory allowance repeatedly - the pod and container are killed intermittently. I observed this - the container would restart, which would be observable by watching the logs from the pod, and every second time - the pod is killed and restarted, incrementing its restart count.
If it helps to understand the behavior - the QOS class of the pod is Burstable.
|
|
This is how you can complete your desired commands:
kubectl exec -it <POD_NAME> -c <CONTAINER_NAME> -- /bin/bash
cat /run/secrets/kubernetes.io/serviceaccount/token && echo
NOTE: if you only have 1 container in your pod, you can omit -c <CONTAINER_NAME>
-i and -t keep stdin open and allocate a TTY, giving you an interactive session
-- separates kubectl's own flags from the command to run inside the container
/bin/bash (everything after --) is that command; you can pass further arguments after it
After the first command has run, you're inside a bash shell and can run whatever other commands you need inside the container.
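If you only need the token and not an interactive shell, the one-off form with -- works as well once the container is actually running (the container not found error in your log usually just means the container had not started yet):
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token && echo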
| ###Instruction: I want to create a service account with a token in Kubernetes. I tried this:
Full log:
root@vmi1026661:~# ^C
root@vmi1026661:~# kubectl create sa cicd
serviceaccount/cicd created
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 5s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: cicd
spec:
serviceAccount: cicd
containers:
- image: nginx
name: cicd
EOF
pod/cicd created
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl create token cicd
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjoxNjY2NzkyNTIxLCJpYXQiOjE2NjY3ODg5MjEsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyMSwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.iBkpVDQ_w_UZmbr3PnpouwtQlLz9FzJs_cJ7IYbY WUphBM4NO4o8gPgBfnHGPG3uFVbEDbgdY2TsuxHKss0FosiCdjYBiLn8dp_SQd1Rdk0TMYGCLAOWRgZE XjpmXMLBcHtC5TexJY-bIpvw7Ni4Xls5XPbGpfqL_fcPuUQR3Gurkmk7gPSly77jRKSaF-kzj0oq78MPtwHu92g5hnIZs7ZLaMLzo9EvDRT092RVZXiVF0FkmflnUPNiyKxainrfvWTiTAlYSZreX6JfGjimklTAKCue4w9CqWZGNyGGumqH02ucMQ
xjAiHS6J_Goxyaho8QEvFsEhkVqNFndzbw
root@vmi1026661:~# kubectl create token cicd --duration=999999h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjo1MjY2Nzg1MzI2LCJpYXQiOjE2NjY3ODg5MjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyNiwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.N1V7i0AgW3DihJDWcGbM0kDvFH_nWodPlqZjLSHM KvaRAfmujOxSk084mrmjkZwIzWGanA6pkTQHiBIAGh8UhR7ijo4J6S58I-5Dj4gu2UWVOpaBzDBrKqBD SapFw9PjKpZYCHjsXTCzx6Df8q-bAEk_lpc0CsfpbXQl2jpJm3TTtQp1GKuIc53k5VKz9ON8MXcHY8lEfNs78ew8GiaoX6M4_5LmjSNVMHtyRy-Z_oIH9yK8LcHLxh0wqMS7RyW9UKN_9-qH1h01NwrFFOQWpbstFVuQKAnI-RyNEZDc9FZMNwYd_n
MwaKv54oNLx4TniOSOWxS7ZcEyP5b7U8mgBw
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cicd
annotations:
kubernetes.io/service-account.name: "cicd"
EOF
secret/cicd created
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ClusterRoleBind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: cicd
namespace: default
EOF
clusterrolebinding.rbac.authorization.k8s.io/ClusterRoleBind created
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 60s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/cicd kubernetes.io/service-account-token 3 12s
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# kubectl describe secret cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: cicd
kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZ XRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZ XJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2Nvd W50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291b nQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OGY3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3M iIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLP lDMYgF_kzO7LV5B92voDjIPlXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtw GLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZB_lJPjZ90-lkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQ
T6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA
root@vmi1026661:~# kubectl describe sa cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: cicd
Events: <none>
root@vmi1026661:~# kubectl get sa cicd -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-10-26T12:54:45Z"
name: cicd
namespace: default
resourceVersion: "2206462"
uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 82s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/cicd kubernetes.io/service-account-token 3 34s
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# ^C
root@vmi1026661:~# kubectl describe secret cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: cicd
kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW5
0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlc
m5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OG
Y3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLPlDMYgF_kzO7LV5-02voDjIP
lXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtwGLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZ
B_lJPjZ9DtBblkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQT6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA
root@vmi1026661:~#
root@vmi1026661:~#
As you can see I get error:
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl create token cicd
Do you know what the appropriate command to complete this step would be?
EDIT: here is the result from troubleshooting
root@vmi1026661:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
cicd 1/1 Running 0 67m
repo-docker-registry-78d6c5bdb5-r68jb 0/1 Pending 0 16d
root@vmi1026661:~# kubectl describe pod cicd
Name: cicd
Namespace: default
Priority: 0
Service Account: cicd
Node: vmi1026660/38.242.240.39
Start Time: Wed, 26 Oct 2022 14:54:57 +0200
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.13
IPs:
IP: 10.244.1.13
Containers:
cicd:
Container ID: containerd://ab44fc463f97316ba807efce0c82e276cf06326e1d03846c1f6186484ff9fcbb
Image: nginx
Image ID: docker.io/library/nginx@sha256:47a8d86548c232e44625d813b45fd92e81d07c639092cd1f9a49d98e1fb5f737
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 26 Oct 2022 14:55:22 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xggpn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-xggpn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
root@vmi1026661:~#
### Response: This is how you can complete your desired commands:
kubectl exec -it <POD_NAME> -c <CONTAINER_NAME> -- /bin/bash
cat /run/secrets/kubernetes.io/serviceaccount/token && echo
NOTE: if you only have 1 container in your pod, you can omit -c <CONTAINER_NAME>
-i and -t keep stdin open and allocate a TTY, giving you an interactive session
-- separates kubectl's own flags from the command to run inside the container
/bin/bash (everything after --) is that command; you can pass further arguments after it
After the first command has run, you're inside a bash shell and can run whatever other commands you need inside the container.
| I want to create a service account with a token in Kubernetes. I tried this:
Full log:
root@vmi1026661:~# ^C
root@vmi1026661:~# kubectl create sa cicd
serviceaccount/cicd created
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 5s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: cicd
spec:
serviceAccount: cicd
containers:
- image: nginx
name: cicd
EOF
pod/cicd created
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl create token cicd
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjoxNjY2NzkyNTIxLCJpYXQiOjE2NjY3ODg5MjEsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyMSwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.iBkpVDQ_w_UZmbr3PnpouwtQlLz9FzJs_cJ7IYbY WUphBM4NO4o8gPgBfnHGPG3uFVbEDbgdY2TsuxHKss0FosiCdjYBiLn8dp_SQd1Rdk0TMYGCLAOWRgZE XjpmXMLBcHtC5TexJY-bIpvw7Ni4Xls5XPbGpfqL_fcPuUQR3Gurkmk7gPSly77jRKSaF-kzj0oq78MPtwHu92g5hnIZs7ZLaMLzo9EvDRT092RVZXiVF0FkmflnUPNiyKxainrfvWTiTAlYSZreX6JfGjimklTAKCue4w9CqWZGNyGGumqH02ucMQ
xjAiHS6J_Goxyaho8QEvFsEhkVqNFndzbw
root@vmi1026661:~# kubectl create token cicd --duration=999999h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjo1MjY2Nzg1MzI2LCJpYXQiOjE2NjY3ODg5MjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyNiwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.N1V7i0AgW3DihJDWcGbM0kDvFH_nWodPlqZjLSHM KvaRAfmujOxSk084mrmjkZwIzWGanA6pkTQHiBIAGh8UhR7ijo4J6S58I-5Dj4gu2UWVOpaBzDBrKqBD SapFw9PjKpZYCHjsXTCzx6Df8q-bAEk_lpc0CsfpbXQl2jpJm3TTtQp1GKuIc53k5VKz9ON8MXcHY8lEfNs78ew8GiaoX6M4_5LmjSNVMHtyRy-Z_oIH9yK8LcHLxh0wqMS7RyW9UKN_9-qH1h01NwrFFOQWpbstFVuQKAnI-RyNEZDc9FZMNwYd_n
MwaKv54oNLx4TniOSOWxS7ZcEyP5b7U8mgBw
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cicd
annotations:
kubernetes.io/service-account.name: "cicd"
EOF
secret/cicd created
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ClusterRoleBind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: cicd
namespace: default
EOF
clusterrolebinding.rbac.authorization.k8s.io/ClusterRoleBind created
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 60s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/cicd kubernetes.io/service-account-token 3 12s
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# kubectl describe secret cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: cicd
kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZ XRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZ XJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2Nvd W50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291b nQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OGY3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3M iIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLP lDMYgF_kzO7LV5B92voDjIPlXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtw GLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZB_lJPjZ90-lkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQ
T6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA
root@vmi1026661:~# kubectl describe sa cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: cicd
Events: <none>
root@vmi1026661:~# kubectl get sa cicd -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-10-26T12:54:45Z"
name: cicd
namespace: default
resourceVersion: "2206462"
uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 82s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/cicd kubernetes.io/service-account-token 3 34s
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# ^C
root@vmi1026661:~# kubectl describe secret cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: cicd
kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW5
0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlc
m5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OG
Y3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLPlDMYgF_kzO7LV5-02voDjIP
lXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtwGLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZ
B_lJPjZ9DtBblkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQT6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA
root@vmi1026661:~#
root@vmi1026661:~#
As you can see I get error:
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl create token cicd
Do you know what the appropriate command to complete this step would be?
EDIT: here is the result from troubleshooting
root@vmi1026661:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
cicd 1/1 Running 0 67m
repo-docker-registry-78d6c5bdb5-r68jb 0/1 Pending 0 16d
root@vmi1026661:~# kubectl describe pod cicd
Name: cicd
Namespace: default
Priority: 0
Service Account: cicd
Node: vmi1026660/38.242.240.39
Start Time: Wed, 26 Oct 2022 14:54:57 +0200
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.13
IPs:
IP: 10.244.1.13
Containers:
cicd:
Container ID: containerd://ab44fc463f97316ba807efce0c82e276cf06326e1d03846c1f6186484ff9fcbb
Image: nginx
Image ID: docker.io/library/nginx@sha256:47a8d86548c232e44625d813b45fd92e81d07c639092cd1f9a49d98e1fb5f737
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 26 Oct 2022 14:55:22 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xggpn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-xggpn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
root@vmi1026661:~#
|
|
Generally, stdin is a read-only stream for retrieving input written to your program, while stdout is a write-only stream for sending output written by your program. In other words, nobody can read from /dev/stdout, except Chuck Norris.
By default, stdout is "pointing" to your terminal. But it is possible to redirect stdout from your terminal to a file. This redirection is set up before your program is started.
What usually happens, is the following: The container runtime redirects stdout of the process of your container to a file on the node where your container is running (e.g., /var/log/containers/<container-name>-<container-id>.log). When you request logs with kubectl logs, kubectl connects to kube-apiserver, which connects to the kubelet on the node running your container and asks it to send back the content from the log file.
Also take a look at https://kubernetes.io/docs/concepts/cluster-administration/logging/ which explains the various logging design approaches.
A solution, which from a security and portability perspective you would definitely NOT implement, is to add a hostPath mount in your container mounting the /var/log/containers directory of your node and to access the container log directly.
A proper solution might be to change the command of your image and to write output to stdout of your container and also to a local file within your container. This can be achieved using the tee command. Your application can then read back the log from this file. But keep in mind, that without proper rotation, the log file will grow until your container is terminated.
apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
containers:
- image: bash:latest
name: log-to-stdout-and-file
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /tmp/test.log'
A little more complex solution would be to replace the log file in the container with a named pipe file created with mkfifo. This avoids the growing file size problem (as long as your application is continuously reading the log from the named pipe file).
apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
# the init container creates the fifo in an empty dir mount
initContainers:
- image: bash:latest
name: create-fifo
command:
- bash
- -c
- mkfifo /var/log/myapp/log
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# the actual app uses tee to write the log to stdout and to the fifo
containers:
- image: bash:latest
name: log-to-stdout-and-fifo
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /var/log/myapp/log'
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# this sidecar container is only for testing purposes, it reads the
# content written to the fifo (this is usually done by the app itself)
#- image: bash:latest
# name: log-reader
# command:
# - bash
# - -c
# - cat /var/log/myapp/log
# volumeMounts:
# - name: ed
# mountPath: /var/log/myapp
volumes:
- name: ed
emptyDir: {}
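If the application itself needs to read that log back (as in the Go snippet from the question), a minimal sketch of the reading side could look like the following; it assumes the tee variant above writing to /tmp/test.log, and it skips tailing and rotation concerns:
package main

import (
    "bufio"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    // Open the file that tee writes to (see the first Pod example above).
    f, err := os.Open("/tmp/test.log")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    r := bufio.NewReader(f)
    for {
        line, err := r.ReadString('\n')
        if len(line) > 0 {
            fmt.Print(line) // process the log line here
        }
        if err == io.EOF {
            break // or sleep and retry to keep tailing the file
        }
        if err != nil {
            log.Fatal(err)
        }
    }
}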
| ###Instruction: I have an application designed to run as a K8s application, and it imports some dependencies (that I don't own) that run exec.Cmds. This is fine, except I want to capture those logs. For some reason, when I do:
r := bufio.NewReader(os.Stdout)
...
line, err := r.ReadString('\n')
An error is thrown saying that /dev/stdout is a bad file descriptor. How can this be? Isn't that the standard local destination for console output?
kubectl logs seems to be able to capture the output, and more specifically, our central log forwarder is able to capture it as well. But trying to capture logs from the kube API server inside the container that's actually generating those logs seems kinda silly... Is there a better way to do this?
### Response: Generally, stdin is a read-only stream for retrieving input written to your program, while stdout is a write-only stream for sending output written by your program. In other words, nobody can read from /dev/stdout, except Chuck Norris.
By default, stdout is "pointing" to your terminal. But it is possible to redirect stdout from your terminal to a file. This redirection is set up before your program is started.
What usually happens, is the following: The container runtime redirects stdout of the process of your container to a file on the node where your container is running (e.g., /var/log/containers/<container-name>-<container-id>.log). When you request logs with kubectl logs, kubectl connects to kube-apiserver, which connects to the kubelet on the node running your container and asks it to send back the content from the log file.
Also take a look at https://kubernetes.io/docs/concepts/cluster-administration/logging/ which explains the various logging design approaches.
A solution, which from a security and portability perspective you would definitely NOT implement, is to add a hostPath mount in your container mounting the /var/log/containers directory of your node and to access the container log directly.
A proper solution might be to change the command of your image and to write output to stdout of your container and also to a local file within your container. This can be achieved using the tee command. Your application can then read back the log from this file. But keep in mind, that without proper rotation, the log file will grow until your container is terminated.
apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
containers:
- image: bash:latest
name: log-to-stdout-and-file
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /tmp/test.log'
A little more complex solution would be to replace the log file in the container with a named pipe file created with mkfifo. This avoids the growing file size problem (as long as your application is continuously reading the log from the named pipe file).
apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
# the init container creates the fifo in an empty dir mount
initContainers:
- image: bash:latest
name: create-fifo
command:
- bash
- -c
- mkfifo /var/log/myapp/log
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# the actual app uses tee to write the log to stdout and to the fifo
containers:
- image: bash:latest
name: log-to-stdout-and-fifo
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /var/log/myapp/log'
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# this sidecar container is only for testing purposes, it reads the
# content written to the fifo (this is usually done by the app itself)
#- image: bash:latest
# name: log-reader
# command:
# - bash
# - -c
# - cat /var/log/myapp/log
# volumeMounts:
# - name: ed
# mountPath: /var/log/myapp
volumes:
- name: ed
emptyDir: {}
| I have an application designed to run as a K8s application, and it imports some dependencies (that I don't own) that run exec.Cmds. This is fine, except I want to capture those logs. For some reason, when I do:
r := bufio.NewReader(os.Stdout)
...
line, err := r.ReadString('\n')
An error is thrown saying that /dev/stdout is a bad file descriptor. How can this be? Isn't that the standard local destination for console output?
kubectl logs seems to be able to capture the output, and more specifically, our central log forwarder is able to capture it as well. But trying to capture logs from the kube API server inside the container that's actually generating those logs seems kinda silly... Is there a better way to do this?
|
|
In order to get custom labels exposed within kube_statefulset_labels we need to add
--metric-labels-allowlist as a flag on kube-state-metrics. (docs)
In this particular example it would be:
--metric-labels-allowlist=statefulsets=[label_network]
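As a sketch of where that flag goes, here is a possible excerpt of the kube-state-metrics Deployment; the container layout and image tag are assumptions for a typical v2.5.0 install, not taken from your cluster:
containers:
  - name: kube-state-metrics
    image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.5.0
    args:
      - --metric-labels-allowlist=statefulsets=[label_network]
After kube-state-metrics restarts with the flag, kube_statefulset_labels should carry a label_label_network label for StatefulSets that have that label (kube-state-metrics prefixes allowed label keys with label_); a wildcard such as statefulsets=[*] can be used if every label should be exposed.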
| ###Instruction: For example I have StatefulSet with custom labels/annotations applied on it.
Everyone mentions these two metrics should be used, because kube-state-metrics should generate labels/annotations as well.
kube_statefulset_annotations
kube_statefulset_labels
The thing is, I can see only default ones (job,instance, namespace,...) but not additionally added labels/annotations.
Example of the manifest I am testing with:
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
label_network: "111"
labels:
app: testing-labels
label_STATEFULSET_LABEL: "111"
label_network: "111"
name: testing-labels
namespace: sre-test
spec:
selector:
matchLabels:
app: testing-labels
serviceName: testing-labels-headless
template:
metadata:
labels:
app: testing-labels
label_network: "111"
I've added so many different labels/annotations but the kube_statefulset_labels{statefulset='testing-labels'} returns:
kube_statefulset_labels{container="kube-rbac-proxy-main", instance="10.2.23.229:8443", job="kube-state-metrics", namespace="sre-test", prometheus="aws-monitoring/k8s", prometheus_replica="prometheus-k8s-1", statefulset="testing-labels"}
...which certainly doesn't contain any extra label. Any idea would be helpful? 🙏🏼
Versions:
kube-state-metrics:v2.5.0
kube-rbac-proxy:v0.12.0
### Response: In order to get custom labels exposed within kube_statefulset_labels we need to add
--metric-labels-allowlist as a flag on kube-state-metrics. (docs)
In this particular example it would be:
--metric-labels-allowlist=statefulsets=[label_network]
| For example I have StatefulSet with custom labels/annotations applied on it.
Everyone mentions these two metrics should be used, because kube-state-metrics should generate labels/annotations as well.
kube_statefulset_annotations
kube_statefulset_labels
The thing is, I can see only default ones (job,instance, namespace,...) but not additionally added labels/annotations.
Example of the manifest I am testing with:
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
label_network: "111"
labels:
app: testing-labels
label_STATEFULSET_LABEL: "111"
label_network: "111"
name: testing-labels
namespace: sre-test
spec:
selector:
matchLabels:
app: testing-labels
serviceName: testing-labels-headless
template:
metadata:
labels:
app: testing-labels
label_network: "111"
I've added so many different labels/annotations but the kube_statefulset_labels{statefulset='testing-labels'} returns:
kube_statefulset_labels{container="kube-rbac-proxy-main", instance="10.2.23.229:8443", job="kube-state-metrics", namespace="sre-test", prometheus="aws-monitoring/k8s", prometheus_replica="prometheus-k8s-1", statefulset="testing-labels"}
...which certainly doesn't contain any extra label. Any idea would be helpful? 🙏🏼
Versions:
kube-state-metrics:v2.5.0
kube-rbac-proxy:v0.12.0
|
|
If you want the server address from the kubeconfig file, just read it from your config variable:
package main
import (
    "flag"
    "fmt"
    "path/filepath"

    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
fmt.Printf("server: %s\n", config.Host)
}
If you're curious what other fields are available on the rest.Config object, a quick solution is to print out the config variable using the %+v format specifier:
fmt.Printf("%+v\n", config)
For more details, look at the reference documentation.
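If all you need is the address, one possible alternative (a sketch reusing the *kubeconfig flag and imports from above) is to load the raw kubeconfig and walk its clusters instead of building a rest.Config:
rawCfg, err := clientcmd.LoadFromFile(*kubeconfig)
if err != nil {
    panic(err)
}
for name, cluster := range rawCfg.Clusters {
    fmt.Printf("cluster %q server: %s\n", name, cluster.Server)
}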
| ###Instruction: Basically, my kubeconfig file has:
apiVersion: v1
clusters:
- cluster:
server: <OAM ip address> this is what I want
(...)
I want to get the server address.
Searching previously, I found this solution:
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
if err != nil {
panic(err)
}
nodeip := []corev1.NodeAddress{}
for i := 0; i < len(nodes.Items); i++ {
nodeip = nodes.Items[i].Status.Addresses
fmt.Println(nodeip[0].Address)
}
fmt.Println(nodes.Items[0].Status.Addresses)
But it gives me the Internal IP, not the OAM server IP (which is inside the Kubernetes config file)
### Response: If you want the server address from the kubeconfig file, just read it from your config variable:
package main
import (
    "flag"
    "fmt"
    "path/filepath"

    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
fmt.Printf("server: %s\n", config.Host)
}
If you're curious what other fields are available on the rest.Config object, a quick solution is to print out the config variable using the %+v format specifier:
fmt.Printf("%+v\n", config)
For more details, look at the reference documentation.
| Basically, my kubeconfig file has:
apiVersion: v1
clusters:
- cluster:
server: <OAM ip address> this is what I want
(...)
I want to get the server address.
Searching previously, I found this solution:
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
if err != nil {
panic(err)
}
nodeip := []corev1.NodeAddress{}
for i := 0; i < len(nodes.Items); i++ {
nodeip = nodes.Items[i].Status.Addresses
fmt.Println(nodeip[0].Address)
}
fmt.Println(nodes.Items[0].Status.Addresses)
But it gives me the Internal IP, not the OAM server IP (which is inside the Kubernetes config file)
|
|
I think it's because you are using kops-darwin-amd64. This is for macOS. I think you should be using kops-linux-amd64 instead for Linux.
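As a sketch, these are the same steps from the question with the Linux binary swapped in (URL pattern copied from your own command):
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
kops version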
| ###Instruction: I went through the steps listed here: https://kubernetes.io/docs/setup/production-environment/tools/kops/
After moving the kops file to /usr/local/bin/ and renaming to kops, I tried to confirm if it was in fact installed and executable by trying 'kops --help' and 'kops --version'/'kops version' and neither command worked. Any idea what the issue might be?
Edit: Here's what I did step by step
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
sudo chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin/kops
It's a t2.micro Ubuntu 20.04 EC2 Instance.
Tried to confirm if kops was properly installed and executable by entering 'kops --help' and 'kops --version' and also 'kops version' but they all return this error:
-bash: /usr/local/bin/kops: cannot execute binary file: Exec format error
### Response: I think it's because you are using kops-darwin-amd64. This is for macOS. I think you should be using kops-linux-amd64 instead for Linux.
| I went through the steps listed here: https://kubernetes.io/docs/setup/production-environment/tools/kops/
After moving the kops file to /usr/local/bin/ and renaming to kops, I tried to confirm if it was in fact installed and executable by trying 'kops --help' and 'kops --version'/'kops version' and neither command worked. Any idea what the issue might be?
Edit: Here's what I did step by step
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
sudo chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin/kops
It's a t2.micro Ubuntu 20.04 EC2 Instance.
Tried to confirm if kops was properly installed and executable by entering 'kops --help' and 'kops --version' and also 'kops version' but they all return this error:
-bash: /usr/local/bin/kops: cannot execute binary file: Exec format error
|
|
The reason you encounter the issue is that Helm attempts to create a diff patch between the currently deployed release (which contains the Kubernetes APIs that have been removed in your current Kubernetes version) and the chart you are passing with the updated/supported API versions. When Kubernetes removes an API version, the Kubernetes Go client library can no longer parse the deprecated objects, so Helm fails when calling the library.
Helm has the official documentation on how to recover from that scenario:
https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest
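As a hedged sketch of what that documentation suggests, the helm mapkubeapis plugin rewrites the stored release manifest to supported API versions; the release name and namespace below are placeholders:
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace my-namespace
helm upgrade my-release ./my-chart --namespace my-namespace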
| ###Instruction: I get the below error in my helm upgrade stage. I did the following change: apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. Could someone kindly let me know the reason why I encounter this issue and the fix for the same? Any help is much appreciated.
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for
this kubernetes version and it is therefore unable to build the kubernetes objects for
performing the diff. error from kubernetes: unable to recognize "": no matches for
kind "Ingress" in version "networking.k8s.io/v1beta1"
### Response: The reason you encounter the issue is that Helm attempts to create a diff patch between the currently deployed release (which contains the Kubernetes APIs that have been removed in your current Kubernetes version) and the chart you are passing with the updated/supported API versions. When Kubernetes removes an API version, the Kubernetes Go client library can no longer parse the deprecated objects, so Helm fails when calling the library.
Helm has the official documentation on how to recover from that scenario:
https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest
| I get the below error in my helm upgrade stage. I did the following change: apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. Could someone kindly let me know the reason why I encounter this issue and the fix for the same? Any help is much appreciated.
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for
this kubernetes version and it is therefore unable to build the kubernetes objects for
performing the diff. error from kubernetes: unable to recognize "": no matches for
kind "Ingress" in version "networking.k8s.io/v1beta1"
|
|
import json

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

def delete_pod(pod_name):
    return v1.delete_namespaced_pod(name=pod_name, namespace="default")

def delete_pod_if_exists(pod_name):
    def run():
        delete_pod(pod_name)
    # Keep deleting until the API answers 404, i.e. the pod is fully gone.
    while True:
        try:
            run()
        except ApiException as e:
            has_deleted = json.loads(e.body)['code'] == 404
            if has_deleted:
                return
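If you also want the "block until the pods are gone" behaviour without relying on watch events, a minimal polling sketch (reusing the v1 client from above; namespace, label selector and timeout are placeholders) could be:
import time

def wait_for_pods_gone(namespace="default", label_selector="mylabel=my-value", timeout=300):
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = v1.list_namespaced_pod(namespace, label_selector=label_selector)
        if not pods.items:
            return True  # every matching pod has been deleted
        time.sleep(2)
    return False  # timed out while pods were still terminating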
| ###Instruction: I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function something like follows.
try:
# try to delete if exists
AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
# handle exception
# wait for all pods associated with deployment to be deleted.
for e in w.stream(
v1.list_namespaced_pod, namespace="default",
label_selector='mylabel=my-value',
timeout_seconds=300):
pod_name = e['object'].metadata.name
print("pod_name", pod_name)
if e['type'] == 'DELETED':
w.stop()
break
However, I see two problems with this.
If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.
Upon seeing events in the event stream for the pod activity, how do I know all the pods got deleted? It seems fragile to count them.
I'm basically looking to replace the kubectl delete --wait functionality with a python script.
Thanks for any insights into this.
### Response: import json

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

def delete_pod(pod_name):
    return v1.delete_namespaced_pod(name=pod_name, namespace="default")

def delete_pod_if_exists(pod_name):
    def run():
        delete_pod(pod_name)
    # Keep deleting until the API answers 404, i.e. the pod is fully gone.
    while True:
        try:
            run()
        except ApiException as e:
            has_deleted = json.loads(e.body)['code'] == 404
            if has_deleted:
                return
| I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function something like follows.
try:
# try to delete if exists
AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
# handle exception
# wait for all pods associated with deployment to be deleted.
for e in w.stream(
v1.list_namespaced_pod, namespace="default",
label_selector='mylabel=my-value',
timeout_seconds=300):
pod_name = e['object'].metadata.name
print("pod_name", pod_name)
if e['type'] == 'DELETED':
w.stop()
break
However, I see two problems with this.
If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.
Upon seeing events in the event stream for the pod activity, how do I know all the pods got deleted? It seems fragile to count them.
I'm basically looking to replace the kubectl delete --wait functionality with a python script.
Thanks for any insights into this.
|
|
Run the below command to get the list of Docker hosts (contexts):
docker context ls
output will be something like below
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux unix:///home/sc2302/.docker/desktop/docker.sock
rootless Rootless mode unix:///run/user/1001/docker.sock
Now, from the output, select the context you want to use. For example, to switch to the default context:
docker context use default
| ###Instruction: I recently changed the docker daemon from my local Docker Desktop to local minikube following these instructions.
@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i
After running some tests, I want to change it back to my previous setup. I already tried to change some environment variables but it did not succeed.
SET DOCKER_HOST=tcp://127.0.0.1:2375
### Response: Run the below command to get the list of Docker hosts (contexts):
docker context ls
output will be something like below
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux unix:///home/sc2302/.docker/desktop/docker.sock
rootless Rootless mode unix:///run/user/1001/docker.sock
Now, from the output, select the context you want to use. For example, to switch to the default context:
docker context use default
| I recently changed the docker daemon from my local Docker Desktop to local minikube following these instructions.
@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i
After running some tests, I want to change it back to my previous setup. I already tried to change some environment variables but it did not succeed.
SET DOCKER_HOST=tcp://127.0.0.1:2375
|
|
You can do this by adding a "tolerations" array.
If you want to tolerate every taint, you can do this with an "operator": "Exists" condition.
kubectl --namespace=$your_ns run $your_pod --image $your_image \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"},"tolerations":[{"operator":"Exists"}]}}' \
--command python3 script.py \
[--dry-run=client -o yaml]
Also note: if you want to use a YAML file, the "dry-run" and "-o yaml" option would help you generate a first copy. Maybe you would be more comfortable with a plaintext file testing your changes.
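If you would rather tolerate only the specific taint that keeps the pod Pending, a hedged variant of the same command is the following (key, value and effect are placeholders for your actual taint):
kubectl --namespace=ns run pod1 --image image1 \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"},"tolerations":[{"key":"dedicated","operator":"Equal","value":"app1","effect":"NoSchedule"}]}}' \
--command python3 script.py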
| ###Instruction: I am using Kubectl run command to execute pod on a specific node.
kubectl --namespace=ns run pod1 --image image1 \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"}}}' \
--command python3 script.py
Sometimes the pod goes into Pending status and cannot progress to the Running phase.
How can I add tolerations to the run command?
Note: I do not have a yaml file.
Any help is appreciated
### Response: You can do this by adding a "tolerations" array.
If you want to tolerate every taint, you can do this with an "operator": "Exists" condition.
kubectl --namespace=$your_ns run $your_pod --image $your_image \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"},"tolerations":[{"operator":"Exists"}]}}' \
--command python3 script.py \
[--dry-run=client -o yaml]
Also note: if you want to use a YAML file, the "dry-run" and "-o yaml" option would help you generate a first copy. Maybe you would be more comfortable with a plaintext file testing your changes.
| I am using Kubectl run command to execute pod on a specific node.
kubectl --namespace=ns run pod1 --image image1 \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"}}}' \
--command python3 script.py
Sometimes the pod goes into Pending status and cannot progress to the Running phase.
How can I add tolerations to the run command?
Note: I do not have a yaml file.
Any help is appreciated
|
|
As far as I know, the "SkuNotAvailable" error indicates either a capacity issue in the region or that your subscription doesn't have access to that specific size.
You can verify that by running the below Azure CLI command:
az vm list-skus --location centralus --size Standard_D --all --output table
If a SKU isn't available for your subscription in a location or zone that meets your business needs, submit a SKU request to Azure Support.
If the subscription doesn't have access, please reach out to the Azure subscription and quota management support team through a support case to check and, if possible, enable the particular size on your subscription; in case they cannot enable it for any reason, there will be an appropriate explanation.
At this point there is nothing that can be done on the AKS side.
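For the size and region from the question, the same check would look like this (assuming the Azure CLI is logged in to the affected subscription):
az vm list-skus --location southafricanorth --size Standard_D4s_v4 --all --output table
If the Restrictions column reports something like NotAvailableForSubscription, only a support request (or a different size/region) will resolve it.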
| ###Instruction: For one of the test AKS clusters I am trying to update, it gives the following error.
Error: SkuNotAvailable.
Message: The requested VM size for resource "Following SKUs have failed for capacity restrictions:
Standard_D4s_v4' is currently not available in location 'SouthAfricaNorth'. Please try another size or deploy to a different location or different size.
I have checked and found that the quota is available in the subscription for this SKU and region selected.
Now the cluster and pools have gone into a failed status.
### Response: As far as I know, the "SkuNotAvailable" error indicates either a capacity issue in the region or that your subscription doesn't have access to that specific size.
You can verify that by running the below Azure CLI command:
az vm list-skus --location centralus --size Standard_D --all --output table
If a SKU isn't available for your subscription in a location or zone that meets your business needs, submit a SKU request to Azure Support.
If the subscription doesn't have access, please reach out to the Azure subscription and quota management support team through a support case to check and, if possible, enable the particular size on your subscription; in case they cannot enable it for any reason, there will be an appropriate explanation.
At this point there is nothing that can be done on the AKS side.
| For one of the test AKS clusters I am trying to update, it gives the following error.
Error: SkuNotAvailable.
Message: The requested VM size for resource "Following SKUs have failed for capacity restrictions:
Standard_D4s_v4' is currently not available in location 'SouthAfricaNorth'. Please try another size or deploy to a different location or different size.
I have checked and found that the quota is available in the subscription for this SKU and region selected.
Now the cluster and pools have gone into a failed status.
|
|
Using PowerShell on Windows, you must explicitly ensure that the stdout lines emitted by curl.exe are separated with Unix-format LF-only newlines, \n, when PowerShell passes them on to bash, given that bash, like other Unix shells, doesn't recognize Windows-format CRLF newlines, \r\n:
The simplest way to avoid the problem is to call via cmd /c:
cmd /c 'curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0'
cmd.exe's pipeline (|) (as well as its redirection operator, >), unlike PowerShell's (see below), acts as a raw byte conduit, so it simply streams whatever bytes curl.exe outputs to the receiving bash call, unaltered.
Fixing the problem on the PowerShell side requires more work, and is inherently slower:
(
(
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh
) -join "`n"
) + "`n" | bash -s v0.22.0
Note: `n is a PowerShell escape sequence that produces a literal LF character, analogous to \n in certain bash contexts.
Note:
It is important to note that, as of PowerShell 7.2.x, passing raw bytes through the pipeline is not supported: external-program stdout output is invariably decoded into .NET strings on reading, and re-encoded based on the $OutputEncoding preference variable when writing to an(other) external program.
See this answer for more information, and GitHub issue #1908 for potential future support for raw byte streaming between external programs and on redirection to a file.
That is, PowerShell invariably interprets output from external programs, such as curl.exe, as text, and sends it line by line through the pipeline, as .NET string objects (the PowerShell pipeline in general conducts (.NET) objects).
Note that these lines (strings) do not have a trailing newline themselves; that is, the information about what specific newline sequences originally separated the lines is lost at that point (PowerShell itself recognizes CRLF and LF newlines interchangeably).
However, if the receiving command is also an external program, PowerShell adds a trailing platform-native newline to each line, which on Windows is a CRLF newline - this is what caused the problem.
By collecting the lines in an array up front, using (...), they can be sent as a single, LF-separated multi-line string, using the -join operator, as shown above.
Note that PowerShell appends a trailing platform-native newline to this single, multi-line string too, but a stray \r\n at the very end of the input is in effect ignored by bash, assuming that the last true input line ends in \n, which is what the extra + "`n" at the end of the expression ensures.
However, there are scenarios where this trailing CRLF newline does cause problems - see this answer for an example and workarounds via the platform-native shell.
| ###Instruction: I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the official documentation, but I keep getting the error below. What could possibly be wrong?
This is the result from the command:
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
I've tried removing the existing curl and downloaded and installed another version but the issue has still persisted. Most solutions online are for Linux users and they all lead to Windows path settings and files issues.
I haven't found one tackling installing a file using curl.
I'll gladly accept any help.
### Response: Using PowerShell on Windows, you must explicitly ensure that the stdout lines emitted by curl.exe are separated with Unix-format LF-only newlines, \n, when PowerShell passes them on to bash, given that bash, like other Unix shells, doesn't recognize Windows-format CRLF newlines, \r\n:
The simplest way to avoid the problem is to call via cmd /c:
cmd /c 'curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0'
cmd.exe's pipeline (|) (as well as its redirection operator, >), unlike PowerShell's (see below), acts as a raw byte conduit, so it simply streams whatever bytes curl.exe outputs to the receiving bash call, unaltered.
Fixing the problem on the PowerShell side requires more work, and is inherently slower:
(
(
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh
) -join "`n"
) + "`n" | bash -s v0.22.0
Note: `n is a PowerShell escape sequence that produces a literal LF character, analogous to \n in certain bash contexts.
Note:
It is important to note that, as of PowerShell 7.2.x, passing raw bytes through the pipeline is not supported: external-program stdout output is invariably decoded into .NET strings on reading, and re-encoded based on the $OutputEncoding preference variable when writing to an(other) external program.
See this answer for more information, and GitHub issue #1908 for potential future support for raw byte streaming between external programs and on redirection to a file.
That is, PowerShell invariably interprets output from external programs, such as curl.exe, as text, and sends it line by line through the pipeline, as .NET string objects (the PowerShell pipeline in general conducts (.NET) objects).
Note that these lines (strings) do not have a trailing newline themselves; that is, the information about what specific newline sequences originally separated the lines is lost at that point (PowerShell itself recognizes CRLF and LF newlines interchangeably).
However, if the receiving command is also an external program, PowerShell adds a trailing platform-native newline to each line, which on Windows is a CRLF newline - this is what caused the problem.
By collecting the lines in an array up front, using (...), they can be sent as a single, LF-separated multi-line string, using the -joinoperator, as shown above.
Note that PowerShell appends a trailing platform-native newline to this single, multi-line string too, but a stray \r\n at the very end of the input is in effect ignored by bash, assuming that the last true input line ends in \n, which is what the extra + "`n" at the end of the expression ensures.
However, there are scenarios where this trailing CRLF newline does cause problems - see this answer for an example and workarounds via the platform-native shell.
| I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the official documentation, but I keep getting the error below. What could possibly be wrong?
This is the result from the command:
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
I've tried removing the existing curl and downloaded and installed another version but the issue has still persisted. Most solutions online are for Linux users and they all lead to Windows path settings and files issues.
I haven't found one tackling installing a file using curl.
I'll gladly accept any help.
|
|
The error code 12 indicates that the PowerDNS recursor did not receive a response from any of the authoritative servers for the queried domain within the configured timeout. This could be due to network issues, firewall rules, rate limiting, or misconfiguration of the recursor or the authoritative servers.
Possible solutions
There are a few things you can try to resolve this timeout error:
Check the network connectivity and latency between your python pod and your recursor pod, and between your recursor pod and the authoritative servers. You can use tools like ping, traceroute, or dig to diagnose network problems.
Check the firewall rules on your k8s cluster and on the authoritative servers. Make sure they allow UDP and TCP traffic on port 53 for DNS queries and responses. You can use tools like iptables, nftables, or ufw to manage firewall rules.
Check the rate limiting settings on your recursor and on the authoritative servers. Rate limiting is a mechanism to prevent denial-of-service attacks or abuse of DNS resources by limiting the number of queries per second from a given source. You can use tools like pdnsutil or pdns_control to configure rate limiting on PowerDNS recursor and authoritative servers.
Check the configuration of your recursor and the authoritative servers. Make sure they have the correct IP addresses, domain names, and DNSSEC settings. You can use tools like pdnsutil or pdns_control to manage PowerDNS configuration files and settings.
Examples
Here are some examples of how to use the tools mentioned above to troubleshoot the timeout error:
To ping the recursor pod from the python pod, you can use the following command:
import subprocess
recursor_pod_ip = "10.0.0.1" # replace with the actual IP address of the recursor pod
ping_result = subprocess.run(["ping", "-c", "4", recursor_pod_ip], capture_output=True)
print(ping_result.stdout.decode())
This will send four ICMP packets to the recursor pod and print the output. You should see something like this:
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.102 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.101 ms
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3060ms
rtt min/avg/max/mdev = 0.098/0.106/0.123/0.010 ms
This indicates that the network connectivity and latency between the python pod and the recursor pod are good.
To traceroute the authoritative server from the recursor pod, you can use the following command:
kubectl exec -it recursor-pod -- traceroute 8.8.8.8
This will trace the route taken by packets from the recursor pod to the authoritative server at 8.8.8.8 (Google DNS). You should see something like this:
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.123 ms 0.098 ms 0.102 ms
2 10.0.1.1 (10.0.1.1) 0.456 ms 0.432 ms 0.419 ms
3 10.0.2.1 (10.0.2.1) 0.789 ms 0.765 ms 0.752 ms
4 192.168.0.1 (192.168.0.1) 1.123 ms 1.098 ms 1.085 ms
5 192.168.1.1 (192.168.1.1) 1.456 ms 1.432 ms 1.419 ms
6 192.168.2.1 (192.168.2.1) 1.789 ms 1.765 ms 1.752 ms
7 192.168.3.1 (192.168.3.1) 2.123 ms 2.098 ms 2.085 ms
8 192.168.4.1 (192.168.4.1) 2.456 ms 2.432 ms 2.419 ms
9 192.168.5.1 (192.168.5.1) 2.789 ms 2.765 ms 2.752 ms
10 8.8.8.8 (8.8.8.8) 3.123 ms 3.098 ms 3.085 ms
This indicates that the route to the authoritative server is clear and there are no firewall blocks or network issues.
To dig the domain name from the recursor pod, you can use the following command:
kubectl exec -it recursor-pod -- dig example.com
This will send a DNS query for the domain name example.com to the recursor pod and print the response. You should see something like this:
; <<>> DiG 9.11.5-P4-5.1ubuntu2.1-Ubuntu <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 3600 IN A 93.184.216.34
;; Query time: 12 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Tue Jun 15 12:34:56 UTC 2021
;; MSG SIZE rcvd: 56
This indicates that the recursor pod received a valid response from the authoritative server for the domain name example.com.
To check the rate limiting settings on the recursor pod, you can use the following command:
kubectl exec -it recursor-pod -- pdns_control get-all
This will print all the configuration settings of the recursor pod. You should look for the following settings:
max-cache-entries=1000000
max-packetcache-entries=500000
max-recursion-depth=40
max-tcp-clients=128
max-udp-queries-per-round=1000
max-udp-queries-per-second=10000
These settings control the maximum number of cache entries, TCP clients, UDP queries, and recursion depth that the recursor pod can handle. You can adjust them according to your needs and resources. You can use the following command to set a new value for a setting:
kubectl exec -it recursor-pod -- pdns_control set max-udp-queries-per-second 20000
This will set the maximum number of UDP queries per second to 20000.
To check the configuration of the authoritative server at 8.8.8.8, you can use the following command:
dig +short CHAOS TXT version.bind @8.8.8.8
This will send a DNS query for the version of the authoritative server at 8.8.8.8. You should see something like this:
"google-public-dns-a.google.com"
This indicates that the authoritative server is running Google Public DNS, which is a well-known and reliable DNS service. You can check the documentation of Google Public DNS for more information on its configuration and features. You can also use the following command to check the DNSSEC status of the authoritative server:
dig +short CHAOS TXT id.server @8.8.8.8
This will send a DNS query for the identity of the authoritative server at 8.8.8.8. You should see something like this:
"edns0"
This indicates that the authoritative server supports EDNS0, which is an extension of the DNS protocol that enables DNSSEC and other features. You can check the documentation of EDNS0 for more information on its functionality and benefits.
| ###Instruction: I'm running PowerDNS recursor inside my k8s cluster. My python script is on a different pod that is doing rdns to my powerdns recursor app. I have my hpa Max replica set to 8. However, I do not think the load is the problem here. I'm unsure what to do to resolve this timeout error that I'm getting below. I can increase the replicas to solve the problem temporarily, and then it would happen again.
[ipmetadata][MainThread][source.py][144][WARNING]: dns_error code=12, message=Timeout while contacting DNS servers
It seems like my pods are rejecting incoming traffic therefore it's outputting the dns_error code=12.
Here is part of my script that's running the rdns
return_value = {
'rdns': None
}
try:
async for attempt in AsyncRetrying(stop=stop_after_attempt(3)):
with attempt:
try:
if ip:
result = await self._resolver.query(ip_address(ip).reverse_pointer, 'PTR')
return_value['rdns'] = result.name
return return_value
except DNSError as dns_error:
# 1 = DNS server returned answer with no data
# 4 = Domain name not found
# (seems to just be a failure of rdns lookup no sense in retrying)
# 11 = Could not contact DNS servers
if int(dns_error.args[0]) in [1, 4, 11]:
return return_value
LOG.warning('dns_error code=%d, message=%s, ip=%s', dns_error.args[0], dns_error.args[1], ip)
raise
except RetryError as retry_ex:
inner_exception = retry_ex.last_attempt.exception()
if isinstance(inner_exception, DNSError):
# 12 = Timeout while contacting DNS servers
LOG.error('dns_error code=%d, message=%s, ip=%s', inner_exception.args[0], inner_exception.args[1], ip)
else:
LOG.exception('rnds lookup failed')
return return_value
### Response: The error code 12 indicates that the PowerDNS recursor did not receive a response from any of the authoritative servers for the queried domain within the configured timeout. This could be due to network issues, firewall rules, rate limiting, or misconfiguration of the recursor or the authoritative servers.
Possible solutions
There are a few things you can try to resolve this timeout error:
Check the network connectivity and latency between your python pod and your recursor pod, and between your recursor pod and the authoritative servers. You can use tools like ping, traceroute, or dig to diagnose network problems.
Check the firewall rules on your k8s cluster and on the authoritative servers. Make sure they allow UDP and TCP traffic on port 53 for DNS queries and responses. You can use tools like iptables, nftables, or ufw to manage firewall rules.
Check the rate limiting settings on your recursor and on the authoritative servers. Rate limiting is a mechanism to prevent denial-of-service attacks or abuse of DNS resources by limiting the number of queries per second from a given source. You can use tools like pdnsutil or pdns_control to configure rate limiting on PowerDNS recursor and authoritative servers.
Check the configuration of your recursor and the authoritative servers. Make sure they have the correct IP addresses, domain names, and DNSSEC settings. You can use tools like pdnsutil or pdns_control to manage PowerDNS configuration files and settings.
Examples
Here are some examples of how to use the tools mentioned above to troubleshoot the timeout error:
To ping the recursor pod from the python pod, you can use the following command:
import subprocess
recursor_pod_ip = "10.0.0.1" # replace with the actual IP address of the recursor pod
ping_result = subprocess.run(["ping", "-c", "4", recursor_pod_ip], capture_output=True)
print(ping_result.stdout.decode())
This will send four ICMP packets to the recursor pod and print the output. You should see something like this:
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.102 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.101 ms
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3060ms
rtt min/avg/max/mdev = 0.098/0.106/0.123/0.010 ms
This indicates that the network connectivity and latency between the python pod and the recursor pod are good.
To traceroute the authoritative server from the recursor pod, you can use the following command:
kubectl exec -it recursor-pod -- traceroute 8.8.8.8
This will trace the route taken by packets from the recursor pod to the authoritative server at 8.8.8.8 (Google DNS). You should see something like this:
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.123 ms 0.098 ms 0.102 ms
2 10.0.1.1 (10.0.1.1) 0.456 ms 0.432 ms 0.419 ms
3 10.0.2.1 (10.0.2.1) 0.789 ms 0.765 ms 0.752 ms
4 192.168.0.1 (192.168.0.1) 1.123 ms 1.098 ms 1.085 ms
5 192.168.1.1 (192.168.1.1) 1.456 ms 1.432 ms 1.419 ms
6 192.168.2.1 (192.168.2.1) 1.789 ms 1.765 ms 1.752 ms
7 192.168.3.1 (192.168.3.1) 2.123 ms 2.098 ms 2.085 ms
8 192.168.4.1 (192.168.4.1) 2.456 ms 2.432 ms 2.419 ms
9 192.168.5.1 (192.168.5.1) 2.789 ms 2.765 ms 2.752 ms
10 8.8.8.8 (8.8.8.8) 3.123 ms 3.098 ms 3.085 ms
This indicates that the route to the authoritative server is clear and there are no firewall blocks or network issues.
To dig the domain name from the recursor pod, you can use the following command:
kubectl exec -it recursor-pod -- dig example.com
This will send a DNS query for the domain name example.com to the recursor pod and print the response. You should see something like this:
; <<>> DiG 9.11.5-P4-5.1ubuntu2.1-Ubuntu <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 3600 IN A 93.184.216.34
;; Query time: 12 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Tue Jun 15 12:34:56 UTC 2021
;; MSG SIZE rcvd: 56
This indicates that the recursor pod received a valid response from the authoritative server for the domain name example.com.
To check the rate limiting settings on the recursor pod, you can use the following command:
kubectl exec -it recursor-pod -- pdns_control get-all
This will print all the configuration settings of the recursor pod. You should look for the following settings:
max-cache-entries=1000000
max-packetcache-entries=500000
max-recursion-depth=40
max-tcp-clients=128
max-udp-queries-per-round=1000
max-udp-queries-per-second=10000
These settings control the maximum number of cache entries, TCP clients, UDP queries, and recursion depth that the recursor pod can handle. You can adjust them according to your needs and resources. You can use the following command to set a new value for a setting:
kubectl exec -it recursor-pod -- pdns_control set max-udp-queries-per-second 20000
This will set the maximum number of UDP queries per second to 20000.
To check the configuration of the authoritative server at 8.8.8.8, you can use the following command:
dig +short CHAOS TXT version.bind @8.8.8.8
This will send a DNS query for the version of the authoritative server at 8.8.8.8. You should see something like this:
"google-public-dns-a.google.com"
This indicates that the authoritative server is running Google Public DNS, which is a well-known and reliable DNS service. You can check the documentation of Google Public DNS for more information on its configuration and features. You can also use the following command to check the DNSSEC status of the authoritative server:
dig +short CHAOS TXT id.server @8.8.8.8
This will send a DNS query for the identity of the authoritative server at 8.8.8.8. You should see something like this:
"edns0"
This indicates that the authoritative server supports EDNS0, which is an extension of the DNS protocol that enables DNSSEC and other features. You can check the documentation of EDNS0 for more information on its functionality and benefits.
| I'm running PowerDNS recursor inside my k8s cluster. My python script is on a different pod that is doing rdns to my powerdns recursor app. I have my hpa Max replica set to 8. However, I do not think the load is the problem here. I'm unsure what to do to resolve this timeout error that I'm getting below. I can increase the replicas to solve the problem temporarily, and then it would happen again.
[ipmetadata][MainThread][source.py][144][WARNING]: dns_error code=12, message=Timeout while contacting DNS servers
It seems like my pods are rejecting incoming traffic therefore it's outputting the dns_error code=12.
Here is part of my script that's running the rdns
return_value = {
'rdns': None
}
try:
async for attempt in AsyncRetrying(stop=stop_after_attempt(3)):
with attempt:
try:
if ip:
result = await self._resolver.query(ip_address(ip).reverse_pointer, 'PTR')
return_value['rdns'] = result.name
return return_value
except DNSError as dns_error:
# 1 = DNS server returned answer with no data
# 4 = Domain name not found
# (seems to just be a failure of rdns lookup no sense in retrying)
# 11 = Could not contact DNS servers
if int(dns_error.args[0]) in [1, 4, 11]:
return return_value
LOG.warning('dns_error code=%d, message=%s, ip=%s', dns_error.args[0], dns_error.args[1], ip)
raise
except RetryError as retry_ex:
inner_exception = retry_ex.last_attempt.exception()
if isinstance(inner_exception, DNSError):
# 12 = Timeout while contacting DNS servers
LOG.error('dns_error code=%d, message=%s, ip=%s', inner_exception.args[0], inner_exception.args[1], ip)
else:
LOG.exception('rnds lookup failed')
return return_value
|
|
If you are just looking to give your workload an admin role, you can use the prebuilt cluster-admin cluster role which should be available on every k8s cluster.
See the docs for more details - https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
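A minimal sketch of that approach is to bind the built-in cluster-admin ClusterRole to the ServiceAccount the Job's pod runs as (the ServiceAccount name below is a placeholder):
kubectl create serviceaccount list-all -n istio-system
kubectl create clusterrolebinding list-all-admin --clusterrole=cluster-admin --serviceaccount=istio-system:list-all
Then set serviceAccountName: list-all in the Job's pod spec. Keep in mind cluster-admin grants far more than list, so scope it down once the debugging is done.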
| ###Instruction: I'm trying to create a job to list all resources because my connection is terrible. Is there any way to give a pod permission to run the below command?
Here is ClusterRole that I am trying:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: istio-system
name: workaround
rules:
- apiGroups: [""]
resources:
- '*'
verbs:
- '*'
- apiGroups: ['*']
resources:
- '*'
verbs:
- '*'
The command is:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n ibm-rancher
### Response: If you are just looking to give your workload an admin role, you can use the prebuilt cluster-admin cluster role which should be available on every k8s cluster.
See the docs for more details - https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
| I'm trying to create a job to list all resources because my connection is terrible. Is there any way to give a pod permission to run the below command?
Here is ClusterRole that I am trying:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: istio-system
name: workaround
rules:
- apiGroups: [""]
resources:
- '*'
verbs:
- '*'
- apiGroups: ['*']
resources:
- '*'
verbs:
- '*'
The command is:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n ibm-rancher
|
|
To start off, I need to be clear about a few things:
Based on the tags to the question, I see we are in PowerShell rather than a linux/unix or even Windows cmd shell
In spite of this, we are using Unix curl (probably curl.exe), and not the PowerShell alias for Invoke-WebRequest. We know this because of the -sL argument. If Powershell was using the alias, we'd see a completely different error.
Next, I need to talk briefly about line endings. Instead of just a single LF (\n) character as seen in Unix/linux and expected by bash, Windows by default uses the two-character CR/LF pair (\r\n) for line endings.
With all that background out of the way, I can now explain what's causing the problem. It's this single pipe character:
|
This is a PowerShell pipe, not a Unix pipe, so the operation puts the output of the curl program in the PowerShell pipeline in order to send it to the bash interpreter. Each line is an individual item on the pipeline, and as such no longer includes any original line breaks. The PowerShell pipeline will "correct" this before calling bash using the default line ending for the system, which in this case is the CR/LF pair used by Windows. Now when bash tries to interpret the input, it sees an extra \r character after every line and doesn't know what to do with it.
The trick is most of what we might do in Powershell to strip out those extra characters is still gonna get sent through another pipe after we're done. I guess we could tell curl to write the file to disk without ever using a pipe, and then tell bash to run the saved file, but that's awkward, extra work, and much slower.
But we can do a little better. PowerShell by default treats each line returned by curl as a separate item on the pipeline. We can "trick" it into instead putting one big item on the pipeline using the -join operation. That will give us one big string that can go on the pipeline as a single element. It will still end up with an extra \r character, but by the time bash sees it the script will have done its work.
Code to make this work is found in the other answer, and they deserve all the credit for the solution. The purpose of my post is to do a little better job explaining what's going on: why we have a problem, and why the solution works, since I had to read through that answer a couple times to really get it.
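For completeness, the "write the file to disk first" alternative mentioned above could look like this sketch (curl.exe to bypass the PowerShell alias; bash supplied by e.g. Git Bash or WSL):
curl.exe -sLo install.sh https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh
bash install.sh v0.22.0
Because the bytes go straight to disk, no PowerShell pipeline re-encoding happens and the script keeps its LF line endings.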
| ###Instruction: I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the official documentation, but I keep getting the error below. What could possibly be wrong?
This is the result from the command:
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
I've tried removing the existing curl and downloaded and installed another version but the issue has still persisted. Most solutions online are for Linux users and they all lead to Windows path settings and files issues.
I haven't found one tackling installing a file using curl.
I'll gladly accept any help.
### Response: To start off, I need to be clear about a few things:
Based on the tags to the question, I see we are in PowerShell rather than a linux/unix or even Windows cmd shell
In spite of this, we are using Unix curl (probably curl.exe), and not the PowerShell alias for Invoke-WebRequest. We know this because of the -sL argument. If Powershell was using the alias, we'd see a completely different error.
Next, I need to talk briefly about line endings. Instead of just a single LF (\n) character as seen in Unix/Linux and expected by bash, Windows by default uses the two-character CR/LF pair (\r\n) for line endings.
With all that background out of the way, I can now explain what's causing the problem. It's this single pipe character:
|
This is a PowerShell pipe, not a Unix pipe, so the operation puts the output of the curl program in the PowerShell pipeline in order to send it to the bash interpreter. Each line is an individual item on the pipeline, and as such no longer includes any original line breaks. The PowerShell pipeline will "correct" this before calling bash using the default line ending for the system, which in this case is the CR/LF pair used by Windows. Now when bash tries to interpret the input, it sees an extra \r character at the end of every line and doesn't know what to do with it.
The catch is that most of what we might do in PowerShell to strip out those extra characters will still get sent through another pipe after we're done. We could tell curl to write the file to disk without ever using a pipe, and then tell bash to run the saved file, but that's awkward, extra work, and much slower.
But we can do a little better. PowerShell by default treats each line returned by curl as a separate item on the pipeline. We can "trick" it into putting one big item on the pipeline instead, using the -join operation. That gives us one big string that can go on the pipeline as a single element. It will still end up with an extra \r character at the very end, but by the time bash sees it the script will have done its work.
Code to make this work is found in the other answer, and they deserve all the credit for the solution. The purpose of my post is to do a little better job explaining what's going on: why we have a problem, and why the solution works, since I had to read through that answer a couple times to really get it.
| I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the official documentation, but I keep getting the error below. What could possibly be wrong?
This is the result from the command:
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
I've tried removing the existing curl and downloaded and installed another version but the issue has still persisted. Most solutions online are for Linux users and they all lead to Windows path settings and files issues.
I haven't found one tackling installing a file using curl.
I'll gladly accept any help.
|
|
- mountPath: /etc/nats-server-conf/server.conf
The above setting will make a Pod mount server.conf as a directory, so try the below instead:
- mountPath: /etc/nats-server-conf
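With that change, the ConfigMap key (server.conf) is written as a file inside the mount directory, so the path given to -c still resolves to /etc/nats-server-conf/server.conf. A minimal sketch of the relevant part of the deployment from the question:
volumeMounts:
- mountPath: /etc/nats-server-conf
  name: nats-server-conf
args: ['-c', '/etc/nats-server-conf/server.conf']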
| ###Instruction: I am trying to deploy nats in k8s cluster. I need to override default server config.
Tried creating a configmap with --from-file and attached it to deployment, but it gives me the following error
nats-server: read /etc/nats-server-conf/server.conf: is a directory
ConfigMap
k describe configmaps nats-server-conf
Name: nats-server-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
server.conf:
----
accounts: {
\$SYS: {
users: [{user: sys, password: pass}]
}
}
BinaryData
====
Events: <none>
Following is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
name: nats-depl
spec:
replicas: 1
selector:
matchLabels:
app: nats
template:
metadata:
labels:
app: nats
spec:
containers:
- name: nats
image: nats
volumeMounts:
- mountPath: /etc/nats-server-conf/server.conf
name: nats-server-conf
args:
[
'-p',
'4222',
'-m',
'8222',
'-js',
'-c',
'/etc/nats-server-conf/server.conf'
]
volumes:
- configMap:
name: nats-server-conf
name: nats-server-conf
Thank you.
### Response: - mountPath: /etc/nats-server-conf/server.conf
The above setting will make a Pod mount server.conf as a directory, so try the below instead:
- mountPath: /etc/nats-server-conf
| I am trying to deploy nats in k8s cluster. I need to override default server config.
Tried creating a configmap with --from-file and attached it to deployment, but it gives me the following error
nats-server: read /etc/nats-server-conf/server.conf: is a directory
ConfigMap
k describe configmaps nats-server-conf
Name: nats-server-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
server.conf:
----
accounts: {
\$SYS: {
users: [{user: sys, password: pass}]
}
}
BinaryData
====
Events: <none>
Following is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
name: nats-depl
spec:
replicas: 1
selector:
matchLabels:
app: nats
template:
metadata:
labels:
app: nats
spec:
containers:
- name: nats
image: nats
volumeMounts:
- mountPath: /etc/nats-server-conf/server.conf
name: nats-server-conf
args:
[
'-p',
'4222',
'-m',
'8222',
'-js',
'-c',
'/etc/nats-server-conf/server.conf'
]
volumes:
- configMap:
name: nats-server-conf
name: nats-server-conf
Thank you.
|
|
apiVersion: skaffold/v3
kind: Config
build:
artifacts:
- image: fifapes123/auth
context: auth
sync:
manual:
- src: src/**/*.ts
dest: .
docker:
dockerfile: Dockerfile
local:
push: false
manifests:
rawYaml:
- ./infra/k8s/*
deploy:
kubectl: {}
Try the above YAML config. You should update your skaffold.yaml to a newer apiVersion (apiVersion: skaffold/v3, as used above). This can be done easily with the skaffold fix command.
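For illustration, one way to run that upgrade next to the old skaffold.yaml (a sketch; check skaffold fix --help for the exact flags your Skaffold release supports):
skaffold fix > skaffold.v3.yaml     # write the upgraded config out for review
mv skaffold.v3.yaml skaffold.yaml   # then replace the old file once it looks right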
| ###Instruction: apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: fifapes123/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
I am getting an error on line number 5. I am using skaffold/v2alpha3, in which manifests under kubectl is allowed, so why am I getting "property manifests is not allowed"?
### Response: apiVersion: skaffold/v3
kind: Config
build:
artifacts:
- image: fifapes123/auth
context: auth
sync:
manual:
- src: src/**/*.ts
dest: .
docker:
dockerfile: Dockerfile
local:
push: false
manifests:
rawYaml:
- ./infra/k8s/*
deploy:
kubectl: {}
Try the above YAML config. You should update your skaffold.yaml to a newer apiVersion (apiVersion: skaffold/v3, as used above). This can be done easily with the skaffold fix command.
| apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: fifapes123/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
I am getting an error on line number 5. I am using skaffold/v2alpha3, in which manifests under kubectl is allowed, so why am I getting "property manifests is not allowed"?
|
|
The command provided to exec is executed directly, without a shell. This means shell features and environment variables are not available, hence the error about the executable not being found in $PATH: the whole string "openssl version" is treated as a single executable name. Only executables reachable through the default $PATH (like those in /bin/) can be executed this way.
You can invoke a shell explicitly and pass $COMMAND to it as the command to execute:
kubectl exec -n $NAMESPACE --container $container -it $pod -- /bin/sh -c "$COMMAND"
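Based on the script in the question, the inner loop would then look roughly like this (a sketch; -it is dropped because no TTY is needed when running non-interactively from a script):
for container in $CONTAINERS; do
  kubectl -n "$NAMESPACE" exec "$pod" -c "$container" -- /bin/sh -c "$COMMAND"
done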
| ###Instruction: I'm trying to create a script in order to run multiple kubectl exec commands against multiple pods with multiple containers. The script seems to generate the command just fine but errors out when attempting to run it.
example command that is generated: kubectl -n <namespace> exec <pod_name> -c <container_name> -- openssl version
When I copy the generated command and run it directly it works fine, but if I try to run the command within the script I get an error.
OCI runtime exec failed: exec failed: unable to start container process: exec: "openssl version": executable file not found in $PATH: unknown
command terminated with exit code 126
snippet from .sh file:
for pod in $PODS; do
CONTAINERS=($(kubectl -n $NAMESPACE get pods $pod -o jsonpath='{.spec.containers[*].name}' | tr -s '[[:space:]]' '\n'))
header "{pod: \"$pod\", containers: \"$(echo $CONTAINERS | tr -d '\n')\"}"
if [ "$DRYRUN" != "true" ]; then
for container in $CONTAINERS; do
echo "COMMAND BEING RUN: \"kubectl -n $NAMESPACE exec $pod -c $container -- $COMMAND\""
kubectl -n $NAMESPACE exec $pod -c $container -- $COMMAND
done
fi
done
### Response: The command provided to exec is executed directly, without a shell. This means shell features and environment variables are not available, hence the error about the executable not being found in $PATH: the whole string "openssl version" is treated as a single executable name. Only executables reachable through the default $PATH (like those in /bin/) can be executed this way.
You can invoke a shell explicitly and pass $COMMAND to it as the command to execute:
kubectl exec -n $NAMESPACE --container $container -it $pod -- /bin/sh -c "$COMMAND"
| I'm trying to create a script in order to run multiple kubectl exec commands against multiple pods with multiple containers. The script seems to generate the command just fine but errors out when attempting to run it.
example command that is generated: kubectl -n <namespace> exec <pod_name> -c <container_name> -- openssl version
When I copy the generated command and run it directly it works fine, but if I try to run the command within the script I get an error.
OCI runtime exec failed: exec failed: unable to start container process: exec: "openssl version": executable file not found in $PATH: unknown
command terminated with exit code 126
snippet from .sh file:
for pod in $PODS; do
CONTAINERS=($(kubectl -n $NAMESPACE get pods $pod -o jsonpath='{.spec.containers[*].name}' | tr -s '[[:space:]]' '\n'))
header "{pod: \"$pod\", containers: \"$(echo $CONTAINERS | tr -d '\n')\"}"
if [ "$DRYRUN" != "true" ]; then
for container in $CONTAINERS; do
echo "COMMAND BEING RUN: \"kubectl -n $NAMESPACE exec $pod -c $container -- $COMMAND\""
kubectl -n $NAMESPACE exec $pod -c $container -- $COMMAND
done
fi
done
|
|
Use NodePorts to expose the service nodePort on all nodes in the cluster.
From https://kubernetes.io/docs/concepts/services-networking/service/
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.
If you want to expose your service outside the cluster, use LoadBalancer type:
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
or use an ingress controller, which is a reverse proxy that routes traffic from outside into your cluster:
https://github.com/kubernetes/ingress-nginx
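Since the question is running on minikube, an illustrative way to get a reachable URL for the NodePort service (this is an addition, not part of the original answer):
minikube service academy-backend-service --url
# or look up a node IP and hit the node port directly
kubectl get nodes -o wide
curl http://<node-ip>:30081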
| ###Instruction: I have the following configuration of a service:
apiVersion: v1
kind: Service
metadata:
name: academy-backend-service
spec:
selector:
app: academy-backend-app
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30081
Behind this service there is a deployment that runs a docker image of a spring boot application that exposes port 8081.
When I try to reach the application from the browser on http://localhost:30081 I don't get anything (not reachable). However, if I connect inside the minikube cluster, the application is available on http://{serviceip}:8081.
Any clues what is not configured properly? I thought that nodePort is enough.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
academy-backend-service NodePort 10.97.44.87 <none> 8081:30081/TCP 34m
### Response: Use NodePorts to expose the service nodePort on all nodes in the cluster.
From https://kubernetes.io/docs/concepts/services-networking/service/
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.
If you want to expose your service outside the cluster, use LoadBalancer type:
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
or use an ingress controller, which is a reverse proxy that routes traffic from outside into your cluster:
https://github.com/kubernetes/ingress-nginx
| I have the following configuration of a service:
apiVersion: v1
kind: Service
metadata:
name: academy-backend-service
spec:
selector:
app: academy-backend-app
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30081
Behind this service there is a deployment that runs a docker image of a spring boot application that exposes port 8081.
When I try to reach the application from the browser on http://localhost:30081 I don't get anything (not reachable). However, if I connect inside the minikube cluster, the application is available on http://{serviceip}:8081.
Any clues what is not configured properly? I thought that nodePort is enough.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
academy-backend-service NodePort 10.97.44.87 <none> 8081:30081/TCP 34m
|
|
You can use xargs command like this:
command1 | xargs -I{} command2 {}
xargs substitutes each item from command1's output for the {} placeholder in command2. For example, if the output of command1 is '1 2 3', then xargs will invoke the commands: 'command2 1', 'command2 2', and 'command2 3'.
So in your case, you can use
kubectl get deployment -n web --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' --sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | tail -r | tail -n +6 | xargs -I{} kubectl -n web delete deployment {}
'tail -r' reverses the order and 'tail -n +6' selects all rows except the first 5, so only the deployments older than the 5 newest are passed on for deletion. Note that tail -r is a BSD/macOS option; on GNU/Linux, tac does the same job.
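To preview which deployments would be removed before actually deleting anything, you can run the same pipeline without the final xargs stage:
kubectl get deployment -n web --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' --sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | tail -r | tail -n +6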
| ###Instruction: I want to make a command which can clear all old deployments. For example, I have deployments in a namespace
kubectl -n web get deploy --sort-by=.metadata.creationTimestamp
myproject-static-staging-master 1/1 1 1 54d
myproject-static-staging-task-13373 1/1 1 1 20d
myproject-static-staging-task-13274 1/1 1 1 19d
myproject-static-staging-task-13230 1/1 1 1 19d
myproject-static-staging-task-13323 1/1 1 1 19d
myproject-static-staging-task-13264 1/1 1 1 18d
myproject-static-staging-task-13319 1/1 1 1 13d
myproject-static-staging-task-13470 1/1 1 1 6d20h
myproject-static-staging-task-13179 1/1 1 1 6d20h
myproject-static-staging-task-13453 1/1 1 1 6d4h
myproject-static-staging-moving-to-old 1/1 1 1 6d
myproject-static-staging-moving-test 1/1 1 1 5d20h
I want to save only that's (5 newest)
myproject-static-staging-task-13470 1/1 1 1 6d20h
myproject-static-staging-task-13179 1/1 1 1 6d20h
myproject-static-staging-task-13453 1/1 1 1 6d4h
myproject-static-staging-moving-to-old 1/1 1 1 6d
myproject-static-staging-moving-test 1/1 1 1 5d20h
I tried that command
kubectl get deployment -n web --template '{{range
.items}}{{.metadata.name}}{{"\n"}}{{end}}'
--sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | head -n 5 | xargs -r kubectl -n web delete
deployment
but it is not correct.
### Response: You can use xargs command like this:
command1 | xargs -I{} command2 {}
xargs substitutes each item from command1's output for the {} placeholder in command2. For example, if the output of command1 is '1 2 3', then xargs will invoke the commands: 'command2 1', 'command2 2', and 'command2 3'.
So in your case, you can use
kubectl get deployment -n web --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' --sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | tail -r | tail -n +6 | xargs -I{} kubectl -n web delete deployment {}
'tail -r' reverses the order and 'tail -n +6' selects all rows except the first 5, so only the deployments older than the 5 newest are passed on for deletion. Note that tail -r is a BSD/macOS option; on GNU/Linux, tac does the same job.
| I want to make a command which can clear all old deployments. For example, I have deployments in a namespace
kubectl -n web get deploy --sort-by=.metadata.creationTimestamp
myproject-static-staging-master 1/1 1 1 54d
myproject-static-staging-task-13373 1/1 1 1 20d
myproject-static-staging-task-13274 1/1 1 1 19d
myproject-static-staging-task-13230 1/1 1 1 19d
myproject-static-staging-task-13323 1/1 1 1 19d
myproject-static-staging-task-13264 1/1 1 1 18d
myproject-static-staging-task-13319 1/1 1 1 13d
myproject-static-staging-task-13470 1/1 1 1 6d20h
myproject-static-staging-task-13179 1/1 1 1 6d20h
myproject-static-staging-task-13453 1/1 1 1 6d4h
myproject-static-staging-moving-to-old 1/1 1 1 6d
myproject-static-staging-moving-test 1/1 1 1 5d20h
I want to save only that's (5 newest)
myproject-static-staging-task-13470 1/1 1 1 6d20h
myproject-static-staging-task-13179 1/1 1 1 6d20h
myproject-static-staging-task-13453 1/1 1 1 6d4h
myproject-static-staging-moving-to-old 1/1 1 1 6d
myproject-static-staging-moving-test 1/1 1 1 5d20h
I tried that command
kubectl get deployment -n web --template '{{range
.items}}{{.metadata.name}}{{"\n"}}{{end}}'
--sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | head -n 5 | xargs -r kubectl -n web delete
deployment
but it is not correct.
|
|
Found the issue.
I had missed annotating the KSA with the email address of the IAM service account.
After these two commands,
kubectl create serviceaccount KSA_NAME
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "GSA_NAME:PROJECT_ID.svc.id.goog[KSA_NAME]"
I had to run this command:
kubectl annotate serviceaccount KSA_NAME \
--namespace default \
iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
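For completeness, two illustrative checks you can run afterwards (these commands are an addition, not part of the original answer):
kubectl describe serviceaccount KSA_NAME --namespace default
# should show the iam.gke.io/gcp-service-account annotation
gcloud iam service-accounts get-iam-policy GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
# should list a roles/iam.workloadIdentityUser binding for the KSA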
| ###Instruction: I have a GKE Autopilot cluster with one service to publish a message to GCP Pub/Sub topic for testing.
I have created a KSA for the deployment, have used GCP Workload Identity Management to authorize the KSA to act as a GCP Service Account (GSA). Then I gave the GSA the Pub/Sub editor role.
The following commands are what I used:
kubectl create serviceaccount KSA_NAME
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[KSA_NAME]"
But even after doing all of this, I receive the following error when I try to publish a message to the topic.
ERROR:google.cloud.pubsub_v1.publisher._batch.thread:Failed to publish 1 messages.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 65, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "User not authorized to perform this action."
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.192.42:443 {grpc_message:"User not authorized to perform this action.", grpc_status:7, created_time:"2022-11-01T10:27:21.972013149+00:00"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/_batch/thread.py", line 272, in _commit
timeout=self._commit_timeout,
File "/usr/local/lib/python3.7/site-packages/google/pubsub_v1/services/publisher/client.py", line 613, in publish
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 154, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 288, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/usr/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 99, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.PermissionDenied: 403 User not authorized to perform this action.
Here is the concerned code:
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/new_pub/<pages>')
def new_pubsub(pages: int):
publish_message(pages)
return json.dumps({'success': True}), 200, {
'ContentType': 'application/json'}
from google.cloud import pubsub_v1
import json
def publish_message(data):
d = {"message": data}
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("MY_PROJECT", "TOPIC_NAME")
publisher.publish(topic_path, json.dumps(d).encode('utf-8'))
### Response: Found the issue.
I had missed annotating the KSA with the email address of the IAM service account.
After these two commands,
kubectl create serviceaccount KSA_NAME
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "GSA_NAME:PROJECT_ID.svc.id.goog[KSA_NAME]"
I had to run this command:
kubectl annotate serviceaccount KSA_NAME \
--namespace default \
iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
| I have a GKE Autopilot cluster with one service to publish a message to GCP Pub/Sub topic for testing.
I have created a KSA for the deployment, have used GCP Workload Identity Management to authorize the KSA to act as a GCP Service Account (GSA). Then I gave the GSA the Pub/Sub editor role.
The following commands are what I used:
kubectl create serviceaccount KSA_NAME
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[KSA_NAME]"
But even after doing all of this, I receive the following error when I try to publish a message to the topic.
ERROR:google.cloud.pubsub_v1.publisher._batch.thread:Failed to publish 1 messages.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 65, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "User not authorized to perform this action."
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.192.42:443 {grpc_message:"User not authorized to perform this action.", grpc_status:7, created_time:"2022-11-01T10:27:21.972013149+00:00"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/_batch/thread.py", line 272, in _commit
timeout=self._commit_timeout,
File "/usr/local/lib/python3.7/site-packages/google/pubsub_v1/services/publisher/client.py", line 613, in publish
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 154, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 288, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/usr/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 99, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.PermissionDenied: 403 User not authorized to perform this action.
Here is the concerned code:
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/new_pub/<pages>')
def new_pubsub(pages: int):
publish_message(pages)
return json.dumps({'success': True}), 200, {
'ContentType': 'application/json'}
from google.cloud import pubsub_v1
import json
def publish_message(data):
d = {"message": data}
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("MY_PROJECT", "TOPIC_NAME")
publisher.publish(topic_path, json.dumps(d).encode('utf-8'))
|
|
Kubernetes is an orchestration service; it doesn't provide authentication/authorization (AuthN/AuthZ) features itself.
You have options like building the auth into your backend services, or using a service mesh or an API gateway to implement AuthN/AuthZ at the edge.
If you are looking for more functionality, I would recommend running the Kong API gateway on Kubernetes with Keycloak as the identity provider.
With it you can implement AuthN/AuthZ, rate limiting, API key auth, basic auth, JWT, etc.
You can read my article for an implementation walkthrough: https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56
The article implements OAuth and OIDC, but with Kong and Keycloak you can change the auth option as your needs change; it's flexible.
So at the Kong level you create the API and its routes, and on top of those you set whatever security policy your use case requires. Kong also has a UI dashboard (Konga), which makes it easy to manage.
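As a rough sketch of what a per-route policy can look like with the Kong Ingress Controller (the resource kind, API version, and annotation below are assumptions based on Kong's Kubernetes integration; check the Kong docs for your version):
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: api-key-auth
plugin: key-auth
You would then attach the plugin only to the Ingress or Service fronting APIs A and B, for example with the annotation konghq.com/plugins: api-key-auth, and leave C without it.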
| ###Instruction: Currently, I am developing some back-end API, let's call them A, B and C. I'm willing to use Kubernetes for orchestration.
I need to have Authentication and authorization for only A and B APIs.
How could I implement authentication and authorization for only some api?
### Response: Kubernetes is an orchestration service; it doesn't provide authentication/authorization (AuthN/AuthZ) features itself.
You have options like building the auth into your backend services, or using a service mesh or an API gateway to implement AuthN/AuthZ at the edge.
If you are looking for more functionality, I would recommend running the Kong API gateway on Kubernetes with Keycloak as the identity provider.
With it you can implement AuthN/AuthZ, rate limiting, API key auth, basic auth, JWT, etc.
You can read my article for an implementation walkthrough: https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56
The article implements OAuth and OIDC, but with Kong and Keycloak you can change the auth option as your needs change; it's flexible.
So at the Kong level you create the API and its routes, and on top of those you set whatever security policy your use case requires. Kong also has a UI dashboard (Konga), which makes it easy to manage.
| Currently, I am developing some back-end API, let's call them A, B and C. I'm willing to use Kubernetes for orchestration.
I need to have Authentication and authorization for only A and B APIs.
How could I implement authentication and authorization for only some api?
|
|
Since version 1.22 there is a beta feature that helps you do that. You can add the annotation controller.kubernetes.io/pod-deletion-cost with a value in the range [-2147483647, 2147483647] and this will cause pods with lower value to be killed first. Default is 0, so anything negative on one pod will cause a pod to get killed during downscaling, e.g.
kubectl annotate pods my-pod-12345678-abcde controller.kubernetes.io/pod-deletion-cost=-1000
Link to discussion about the implementation of this feature: Scale down a deployment by removing specific pods (PodDeletionCost) #2255
Link to the documentation: ReplicaSet / Pod deletion cost
| ###Instruction: I need to scale a set of pods that run queue-based workers. Jobs for workers can run for a long time (hours) and should not get interrupted. The number of pods is based on the length of the worker queue. Scaling would be either using the horizontal autoscaler using custom metrics, or a simple controller that changes the number of replicas.
Problem with either solution is that, when scaling down, there is no control over which pod(s) get terminated. At any given time, most workers are likely working on short running jobs, idle, or (more rare) processing a long running job. I'd like to avoid killing the long running job workers, idle or short running job workers can be terminated without issue.
What would be a way to do this with low complexity? One thing I can think of is to do this based on CPU usage of the pods. Not ideal, but it could be good enough. Another method could be that workers somehow expose a priority indicating whether they are the preferred pod to be deleted. This priority could change every time a worker picks up a new job though.
Eventually all jobs will be short running and this problem will go away, but that is a longer term goal for now.
### Response: Since version 1.22 there is a beta feature that helps you do that. You can add the annotation controller.kubernetes.io/pod-deletion-cost with a value in the range [-2147483647, 2147483647] and this will cause pods with lower value to be killed first. Default is 0, so anything negative on one pod will cause a pod to get killed during downscaling, e.g.
kubectl annotate pods my-pod-12345678-abcde controller.kubernetes.io/pod-deletion-cost=-1000
Link to discussion about the implementation of this feature: Scale down a deployment by removing specific pods (PodDeletionCost) #2255
Link to the documentation: ReplicaSet / Pod deletion cost
| I need to scale a set of pods that run queue-based workers. Jobs for workers can run for a long time (hours) and should not get interrupted. The number of pods is based on the length of the worker queue. Scaling would be either using the horizontal autoscaler using custom metrics, or a simple controller that changes the number of replicas.
Problem with either solution is that, when scaling down, there is no control over which pod(s) get terminated. At any given time, most workers are likely working on short running jobs, idle, or (more rare) processing a long running job. I'd like to avoid killing the long running job workers, idle or short running job workers can be terminated without issue.
What would be a way to do this with low complexity? One thing I can think of is to do this based on CPU usage of the pods. Not ideal, but it could be good enough. Another method could be that workers somehow expose a priority indicating whether they are the preferred pod to be deleted. This priority could change every time a worker picks up a new job though.
Eventually all jobs will be short running and this problem will go away, but that is a longer term goal for now.
|
|
I had this problem this week. I solved it by going to: Manage Jenkins => Manage Nodes and Clouds => Configure Clouds => choose your Kubernetes cluster => Kubernetes Cloud details... => set Concurrency Limit to a bigger number (if yours is currently 10, set it to 20).
| ###Instruction: I'm using kubernetes plugin in jenkins to do the testing on kubernetes cluster through jenkins pipeline as code. The cluster details change frequently, so I'm configuring the kubernetes plugin through groovy scripts just before the testing stage.
Problem : Jenkins is not able to create pod for testing on the cluster. If I check the configuration of that particular kubernetes cloud in the configure system console, it is as per expectation (the IP, token, jenkins url etc).'Test Connection' is also successful.
Tried adding sleep time after configuring the plugin, but no luck.
Any idea what could be happening here?
Thanks in advance!
If I manually create a new kubernetes cloud through console and copy the same details manually, the pipeline is able to create the pods and perform tasks further.
Jenkins logs:
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘XXXXXX-bdd-runner-21-XXXXXXX’ are offline.
Jenkins version: 2.150.3
Kubernetes plugin version: 1.14.5
### Response: I had this problem this week, I solved this by access by accessing: Manage Jenkins => Manage Nodes and Clouds => Configure Clouds => choose your kubernetes Cluster => Kubernetes Cloud details... => set Concurrency Limit to a bigger number(if your current is 10 set it to 20)
| I'm using kubernetes plugin in jenkins to do the testing on kubernetes cluster through jenkins pipeline as code. The cluster details change frequently, so I'm configuring the kubernetes plugin through groovy scripts just before the testing stage.
Problem : Jenkins is not able to create pod for testing on the cluster. If I check the configuration of that particular kubernetes cloud in the configure system console, it is as per expectation (the IP, token, jenkins url etc).'Test Connection' is also successful.
Tried adding sleep time after configuring the plugin, but no luck.
Any idea what could be happening here?
Thanks in advance!
If I manually create a new kubernetes cloud through console and copy the same details manually, the pipeline is able to create the pods and perform tasks further.
Jenkins logs:
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘XXXXXX-bdd-runner-21-XXXXXXX’ are offline.
Jenkins version: 2.150.3
Kubernetes plugin version: 1.14.5
|
|
Allow the pods to communicate if they are running on port 8080 within the namespace.
To satisfy this requirement we only open and accept requests on port 8080.
The communication is from and to the pods running on port 8080; no pods on port 8080 from other namespaces are to be allowed.
We use a namespaceSelector so that only traffic from the specified namespace is allowed.
Ensure the NetworkPolicy doesn't allow anything other than port 8080.
The policy is applied at the namespace level with the port as input.
First, check the namespace label:
kubectl get namespace netpol --show-labels
Example YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: cka-netpol
namespace: netpol
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
namespace: netpol #Use label accordingly
ports:
- protocol: TCP
port: 8080
You can find more examples at this link for reference: https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/09-allow-traffic-only-to-a-port.md
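To apply and sanity-check the policy (illustrative commands; the filename is only an assumption):
kubectl apply -f cka-netpol.yaml
kubectl describe networkpolicy cka-netpol -n netpol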
| ###Instruction: Create a NetworkPolicy named cka-netpol in the namespace netpol.
1] Allow the pods to communicate if they are running on port 8080 within the namespace.
2] Ensure the NetworkPolicy doesn't allow other pods that are running on anything other than port 8080.
3] The communication is from and to the pods running on port 8080. No pods running on port 8080 from other namespaces are to be allowed.
I want a YAML file with some theoretical description.
### Response:
Allow the pods to communicate if they are running on port 8080 within the namespace.
To satisfy this requirement we only open and accept requests on port 8080.
The communication is from and to the pods running on port 8080; no pods on port 8080 from other namespaces are to be allowed.
We use a namespaceSelector so that only traffic from the specified namespace is allowed.
Ensure the NetworkPolicy doesn't allow anything other than port 8080.
The policy is applied at the namespace level with the port as input.
First, check the namespace label:
kubectl get namespace netpol --show-labels
Example YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: cka-netpol
namespace: netpol
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
namespace: netpol #Use label accordingly
ports:
- protocol: TCP
port: 8080
You can find more examples at this link for reference: https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/09-allow-traffic-only-to-a-port.md
| Create a NetworkPolicy named cka-netpol in the namespace netpol.
1] Allow the pods to communicate if they are running on port 8080 within the namespace.
2] Ensure the NetworkPolicy doesn't allow other pods that are running on anything other than port 8080.
3] The communication is from and to the pods running on port 8080. No pods running on port 8080 from other namespaces are to be allowed.
I want a YAML file with some theoretical description.
|
|
Finally I figured out the missing part. I had to add the following annotations to the ingress object that wasn't working:
annotations:
kubernetes.io/ingress.class: public-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
Please see the complete ingress object:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object
namespace: default-testing
annotations:
kubernetes.io/ingress.class: public-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- myapp.onazure.com
secretName: ingress-tls-csi-testing
rules:
- host: myapp.onazure.com
http:
paths:
- path: /testing/(.*)
pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8000
| ###Instruction: currently I'm trying the following setup:
I have:
one cluster
one Ingress Controller
one url (myapp.onazure.com)
two namespaces for two applications default and default-test
two deployments, ingress objects, services for the namespaces
I can easily reach my app from the default namespace with path based routing '/' as a prefix rule
Now i have tried to configure the second namespace and following rule: /testing to hit another service
Unfortunately i get an HTTP404 when i try to hit the following URL myapp.onazure.com/testing/openapi.json
What did I miss?
Working Ingress 1
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object
namespace: default
annotations:
kubernetes.io/ingress.class: public-nginx
spec:
tls:
- hosts:
- myapp-region1.onazure.com
- myapp-region2.onazure.com
secretName: ingress-tls-csi
rules:
- host: myapp-region1.onazure.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: liveapi-svc
port:
number: 8080
- host: myapp-region2.onazure.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: liveapi-svc
port:
number: 8080
Not working Ingress 2
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object-testing
namespace: default-testing
annotations:
kubernetes.io/ingress.class: public-nginx
#nginx.ingress.kubernetes.io/rewrite-target: /testing
spec:
tls:
- hosts:
- myapp-region1.onazure.com
- myapp-region2.onazure.com
secretName: ingress-tls-csi-testing
rules:
- host: myapp-region1.onazure.com
http:
paths:
- path: /testing
#pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8080
- host: myapp-region2.onazure.com
http:
paths:
- path: /testing
#pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8080
Maybe I am missing a rewrite target to simply '/' in the testing namespace ingress?
### Response: Finally I figured out the missing part. I had to add the following annotations to the ingress object that wasn't working:
annotations:
kubernetes.io/ingress.class: public-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
Please see the complete ingress object:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object
namespace: default-testing
annotations:
kubernetes.io/ingress.class: public-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- myapp.onazure.com
secretName: ingress-tls-csi-testing
rules:
- host: myapp.onazure.com
http:
paths:
- path: /testing/(.*)
pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8000
| currently I'm trying the following setup:
I have:
one cluster
one Ingress Controller
one url (myapp.onazure.com)
two namespaces for two applications default and default-test
two deployments, ingress objects, services for the namespaces
I can easily reach my app from the default namespace with path based routing '/' as a prefix rule
Now i have tried to configure the second namespace and following rule: /testing to hit another service
Unfortunately i get an HTTP404 when i try to hit the following URL myapp.onazure.com/testing/openapi.json
What did I miss?
Working Ingress 1
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object
namespace: default
annotations:
kubernetes.io/ingress.class: public-nginx
spec:
tls:
- hosts:
- myapp-region1.onazure.com
- myapp-region2.onazure.com
secretName: ingress-tls-csi
rules:
- host: myapp-region1.onazure.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: liveapi-svc
port:
number: 8080
- host: myapp-region2.onazure.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: liveapi-svc
port:
number: 8080
Not working Ingress 2
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object-testing
namespace: default-testing
annotations:
kubernetes.io/ingress.class: public-nginx
#nginx.ingress.kubernetes.io/rewrite-target: /testing
spec:
tls:
- hosts:
- myapp-region1.onazure.com
- myapp-region2.onazure.com
secretName: ingress-tls-csi-testing
rules:
- host: myapp-region1.onazure.com
http:
paths:
- path: /testing
#pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8080
- host: myapp-region2.onazure.com
http:
paths:
- path: /testing
#pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8080
Maybe I am missing a rewrite target to simply '/' in the testing namespace ingress?
|
|
Looking at the error, it is clear that the node doesn't have sufficient memory and CPU to schedule/run the workload. Check the node configuration and ensure that enough resources are available on the node to host the workload.
Cannot schedule pods: Insufficient cpu.
PodUnschedulable Reason Cannot schedule pods: Insufficient cpu.
Learn more Source gmail-sender-7944d6d4d4-tsdt9
gmail-sender-7944d6d4d4-pc9xp gmail-sender-7944d6d4d4-kdlds PodUnschedulable
Cannot schedule pods: Insufficient memory.
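A couple of illustrative ways to dig into this (these commands are an addition, not part of the original answer):
kubectl describe pod gmail-sender-7944d6d4d4-tsdt9
# the Events section shows the scheduler's Insufficient cpu/memory messages
kubectl describe nodes | grep -A 5 "Allocated resources"
# shows how much CPU/memory is already requested on each node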
| ###Instruction: I have a container written in go. It deploys and runs on my DockerDesktop & on my Kubernetes cluster in DockerDesktop.
I have pushed the same container to Artefact Repository and it fails to deploy.
So I deployed it to CloudRun, and it works! Very confused.
My GKE cluster is autopilot so I assume there are no resource issues.
I expected to get a running container however i got
Cannot schedule pods: Insufficient cpu.
PodUnschedulable
Reason
Cannot schedule pods: Insufficient cpu.
Learn more
Source
gmail-sender-7944d6d4d4-tsdt9
gmail-sender-7944d6d4d4-pc9xp
gmail-sender-7944d6d4d4-kdlds
PodUnschedulable Cannot schedule pods: Insufficient memory.
My deployment file is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
name: gmail-sender
labels:
app: gmail-sender
spec:
replicas: 1
selector:
matchLabels:
app: gmail-sender
template:
metadata:
labels:
app: gmail-sender
spec:
containers:
- name: gmail-sender
image: europe-west2-docker.pkg.dev/ea-website-359514/gmail-sender/gmail-sender:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8099
---
apiVersion: v1
kind: Service
metadata:
name: gmail-sender-cluster-ip
labels:
app: gmail-sender
spec:
ports:
- port: 8099
protocol: TCP
### Response: Looking at the error, it is clear that the node doesn't have sufficient memory and CPU to schedule/run the workload. Check the node configuration and ensure that enough resources are available on the node to host the workload.
Cannot schedule pods: Insufficient cpu.
PodUnschedulable Reason Cannot schedule pods: Insufficient cpu.
Learn more Source gmail-sender-7944d6d4d4-tsdt9
gmail-sender-7944d6d4d4-pc9xp gmail-sender-7944d6d4d4-kdlds PodUnschedulable
Cannot schedule pods: Insufficient memory.
| I have a container written in go. It deploys and runs on my DockerDesktop & on my Kubernetes cluster in DockerDesktop.
I have pushed the same container to Artefact Repository and it fails to deploy.
So I deployed it to CloudRun, and it works! Very confused.
My GKE cluster is autopilot so I assume there are no resource issues.
I expected to get a running container however i got
Cannot schedule pods: Insufficient cpu.
PodUnschedulable
Reason
Cannot schedule pods: Insufficient cpu.
Learn more
Source
gmail-sender-7944d6d4d4-tsdt9
gmail-sender-7944d6d4d4-pc9xp
gmail-sender-7944d6d4d4-kdlds
PodUnschedulable Cannot schedule pods: Insufficient memory.
My deployment file is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
name: gmail-sender
labels:
app: gmail-sender
spec:
replicas: 1
selector:
matchLabels:
app: gmail-sender
template:
metadata:
labels:
app: gmail-sender
spec:
containers:
- name: gmail-sender
image: europe-west2-docker.pkg.dev/ea-website-359514/gmail-sender/gmail-sender:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8099
---
apiVersion: v1
kind: Service
metadata:
name: gmail-sender-cluster-ip
labels:
app: gmail-sender
spec:
ports:
- port: 8099
protocol: TCP
|
|
Kustomize isn't a templating system and doesn't do variable substitution. It can perform a variety of YAML patching tricks, so one option you have is to start with a base manifest like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
spec:
tls:
- hosts: []
secretName:
And then patch it in your kustomization.yaml files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ingressname
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
What I've shown here works well if you have an application consisting of a single Ingress and you want to produce multiple variants (maybe one per cluster, or per namespace, or something). That is, you have:
A Deployment
A Service
An Ingress
(etc.)
Then you would have one directory for each variant of the app, giving you a layout something like:
.
├── base
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── kustomization.yaml
│ └── service.yaml
└── overlays
├── variant1
│ └── kustomization.yaml
└── variant2
└── kustomization.yaml
If your application has multiple Ingress resources, and you want to apply the same patch to all of them, Kustomize can do that. If you were to modify the patch in your kustomization.yaml so that it looks like this instead:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ".*"
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
This would apply the same patch to all matching Ingress resources (which is "all of them", in this case, because we used .* as our match expression).
| ###Instruction: I am trying to deploy a K8s application using Kustomize. Up to now I have done simple implementations where we have a few of the K8s files such as ingress.yaml with something like the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
namespace: namespace
labels:
app: appname
spec:
tls:
- hosts:
- $(variable1)
secretName: $(variable2)-tls
Under my overlays directory for each environment, I then have another kustomize.yaml which gives the values in a configmap:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- path
configMapGenerator:
- behavior: merge
literals:
- variable1=foo
- variable2=bar
name: configmapname
images:
- name: imagename
newName: registryurl
This works well, but now I need to do something more complicated. Say for example I have multiple ingress. Instead of creating multiple base ingress yaml files, how can I have one base yaml file that creates every ingress based on the values in my overlay file? Is that possible?
### Response: Kustomize isn't a templating system and doesn't do variable substitution. It can perform a variety of YAML patching tricks, so one option you have is to start with a base manifest like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
spec:
tls:
- hosts: []
secretName:
And then patch it in your kustomization.yaml files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ingressname
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
What I've shown here works well if you have an application consisting of a single Ingress and you want to produce multiple variants (maybe one per cluster, or per namespace, or something). That is, you have:
A Deployment
A Service
An Ingress
(etc.)
Then you would have one directory for each variant of the app, giving you a layout something like:
.
├── base
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── kustomization.yaml
│ └── service.yaml
└── overlays
├── variant1
│ └── kustomization.yaml
└── variant2
└── kustomization.yaml
If your application has multiple Ingress resources, and you want to apply the same patch to all of them, Kustomize can do that. If you were to modify the patch in your kustomization.yaml so that it looks like this instead:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ".*"
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
This would apply the same patch to all matching Ingress resources (which is "all of them", in this case, because we used .* as our match expression).
| I am trying to deploy a K8s application using Kustomize. Up to now I have done simple implementations where we have a few of the K8s files such as ingress.yaml with something like the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
namespace: namespace
labels:
app: appname
spec:
tls:
- hosts:
- $(variable1)
secretName: $(variable2)-tls
Under my overlays directory for each environment, I then have another kustomize.yaml which gives the values in a configmap:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- path
configMapGenerator:
- behavior: merge
literals:
- variable1=foo
- variable2=bar
name: configmapname
images:
- name: imagename
newName: registryurl
This works well, but now I need to do something more complicated. Say for example I have multiple ingress. Instead of creating multiple base ingress yaml files, how can I have one base yaml file that creates every ingress based on the values in my overlay file? Is that possible?
|
|
One option you have here is to enable private IP addresses for the nodes in your cluster. The --enable-private-nodes flag will make it so the nodes in your cluster get private IP addresses (rather than the default public, internet-facing IP addresses).
Note that in this case, you would still be able to SSH into these nodes, but only from within your VPC network.
Also note that this means you would not be able to access NodePort type services from outside of your VPC network. Instead, you would need to use a LoadBalancer type service (or provide some other way to route traffic to your service from outside of the cluster, if required).
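For reference, a sketch of how that flag is used at cluster creation time (the cluster name and CIDR are placeholders, and a private cluster also needs VPC-native networking enabled):
gcloud container clusters create CLUSTER_NAME \
  --region us-central1 \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28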
If you'd like to prevent SSH access even from within your VPC network, your easiest option would likely be to configure a firewall rule to deny SSH traffic to your nodes (TCP/UDP/SCTP port 22). Use network tags (the --tags flag) to target your GKE nodes.
Something along the lines of:
gcloud compute firewall-rules create fw-d-i-ssh-to-gke-nodes \
--network NETWORK_NAME \
--action deny \
--direction ingress \
--rules tcp:22,udp:22,sctp:22 \
--source-ranges 0.0.0.0/0 \
--priority 65534 \
--target-tags my-gke-node-network-tag
Finally, one last option I'll mention for creating a hardened GKE cluster is to use Google's safer-cluster Terraform module. This is an opinionated setup of a GKE cluster that follows many of the principles laid out in Google's cluster hardening guide and the Terraform module takes care of a lot of the nitty-gritty boilerplate here.
| ###Instruction: I am running a regional GKE kubernetes cluster in us-central1-b, us-central1-c and us-central1-f. I am running 1.21.14-gke.700. I am adding a confidential node pool to the cluster with this command.
gcloud container node-pools create card-decrpyt-confidential-pool-1 \
--cluster=svcs-dev-1 \
--disk-size=100GB \
--disk-type=pd-standard \
--enable-autorepair \
--enable-autoupgrade \
--enable-gvnic \
--image-type=COS_CONTAINERD \
--machine-type="n2d-standard-2" \
--max-pods-per-node=8 \
--max-surge-upgrade=1 \
--max-unavailable-upgrade=1 \
--min-nodes=4 \
--node-locations=us-central1-b,us-central1-c,us-central1-f \
--node-taints=dedicatednode=card-decrypt:NoSchedule \
--node-version=1.21.14-gke.700 \
--num-nodes=4 \
--region=us-central1 \
--sandbox="type=gvisor" \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--service-account="card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com" \
--shielded-integrity-monitoring \
--shielded-secure-boot \
--tags=testingdonotuse \
--workload-metadata=GKE_METADATA \
--enable-confidential-nodes
This creates a node pool but there is one problem... I can still SSH to the instances that the node pool creates. This is unacceptable for my use case as these node pools need to be as secure as possible. I went into my node pool and created a new machine template with ssh turned off using an instance template based off the one created for my node pool.
gcloud compute instance-templates create card-decrypt-instance-template \
--project=corp-dev-project
--machine-type=n2d-standard-2
--network-interface=aliases=gke-svcs-dev-1-pods-10a0a3cd:/28,nic-type=GVNIC,subnet=corp-dev-project-private-subnet,no-address
--metadata=block-project-ssh-keys=true,enable-oslogin=true
--maintenance-policy=TERMINATE --provisioning-model=STANDARD
--service-account=card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com
--scopes=https://www.googleapis.com/auth/cloud-platform
--region=us-central1 --min-cpu-platform=AMD\ Milan
--tags=testingdonotuse,gke-svcs-dev-1-10a0a3cd-node
--create-disk=auto-delete=yes,boot=yes,device-name=card-decrpy-instance-template,image=projects/confidential-vm-images/global/images/cos-89-16108-766-5,mode=rw,size=100,type=pd-standard
--shielded-secure-boot
--shielded-vtpm
--shielded-integrity-monitoring
--labels=component=gke,goog-gke-node=,team=platform --reservation-affinity=any
When I change the instance templates of the nodes in the node pool the new instances come online but they do not attach to the node pool. The cluster is always trying to repair itself and I can't change any settings until I delete all the nodes in the pool. I don't receive any errors.
What do I need to do to disable ssh into the node pool nodes with the original node pool I created or with the new instance template I created. I have tried a bunch of different configurations with a new node pool and the cluster and have not had any luck. I've tried different tags network configs and images. None of these have worked.
Other info:
The cluster was not originally a confidential cluster. The confidential nodes are the first of its kind added to the cluster.
### Response: One option you have here is to enable private IP addresses for the nodes in your cluster. The --enable-private-nodes flag will make it so the nodes in your cluster get private IP addresses (rather than the default public, internet-facing IP addresses).
Note that in this case, you would still be able to SSH into these nodes, but only from within your VPC network.
Also note that this means you would not be able to access NodePort type services from outside of your VPC network. Instead, you would need to use a LoadBalancer type service (or provide some other way to route traffic to your service from outside of the cluster, if required).
If you'd like to prevent SSH access even from within your VPC network, your easiest option would likely be to configure a firewall rule to deny SSH traffic to your nodes (TCP/UDP/SCTP port 22). Use network tags (the --tags flag) to target your GKE nodes.
Something along the lines of:
gcloud compute firewall-rules create fw-d-i-ssh-to-gke-nodes \
--network NETWORK_NAME \
--action deny \
--direction ingress \
--rules tcp:22,udp:22,sctp:22 \
--source-ranges 0.0.0.0/0 \
--priority 65534 \
--target-tags my-gke-node-network-tag
Finally, one last option I'll mention for creating a hardened GKE cluster is to use Google's safer-cluster Terraform module. This is an opinionated setup of a GKE cluster that follows many of the principles laid out in Google's cluster hardening guide and the Terraform module takes care of a lot of the nitty-gritty boilerplate here.
| I am running a regional GKE kubernetes cluster in us-central1-b, us-central1-c and us-central1-f. I am running 1.21.14-gke.700. I am adding a confidential node pool to the cluster with this command.
gcloud container node-pools create card-decrpyt-confidential-pool-1 \
--cluster=svcs-dev-1 \
--disk-size=100GB \
--disk-type=pd-standard \
--enable-autorepair \
--enable-autoupgrade \
--enable-gvnic \
--image-type=COS_CONTAINERD \
--machine-type="n2d-standard-2" \
--max-pods-per-node=8 \
--max-surge-upgrade=1 \
--max-unavailable-upgrade=1 \
--min-nodes=4 \
--node-locations=us-central1-b,us-central1-c,us-central1-f \
--node-taints=dedicatednode=card-decrypt:NoSchedule \
--node-version=1.21.14-gke.700 \
--num-nodes=4 \
--region=us-central1 \
--sandbox="type=gvisor" \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--service-account="card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com" \
--shielded-integrity-monitoring \
--shielded-secure-boot \
--tags=testingdonotuse \
--workload-metadata=GKE_METADATA \
--enable-confidential-nodes
This creates a node pool but there is one problem... I can still SSH to the instances that the node pool creates. This is unacceptable for my use case as these node pools need to be as secure as possible. I went into my node pool and created a new machine template with ssh turned off using an instance template based off the one created for my node pool.
gcloud compute instance-templates create card-decrypt-instance-template \
--project=corp-dev-project
--machine-type=n2d-standard-2
--network-interface=aliases=gke-svcs-dev-1-pods-10a0a3cd:/28,nic-type=GVNIC,subnet=corp-dev-project-private-subnet,no-address
--metadata=block-project-ssh-keys=true,enable-oslogin=true
--maintenance-policy=TERMINATE --provisioning-model=STANDARD
--service-account=card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com
--scopes=https://www.googleapis.com/auth/cloud-platform
--region=us-central1 --min-cpu-platform=AMD\ Milan
--tags=testingdonotuse,gke-svcs-dev-1-10a0a3cd-node
--create-disk=auto-delete=yes,boot=yes,device-name=card-decrpy-instance-template,image=projects/confidential-vm-images/global/images/cos-89-16108-766-5,mode=rw,size=100,type=pd-standard
--shielded-secure-boot
--shielded-vtpm
--shielded-integrity-monitoring
--labels=component=gke,goog-gke-node=,team=platform --reservation-affinity=any
When I change the instance templates of the nodes in the node pool the new instances come online but they do not attach to the node pool. The cluster is always trying to repair itself and I can't change any settings until I delete all the nodes in the pool. I don't receive any errors.
What do I need to do to disable SSH into the node pool nodes, either with the original node pool I created or with the new instance template I created? I have tried a bunch of different configurations with a new node pool and the cluster and have not had any luck. I've tried different tags, network configs and images. None of these have worked.
Other info:
The cluster was not originally a confidential cluster. The confidential nodes are the first of its kind added to the cluster.
|
|
If you are using the request-transformer Enterprise version, you can do it.
config.append.headers optional
Type: array of string elements List of headername:value pairs. If the
header is not set, set it with the given value. If it is already set,
a new header with the same name and the new value will be set.
This field is referenceable, which means it can be securely stored as
a secret in a vault. References must follow a specific format.
Ref : https://docs.konghq.com/hub/kong-inc/request-transformer-advanced/
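As an illustration only (this assumes the Enterprise request-transformer plugin with vault support; the env vault backend and secret name below are assumptions, not taken from your setup), a referenced header value could look roughly like:
config:
  append:
    headers:
    - 'Authorization:{vault://env/basic-auth-header}'
With that in place, the actual Basic credentials live in the vault backend rather than in the KongPlugin manifest.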
As far as I have tried, there is no direct way without the Enterprise version, but you can update or write the plugin code, build a Docker image, and run it.
Here is example code which appends the headers: https://github.com/Kong/kong-plugin-request-transformer/blob/master/kong/plugins/request-transformer/access.lua#L228
You can read this article on building a custom Kong Docker image with plugin code: https://faun.pub/building-kong-custom-docker-image-add-a-customized-kong-plugin-2157a381d7fd
| ###Instruction: I have this KongPlugin on Kubernetes
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
config:
remove:
headers:
- Authorization
append:
headers:
- Authorization:Basic <SECRET>
plugin: request-transformer
Is it possible to move the header Authorization to a Kubernetes secret?
### Response: If you are using the request-transformer Enterprise version, you can do it.
config.append.headers optional
Type: array of string elements List of headername:value pairs. If the
header is not set, set it with the given value. If it is already set,
a new header with the same name and the new value will be set.
This field is referenceable, which means it can be securely stored as
a secret in a vault. References must follow a specific format.
Ref : https://docs.konghq.com/hub/kong-inc/request-transformer-advanced/
As far as I have tried, there is no direct way without the Enterprise version, but you can update or write the plugin code, build a Docker image, and run it.
Here is example code which appends the headers: https://github.com/Kong/kong-plugin-request-transformer/blob/master/kong/plugins/request-transformer/access.lua#L228
You can read this article on building a custom Kong Docker image with plugin code: https://faun.pub/building-kong-custom-docker-image-add-a-customized-kong-plugin-2157a381d7fd
| I have this KongPlugin on Kubernetes
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
config:
remove:
headers:
- Authorization
append:
headers:
- Authorization:Basic <SECRET>
plugin: request-transformer
Is it possible to move the header Authorization to a Kubernetes secret?
|
|
Deleting old log and index data seems to be the responsibility of S3, not Loki. You'll need to add one or more lifecycle rules to your buckets to handle this.
https://grafana.com/docs/loki/latest/operations/storage/retention/#table-manager
"When using S3 or GCS, the bucket storing the chunks needs to have the expiry policy set correctly. For more details check S3’s documentation or GCS’s documentation."
IMO the Loki documentation is very weak on this topic; I'd like it if they talked about this in more detail.
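For example, a 90-day expiry on the chunk bucket can be set roughly like this (a sketch; the bucket name is taken from your config and the prefix/days should be adjusted to your layout):
# lifecycle.json
{
  "Rules": [
    {
      "ID": "expire-loki-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    }
  ]
}
aws s3api put-bucket-lifecycle-configuration \
  --bucket loki-s3-bucket \
  --lifecycle-configuration file://lifecycle.json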
| ###Instruction: I am using Loki v2.4.2 and have configured S3 as a storage backend for both index and chunk.
I want to ensure that all logs older than 90 days are deleted without risk of corruption. The documentation about retention is confusing, and steps are not clear. Should I just set TTL on object storage on root prefix i.e., /. Or should I configure something like this? I don't want to run the compactor.
table_manager:
retention_deletes_enabled: true
retention_period: 2160h
Here is my Loki configuration. Please suggest what changes should be made in this configuration and the corresponding S3 TTL. I don't want to run the compactor.
config:
# existingSecret:
auth_enabled: false
ingester:
chunk_idle_period: 3m
chunk_block_size: 262144
chunk_retain_period: 1m
max_transfer_retries: 0
wal:
dir: /data/loki/wal
lifecycler:
ring:
kvstore:
store: inmemory
replication_factor: 1
## Different ring configs can be used. E.g. Consul
# ring:
# store: consul
# replication_factor: 1
# consul:
# host: "consul:8500"
# prefix: ""
# http_client_timeout: "20s"
# consistent_reads: true
limits_config:
max_query_series: 5000
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
schema_config:
configs:
- from: 2021-09-27
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: index_
period: 24h
server:
http_listen_port: 3100
storage_config:
aws:
s3: s3://ap-southeast-1/loki-s3-bucket
boltdb_shipper:
active_index_directory: /data/loki/boltdb-shipper-active
cache_location: /data/loki/boltdb-shipper-cache
cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
shared_store: s3
filesystem:
directory: /data/loki/chunks
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s
compactor:
working_directory: /data/loki/boltdb-shipper-compactor
shared_store: filesystem
### Response: Deleting old log and index data seems to be the responsibility of S3, not Loki. You'll need to add one or more lifecycle rules to your buckets to handle this.
https://grafana.com/docs/loki/latest/operations/storage/retention/#table-manager
"When using S3 or GCS, the bucket storing the chunks needs to have the expiry policy set correctly. For more details check S3’s documentation or GCS’s documentation."
IMO the Loki documentation is very weak on this topic; I'd like it if they talked about this in more detail.
| I am using Loki v2.4.2 and have configured S3 as a storage backend for both index and chunk.
I want to ensure that all logs older than 90 days are deleted without risk of corruption. The documentation about retention is confusing, and steps are not clear. Should I just set TTL on object storage on root prefix i.e., /. Or should I configure something like this? I don't want to run the compactor.
table_manager:
retention_deletes_enabled: true
retention_period: 2160h
Here is my Loki configuration. Please suggest what changes should be made in this configuration and the corresponding S3 TTL. I don't want to run the compactor.
config:
# existingSecret:
auth_enabled: false
ingester:
chunk_idle_period: 3m
chunk_block_size: 262144
chunk_retain_period: 1m
max_transfer_retries: 0
wal:
dir: /data/loki/wal
lifecycler:
ring:
kvstore:
store: inmemory
replication_factor: 1
## Different ring configs can be used. E.g. Consul
# ring:
# store: consul
# replication_factor: 1
# consul:
# host: "consul:8500"
# prefix: ""
# http_client_timeout: "20s"
# consistent_reads: true
limits_config:
max_query_series: 5000
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
schema_config:
configs:
- from: 2021-09-27
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: index_
period: 24h
server:
http_listen_port: 3100
storage_config:
aws:
s3: s3://ap-southeast-1/loki-s3-bucket
boltdb_shipper:
active_index_directory: /data/loki/boltdb-shipper-active
cache_location: /data/loki/boltdb-shipper-cache
cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
shared_store: s3
filesystem:
directory: /data/loki/chunks
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s
compactor:
working_directory: /data/loki/boltdb-shipper-compactor
shared_store: filesystem
|
|
I believe setting holdApplicationUntilProxyStarts to true will solve your issue.
You can get more information about it here:
https://istio.io/latest/docs/ops/common-problems/injection/
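As a minimal sketch (field names per recent Istio versions), it can be enabled mesh-wide:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true
or per pod, via an annotation on the pod template:
annotations:
  proxy.istio.io/config: |
    holdApplicationUntilProxyStarts: true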
Hope it can help you.
| ###Instruction: I have problem with start order of istio-sidecar and main application in kubernetes.
When pod starts, main application getting error "connection refused" for external services. When istio-envoy proxy ready, main application starts correctly on next attempt.
While Istio is not ready, the main application has time to crash and restart 2-3 times
how do I make the main application wait for istio-sidecar to start and only then start running itself?
### Response: I believe setting holdApplicationUntilProxyStarts to true will solve your issue.
You can get more information about it here:
https://istio.io/latest/docs/ops/common-problems/injection/
Hope it can help you.
| I have problem with start order of istio-sidecar and main application in kubernetes.
When pod starts, main application getting error "connection refused" for external services. When istio-envoy proxy ready, main application starts correctly on next attempt.
While Istio is not ready, the main application has time to crash and restart 2-3 times
how do I make the main application wait for istio-sidecar to start and only then start running itself?
|
|
First of all, you should expose MySQL's port 3306 outside of the container.
Assume the host on which Docker runs the MySQL container has the IP address 192.168.0.100. Then you can telnet 192.168.0.100:3306 to make sure the port is exposed and available.
Here is an example from Dockers documentation :
docker run -p 192.168.0.100:3306:3306/tcp mysql-container-name bash
This binds port 3306 of the container to TCP port 3306 on 192.168.0.100 of the host machine.
Then your CMS can connect to it from within minikube.
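If you prefer a stable in-cluster name over hard-coding an IP in the Deployment, one common pattern is a selector-less Service plus Endpoints (a sketch; the IP below is the example host address from above and must be reachable from inside minikube):
apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql
subsets:
- addresses:
  - ip: 192.168.0.100 # host IP reachable from inside minikube (assumption)
  ports:
  - port: 3306
Your Deployment could then set DATABASE_HOST to external-mysql.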
Try deploying dnsutils under minikube. It provides network tools like nslookup, telnet, etc., and is very helpful for debugging network issues within minikube. You can connect to it with kubectl exec -it dnsutils -n default -- bash.
| ###Instruction: I wanted to deploy Strapi CMS to Kubernetes. On my local machine, I am trying to do this with Minikube. The structure of the project is MySQL in a different container outside of the cluster. I want to access the MySQL database from inside the cluster via this IP 172.17.0.2:3306
The Database is outside of the cluster and lives in a docker container. But the Strapi project lives in a cluster of Kubernetes.
This is my deployment YAML file for doing the Kubernetes stuff:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cg-api
spec:
selector:
matchLabels:
app: cg-api
replicas: 2
template:
metadata:
labels:
app: cg-api
spec:
containers:
- name: cg-api
image: alirezahd/cg-api
env:
- name: DATABASE_HOST
value: "172.17.0.2"
- name: DATABASE_PORT
value: "3306"
ports:
- containerPort: 1337
### Response: First of all, you should expose MySQL's port 3306 outside of the container.
Assume the host on which Docker runs the MySQL container has the IP address 192.168.0.100. Then you can telnet 192.168.0.100:3306 to make sure the port is exposed and available.
Here is an example from Docker's documentation:
docker run -p 192.168.0.100:3306:3306/tcp mysql-container-name bash
This binds port 3306 of the container to TCP port 3306 on 192.168.0.100 of the host machine.
Then your CMS can connect to it from within minikube.
Try deploying dnsutils under minikube. It provides network tools like nslookup, telnet, etc., and is very helpful for debugging network issues within minikube. You can connect to it with kubectl exec -it dnsutils -n default -- bash.
| I wanted to deploy Strapi CMS to Kubernetes. On my local machine, I am trying to do this with Minikube. The structure of the project is MySQL in a different container outside of the cluster. I want to access the MySQL database from inside the cluster via this IP 172.17.0.2:3306
The Database is outside of the cluster and lives in a docker container. But the Strapi project lives in a cluster of Kubernetes.
This is my deployment YAML file for doing the Kubernetes stuff:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cg-api
spec:
selector:
matchLabels:
app: cg-api
replicas: 2
template:
metadata:
labels:
app: cg-api
spec:
containers:
- name: cg-api
image: alirezahd/cg-api
env:
- name: DATABASE_HOST
value: "172.17.0.2"
- name: DATABASE_PORT
value: "3306"
ports:
- containerPort: 1337
|
|
It could be due to a function changing the context in which the values defined in values.yaml are looked up.
Normally, inside range we can use $ for the global scope, e.g. appName: {{ include "common.fullname" $ }}.
When I tested the same template with a static value for appName it worked for me, so there is no issue accessing values.yaml unless nil is actually being set at .Values.ip.prod.service2.
Otherwise, as you mentioned, multi-level nesting with parentheses, {{ (.Values.ip.prod).service2 }}, will solve the issue.
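For completeness, a couple of nil-safe lookups (a sketch; the field names follow the values.yaml above, and dig is only available if your Helm version's Sprig library provides it):
loadBalancerIP: {{ (.Values.ip.prod).service2 | default "" }}
loadBalancerIP: {{ dig "prod" "service2" "" .Values.ip }}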
| ###Instruction: I'm trying to assign static IPs for Load Balancers in GKE to services by storing them in the values.yaml file as:
ip:
sandbox:
service1: xxx.xxx.201.74
service2: xxx.xxx.80.114
dev:
service1: xxx.xxx.249.203
service2: xxx.xxx.197.77
test:
service1: xxx.xxx.123.212
service2: xxx.xxx.194.133
prod:
service1: xxx.xx.244.211
service2: xxx.xxx.207.177
All works fine till I want to deploy to prod and that will fail as:
Error: UPGRADE FAILED: template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
helm.go:94: [debug] template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
and the part for service2-service.yaml looks like:
apiVersion: v1
kind: Service
metadata:
annotations:
appName: {{ include "common.fullname" . }}
componentName: service2
labels:
io.kompose.service: service2
name: service2
spec:
ports:
- name: "{{ .Values.service.service2.ports.name }}"
port: {{ .Values.service.service2.ports.port }}
protocol: {{ .Values.service.service2.ports.protocol }}
targetPort: {{ .Values.service.service2.ports.port }}
type: LoadBalancer
{{ if eq .Values.target.deployment.namespace "sandbox" }}
loadBalancerIP: {{ .Values.ip.sandbox.service2 }}
{{ else if eq .Values.target.deployment.namespace "dev" }}
loadBalancerIP: {{ .Values.ip.dev.service2 }}
{{ else if eq .Values.target.deployment.namespace "test" }}
loadBalancerIP: {{ .Values.ip.test.service2 }}
{{ else if eq .Values.target.deployment.namespace "prod" }}
loadBalancerIP: {{ .Values.ip.prod.service2 }}
{{ else }}
{{ end }}
selector:
io.kompose.service: service2
status:
loadBalancer: {}
Any clue why is complaining that is nil (empty)?
### Response: It could be due to a function changing the context in which the values defined in values.yaml are looked up.
Normally, inside range we can use $ for the global scope, e.g. appName: {{ include "common.fullname" $ }}.
When I tested the same template with a static value for appName it worked for me, so there is no issue accessing values.yaml unless nil is actually being set at .Values.ip.prod.service2.
Otherwise, as you mentioned, multi-level nesting with parentheses, {{ (.Values.ip.prod).service2 }}, will solve the issue.
| I'm trying to assign static IPs for Load Balancers in GKE to services by storing them in the values.yaml file as:
ip:
sandbox:
service1: xxx.xxx.201.74
service2: xxx.xxx.80.114
dev:
service1: xxx.xxx.249.203
service2: xxx.xxx.197.77
test:
service1: xxx.xxx.123.212
service2: xxx.xxx.194.133
prod:
service1: xxx.xx.244.211
service2: xxx.xxx.207.177
All works fine till I want to deploy to prod and that will fail as:
Error: UPGRADE FAILED: template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
helm.go:94: [debug] template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
and the part for service2-service.yaml looks like:
apiVersion: v1
kind: Service
metadata:
annotations:
appName: {{ include "common.fullname" . }}
componentName: service2
labels:
io.kompose.service: service2
name: service2
spec:
ports:
- name: "{{ .Values.service.service2.ports.name }}"
port: {{ .Values.service.service2.ports.port }}
protocol: {{ .Values.service.service2.ports.protocol }}
targetPort: {{ .Values.service.service2.ports.port }}
type: LoadBalancer
{{ if eq .Values.target.deployment.namespace "sandbox" }}
loadBalancerIP: {{ .Values.ip.sandbox.service2 }}
{{ else if eq .Values.target.deployment.namespace "dev" }}
loadBalancerIP: {{ .Values.ip.dev.service2 }}
{{ else if eq .Values.target.deployment.namespace "test" }}
loadBalancerIP: {{ .Values.ip.test.service2 }}
{{ else if eq .Values.target.deployment.namespace "prod" }}
loadBalancerIP: {{ .Values.ip.prod.service2 }}
{{ else }}
{{ end }}
selector:
io.kompose.service: service2
status:
loadBalancer: {}
Any clue why is complaining that is nil (empty)?
|
|
Few things to keep in mind:
Each controller is responsible for exactly one resource.
Reconcile request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. It does NOT contain information about any specific Event or the object contents itself.
You can create a second controller without the resource definition. In your main file, both controllers will be registered.
This could be useful if the CRDs are not related at all or if the external resource references the internal one, so you can make changes to the internal resource in the external reconciler.
kubebuilder create api --group other --version v2 --kind External \
--resource=false --controller=true
This gives you a controller with a SetupWithManager method that looks like the below.
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
// For().
Complete(r)
}
Note how the For method is commented out because you need to import the resource to watch from somewhere else and reference it.
import (
...
otherv2 "other.io/external/api/v2"
)
...
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&otherv2.External{}).
Complete(r)
}
If you cannot import the external resource you could fall back to mocking it yourself but this is probably not a very clean way. You should really try to import it from the other controller project.
kubebuilder edit --multigroup=true
kubebuilder create api --group=other --version v2 --kind External \
--resource --controller
Another way is when the resources are related to each other such that the internal resource has a reference in its spec to the external resource and knows how to get the external resource in its spec, when it reconciles. An example of this can be found here https://book.kubebuilder.io/reference/watching-resources/externally-managed.html
type InternalSpec struct {
// Name of an external resource
ExternalResource string `json:"externalResource,omitempty"`
}
This means that in each reconciliation loop, the controller will look up the external resource and use it to manage the internal resource.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
internal := examplev1.Internal{}
if err := r.Get(context.TODO(), types.NamespacedName{
Name: req.Name,
Namespace: req.Namespace,
}, &internal); err != nil {
return ctrl.Result{}, err
}
external := otherv2.External{}
if err := r.Get(context.TODO(), types.NamespacedName{
// note how the name is taken from the internal spec
Name: internal.Spec.ExternalResource,
Namespace: req.Namespace,
}, &external); err != nil {
return ctrl.Result{}, err
}
// do something with internal and external here
return ctrl.Result{}, nil
}
The problem with this is, that when the internal resource does not change, no reconciliation event will be triggered, even when the external resource has changed. To work around that, we can trigger the reconciliation by watching the external resource. Note the Watches method:
func (r *InternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&examplev1.Internal{}).
Watches(
&source.Kind{Type: &otherv2.External{}},
handler.EnqueueRequestsFromMapFunc(r.triggerReconcileBecauseExternalHasChanged),
builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
).
Complete(r)
}
In order to know for which internal object we should trigger an event, we use a mapping function to look up all the internal that have a reference to the external resource.
func (r *InternalReconciler) triggerReconcileBecauseExternalHasChanged(o client.Object) []reconcile.Request {
usedByInternals := &examplev1.InternalList{}
listOps := &client.ListOptions{
FieldSelector: fields.OneTermEqualSelector(".spec.ExternalResource", o.GetName()),
Namespace: o.GetNamespace(),
}
err := r.List(context.TODO(), usedByInternals, listOps)
if err != nil {
return []reconcile.Request{}
}
requests := make([]reconcile.Request, len(usedByInternals.Items))
for i, item := range usedByInternals.Items {
requests[i] = reconcile.Request{
NamespacedName: types.NamespacedName{
Name: item.GetName(),
Namespace: item.GetNamespace(),
},
}
}
return requests
}
Since you updated your question, I suggest doing something like below.
I am creating a new project and 2 controllers. Note that on the second controller command no resource is created along with the controller. This is because the controller
will watch an external resource.
mkdir demo && cd demo
go mod init example.io/demo
kubebuilder init --domain example.io --repo example.io/demo --plugins=go/v4-alpha
kubebuilder create api --group=demo --version v1 --kind Internal --controller --resource
kubebuilder create api --group=other --version v2 --kind External --controller --resource=false
$ tree controllers
controllers
├── external_controller.go
├── internal_controller.go
└── suite_test.go
Now we need some shared logic, for example by adding this to the controllers package. We will call this from both reconcilers.
// the interface may need tweaking
// depending on what you want to do with
// the reconciler
type reconciler interface {
client.Reader
client.Writer
client.StatusClient
}
func sharedLogic(r reconciler, kobj *demov1.Internal) (ctrl.Result, error) {
// do your shared logic here operating on the internal object struct
// this works out because the external controller will call this passing the
// internal object
return ctrl.Result{}, nil
}
Here is an example for the internal reconciler.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
obj := demov1.Internal{}
if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
return ctrl.Result{}, err
}
return sharedLogic(r, &obj)
}
And in the external reconciler we do the same.
func (r *ExternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
// note, we can use the internal object here as long as the external object
// does contain the same fields we want. That means when unmarshalling the extra
// fields are dropped. If this cannot be done, you could first unmarshal into the external
// resource and then assign the fields you need to the internal one, before passing it down
obj := demov1.Internal{}
if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
return ctrl.Result{}, err
}
return sharedLogic(r, &obj)
}
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// note the external resource is imported from another project
// you may be able to watch this without import by creating a minimal
// type with the right GKV
For(&otherv2.External{}).
Complete(r)
}
| ###Instruction: I’ve k8s operator which works as expected, I need to add a “watch” to other operator CRD (not mine), to make it simple lets call it extCR and our operator cr called inCR,
I tried the following, but there is an issue with how to correctly trigger the reconcile.
func (r *Insiconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&Inv1alpha1.Iget{}}).
Watches(&source.Kind{Type: &ext.Se{}}, handler.EnqueueRequestsFromMapFunc(r.FWatch)).
Complete(r)
}
func (r *Insiconciler) FWatch(c client.Object) []reconcile.Request {
val := c.(*ivi.Srv)
req := reconcile.Request{NamespacedName: types.NamespacedName{Name: val.Name, Namespace: val.Namespace}}
return []reconcile.Request{req}
}
The problem here that I trigger the reconcile with the extCR , I want inside the FWatch to update the inCR and start the reconcile with inCR and not with extCR, how can I do it ?
I mean, to avoid something like the following code as sometimes the reconcile is done for the inCR and sometimes for the extCR and than I can get some ugly if's
func (r *Insiconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
var inCR FOO
var extCR BAR
if err := r.Get(ctx, req.NamespacedName, &inCR); err != nil {
return ctrl.Result{}, err
}
if err := r.Get(ctx, req.NamespacedName, &extCR); err != nil {
return ctrl.Result{}, err
}
I want to know what is the right/clean way to handle such a case:
a case when you need to listen to an externalCR (not part of your controller) and also an internalCR (from your controller).
One more thing - the CRs have different GVKs, but the externalCR contains a lot of fields which are not required, just some of them; the required fields have the same names on both CRs.
update
type inCR struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec inSpec `json:"spec,omitempty"` // ———————— here is the difference
Status InsightTargetStatus `json:"status,omitempty"`
}
//————— This is defined on other program which is not owned by us, therefore cannot “reuse”
type Bar struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec extSpec `json:"spec,omitempty"` // ———————here is the difference
Status ServiceStatus `json:"status,omitempty"`
}
And inSpec is having the following fields (subset of extSpec)
type inSpec struct {
name string
age int
}
and extSpec have those fields and many more which is not related
type extSpec struct {
name string
age int
foo string // not relevant
bar string // not relevant
bazz string // not relevant
}
At the end, inside the reconcile I need to pass the relevant fields to some functions - exactly the same functions, just taking the fields sometimes from extCR and sometimes from inCR, according to the event that happens (like the extCR being updated, or the inCR being updated by users).
Update2
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
in := c.(*inCR)
vPass , e := vps.Get(ctx, r.Client, in.Spec.foo, in.Spec.bar)
return ctrl.Result{}, nil
}
But for extCR I should do the following
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
ext := c.(*extCR)
vPass , e := vps.Get(ctx, r.Client, ext.Spec.val.foo, ext.Spec.val.bar)
return ctrl.Result{}, nil
}
### Response: Few things to keep in mind:
Each controller is responsible for exactly one resource.
Reconcile request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. It does NOT contain information about any specific Event or the object contents itself.
You can create a second controller without the resource definition. In your main file, both controllers will be registered.
This could be useful if the CRDs are not related at all or if the external resource references the internal one, so you can make changes to the internal resource in the external reconciler.
kubebuilder create api --group other --version v2 --kind External \
--resource=false --controller=true
This gives you a controller with a SetupWithManager method that looks like the below.
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
// For().
Complete(r)
}
Note how the For method is commented out because you need to import the resource to watch from somewhere else and reference it.
import (
...
otherv2 "other.io/external/api/v2"
)
...
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&otherv2.External{}).
Complete(r)
}
If you cannot import the external resource you could fall back to mocking it yourself but this is probably not a very clean way. You should really try to import it from the other controller project.
kubebuilder edit --multigroup=true
kubebuilder create api --group=other --version v2 --kind External \
--resource --controller
Another way is when the resources are related to each other such that the internal resource has a reference in its spec to the external resource and knows how to get the external resource in its spec, when it reconciles. An example of this can be found here https://book.kubebuilder.io/reference/watching-resources/externally-managed.html
type InternalSpec struct {
// Name of an external resource
ExternalResource string `json:"externalResource,omitempty"`
}
This means that in each reconciliation loop, the controller will look up the external resource and use it to manage the internal resource.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
internal := examplev1.Internal{}
if err := r.Get(context.TODO(), types.NamespacedName{
Name: req.Name,
Namespace: req.Namespace,
}, &internal); err != nil {
return ctrl.Result{}, err
}
external := otherv2.External{}
if err := r.Get(context.TODO(), types.NamespacedName{
// note how the name is taken from the internal spec
Name: internal.Spec.ExternalResource,
Namespace: req.Namespace,
}, &external); err != nil {
return ctrl.Result{}, err
}
// do something with internal and external here
return ctrl.Result{}, nil
}
The problem with this is, that when the internal resource does not change, no reconciliation event will be triggered, even when the external resource has changed. To work around that, we can trigger the reconciliation by watching the external resource. Note the Watches method:
func (r *InternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&examplev1.Internal{}).
Watches(
&source.Kind{Type: &otherv2.External{}},
handler.EnqueueRequestsFromMapFunc(r.triggerReconcileBecauseExternalHasChanged),
builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
).
Complete(r)
}
In order to know for which internal object we should trigger an event, we use a mapping function to look up all the internal that have a reference to the external resource.
func (r *InternalReconciler) triggerReconcileBecauseExternalHasChanged(o client.Object) []reconcile.Request {
usedByInternals := &examplev1.InternalList{}
listOps := &client.ListOptions{
FieldSelector: fields.OneTermEqualSelector(".spec.ExternalResource", o.GetName()),
Namespace: o.GetNamespace(),
}
err := r.List(context.TODO(), usedByInternals, listOps)
if err != nil {
return []reconcile.Request{}
}
requests := make([]reconcile.Request, len(usedByInternals.Items))
for i, item := range usedByInternals.Items {
requests[i] = reconcile.Request{
NamespacedName: types.NamespacedName{
Name: item.GetName(),
Namespace: item.GetNamespace(),
},
}
}
return requests
}
Since you updated your question, I suggest doing something like below.
I am creating a new project and 2 controllers. Note that on the second controller command no resource is created along with the controller. This is because the controller
will watch an external resource.
mkdir demo && cd demo
go mod init example.io/demo
kubebuilder init --domain example.io --repo example.io/demo --plugins=go/v4-alpha
kubebuilder create api --group=demo --version v1 --kind Internal --controller --resource
kubebuilder create api --group=other --version v2 --kind External --controller --resource=false
$ tree controllers
controllers
├── external_controller.go
├── internal_controller.go
└── suite_test.go
Now we need some shared logic, for example by adding this to the controllers package. We will call this from both reconcilers.
// the interface may need tweaking
// depending on what you want to do with
// the reconciler
type reconciler interface {
client.Reader
client.Writer
client.StatusClient
}
func sharedLogic(r reconciler, kobj *demov1.Internal) (ctrl.Result, error) {
// do your shared logic here operating on the internal object struct
// this works out because the external controller will call this passing the
// internal object
return ctrl.Result{}, nil
}
Here is an example for the internal reconciler.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
obj := demov1.Internal{}
if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
return ctrl.Result{}, err
}
return sharedLogic(r, &obj)
}
And in the external reconciler we do the same.
func (r *ExternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
// note, we can use the internal object here as long as the external object
// does contain the same fields we want. That means when unmarshalling the extra
// fields are dropped. If this cannot be done, you could first unmarshal into the external
// resource and then assign the fields you need to the internal one, before passing it down
obj := demov1.Internal{}
if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
return ctrl.Result{}, err
}
return sharedLogic(r, &obj)
}
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// note the external resource is imported from another project
// you may be able to watch this without import by creating a minimal
// type with the right GKV
For(&otherv2.External{}).
Complete(r)
}
| I’ve k8s operator which works as expected, I need to add a “watch” to other operator CRD (not mine), to make it simple lets call it extCR and our operator cr called inCR,
I tried the following, but there is an issue with how to correctly trigger the reconcile.
func (r *Insiconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&Inv1alpha1.Iget{}}).
Watches(&source.Kind{Type: &ext.Se{}}, handler.EnqueueRequestsFromMapFunc(r.FWatch)).
Complete(r)
}
func (r *Insiconciler) FWatch(c client.Object) []reconcile.Request {
val := c.(*ivi.Srv)
req := reconcile.Request{NamespacedName: types.NamespacedName{Name: val.Name, Namespace: val.Namespace}}
return []reconcile.Request{req}
}
The problem here that I trigger the reconcile with the extCR , I want inside the FWatch to update the inCR and start the reconcile with inCR and not with extCR, how can I do it ?
I mean, to avoid something like the following code as sometimes the reconcile is done for the inCR and sometimes for the extCR and than I can get some ugly if's
func (r *Insiconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
var inCR FOO
var extCR BAR
if err := r.Get(ctx, req.NamespacedName, &inCR); err != nil {
return ctrl.Result{}, err
}
if err := r.Get(ctx, req.NamespacedName, &extCR); err != nil {
return ctrl.Result{}, err
}
I want to know what is the right/clean way to handle such a case:
a case when you need to listen to an externalCR (not part of your controller) and also an internalCR (from your controller).
One more thing - the CRs have different GVKs, but the externalCR contains a lot of fields which are not required, just some of them; the required fields have the same names on both CRs.
update
type inCR struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec inSpec `json:"spec,omitempty"` // ———————— here is the difference
Status InsightTargetStatus `json:"status,omitempty"`
}
//————— This is defined on other program which is not owned by us, therefore cannot “reuse”
type Bar struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec extSpec `json:"spec,omitempty"` // ———————here is the difference
Status ServiceStatus `json:"status,omitempty"`
}
And inSpec is having the following fields (subset of extSpec)
type inSpec struct {
name string
age int
}
and extSpec have those fields and many more which is not related
type extSpec struct {
name string
age int
foo string // not relevant
bar string // not relevant
bazz string // not relevant
}
At the end, inside the reconcile I need to pass the relevant fields to some functions - exactly the same functions, just taking the fields sometimes from extCR and sometimes from inCR, according to the event that happens (like the extCR being updated, or the inCR being updated by users).
Update2
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
in := c.(*inCR)
vPass , e := vps.Get(ctx, r.Client, in.Spec.foo, in.Spec.bar)
return ctrl.Result{}, nil
}
But for extCR I should do the following
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
ext := c.(*extCR)
vPass , e := vps.Get(ctx, r.Client, ext.Spec.val.foo, ext.Spec.val.bar)
return ctrl.Result{}, nil
}
|
|
I could reproduce your issue, though I am not quite sure what causes the mismatch between versions. Better open an issue at pulumi's k8s repository.
Looking at the history of https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml, you can see that the kubeversion requirement has been added after 5.9.1. So using that version successfully deploys the helm chart. E.g.
import * as k8s from "@pulumi/kubernetes";
const namespaceName = "argo";
const namespace = new k8s.core.v1.Namespace("namespace", {
metadata: {
name: namespaceName,
}
});
const argo = new k8s.helm.v3.Release("argo", {
repositoryOpts: {
repo: "https://argoproj.github.io/argo-helm"
},
chart: "argo-cd",
version: "5.9.1",
namespace: namespace.metadata.name,
})
(Not Recommended) Alternatively, you could also clone the source code of the chart, comment out the kubeVersion requirement in Chart.yaml and install the chart from your local path.
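If you did go that route, the rough steps might look like this (a sketch; it assumes the Argo Helm repo is added locally and that the Release is then pointed at the local chart directory instead of repositoryOpts):
helm repo add argo https://argoproj.github.io/argo-helm
helm pull argo/argo-cd --untar
# edit argo-cd/Chart.yaml and remove or relax the kubeVersion constraint,
# then reference the local ./argo-cd path from the Pulumi Release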
| ###Instruction: How to deploy on K8 via Pulumi using the ArgoCD Helm Chart?
Pulumi up Diagnostics:
kubernetes:helm.sh/v3:Release (argocd):
error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
THE CLUSTER VERSION IS: v1.23.0 verified on AWS. And NOT 1.20.0
ArgoCD install yaml used with CRD2Pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml
Source:
...
cluster = eks.Cluster("argo-example") # version="1.23"
# Cluster provider
provider = k8s.Provider(
"eks",
kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
#kubeconfig=cluster.kubeconfig
)
ns = k8s.core.v1.Namespace(
'argocd',
metadata={
"name": "argocd",
},
opts=pulumi.ResourceOptions(
provider=provider
)
)
argo = k8s.helm.v3.Release(
"argocd",
args=k8s.helm.v3.ReleaseArgs(
chart="argo-cd",
namespace=ns.metadata.name,
repository_opts=k8s.helm.v3.RepositoryOptsArgs(
repo="https://argoproj.github.io/argo-helm"
),
values={
"server": {
"service": {
"type": "LoadBalancer",
}
}
},
),
opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
Any ideas as to fixing this oddity between the version error and the actual cluster version?
I've tried:
Deleting everything and starting over.
Updating to the latest ArgoCD install yaml.
### Response: I could reproduce your issue, though I am not quite sure what causes the mismatch between versions. Better open an issue at pulumi's k8s repository.
Looking at the history of https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml, you can see that the kubeversion requirement has been added after 5.9.1. So using that version successfully deploys the helm chart. E.g.
import * as k8s from "@pulumi/kubernetes";
const namespaceName = "argo";
const namespace = new k8s.core.v1.Namespace("namespace", {
metadata: {
name: namespaceName,
}
});
const argo = new k8s.helm.v3.Release("argo", {
repositoryOpts: {
repo: "https://argoproj.github.io/argo-helm"
},
chart: "argo-cd",
version: "5.9.1",
namespace: namespace.metadata.name,
})
(Not Recommended) Alternatively, you could also clone the source code of the chart, comment out the kubeVersion requirement in Chart.yaml and install the chart from your local path.
| How to deploy on K8 via Pulumi using the ArgoCD Helm Chart?
Pulumi up Diagnostics:
kubernetes:helm.sh/v3:Release (argocd):
error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
THE CLUSTER VERSION IS: v1.23.0 verified on AWS. And NOT 1.20.0
ArgoCD install yaml used with CRD2Pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml
Source:
...
cluster = eks.Cluster("argo-example") # version="1.23"
# Cluster provider
provider = k8s.Provider(
"eks",
kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
#kubeconfig=cluster.kubeconfig
)
ns = k8s.core.v1.Namespace(
'argocd',
metadata={
"name": "argocd",
},
opts=pulumi.ResourceOptions(
provider=provider
)
)
argo = k8s.helm.v3.Release(
"argocd",
args=k8s.helm.v3.ReleaseArgs(
chart="argo-cd",
namespace=ns.metadata.name,
repository_opts=k8s.helm.v3.RepositoryOptsArgs(
repo="https://argoproj.github.io/argo-helm"
),
values={
"server": {
"service": {
"type": "LoadBalancer",
}
}
},
),
opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
Any ideas as to fixing this oddity between the version error and the actual cluster version?
I've tried:
Deleting everything and starting over.
Updating to the latest ArgoCD install yaml.
|
|
An easy way that doesn't involve any other databases/pods is to run the first program as an init container, mount a volume that is shared by both containers, and use the JSON file in the next Python program (this approach does not need a persistent volume, just a shared one). See this example:
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
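A rough sketch of that pattern (image names and script paths are placeholders, not taken from your setup):
apiVersion: v1
kind: Pod
metadata:
  name: job-with-result
spec:
  volumes:
  - name: shared
    emptyDir: {}
  initContainers:
  - name: producer
    image: my-batch-image # placeholder
    command: ["python", "job.py", "--out", "/shared/result.json"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: consumer
    image: my-consumer-image # placeholder
    command: ["python", "consume.py", "/shared/result.json"]
    volumeMounts:
    - name: shared
      mountPath: /shared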
Also, depending on the complexity of these jobs, I would recommend taking a look at Argo Workflows or any DAG-oriented job scheduler.
| ###Instruction: I have a python program launching a batch job. The job outputs a json file, I'd like to know what is the easiest way to get this result back to the python program that launched it.
So far I thought of these solutions:
Upload the json file to S3 (pretty heavy)
Display it in the pod logs then read the logs from the python program (pretty hacky/dirty)
Mount a PVC, launch a second pod with the same PVC, and create a shared disk between this pod and the job (pretty overkill)
The json file is pretty lightweight. Isn't there a solution to do something like adding some metadata to the pod when the job completes? The python program can then just poll those metadata.
### Response: An easy way that doesn't involve any other databases/pods is to run the first program as an init container, mount a volume that is shared by both containers, and use the JSON file in the next Python program (this approach does not need a persistent volume, just a shared one). See this example:
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
Also, depending on the complexity of these jobs, I would recommend taking a look at Argo Workflows or any DAG-oriented job scheduler.
| I have a python program launching a batch job. The job outputs a json file, I'd like to know what is the easiest way to get this result back to the python program that launched it.
So far I thought of these solutions:
Upload the json file to S3 (pretty heavy)
Display it in the pod logs then read the logs from the python program (pretty hacky/dirty)
Mount a PVC, launch a second pod with the same PVC, and create a shared disk between this pod and the job (pretty overkill)
The json file is pretty lightweight. Isn't there a solution to do something like adding some metadata to the pod when the job completes? The python program can then just poll those metadata.
|
|
I got the answer from the Keycloak GitHub support forums https://github.com/keycloak/keycloak/discussions/14682.
Running kc.sh start automatically upgrades the DB on first install, and the first container running this command automatically locks the DB until migration is complete. So it's not required to change my helm chart.
| ###Instruction: We have a Keycloak 15.1.1 deployment on Kubernetes with multiple replicas with the AWS RDS Postgres 13 Database backend. I did not find any upgrade guide or experience of other people regarding this setup or even with Kubernetes with Postgres using PVC upgrading Keycloak with multiple major version changes.
Does anyone have any experience with the Keycloak upgrade on Kubernetes?
I went through the change log and was able to run Keycloak locally using docker-compose only in HTTP mode as we terminate SSL at the reverse proxy.
From upgrade instructions from Keycloak documentation is the following strategy is the right one without losing any data
Update the docker image with a new image running only in HTTP mode in our helm charts
Initially start the new deployment with only a single replica so that the database schema changes are applied
kc.sh start --spi-connections-jpa-default-migration-strategy=update
When I tried to upgrade my local deployment with the above command, Keycloak was not accessible until the next restart.
Restart the deployment with more replicas with command
kc.sh start --optimized
### Response: I got the answer from the Keycloak GitHub support forums https://github.com/keycloak/keycloak/discussions/14682.
Running kc.sh start automatically upgrades the DB on first install, and the first container running this command automatically locks the DB until migration is complete. So it's not required to change my helm chart.
| We have a Keycloak 15.1.1 deployment on Kubernetes with multiple replicas with the AWS RDS Postgres 13 Database backend. I did not find any upgrade guide or experience of other people regarding this setup or even with Kubernetes with Postgres using PVC upgrading Keycloak with multiple major version changes.
Does anyone have any experience with the Keycloak upgrade on Kubernetes?
I went through the change log and was able to run Keycloak locally using docker-compose only in HTTP mode as we terminate SSL at the reverse proxy.
From upgrade instructions from Keycloak documentation is the following strategy is the right one without losing any data
Update the docker image with a new image running only in HTTP mode in our helm charts
Initially start the new deployment with only a single replica so that the database schema changes are applied
kc.sh start --spi-connections-jpa-default-migration-strategy=update
When I tried to upgrade my local deployment with the above command, Keycloak was not accessible until the next restart.
Restart the deployment with more replicas with command
kc.sh start --optimized
|
|
Based on the error below, it appears your cert format is not right.
no valid PEM formatted block found
Is your original cert in PEM format? You can decode the cert data in the secret and double-check, using a command like the one below (you might need to install the jq command, or you can copy the tls.crt data manually and decode it with the base64 -d command):
kubectl get secret your-secret-name -n your-namespace -o json | jq '."data"."tls.crt"'| sed 's/"//g'| base64 -d -
Below is what I did using a self-signed test cert/key file.
kubectl get secret mytest-ssl-secret -o json
{
"apiVersion": "v1",
"data": {
"tls.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVGVENDQXYyZ0F3SUJBZ0lVWG12blRrcGtqMlhiQkx...tLS0K",
"tls.key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQkt...tLS0K"
},
"kind": "Secret",
"metadata": {
"creationTimestamp": "2022-11-05T04:22:12Z",
"name": "mytest-ssl-secret",
"namespace": "default",
"resourceVersion": "2024434",
"uid": "d63dce3d-8e5c-478a-be9e-815e59e4bd21"
},
"type": "kubernetes.io/tls"
}
kubectl get secret mytest-ssl-secret -o json | jq '."data"."tls.crt"'| sed 's/"//g'| base64 -d -
-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIUXmvnTkpkj2XbBLRJo+mpBfp4mvAwDQYJKoZIhvcNAQEL
BQAwgZkxCzAJBgNVBAYTAkNOMRAwDgYDVQQIDAdKaWFuZ3N1MQ0wCwYDVQQHDARX
dXhpMRowGAYDVQQKDBFUZXN0IGxpbWl0ZWQgSW5jLjELMAkGA1UECwwCSVQxHDAa
BgNVBAMME3Rlc3QwMDguZXhhbXBsZS5jb20xIjAgBgkqhkiG9w0BCQEWE3Rlc3Qw
...
XO8B+zyFRP1PZnCAkeUdvh6rpMbVHWvfM0QOG4m736b9FK1VmjTG4do=
-----END CERTIFICATE-----
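If the decoded data does not look like a PEM block, recreating the secret directly from the original PEM files avoids any manual base64 handling (names taken from your manifests; the file paths are placeholders):
kubectl delete secret myapp-tls -n myapp-test
kubectl create secret tls myapp-tls -n myapp-test --cert=path/to/tls.crt --key=path/to/tls.key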
| ###Instruction: I'm actually trying for a couple of days to setup TLS on my ingress controller.
My Kubernetes cluster is hosted on Azure (AKS).
When I check the Ingress controller logs, I get this :
W1104 08:49:29.472478 7 backend_ssl.go:45] Error obtaining X.509 certificate: unexpected error creating SSL Cert: no valid PEM formatted block found
W1104 08:49:29.472595 7 controller.go:1334] Error getting SSL certificate "myapp-test/myapp-tls": local SSL certificate myapp-test/myapp-tls was not found. Using default certificate
W1104 08:49:29.472611 7 controller.go:1334] Error getting SSL certificate "myapp-test/myapp-tls": local SSL certificate myapp-test/myapp-tls was not found. Using default certificate
Here is my myapp-ingress.yml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: myapp-test
namespace: myapp-test
spec:
ingressClassName: nginx
tls:
- hosts:
- test-app.myapp.io
- test-api.myapp.io
secretName: myapp-tls
rules:
- host: test-app.myapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-frontend
port:
number: 80
- host: test-api.myapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-backend-monolith
port:
number: 80
Here is my Secret.yml
kind: Secret
apiVersion: v1
metadata:
name: myapp-tls
namespace: myapp-test
data:
tls.crt: >-
BASE64 ENCODED CRT FILE CONTENT
tls.key: >-
BASE64 ENCODED KEY FILE CONTENT
type: kubernetes.io/tls
I actully tried to create ingresses and/or secrets in every namespaces. But Ingress controller still can't find SSL certificate.
### Response: Based on the error below, it appears your cert format is not right.
no valid PEM formatted block found
Is your original cert in PEM format? You can decode the cert data in the secret and double-check, using a command like the one below (you might need to install the jq command, or you can copy the tls.crt data manually and decode it with the base64 -d command):
kubectl get secret your-secret-name -n your-namespace -o json | jq '."data"."tls.crt"'| sed 's/"//g'| base64 -d -
Below is what I did using a self-signed test cert/key file.
kubectl get secret mytest-ssl-secret -o json
{
"apiVersion": "v1",
"data": {
"tls.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVGVENDQXYyZ0F3SUJBZ0lVWG12blRrcGtqMlhiQkx...tLS0K",
"tls.key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQkt...tLS0K"
},
"kind": "Secret",
"metadata": {
"creationTimestamp": "2022-11-05T04:22:12Z",
"name": "mytest-ssl-secret",
"namespace": "default",
"resourceVersion": "2024434",
"uid": "d63dce3d-8e5c-478a-be9e-815e59e4bd21"
},
"type": "kubernetes.io/tls"
}
kubectl get secret mytest-ssl-secret -o json | jq '."data"."tls.crt"'| sed 's/"//g'| base64 -d -
-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIUXmvnTkpkj2XbBLRJo+mpBfp4mvAwDQYJKoZIhvcNAQEL
BQAwgZkxCzAJBgNVBAYTAkNOMRAwDgYDVQQIDAdKaWFuZ3N1MQ0wCwYDVQQHDARX
dXhpMRowGAYDVQQKDBFUZXN0IGxpbWl0ZWQgSW5jLjELMAkGA1UECwwCSVQxHDAa
BgNVBAMME3Rlc3QwMDguZXhhbXBsZS5jb20xIjAgBgkqhkiG9w0BCQEWE3Rlc3Qw
...
XO8B+zyFRP1PZnCAkeUdvh6rpMbVHWvfM0QOG4m736b9FK1VmjTG4do=
-----END CERTIFICATE-----
| I'm actually trying for a couple of days to setup TLS on my ingress controller.
My Kubernetes cluster is hosted on Azure (AKS).
When I check the Ingress controller logs, I get this :
W1104 08:49:29.472478 7 backend_ssl.go:45] Error obtaining X.509 certificate: unexpected error creating SSL Cert: no valid PEM formatted block found
W1104 08:49:29.472595 7 controller.go:1334] Error getting SSL certificate "myapp-test/myapp-tls": local SSL certificate myapp-test/myapp-tls was not found. Using default certificate
W1104 08:49:29.472611 7 controller.go:1334] Error getting SSL certificate "myapp-test/myapp-tls": local SSL certificate myapp-test/myapp-tls was not found. Using default certificate
Here is my myapp-ingress.yml
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: myapp-test
namespace: myapp-test
spec:
ingressClassName: nginx
tls:
- hosts:
- test-app.myapp.io
- test-api.myapp.io
secretName: myapp-tls
rules:
- host: test-app.myapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-frontend
port:
number: 80
- host: test-api.myapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-backend-monolith
port:
number: 80
Here is my Secret.yml
kind: Secret
apiVersion: v1
metadata:
name: myapp-tls
namespace: myapp-test
data:
tls.crt: >-
BASE64 ENCODED CRT FILE CONTENT
tls.key: >-
BASE64 ENCODED KEY FILE CONTENT
type: kubernetes.io/tls
I actully tried to create ingresses and/or secrets in every namespaces. But Ingress controller still can't find SSL certificate.
|
|
I think there are two possibilities:
You did not install the CRD resources
You are using a Strimzi version which is too old and does not support the v1beta2 API
Judging by the fact that you are trying to use Kafka 2.4.0, I guess the second option is more likely. If you really want to do that, you should make sure to use the documentation, examples and everything else from the version of Strimzi you use; they should be using one of the older APIs (v1alpha1 or v1beta1).
But in general, I would strongly recommend using the latest version of Strimzi and not a version which is several years old.
One more note: If you want to configure the SASL authentication for your Kafka Connect cluster, you should do it in the .spec.authentication section of the custom resource and not in .spec.config.
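A rough sketch of that section (field names per recent Strimzi docs; the secret name and key are placeholders based on your YAML, and the supported type values depend on your Strimzi version):
spec:
  authentication:
    type: scram-sha-512 # or scram-sha-256 on Strimzi versions that support it
    username: username
    passwordSecret:
      secretName: connectorsecret
      password: password-key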
| ###Instruction: I'm trying to work with Strimzi to create a kafka-connect cluster and encountering the following error
unable to recognize "kafka-connect.yaml": no matches for kind "KafkaConnect" in
version "kafka.strimzi.io/v1beta2"
Here's the kafka-connect.yaml I have:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: kafka-connect
namespace: connect
annotations:
strimzi.io/use-connector-resources: "true"
spec:
version: 2.4.0
replicas: 1
bootstrapServers: host:port
tls:
trustedCertificates:
- secretName: connectorsecret
certificate: cert
config:
group.id: o
offset.storage.topic: strimzi-connect-cluster-offsets
config.storage.topic: strimzi-connect-cluster-configs
status.storage.topic: strimzi-connect-cluster-status
sasl.mechanism: scram-sha-256
security.protocol: SASL_SSL
secretName: connectorsecret
sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=username password=password
Then I tried to apply the config via kubectl apply -f kafka-connect.yaml
Are there any prerequisites for creating resources using Strimzi, or is there something I'm doing wrong?
### Response: I think there are two possibilities:
You did not install the CRD resources
You are using a Strimzi version which is too old and does not support the v1beta2 API
Judging by the fact that you are trying to use Kafka 2.4.0, I guess the second option is more likely. If you really want to do that, you should make sure to use the documentation, examples and everything from the version of Strimzi you use - they should be using one of the older APIs (v1alpha1 or v1beta1).
But in general, I would strongly recommend using the latest version of Strimzi and not a version which is several years old.
One more note: If you want to configure the SASL authentication for your Kafka Connect cluster, you should do it in the .spec.authentication section of the custom resource and not in .spec.config.
| I'm trying to work with Strimzi to create a kafka-connect cluster and encountering the following error
unable to recognize "kafka-connect.yaml": no matches for kind "KafkaConnect" in
version "kafka.strimzi.io/v1beta2"
Here's the kafka-connect.yaml I have:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: kafka-connect
namespace: connect
annotations:
strimzi.io/use-connector-resources: "true"
spec:
version: 2.4.0
replicas: 1
bootstrapServers: host:port
tls:
trustedCertificates:
- secretName: connectorsecret
certificate: cert
config:
group.id: o
offset.storage.topic: strimzi-connect-cluster-offsets
config.storage.topic: strimzi-connect-cluster-configs
status.storage.topic: strimzi-connect-cluster-status
sasl.mechanism: scram-sha-256
security.protocol: SASL_SSL
secretName: connectorsecret
sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=username password=password
Then I tried to apply the config via kubectl apply -f kafka-connect.yaml
Are there any prerequisites for creating resources using Strimzi, or is there something I'm doing wrong?
|
|
This is the default behavior of minikube because tainting the control plane node is not a Kubernetes requirement, especially for dev/test environments.
You may manually taint your master node using:
kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule
After you apply the taint, drain the master node so the pods scheduled on it shift to the worker node.
kubectl drain minikube --ignore-daemonsets
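To double-check that the taint is in place afterwards, something like this should show it:
kubectl describe node minikube | grep Taints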
| ###Instruction: I am starting up a cluster using minikube and found the node to be started as a control-plane node, but interestingly I found the taints to be none. Because the taints are none, I see pods being scheduled on this node.
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 76d v1.24.3
minikube-m02 Ready <none> 18m v1.24.3
And below is the taint information,
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 21 Aug 2022 22:27:29 -0500
Taints: <none>
Any idea why there is this behaviour?
### Response: This is the default behavior of minikube because tainting the control plane node is not a Kubernetes requirement, especially for dev/test environments.
You may manually taint your master node using:
kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule
After you apply the taint, drain the master node so the pods scheduled on it shift to the worker node.
kubectl drain minikube --ignore-daemonsets
| I am starting up a cluster using minikube and found the node to be started as a control-plane node, but interestingly I found the taints to be none. Because the taints are none, I see pods being scheduled on this node.
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 76d v1.24.3
minikube-m02 Ready <none> 18m v1.24.3
And below is the taint information,
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 21 Aug 2022 22:27:29 -0500
Taints: <none>
Any idea why there is this behaviour?
|
|
You can remove the value tag:
<org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
| ###Instruction: OpenShift ConfigMap for Jenkins pod template:
<org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
<value>100Gi</value>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
Spits out:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
vault.hashicorp.com/agent-limits-ephemeral: "100Gi"
I'm trying to achieve this:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
vault.hashicorp.com/agent-limits-ephemeral: ""
But I don't know what to put here:
<org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
<value>???</value>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
These do not seem to work:
<value>""</value>
<value>''</value>
<value></value>
<value/>
### Response: You can remove the value tag:
<org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
| OpenShift ConfigMap for Jenkins pod template:
<org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
<value>100Gi</value>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
Spits out:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
vault.hashicorp.com/agent-limits-ephemeral: "100Gi"
I'm trying to achieve this:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
vault.hashicorp.com/agent-limits-ephemeral: ""
But I don't know what to put here:
<org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
<value>???</value>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
These do not seem to work:
<value>""</value>
<value>''</value>
<value></value>
<value/>
|
|
I faced the same requirement as you.
from
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
change to
tls:
- hosts:
- www.<myDomain>.com
secretName: web-ssl
- hosts:
- api.<myDomain>.com
secretName: web-ssl
It helped me to solve the issue!
| ###Instruction: I have setup a backend and frontend service running on Kubernetes. Frontend would be www.<myDomain>.com and backend would be api.<myDomain>.com
I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <myDomain>.com should cover both www. and api..
Pretty normal use case, right? But when this normal stuff comes together, I couldn't figure out the combined yaml. I was able to get a single service, www.<myDomain>.com, working with https. Things didn't work when I tried to add api.<myDomain>.com
I'm using GKE, but this doesn't seem to be a platform-related question. Now creating the ingress takes forever. The following event has been retried again and again
Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.allow-http: "true"
cert-manager.io/issuer: letsencrypt-staging
spec:
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
rules:
- host: "www.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: angular-service
port:
number: 80
- host: "api.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: spring-boot-service
port:
number: 8080
### Response: I faced the same requirement as you.
from
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
change to
tls:
- hosts:
- www.<myDomain>.com
secretName: web-ssl
- hosts:
- api.<myDomain>.com
secretName: web-ssl
It helped me to solve the issue!
| I have setup a backend and frontend service running on Kubernetes. Frontend would be www.<myDomain>.com and backend would be api.<myDomain>.com
I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <myDomain>.com should cover both www. and api..
Pretty normal use case, right? But when this normal stuff comes together, I couldn't figure out the combined yaml. I was able to get a single service, www.<myDomain>.com, working with https. Things didn't work when I tried to add api.<myDomain>.com
I'm using GKE, but this doesn't seem to be a platform-related question. Now creating the ingress takes forever. The following event has been retried again and again
Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.allow-http: "true"
cert-manager.io/issuer: letsencrypt-staging
spec:
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
rules:
- host: "www.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: angular-service
port:
number: 80
- host: "api.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: spring-boot-service
port:
number: 8080
|
|
You can have 2 types of PVs:
dynamically provisioned by StorageClasses
manually/statically created by admins
Dynamically -> this is often used within the cloud, for instance when you want to mount an Azure blob/file to a pod. In this case you don't have control over the PV name; the StorageClass creates and binds randomly named PVs.
Manually -> this gives more control: you can assign a specific name to the PV and a specific StorageClass that has a Retain policy (do not delete the PV after it is Released by a Pod). As a result it is much easier to reuse that PV, knowing its name and StorageClass membership.
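As a small sketch of the manual case (the size and the hostPath backend are placeholders, swap in your real storage), a statically created PV that pre-binds to the foo-pvc claim could look like:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  storageClassName: "my-storage"
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/foo-pv
  claimRef:
    name: foo-pvc
    namespace: foo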
| ###Instruction: I created PV as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
name: foo-pv
spec:
storageClassName: "my-storage"
claimRef:
name: foo-pvc
namespace: foo
Why do we need to give storageClassName in the PV? When the StorageClass creates the PV, why give storageClassName in the PV?
Can someone help me to understand this?
### Response: You can have 2 types of PVs:
dynamically provisioned by StorageClasses
manually/statically created by admins
Dynamically -> this is often used within the cloud, for instance when you want to mount an Azure blob/file to a pod. In this case you don't have control over the PV name; the StorageClass creates and binds randomly named PVs.
Manually -> this gives more control: you can assign a specific name to the PV and a specific StorageClass that has a Retain policy (do not delete the PV after it is Released by a Pod). As a result it is much easier to reuse that PV, knowing its name and StorageClass membership.
| I created PV as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
name: foo-pv
spec:
storageClassName: "my-storage"
claimRef:
name: foo-pvc
namespace: foo
Why do we need to give storageClassName in the PV? When the StorageClass creates the PV, why give storageClassName in the PV?
Can someone help me to understand this?
|
|
@Dunski: I have checked in many ways; the *,0-30 23,0-7 * * * expression could only stop at 07:59, not at 07:30 am.
As @jordanm suggested, the only way is to run two jobs:
11 pm to 7 am expression * 23,0-7 * * * (“At every minute past hour 23 and every hour from 0 through 7.”) and then
7 am to 7:30 am 0-30 7 * * * (“At every minute from 0 through 30 past hour 7.”).
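Put together in a crontab that would be two entries like this (the script path is just a placeholder):
* 23,0-7 * * * /path/to/job.sh
0-30 7 * * * /path/to/job.sh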
| ###Instruction: Crontab to run a job every minute from 11pm to 7:30am
I have this so far which is every minute from 11pm to 7:00am
the problem is the half hour.
* 23,0-7 * * *
You can play around with it here crontab_guru
Any ideas?
### Response: @Dunski: I have checked in many ways; the *,0-30 23,0-7 * * * expression could only stop at 07:59, not at 07:30 am.
As @jordanm suggested, the only way is to run two jobs:
11 pm to 7 am expression * 23,0-7 * * * (“At every minute past hour 23 and every hour from 0 through 7.”) and then
7 am to 7:30 am 0-30 7 * * * (“At every minute from 0 through 30 past hour 7.”).
| Crontab to run a job every minute from 11pm to 7:30am
I have this so far which is every minute from 11pm to 7:00am
the problem is the half hour.
* 23,0-7 * * *
You can play around with it here crontab_guru
Any ideas?
|
|
I also stumbled into this problem and Hierarchical Namespaces seem like a decent solution, as you can give the permissions to a single "static" namespace. Every namespace afterwards will inherit the permissions. Hope it helps.
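As a rough sketch of the idea (assuming the Hierarchical Namespace Controller and its kubectl-hns plugin are installed; the namespace names are placeholders): create the Role/RoleBinding once in a static parent namespace, then create each test environment as a subnamespace of it so it inherits the RBAC objects:
kubectl hns create test-myfeature -n test-environments
kubectl hns tree test-environments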
| ###Instruction: We're creating dynamic test environments for our developers. Each environment goes into one namespace called test-<something>, where <something> is entered by the developer when creating the environment (we use Gitlab-CI for the automation).
We want to grant them limited access to the K8s API to see deployments, exec into pods for instance. So the plan is to apply a (cluster)role (yet to decide) like this.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: "{{ ns }}"
name: "test-{{ ns }}"
rules:
- apiGroups: ["apps"]
resources: ["deploys"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
However we preferably don't want to apply it for all namespaces but only the test-* ones.
We could add the creation of the namespaced role and rolebinding during the app deploy, but that would mean granting our Gitlab-CI runner the permission to create and delete roles and rolebindings. We're concerned by the security implications of this and the possible privilege escalations.
Is it possible to create a clusterrolebinding limited to a regexp-ed set of namespaces?
Alternatively, if we want to grant the permissions via the automation, is it possible to limit the namespaces where the rolebindings can be created by the runner?
We looked at the docs but couldn't find such things
Worst case scenario is probably to go for the clusterrolebinding route and not give too many privileges to the automation. So asking if we can find a better way
Thanks in advance
### Response: I also stumbled into this problem and Hierarchical Namespaces seem like a decent solution, as you can give the permissions to a single "static" namespace. Every namespace afterwards will inherit the permissions. Hope it helps.
| We're creating dynamic test environments for our developers. Each environment goes into one namespace called test-<something>, where <something> is entered by the developer when creating the environment (we use Gitlab-CI for the automation).
We want to grant them limited access to the K8s API to see deployments, exec into pods for instance. So the plan is to apply a (cluster)role (yet to decide) like this.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: "{{ ns }}"
name: "test-{{ ns }}"
rules:
- apiGroups: ["apps"]
resources: ["deploys"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
However we preferably don't want to apply it for all namespaces but only the test-* ones.
We could add the creation of the namespaced role and rolebinding during the app deploy, but that would mean granting our Gitlab-CI runner the permission to create and delete roles and rolebindings. We're concerned by the security implications of this and the possible privilege escalations.
Is it possible to create a clusterrolebinding limited to a regexp-ed set of namespaces?
Alternatively, if we want to grant the permissions via the automation, is it possible to limit the namespaces where the rolebindings can be created by the runner?
We looked at the docs but couldn't find such things
Worst case scenario is probably to go for the clusterrolebinding route and not give too many privileges to the automation. So asking if we can find a better way
Thanks in advance
|
|
In order to observe pod events, you need to use informers. Informers have built-in optimizations to avoid overloading API servers.
There is a patch method available in the PodInterface that allows you to add a label to a pod.
Here is some sample code for your reference. In the main function, the informer code is added, and the handleAdd function implements the labelling logic.
package main
import (
"context"
"encoding/json"
"fmt"
"time"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
)
type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func main() {
clientSet := GetK8sClient()
labelOptions := informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
opts.LabelSelector = GetLabelSelectorForDeployment("deployment-name", "namespace-name")
})
informers := informers.NewSharedInformerFactoryWithOptions(clientSet, 10*time.Second, informers.WithNamespace("namespace-name"), labelOptions)
podInformer := informers.Core().V1().Pods()
podInformer.Informer().AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: handleAdd,
},
)
informers.Start(wait.NeverStop)
informers.WaitForCacheSync(wait.NeverStop)
	// Block forever so the informer keeps receiving and handling events.
	select {}
}
func GetLabelSelectorForDeployment(Name string, Namespace string) string {
clientSet := GetK8sClient()
k8sClient := clientSet.AppsV1()
deployment, _ := k8sClient.Deployments(Namespace).Get(context.Background(), Name, metav1.GetOptions{})
labelSet := labels.Set(deployment.Spec.Selector.MatchLabels)
return string(labelSet.AsSelector().String())
}
func handleAdd(obj interface{}) {
k8sClient := GetK8sClient().CoreV1()
pod := obj.(*v1.Pod)
fmt.Println("Pod", pod.GetName(), pod.Spec.NodeName, pod.Spec.Containers)
payload := []patchStringValue{{
Op: "add", // "add" replaces the value if the label already exists; "replace" would fail when the label is missing
Path: "/metadata/labels/testLabel",
Value: "testValue",
}}
payloadBytes, _ := json.Marshal(payload)
_, updateErr := k8sClient.Pods(pod.GetNamespace()).Patch(context.Background(), pod.GetName(), types.JSONPatchType, payloadBytes, metav1.PatchOptions{})
if updateErr == nil {
fmt.Println(fmt.Sprintf("Pod %s labelled successfully.", pod.GetName()))
} else {
fmt.Println(updateErr)
}
}
func GetK8sClient() *kubernetes.Clientset {
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
return clientset
}
| ###Instruction: I want to create a controller and listen to the pod events when a new pod is created (by a deployment), then add all labels belonging to the deployment to the created pod; is this possible at scale with client-go?
### Response: In order to observe pod events, you need to use informers. Informers have built-in optimizations to avoid overloading API servers.
There is a patch method available in the PodInterface that allows you to add a label to a pod.
Here is some sample code for your reference. In the main function, the informer code is added, and the handleAdd function implements the labelling logic.
package main
import (
"context"
"encoding/json"
"fmt"
"time"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
)
type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func main() {
clientSet := GetK8sClient()
labelOptions := informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
opts.LabelSelector = GetLabelSelectorForDeployment("deployment-name", "namespace-name")
})
informers := informers.NewSharedInformerFactoryWithOptions(clientSet, 10*time.Second, informers.WithNamespace("namespace-name"), labelOptions)
podInformer := informers.Core().V1().Pods()
podInformer.Informer().AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: handleAdd,
},
)
informers.Start(wait.NeverStop)
informers.WaitForCacheSync(wait.NeverStop)
	// Block forever so the informer keeps receiving and handling events.
	select {}
}
func GetLabelSelectorForDeployment(Name string, Namespace string) string {
clientSet := GetK8sClient()
k8sClient := clientSet.AppsV1()
deployment, _ := k8sClient.Deployments(Namespace).Get(context.Background(), Name, metav1.GetOptions{})
labelSet := labels.Set(deployment.Spec.Selector.MatchLabels)
return string(labelSet.AsSelector().String())
}
func handleAdd(obj interface{}) {
k8sClient := GetK8sClient().CoreV1()
pod := obj.(*v1.Pod)
fmt.Println("Pod", pod.GetName(), pod.Spec.NodeName, pod.Spec.Containers)
payload := []patchStringValue{{
Op: "add", // "add" replaces the value if the label already exists; "replace" would fail when the label is missing
Path: "/metadata/labels/testLabel",
Value: "testValue",
}}
payloadBytes, _ := json.Marshal(payload)
_, updateErr := k8sClient.Pods(pod.GetNamespace()).Patch(context.Background(), pod.GetName(), types.JSONPatchType, payloadBytes, metav1.PatchOptions{})
if updateErr == nil {
fmt.Println(fmt.Sprintf("Pod %s labelled successfully.", pod.GetName()))
} else {
fmt.Println(updateErr)
}
}
func GetK8sClient() *kubernetes.Clientset {
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
return clientset
}
| I want to create a controller and listen to the pod events when a new pod is created (by a deployment), then add all labels belonging to the deployment to the created pod; is this possible at scale with client-go?
|
|
I solved the same problem by removing my current kubeconfig context for GCP.
Get your context name running:
kubectl config get-contexts
Delete the context:
kubectl config delete-context CONTEXT_NAME
Reconfigure the credentials
gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
The warning message should be gone by now.
| ###Instruction: I'm trying to connect to a cluster and I'm getting the following error:
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
I have installed Google Cloud SDK 400, kubectl 1.22.12, gke-gcloud-auth-plugin 0.3.0, and also set up ~/.bashrc with
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
but when I try to connect to the cluster I'm always getting the same error, any idea here?
Thanks
The cluster exists in that region, and I also verified the env variable
with
echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
I installed the gke-gcloud-auth-plugin using gcloud components install... I do not know what more I can check
gcloud components list
### Response: I solved the same problem by removing my current kubeconfig context for GCP.
Get your context name running:
kubectl config get-contexts
Delete the context:
kubectl config delete-context CONTEXT_NAME
Reconfigure the credentials
gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
The warning message should be gone by now.
| I'm trying to connect to a cluster and I'm getting the following error:
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
I have installed Google Cloud SDK 400, kubectl 1.22.12, gke-gcloud-auth-plugin 0.3.0, and also set up ~/.bashrc with
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
but when I try to connect to the cluster I'm always getting the same error, any idea here?
Thanks
The cluster exists in that region, and I also verified the env variable
with
echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
I installed the gke-gcloud-auth-plugin using gcloud components install... I do not know what more I can check
gcloud components list
|
|
If you check in dockerhub the image is there:
https://hub.docker.com/r/istio/examples-bookinfo-reviews-v1/tags
So the error that you need to deal with is context deadline exceeded while trying to pull it from dockerhub. This is likely a networking error (a generic Go error saying it took too long); depending on where your cluster is running, you can manually do a docker pull from the nodes and that should work.
EDIT: for minikube do a minikube ssh and then a docker pull
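For example, on minikube that would look roughly like this (the image name is taken from your error message):
minikube ssh
docker pull docker.io/istio/examples-bookinfo-reviews-v1:1.17.0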
| ###Instruction: I am trying to deploy istio's sample bookinfo application using the below command:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
from here
but each time I am getting ImagePullBackoff error like this:
NAME READY STATUS RESTARTS AGE
details-v1-c74755ddf-m878f 2/2 Running 0 6m32s
productpage-v1-778ddd95c6-pdqsk 2/2 Running 0 6m32s
ratings-v1-5564969465-956bq 2/2 Running 0 6m32s
reviews-v1-56f6655686-j7lb6 1/2 ImagePullBackOff 0 6m32s
reviews-v2-6b977f8ff5-55tgm 1/2 ImagePullBackOff 0 6m32s
reviews-v3-776b979464-9v7x5 1/2 ImagePullBackOff 0 6m32s
For error details, I have run :
kubectl describe pod reviews-v1-56f6655686-j7lb6
Which returns these:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m41s default-scheduler Successfully assigned default/reviews-v1-56f6655686-j7lb6 to minikube
Normal Pulled 7m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 7m39s kubelet Created container istio-init
Normal Started 7m39s kubelet Started container istio-init
Warning Failed 5m39s kubelet Failed to pull image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 5m39s kubelet Error: ErrImagePull
Normal Pulled 5m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 5m39s kubelet Created container istio-proxy
Normal Started 5m39s kubelet Started container istio-proxy
Normal BackOff 5m36s (x3 over 5m38s) kubelet Back-off pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Warning Failed 5m36s (x3 over 5m38s) kubelet Error: ImagePullBackOff
Normal Pulling 5m25s (x2 over 7m38s) kubelet Pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Do I need to build dockerfile first and push it to the local repository? There are no clear instructions there or I failed to find any.
Can anybody help?
### Response: If you check in dockerhub the image is there:
https://hub.docker.com/r/istio/examples-bookinfo-reviews-v1/tags
So the error that you need to deal with is context deadline exceeded while trying to pull it from dockerhub. This is likely a networking error (a generic Go error saying it took too long); depending on where your cluster is running, you can manually do a docker pull from the nodes and that should work.
EDIT: for minikube do a minikube ssh and then a docker pull
| I am trying to deploy istio's sample bookinfo application using the below command:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
from here
but each time I am getting ImagePullBackoff error like this:
NAME READY STATUS RESTARTS AGE
details-v1-c74755ddf-m878f 2/2 Running 0 6m32s
productpage-v1-778ddd95c6-pdqsk 2/2 Running 0 6m32s
ratings-v1-5564969465-956bq 2/2 Running 0 6m32s
reviews-v1-56f6655686-j7lb6 1/2 ImagePullBackOff 0 6m32s
reviews-v2-6b977f8ff5-55tgm 1/2 ImagePullBackOff 0 6m32s
reviews-v3-776b979464-9v7x5 1/2 ImagePullBackOff 0 6m32s
For error details, I have run :
kubectl describe pod reviews-v1-56f6655686-j7lb6
Which returns these:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m41s default-scheduler Successfully assigned default/reviews-v1-56f6655686-j7lb6 to minikube
Normal Pulled 7m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 7m39s kubelet Created container istio-init
Normal Started 7m39s kubelet Started container istio-init
Warning Failed 5m39s kubelet Failed to pull image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 5m39s kubelet Error: ErrImagePull
Normal Pulled 5m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 5m39s kubelet Created container istio-proxy
Normal Started 5m39s kubelet Started container istio-proxy
Normal BackOff 5m36s (x3 over 5m38s) kubelet Back-off pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Warning Failed 5m36s (x3 over 5m38s) kubelet Error: ImagePullBackOff
Normal Pulling 5m25s (x2 over 7m38s) kubelet Pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Do I need to build dockerfile first and push it to the local repository? There are no clear instructions there or I failed to find any.
Can anybody help?
|
|
You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will be set using the APP_ENTRY_0_NAME environment variable. This could be set in your deployment.
Using secret like:
apiVersion: v1
data:
value: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
and then use it with
env:
- name: APP_ENTRY_0_NAME
valueFrom:
secretKeyRef:
name: my-secret
key: value
| ###Instruction: I want to use the direct translation from k8s secret-keys to SpringBoot properties.
Therefore I have a helm chart (but similar with plain k8s):
apiVersion: v1
data:
app.entry[0].name: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
With that my intention is that this behaves as if I'd set the spring property file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So, [0] seems not to be allowed as a key name for the secrets.
Any idea how I can inject an array entry into spring directly from a k8s secret name?
Shooting around wildly I tried these that all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
### Response: You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will be set using the APP_ENTRY_0_NAME environment variable. This could be set in your deployment.
Using secret like:
apiVersion: v1
data:
value: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
and then use it with
env:
- name: APP_ENTRY_0_NAME
valueFrom:
secretKeyRef:
name: my-secret
key: value
| I want to use the direct translation from k8s secret-keys to SpringBoot properties.
Therefore I have a helm chart (but similar with plain k8s):
apiVersion: v1
data:
app.entry[0].name: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
With that my intention is that this behaves as if I'd set the spring property file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So, [0] seems not to be allowed as a key name for the secrets.
Any idea how I can inject an array entry into spring directly from a k8s secret name?
Shooting around wildly I tried these that all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
|
|
You can see field documentation for a Pod with
kubectl explain Pod
then you can look deeper into the structure with e.g. (or any other field)
kubectl explain Pod.spec
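To dump the whole nested structure at once (handy for the container section you mentioned), kubectl explain also accepts a --recursive flag:
kubectl explain Pod.spec.containers --recursive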
| ###Instruction: For example I want to list all the possible values I can put in the yaml file to create a Pod. I want to also look at the meanings of those values. e.g. in the container section you put in the details of the container
### Response: You can see field documentation for a Pod with
kubectl explain Pod
then you can look deeper into the structure with e.g. (or any other field)
kubectl explain Pod.spec
| For example I want to list all the possible values I can put in the yaml file to create a Pod. I want to also look at the meanings of those values. e.g. in the container section you put in the details of the container
|
|
I tried to reproduce the same issue in my environment and got the below results
I have created the Kubernetes cluster and launched the AML studio
In the AML I have created the workspace and created the compute with AKS cluster
Deployed the azureML extension using the below command
az k8s-extension create --name Aml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name my-aks-cluster --resource-group Alldemorg --scope cluster
I am able to see all the deployed clusters using below commands
az k8s-extension show --name <extension_name> --cluster-type connectedClusters --cluster-name <connected_cluster_name> --resource-group <rg_name>
After deploying the AzureML extension I am able to attach the Kubernetes cluster to azureML workspace.
NOTE:
The ExtensionOperationFailed error may occur for the below reasons:
1). Blocking of regions: for some AML clusters a few of the regions are not allowed and will be blocked
2). Please check the version and upgrade to the latest version
3). While creating the extension, please make sure the cluster name is the one present in the AML workspace
4). The service principal for the cluster does not exist in the tenant in which we are trying to access it
5). Each tenant in which we want access must consent to the cluster; this will ensure the service principal exists in their tenant and that the service principal has the access
| ###Instruction: As said in title, I'm trying to add an AKS cluster to my Azure Machine Learning workspace as Attached computes.
In the wizard that ML studio shows while adding it
there's a link to a guide to install AzureML extension.
Just 4 steps:
Prepare an Azure Kubernetes Service cluster or Arc Kubernetes
cluster.
Deploy the AzureML extension.
Attach Kubernetes cluster to
your Azure ML workspace.
Use the Kubernetes compute target from CLI
v2, SDK v2, and the Studio UI.
My issue comes at the 2nd step.
As suggested I'm trying to create a POC through az cli
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
I'm already logged on to the right subscription (where I'm owner), and using the right cluster name and resource group. As extension-name I've used test-ml-extension, but I keep getting this error
(ExtensionOperationFailed) The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{"error":{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}. Code: ExtensionOperationFailed Message: The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{"error":{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}.
Am I missing something?
### Response: I tried to reproduce the same issue in my environment and got the below results
I have created the Kubernetes cluster and launched the AML studio
In the AML I have created the workspace and created the compute with AKS cluster
Deployed the azureML extension using the below command
az k8s-extension create --name Aml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name my-aks-cluster --resource-group Alldemorg --scope cluster
I am able to see all the deployed clusters using below commands
az k8s-extension show --name <extension_name> --cluster-type connectedClusters --cluster-name <connected_cluster_name> --resource-group <rg_name>
After deploying the AzureML extension I am able to attach the Kubernetes cluster to azureML workspace.
NOTE:
The ExtensionOperationFailed error may occur for the below reasons:
1). Blocking of regions: for some AML clusters a few of the regions are not allowed and will be blocked
2). Please check the version and upgrade to the latest version
3). While creating the extension, please make sure the cluster name is the one present in the AML workspace
4). The service principal for the cluster does not exist in the tenant in which we are trying to access it
5). Each tenant in which we want access must consent to the cluster; this will ensure the service principal exists in their tenant and that the service principal has the access
| As said in title, I'm trying to add an AKS cluster to my Azure Machine Learning workspace as Attached computes.
In the wizard that ML studio shows while adding it
there's a link to a guide to install AzureML extension.
Just 4 steps:
Prepare an Azure Kubernetes Service cluster or Arc Kubernetes
cluster.
Deploy the AzureML extension.
Attach Kubernetes cluster to
your Azure ML workspace.
Use the Kubernetes compute target from CLI
v2, SDK v2, and the Studio UI.
My issue comes at the 2nd step.
As suggested I'm trying to create a POC through az cli
az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
I'm already logged on to the right subscription (where I'm owner), and using the right cluster name and resource group. As extension-name I've used test-ml-extension, but I keep getting this error
(ExtensionOperationFailed) The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{"error":{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}. Code: ExtensionOperationFailed Message: The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{"error":{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}.
Am I missing something?
|
|
If you know the other file is YAML, Helm contains a lightly-documented fromYaml extension that can parse it.
{{- $abc := .Files.Get "abc.yaml" | fromYaml }}
From there you have a couple of options on how to proceed. One tool you have is the corresponding, also lightly-documented, toYaml extension that converts an arbitrary structure back to YAML.
So one choice is to directly emit the values you think you need:
metadata:
test4: dog
test5: elephant
test1: {{ $abc.metadata.test1 }}
test2: {{ $abc.metadata.test2 }}
test3: {{ $abc.metadata.test3 }}
A second is to emit the new values for each block, plus the existing content:
metadata:
test4: dog
test5: elephant
{{ $abc.metadata | toYaml | indent 2 -}}
A third is to modify the structure in-place, and then ask Helm to write out the whole thing as YAML. Unusual for Helm template functions, set modifies the dictionary it's given in place.
{{- $_ := $abc.metadata | set "test4" "dog" | set "test5" "elephant" -}}
{{- toYaml $abc -}}
| ###Instruction: I have a Yaml file in a folder as abc.yaml and the content is
metadata:
test1: apple
test2: banana
test3: cat
container:
image: foo
text: xyz
variables:
ojb: one
meta: two
and I have another file values.yaml.j2 which needs part of the above content.
metadata:
test4: dog
test5: elephant
{{ .... Here I need test1, test2, test3 from the above (abc.yaml).... }}
container:
name: test
{{ .... Here I need image and text from the above (abc.yaml) ....}}
variables:
ping: pong
{{ ..... Here I need ojb and meta from the above (abc.yaml) .... }}
When I was exploring Helm go templates, I found that Files.Lines will return the file line by line. But I need specific lines, as I mentioned above.
Any solution with go templates to get that part of the other yaml file?
### Response: If you know the other file is YAML, Helm contains a lightly-documented fromYaml extension that can parse it.
{{- $abc := .Files.Get "abc.yaml" | fromYaml }}
From there you have a couple of options on how to proceed. One tool you have is the corresponding, also lightly-documented, toYaml extension that converts an arbitrary structure back to YAML.
So one choice is to directly emit the values you think you need:
metadata:
test4: dog
test5: elephant
test1: {{ $abc.metadata.test1 }}
test2: {{ $abc.metadata.test2 }}
test3: {{ $abc.metadata.test3 }}
A second is to emit the new values for each block, plus the existing content:
metadata:
test4: dog
test5: elephant
{{ $abc.metadata | toYaml | indent 2 -}}
A third is to modify the structure in-place, and then ask Helm to write out the whole thing as YAML. Unusual for Helm template functions, set modifies the dictionary it's given in place.
{{- $_ := $abc.metadata | set "test4" "dog" | set "test5" "elephant" -}}
{{- toYaml $abc -}}
| I have a Yaml file in a folder as abc.yaml and the content is
metadata:
test1: apple
test2: banana
test3: cat
container:
image: foo
text: xyz
variables:
ojb: one
meta: two
and I have another file values.yaml.j2 which needs part of the above content.
metadata:
test4: dog
test5: elephant
{{ .... Here I need test1, test2, test3 from the above (abc.yaml).... }}
container:
name: test
{{ .... Here I need image and text from the above (abc.yaml) ....}}
variables:
ping: pong
{{ ..... Here I need ojb and meta from the above (abc.yaml) .... }}
When I was exploring Helm go templates, I found that Files.Lines will return the file line by line. But I need specific lines, as I mentioned above.
Any solution with go templates to get that part of the other yaml file?
|
|
You need to set externalTrafficPolicy: Local on the Ingress controller service.
Documentation here
Setting this allows the original client IP (which would be outside the cluster) to be preserved, so things like allow/deny lists and networkpolicies would work properly.
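For example, a quick way to set it, assuming the controller service is called ingress-nginx-controller in the ingress-nginx namespace (adjust the names to your install):
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'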
| ###Instruction: I have a network policy which will allow connection only from my IP address to the NGINX ingress controller. The network policy is given below. The ingress is using a load balancer and app is in different namespace (default).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-access
namespace: ingress-nginx
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/component: controller
app.kubernetes.io/instance: nginx-ingress
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 11.11.11.11/32
ports:
- protocol: TCP
port: http
- protocol: TCP
port: https
But after deploying the network policy I can't establish a connection to my application. So I have changed the CIDR range from 11.11.11.11/32 to 0.0.0.0/0 and checked the logs to see from which IP address I am receiving the connection.
22.22.22.22 - - [10/Nov/2022:05:14:35 +0000] "GET / HTTP/1.1" 200 7250 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
462 0.006 [default-hello-one-80] [] 10.2.0.6:80 7243 0.004 200
4d3fd980e5d07a084b162c1f9305f2d9
It means I am getting hit by one of my kubernetes worker nodes (22.22.22.22).
I tried adding all worker node IP addresses and the load balancer IP address under ipBlock, but still my application is not accessible. Can somebody help me?
### Response: You need to set externalTrafficPolicy: Local on the Ingress controller service.
Documentation here
Setting this allows the original client IP (which would be outside the cluster) to be preserved, so things like allow/deny lists and networkpolicies would work properly.
| I have a network policy which will allow connection only from my IP address to the NGINX ingress controller. The network policy is given below. The ingress is using a load balancer and app is in different namespace (default).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-access
namespace: ingress-nginx
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/component: controller
app.kubernetes.io/instance: nginx-ingress
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 11.11.11.11/32
ports:
- protocol: TCP
port: http
- protocol: TCP
port: https
But after deploying the network policy I can't establish a connection to my application. So I have changed the CIDR range from 11.11.11.11/32 to 0.0.0.0/0 and checked the logs to see from which IP address I am receiving the connection.
22.22.22.22 - - [10/Nov/2022:05:14:35 +0000] "GET / HTTP/1.1" 200 7250 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
462 0.006 [default-hello-one-80] [] 10.2.0.6:80 7243 0.004 200
4d3fd980e5d07a084b162c1f9305f2d9
It means I am getting hit by one of my kubernetes worker nodes (22.22.22.22).
I tried adding all worker node IP addresses and the load balancer IP address under ipBlock, but still my application is not accessible. Can somebody help me?
|
|
Adding this block in my EKS nodes security group rules solved the issue for me:
node_security_group_additional_rules = {
...
ingress_cluster_metricserver = {
description = "Cluster to node 4443 (Metrics Server)"
protocol = "tcp"
from_port = 4443
to_port = 4443
type = "ingress"
source_cluster_security_group = true
}
...
}
| ###Instruction: I'm using a HPA based on a custom metric on GKE.
The HPA is not working and it's showing me this error log:
unable to fetch metrics from custom metrics API: the server is currently unable to handle the request
When I run kubectl get apiservices | grep custom I get
v1beta1.custom.metrics.k8s.io services/prometheus-adapter False (FailedDiscoveryCheck) 135d
this is the HPA spec config :
spec:
scaleTargetRef:
kind: Deployment
name: api-name
apiVersion: apps/v1
minReplicas: 3
maxReplicas: 50
metrics:
- type: Object
object:
target:
kind: Service
name: api-name
apiVersion: v1
metricName: messages_ready_per_consumer
targetValue: '1'
and this is the service's spec config :
spec:
ports:
- name: worker-metrics
protocol: TCP
port: 8080
targetPort: worker-metrics
selector:
app.kubernetes.io/instance: api
app.kubernetes.io/name: api-name
clusterIP: 10.8.7.9
clusterIPs:
- 10.8.7.9
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
What should I do to make it work?
### Response: Adding this block in my EKS nodes security group rules solved the issue for me:
node_security_group_additional_rules = {
...
ingress_cluster_metricserver = {
description = "Cluster to node 4443 (Metrics Server)"
protocol = "tcp"
from_port = 4443
to_port = 4443
type = "ingress"
source_cluster_security_group = true
}
...
}
| I'm using a HPA based on a custom metric on GKE.
The HPA is not working and it's showing me this error log:
unable to fetch metrics from custom metrics API: the server is currently unable to handle the request
When I run kubectl get apiservices | grep custom I get
v1beta1.custom.metrics.k8s.io services/prometheus-adapter False (FailedDiscoveryCheck) 135d
this is the HPA spec config :
spec:
scaleTargetRef:
kind: Deployment
name: api-name
apiVersion: apps/v1
minReplicas: 3
maxReplicas: 50
metrics:
- type: Object
object:
target:
kind: Service
name: api-name
apiVersion: v1
metricName: messages_ready_per_consumer
targetValue: '1'
and this is the service's spec config :
spec:
ports:
- name: worker-metrics
protocol: TCP
port: 8080
targetPort: worker-metrics
selector:
app.kubernetes.io/instance: api
app.kubernetes.io/name: api-name
clusterIP: 10.8.7.9
clusterIPs:
- 10.8.7.9
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
What should I do to make it work?
|
|
I recreated your deployment and the final networkpolicy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy, you're testing the connection to the service's port (8080), which you changed to the pod's port (80) in your wget command when accessing the pod directly.
Since network policies are a drag to manage, my team and I wanted to automate their creation and open sourced a tool that you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.
It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.
| ###Instruction: I have some pods running that are talking to each other via Kubernetes services and not via the pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.
In this scenario I have two pods:
sleeper, the client
frontend, the server behind a Service called frontend-svc which forwards port 8080 to the pods port 80
Both running in the same namespace: ns
In the sleeper pod I simply wget a ping endpoint in the frontend pod:
wget -qO- http://frontend-svc.ns:8080/api/Ping
Here's my egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-to-frontend-egress
namespace: ns
spec:
podSelector:
matchLabels:
app: sleeper
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: frontend
As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.
Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'
However if I retrieve the pod's ip (using kubectl get po -o wide) and talk to the frontend directly I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (x obviously replaced with values)
My intuition was that it was related to the pod's egress to the Kube-dns being required so I added another egress policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-kube-system
namespace: ns
spec:
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "kube-system"
podSelector: {}
policyTypes:
- Egress
For now I don't want to bother with the exact pod and port, so I allow all pods from the ns namespace to egress to kube-system pods.
However, this didn't help a bit. Even worse: This also breaks the communication by pod ip.
I'm running on Azure Kubernetes with Calico Network Policies.
Any clue what might be the issue, because I'm out of ideas.
After getting it up and running, here's a more locked-down version of the DNS egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-all-pods-dns-egress
namespace: ns
spec:
policyTypes:
- Egress
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
# This label was introduced in version 1.19, if you are running a lower version, label the kube-dns pod manually.
kubernetes.io/metadata.name: "kube-system"
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
### Response: I recreated your deployment and the final networkpolicy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy, you're testing the connection to the service's port (8080), which you changed to the pod's port (80) in your wget command when accessing the pod directly.
Since network policies are a drag to manage, my team and I wanted to automate their creation and open sourced a tool that you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.
It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.
| I have some pods running that are talking to each other via Kubernetes services and not via the pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.
In this scenario I have two pods:
sleeper, the client
frontend, the server behind a Service called frontend-svc which forwards port 8080 to the pods port 80
Both running in the same namespace: ns
In the sleeper pod I simply wget a ping endpoint in the frontend pod:
wget -qO- http://frontend-svc.ns:8080/api/Ping
Here's my egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-to-frontend-egress
namespace: ns
spec:
podSelector:
matchLabels:
app: sleeper
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: frontend
As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.
Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'
However if I retrieve the pod's ip (using kubectl get po -o wide) and talk to the frontend directly I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (x obviously replaced with values)
My intuition was that it was related to the pod's egress to the Kube-dns being required so I added another egress policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-kube-system
namespace: ns
spec:
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "kube-system"
podSelector: {}
policyTypes:
- Egress
For now I don't want to bother with the exact pod and port, so I allow all pods from the ns namespace to egress to kube-system pods.
However, this didn't help a bit. Even worse: This also breaks the communication by pod ip.
I'm running on Azure Kubernetes with Calico Network Policies.
Any clue what might be the issue, because I'm out of ideas.
After getting it up and running, here's a more locked-down version of the DNS egress policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-all-pods-dns-egress
namespace: ns
spec:
policyTypes:
- Egress
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
# This label was introduced in version 1.19, if you are running a lower version, label the kube-dns pod manually.
kubernetes.io/metadata.name: "kube-system"
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
|
|
You can use the --continue exec flag to continue the debugged process on start, which will then lead to continued logs.
So start delve e.g. with:
dlv --listen=:2345 --headless exec your/app --continue
Without the --continue flag, delve will wait for remote connections and halt your application. With the --continue flag, the application will start right away instead.
From dlv help exec:
...
Usage:
dlv exec <path/to/binary> [flags]
Flags:
--continue Continue the debugged process on start.
...
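In your image this could look roughly like the following (a sketch - the binary path /app/fooapp is an assumption, and delve must be available inside the image):
EXPOSE 40000
CMD ["dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp", "--continue"]
With --continue the process starts immediately and keeps writing to stdout, so kubectl logs podname shows your application output again, while the IDE can still attach on port 40000.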
| ###Instruction: I currently have a go app that uses a lot fmt.printf. Whenever that app would run in a pod I was able to get the logs back by doing
kubectl logs podname
However I also needed to integrate remote debugging. I need to use dlv to allow my IDE (GoLand) to remotely connect to the pod. It connects to the Pod at port 40000.
Also when the Pod's image runs it exposes port 40000, i.e. the Dockerfile exposes port 40000.
I also have a service that looks like this in my minikube
apiVersion: v1
kind: Service
metadata:
name: mydebug
spec:
type: ClusterIP
selector:
app: fooapp
ports:
- protocol: TCP
port: 40000
targetPort: 40000
name: delve
Now when I do kubectl logs podname I only get this back
API server listening at: [::]:40000
2022-10-30T21:18:57Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
Is there any way to get my logs back? Ho
### Response: You can use the --continue exec flag to continue the debugged process on start, which will then lead to continued logs.
So start delve e.g. with:
dlv --listen=:2345 --headless exec your/app --continue
Without the --continue flag, delve will wait for remote connections and halt your application. With the --continue flag, the application will start right away instead.
From dlv help exec:
...
Usage:
dlv exec <path/to/binary> [flags]
Flags:
--continue Continue the debugged process on start.
...
| I currently have a go app that uses a lot fmt.printf. Whenever that app would run in a pod I was able to get the logs back by doing
kubectl logs podname
However I also needed to integrate remote debugging. I need to use dlv to allow my IDE (GoLand) to remotely connect to the pod. It connects to the Pod at port 40000.
Also when the Pod's image runs it exposes port 40000, i.e. the Dockerfile exposes port 40000.
I also have a service that looks like this in my minikube
apiVersion: v1
kind: Service
metadata:
name: mydebug
spec:
type: ClusterIP
selector:
app: fooapp
ports:
- protocol: TCP
port: 40000
targetPort: 40000
name: delve
Now when I do kubectl logs podname I only get this back
API server listening at: [::]:40000
2022-10-30T21:18:57Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
Is there any way to get my logs back? Ho
|
|
The error you show about the path not being found should appear when you docker run ... - but it didn't!
So, I assume it is related to the imagePullPolicy. Something was fixed, checked locally and then re-pushed to the given registry for your Kubernetes workloads to use. If it was re-pushed with the same tag, then don't forget to tell Kubernetes to query the registry and download the new digest by changing the imagePullPolicy to Always.
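So in the CronJob from the question the relevant change would be just (sketch):
containers:
- name: job-foo
  image: gcr.io/...
  imagePullPolicy: Always
Alternatively, push each build under a unique tag (or reference the image by digest) so the pull policy no longer matters.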
| ###Instruction: I have a small NodeJS script that I want to run inside a container inside a kubernetes cluster as a CronJob. I'm having a bit of a hard time figuring out how to do that, given most examples are simple "run this Bash command" type deals.
package.json:
{
...
"scripts": {
"start": "node bin/path/to/index.js",
"compile": "tsc"
}
}
npm run compile && npm run start works on the command-line. Moving on to the Docker container setup...
Dockerfile:
FROM node:18
WORKDIR /working/dir/
...
RUN npm run compile
CMD [ "npm", "run", "start" ]
When I build and then docker run this container on the command-line, the script runs successfully. This gives me confidence that most things above are correct and it must be a problem with my CronJob...
my-cron.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
name: cron-foo
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: job-foo
image: gcr.io/...
imagePullPolicy: IfNotPresent
restartPolicy: OnFailure
When I kubectl apply -f my-cron.yaml sure enough I get some pods that run, one per-minute, however they all error out:
% kubectl logs cron-foo-27805019-j8gbp
> mdmp@0.0.1 start
> node bin/path/to/index.js
node:internal/modules/cjs/loader:998
throw err;
^
Error: Cannot find module '/working/dir/bin/path/to/index.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
at Module._load (node:internal/modules/cjs/loader:841:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v18.11.0
The fact that it's trying to run the correct command means the correct Docker container is being pulled successfully, but I don't know why the script is not being found...
Any help would be appreciated. Most CronJob examples I've seen have a command: list in the template spec...
### Response: The error you show about the path not being found should appear when you docker run ... - but it didn't!
So, I assume it is related to the imagePullPolicy. Something was fixed, checked locally and then re-pushed to the given registry for your Kubernetes workloads to use. If it was re-pushed with the same tag, then don't forget to tell Kubernetes to query the registry and download the new digest by changing the imagePullPolicy to Always.
| I have a small NodeJS script that I want to run inside a container inside a kubernetes cluster as a CronJob. I'm having a bit of a hard time figuring out how to do that, given most examples are simple "run this Bash command" type deals.
package.json:
{
...
"scripts": {
"start": "node bin/path/to/index.js",
"compile": "tsc"
}
}
npm run compile && npm run start works on the command-line. Moving on to the Docker container setup...
Dockerfile:
FROM node:18
WORKDIR /working/dir/
...
RUN npm run compile
CMD [ "npm", "run", "start" ]
When I build and then docker run this container on the command-line, the script runs successfully. This gives me confidence that most things above are correct and it must be a problem with my CronJob...
my-cron.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
name: cron-foo
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: job-foo
image: gcr.io/...
imagePullPolicy: IfNotPresent
restartPolicy: OnFailure
When I kubectl apply -f my-cron.yaml sure enough I get some pods that run, one per-minute, however they all error out:
% kubectl logs cron-foo-27805019-j8gbp
> mdmp@0.0.1 start
> node bin/path/to/index.js
node:internal/modules/cjs/loader:998
throw err;
^
Error: Cannot find module '/working/dir/bin/path/to/index.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
at Module._load (node:internal/modules/cjs/loader:841:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v18.11.0
The fact that it's trying to run the correct command means the correct Docker container is being pulled successfully, but I don't know why the script is not being found...
Any help would be appreciated. Most CronJob examples I've seen have a command: list in the template spec...
|
|
I got this error too..
This error occurs when you are working in an organisation environment!
You can reach out to your admin team to enable access to Snowflake from inside your deployment environment!! And if it is happening in a local environment, you need to enable proxies.
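If proxies turn out to be the missing piece, a minimal sketch of what that could look like in the pod spec is below - the proxy host and port are purely hypothetical placeholders, and whether your network stack and the snowflake-sdk honour these variables depends on your setup, so verify with your admin team and the driver docs:
env:
- name: HTTP_PROXY
  value: "http://proxy.example.internal:8080"
- name: HTTPS_PROXY
  value: "http://proxy.example.internal:8080"
- name: NO_PROXY
  value: "localhost,127.0.0.1,.svc,.cluster.local"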
| ###Instruction: I am using node version v17.9.0.
OS is Linux.
I am using Kubernetes and launching a nodejs application to connect with snowflake and execute a query.
version of snowflake-promise is 2.2.0.
version of snowflake-sdk is 1.6.14.
I am getting below error while connecting to snowflake. Can you please suggest.
/app/node_modules/snowflake-sdk/lib/agent/ocsp_response_cache.js:157
cache.set(certId, response);
^
TypeError: Cannot read properties of undefined (reading 'set')
at OcspResponseCache.set (/app/node_modules/snowflake-sdk/lib/agent/ocsp_response_cache.js:157:11)
at /app/node_modules/snowflake-sdk/lib/agent/socket_util.js:232:32
at done (/app/node_modules/snowflake-sdk/lib/agent/check.js:142:5)
at ocspResponseVerify (/app/node_modules/snowflake-sdk/lib/agent/check.js:201:7)
at done (/app/node_modules/snowflake-sdk/lib/agent/check.js:71:7)
at IncomingMessage.<anonymous> (/app/node_modules/snowflake-sdk/lib/agent/check.js:99:7)
at IncomingMessage.emit (node:events:539:35)
at IncomingMessage.emit (node:domain:475:12)
at endReadableNT (node:internal/streams/readable:1345:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
### Response: I got this error too..
This error occurs when you are working in an organisation environment!
You can reach out to your admin team to enable access to Snowflake from inside your deployment environment!! And if it is happening in a local environment, you need to enable proxies.
| I am using node version v17.9.0.
OS is Linux.
I am using Kubernetes and launching a nodejs application to connect with snowflake and execute a query.
version of snowflake-promise is 2.2.0.
version of snowflake-sdk is 1.6.14.
I am getting below error while connecting to snowflake. Can you please suggest.
/app/node_modules/snowflake-sdk/lib/agent/ocsp_response_cache.js:157
cache.set(certId, response);
^
TypeError: Cannot read properties of undefined (reading 'set')
at OcspResponseCache.set (/app/node_modules/snowflake-sdk/lib/agent/ocsp_response_cache.js:157:11)
at /app/node_modules/snowflake-sdk/lib/agent/socket_util.js:232:32
at done (/app/node_modules/snowflake-sdk/lib/agent/check.js:142:5)
at ocspResponseVerify (/app/node_modules/snowflake-sdk/lib/agent/check.js:201:7)
at done (/app/node_modules/snowflake-sdk/lib/agent/check.js:71:7)
at IncomingMessage.<anonymous> (/app/node_modules/snowflake-sdk/lib/agent/check.js:99:7)
at IncomingMessage.emit (node:events:539:35)
at IncomingMessage.emit (node:domain:475:12)
at endReadableNT (node:internal/streams/readable:1345:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
|
|
Putting solution here in case anybody runs into this issue.
If a Load Balancer is present and the YugabyteDB servers' IPs are not resolvable from the voyager machine, then the import data command errors out.
Ideally it should use the load balancer for importing the data.
Use --target-endpoints=LB_HOST:LB_PORT to force the server address.
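For example, the failing command from the question would become something like (LB_HOST/LB_PORT being your load balancer's address and port):
yb-voyager import data --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME} --target-endpoints=${LB_HOST}:${LB_PORT}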
See tickets:
Import data 'Error Resolving name' on local kubernetes #553
Import Data failed if LB is present and cluster servers host is not resolvable #585
| ###Instruction: Following the YugabyteDB Voyager database migration steps (https://docs.yugabyte.com/preview/migrate/migrate-steps/) going from PostgreSQL to YugabyteDB on a local Kubernetes, on Docker Desktop, on WSL2, on Windows.
Using Ubuntu 22.04 on WSL2 to run yb-voyager, I get an error on the Import Data step:
import of data in "postgres" database started
Target YugabyteDB version: 11.2-YB-2.15.2.1-b0
Error Resolving name=yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: lookup yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: no such host
The Import Schema step worked correctly (from using pgAdmin connected to the YugabyteDB), so I know that the database can be connected to. Command used:
yb-voyager import schema --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME}
The command used to import the data, which fails:
yb-voyager import data --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME}
ENV variables:
EXPORT_DIR=/home/abc/db-export
TARGET_DB_HOST=127.0.0.1
TARGET_DB_USER=ybvoyager
TARGET_DB_PASSWORD=password
TARGET_DB_NAME=postgres
Why does the import data fail when the import schema works connecting to the same database?
### Response: Putting solution here in case anybody runs into this issue.
If a Load Balancer is present and the YugabyteDB servers' IPs are not resolvable from the voyager machine, then the import data command errors out.
Ideally it should use the load balancer for importing the data.
Use --target-endpoints=LB_HOST:LB_PORT to force the server address.
See tickets:
Import data 'Error Resolving name' on local kubernetes #553
Import Data failed if LB is present and cluster servers host is not resolvable #585
| Following the YugabyteDB Voyager database migration steps (https://docs.yugabyte.com/preview/migrate/migrate-steps/) going from PostgreSQL to YugabyteDB on a local Kubernetes, on Docker Desktop, on WSL2, on Windows.
Using Ubuntu 22.04 on WSL2 to run yb-voyager, I get an error on the Import Data step:
import of data in "postgres" database started
Target YugabyteDB version: 11.2-YB-2.15.2.1-b0
Error Resolving name=yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: lookup yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: no such host
The Import Schema step worked correctly (from using pgAdmin connected to the YugabyteDB), so I know that the database can be connected to. Command used:
yb-voyager import schema --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME}
The command used to import the data, which fails:
yb-voyager import data --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME}
ENV variables:
EXPORT_DIR=/home/abc/db-export
TARGET_DB_HOST=127.0.0.1
TARGET_DB_USER=ybvoyager
TARGET_DB_PASSWORD=password
TARGET_DB_NAME=postgres
Why does the import data fail when the import schema works connecting to the same database?
|
|
Generally when you see this error message:
"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system)
It most probably means that your mongodb pod did not shut down gracefully and had no time to remove the mongod.lock file, so when your pod was re-created on another k8s node the "new" mongod process cannot start because it finds the previous mongod.lock file.
The easiest way to resolve the current availability issue is to scale up and immediately add one more replicaSet member, so the new member can init-sync from the available good member:
helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=3
so that a primary can be elected again.
You can check whether the mongoDB replicaSet elected a PRIMARY from the mongo shell inside the pod with the command:
rs.status()
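For example, to list each member's state at a glance (still inside the mongo shell; you want exactly one member reporting PRIMARY):
rs.status().members.forEach(m => print(m.name + " : " + m.stateStr))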
For the affected pod with the issue you can do as follows:
You can plan a maintenance window and scale down (scaling down a StatefulSet does not automatically delete the PVC/PV, but it's good to make a backup just in case).
After you scale down, you can start a custom helper pod to mount the PV so you can remove the mongod.lock file.
Temporary pod that you will start to mount the affected dbPath and remove the mongod.lock file:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: mongo-pvc-helper
spec:
securityContext:
runAsUser: 0
containers:
- command:
- sh
- -c
- while true ; do echo alive ; sleep 10 ; done
image: busybox
imagePullPolicy: Always
name: mongo-pvc-helper
resources: {}
securityContext:
capabilities:
drop:
- ALL
volumeMounts:
- mountPath: /mongodata
name: mongodata
volumes:
- name: mongodata
persistentVolumeClaim:
claimName: <your_faulty_pod_pvc_name>
EOF
After you start the pod you can do:
kubectl exec -it mongo-pvc-helper -- sh
$ chown -R 0:0 /mongodata
$ rm /mongodata/mongod.lock
$ exit
Or you can completely wipe the entire PV (if you prefer to safely init-sync this member from scratch):
rm -rf /mongodata/*
And terminate the pod so you can finish the process:
kubectl delete pod mongo-pvc-helper
And scale up again:
helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=2
Btw, it's good to have at least 3 data-bearing members in the replicaSet for better redundancy, so that when a single member goes down an election can still keep a PRIMARY up and running...
How to troubleshoot this further:
Ensure your pods have terminationGracePeriodSeconds set (at least 10-20 sec) so it allows some time for the mongod process to flush data to storage and remove the mongod.lock file (see the sketch after this list).
Depending on pod memory limits/requests, you can set a safer value for storage.wiredTiger.engineConfig.cacheSizeGB (if not set, it allocates ~50% of memory).
Check the kubelet logs from the node where the pod was killed; there may be more details on why the pod was killed.
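A rough sketch of the first two points (30 and 0.5 are just example values; how you pass the mongod setting through the bitnami chart depends on the chart version, so treat that part as an assumption to verify):
# In the pod template spec (plain Kubernetes field):
terminationGracePeriodSeconds: 30
# In mongod.conf (however your chart wires it in, e.g. a custom configuration value):
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.5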
| ###Instruction: I'm running a kubernetes cluster (bare metal) with a mongodb (version 4, as my server cannot handle newer versions) replicaset (2 replicas), which is initially working, but from time to time (sometimes 24 hours, sometimes 10 days) one or more mongodb pods are failing.
Warning BackOff 2m9s (x43454 over 6d13h) kubelet Back-off restarting failed container
The relevant part of the logs should be
DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory
But I do not change anything and initially it is working. Also the second pod is currently running (but it will fail within the next days).
I'm using longhorn (before I tried nfs) for the storage and I installed mongodb using bitnami helm chart with these values:
image:
registry: docker.io
repository: bitnami/mongodb
digest: "sha256:916202d7af766dd88c2fff63bf711162c9d708ac7a3ffccd2aa812e3f03ae209" # tag: 4.4.15
pullPolicy: IfNotPresent
architecture: replicaset
replicaCount: 2
updateStrategy:
type: RollingUpdate
containerPorts:
mongodb: 27017
auth:
enabled: true
rootUser: root
rootPassword: "password"
usernames: ["user"]
passwords: ["userpass"]
databases: ["db"]
service:
portName: mongodb
ports:
mongodb: 27017
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 8Gi
volumePermissions:
enabled: true
livenessProbe:
enabled: false
readinessProbe:
enabled: false
logs
mongodb 21:25:05.55 INFO ==> Advertised Hostname: mongodb-1.mongodb-headless.mongodb.svc.cluster.local
mongodb 21:25:05.55 INFO ==> Advertised Port: 27017
mongodb 21:25:05.56 INFO ==> Pod name doesn't match initial primary pod name, configuring node as a secondary
mongodb 21:25:05.59
mongodb 21:25:05.59 Welcome to the Bitnami mongodb container
mongodb 21:25:05.60 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 21:25:05.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 21:25:05.60
mongodb 21:25:05.60 INFO ==> ** Starting MongoDB setup **
mongodb 21:25:05.64 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 21:25:05.78 INFO ==> Initializing MongoDB...
mongodb 21:25:05.82 INFO ==> Deploying MongoDB with persisted data...
mongodb 21:25:05.83 INFO ==> Writing keyfile for replica set authentication...
mongodb 21:25:05.88 INFO ==> ** MongoDB setup finished! **
mongodb 21:25:05.92 INFO ==> ** Starting MongoDB **
{"t":{"$date":"2022-10-29T21:25:05.961+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2022-10-29T21:25:05.963+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-10-29T21:25:05.969+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mongodb-1"}}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.15","gitVersion":"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9","openSSLVersion":"OpenSSL 1.1.1n 15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 5.15.0-48-generic"}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"enableMajorityReadConcern":true,"replSetName":"rs0"},"security":{"authorization":"disabled","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"setParameter":{"enableLocalhostAuthBypass":"true"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory"}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"REPL", "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
Update
I checked the syslog and before the log line Nov 14 23:07:17 k8s-worker2 kubelet[752]: E1114 23:07:17.749057 752 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mongodb\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mongodb pod=mongodb-2_mongodb(314f2776-ced4-4ba3-b90b-f927dc079770)\"" pod="mongodb/mongodb-2" podUID=314f2776-ced4-4ba3-b90b-f927dc079770
I find these logs:
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341806] sd 2:0:0:1: [sda] tag#42 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=11s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341866] sd 2:0:0:1: [sda] tag#42 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341891] sd 2:0:0:1: [sda] tag#42 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341899] sd 2:0:0:1: [sda] tag#42 CDB: Write(10) 2a 00 00 85 1f b8 00 00 40 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341912] blk_update_request: critical medium error, dev sda, sector 8724408 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.352012] Aborting journal on device sda-8.
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.354980] EXT4-fs error (device sda) in ext4_reserve_inode_write:5726: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.355103] sd 2:0:0:1: [sda] tag#40 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357056] sd 2:0:0:1: [sda] tag#40 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357061] sd 2:0:0:1: [sda] tag#40 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357066] sd 2:0:0:1: [sda] tag#40 CDB: Write(10) 2a 00 00 44 14 88 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357068] blk_update_request: critical medium error, dev sda, sector 4461704 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357088] EXT4-fs error (device sda): ext4_dirty_inode:5922: inode #131080: comm mongod: mark_inode_dirty error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.359566] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 557715)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.361432] EXT4-fs error (device sda) in ext4_dirty_inode:5923: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.362792] Buffer I/O error on device sda, logical block 557713
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.364010] Buffer I/O error on device sda, logical block 557714
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365222] sd 2:0:0:1: [sda] tag#43 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=8s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365228] sd 2:0:0:1: [sda] tag#43 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365230] sd 2:0:0:1: [sda] tag#43 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365233] sd 2:0:0:1: [sda] tag#43 CDB: Write(10) 2a 00 00 44 28 38 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365234] blk_update_request: critical medium error, dev sda, sector 4466744 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367434] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558344)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367442] Buffer I/O error on device sda, logical block 558343
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368593] sd 2:0:0:1: [sda] tag#41 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368597] sd 2:0:0:1: [sda] tag#41 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368599] sd 2:0:0:1: [sda] tag#41 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368602] sd 2:0:0:1: [sda] tag#41 CDB: Write(10) 2a 00 00 44 90 70 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368604] blk_update_request: critical medium error, dev sda, sector 4493424 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370907] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 561680)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370946] sd 2:0:0:1: [sda] tag#39 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] sd 2:0:0:1: [sda] tag#39 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370952] sd 2:0:0:1: [sda] tag#39 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] EXT4-fs error (device sda): ext4_journal_check_start:83: comm kworker/u4:0: Detected aborted journal
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370954] sd 2:0:0:1: [sda] tag#39 CDB: Write(10) 2a 00 00 10 41 98 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.372081] blk_update_request: critical medium error, dev sda, sector 1065368 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374353] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131080 starting block 133172)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374396] Buffer I/O error on device sda, logical block 133171
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.388492] EXT4-fs error (device sda) in __ext4_new_inode:1136: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.390763] EXT4-fs error (device sda) in ext4_create:2786: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.391732] sd 2:0:0:1: [sda] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392941] sd 2:0:0:1: [sda] tag#46 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392944] sd 2:0:0:1: [sda] tag#46 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392948] sd 2:0:0:1: [sda] tag#46 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392950] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.395562] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396945] sd 2:0:0:1: [sda] tag#45 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396953] sd 2:0:0:1: [sda] tag#45 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396955] sd 2:0:0:1: [sda] tag#45 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396958] sd 2:0:0:1: [sda] tag#45 CDB: Write(10) 2a 08 00 84 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396959] blk_update_request: critical medium error, dev sda, sector 8650752 op 0x1:(WRITE) flags 0x20800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396930] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.399771] Buffer I/O error on dev sda, logical block 1081344, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.403897] JBD2: Error -5 detected when updating journal superblock for sda-8.
Nov 14 23:07:01 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.w3zzIL.mount: Deactivated successfully.
Nov 14 23:07:05 k8s-worker2 kubelet[752]: E1114 23:07:05.415798 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 kubelet[752]: E1114 23:07:06.412219 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.nK23K3.mount: Deactivated successfully.
Nov 14 23:07:11 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.L5TkRU.mount: Deactivated successfully.
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411831] sd 2:0:0:1: [sda] tag#44 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411888] sd 2:0:0:1: [sda] tag#44 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411898] sd 2:0:0:1: [sda] tag#44 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411952] sd 2:0:0:1: [sda] tag#44 CDB: Write(10) 2a 00 00 44 28 40 00 00 50 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411965] blk_update_request: critical medium error, dev sda, sector 4466752 op 0x1:(WRITE) flags 0x0 phys_seg 10 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.419273] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558354)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430398] sd 2:0:0:1: [sda] tag#47 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430407] sd 2:0:0:1: [sda] tag#47 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430409] sd 2:0:0:1: [sda] tag#47 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430412] sd 2:0:0:1: [sda] tag#47 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430415] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.433686] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.436088] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444291] sd 2:0:0:1: [sda] tag#32 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=14s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444300] sd 2:0:0:1: [sda] tag#32 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444304] sd 2:0:0:1: [sda] tag#32 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444308] sd 2:0:0:1: [sda] tag#32 CDB: Write(10) 2a 00 00 41 01 18 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444313] blk_update_request: critical medium error, dev sda, sector 4260120 op 0x1:(WRITE) flags 0x3000 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.449491] Buffer I/O error on dev sda, logical block 532515, lost async page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453591] sd 2:0:0:1: [sda] tag#33 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453600] sd 2:0:0:1: [sda] tag#33 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453603] sd 2:0:0:1: [sda] tag#33 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453607] sd 2:0:0:1: [sda] tag#33 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453610] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.459072] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.461189] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.464347] EXT4-fs (sda): Remounting filesystem read-only
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.466527] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131081, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.470833] Buffer I/O error on device sda, logical block 561678
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.473548] Buffer I/O error on device sda, logical block 561679
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.477384] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131083, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.482014] Buffer I/O error on device sda, logical block 558344
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.484881] Buffer I/O error on device sda, logical block 558345
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.487224] Buffer I/O error on device sda, logical block 558346
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.488837] Buffer I/O error on device sda, logical block 558347
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.490543] Buffer I/O error on device sda, logical block 558348
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.492061] Buffer I/O error on device sda, logical block 558349
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.493494] Buffer I/O error on device sda, logical block 558350
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.494931] Buffer I/O error on device sda, logical block 558351
Not sure if this is really related to the problem.
### Response: Generally when you see this error message:
"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system)
It most probably means that your mongodb pod did not shut down gracefully and had no time to remove the mongod.lock file, so when your pod was re-created on another k8s node the "new" mongod process cannot start because it finds the previous mongod.lock file.
The easiest way to resolve the current availability issue is to scale up and immediately add one more replicaSet member, so the new member can init-sync from the available good member:
helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=3
so that a primary can be elected again.
You can check whether the mongoDB replicaSet elected a PRIMARY from the mongo shell inside the pod with the command:
rs.status()
For the affected pod with the issue you can do as follows:
You can plan a maintenance window and scale down (scaling down a StatefulSet does not automatically delete the PVC/PV, but it's good to make a backup just in case).
After you scale down, you can start a custom helper pod to mount the PV so you can remove the mongod.lock file.
Temporary pod that you will start to mount the affected dbPath and remove the mongod.lock file:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: mongo-pvc-helper
spec:
securityContext:
runAsUser: 0
containers:
- command:
- sh
- -c
- while true ; do echo alive ; sleep 10 ; done
image: busybox
imagePullPolicy: Always
name: mongo-pvc-helper
resources: {}
securityContext:
capabilities:
drop:
- ALL
volumeMounts:
- mountPath: /mongodata
name: mongodata
volumes:
- name: mongodata
persistentVolumeClaim:
claimName: <your_faulty_pod_pvc_name>
EOF
After you start the pod you can do:
kubectl exec -it mongo-pvc-helper -- sh
$ chown -R 0:0 /mongodata
$ rm /mongodata/mongod.lock
$ exit
Or you can completely wipe the entire PV (if you prefer to safely init-sync this member from scratch):
rm -rf /mongodata/*
And terminate the pod so you can finish the process:
kubectl delete pod mongo-pvc-helper
And scale up again:
helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=2
Btw, it's good to have at least 3 data-bearing members in the replicaSet for better redundancy, so that when a single member goes down an election can still keep a PRIMARY up and running...
How to troubleshoot this further:
Ensure your pods have terminationGracePeriodSeconds set (at least 10-20 sec) so it allows some time for the mongod process to flush data to storage and remove the mongod.lock file.
Depending on pod memory limits/requests, you can set a safer value for storage.wiredTiger.engineConfig.cacheSizeGB (if not set, it allocates ~50% of memory).
Check the kubelet logs from the node where the pod was killed; there may be more details on why the pod was killed.
| I'm running a kubernetes cluster (bare metal) with a mongodb (version 4, as my server cannot handle newer versions) replicaset (2 replicas), which is initially working, but from time to time (sometimes 24 hours, sometimes 10 days) one or more mongodb pods are failing.
Warning BackOff 2m9s (x43454 over 6d13h) kubelet Back-off restarting failed container
The relevant part of the logs should be
DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory
But I do not change anything and initially it is working. Also the second pod is currently running (but it will fail within the next days).
I'm using longhorn (before I tried nfs) for the storage and I installed mongodb using bitnami helm chart with these values:
image:
registry: docker.io
repository: bitnami/mongodb
digest: "sha256:916202d7af766dd88c2fff63bf711162c9d708ac7a3ffccd2aa812e3f03ae209" # tag: 4.4.15
pullPolicy: IfNotPresent
architecture: replicaset
replicaCount: 2
updateStrategy:
type: RollingUpdate
containerPorts:
mongodb: 27017
auth:
enabled: true
rootUser: root
rootPassword: "password"
usernames: ["user"]
passwords: ["userpass"]
databases: ["db"]
service:
portName: mongodb
ports:
mongodb: 27017
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 8Gi
volumePermissions:
enabled: true
livenessProbe:
enabled: false
readinessProbe:
enabled: false
logs
mongodb 21:25:05.55 INFO ==> Advertised Hostname: mongodb-1.mongodb-headless.mongodb.svc.cluster.local
mongodb 21:25:05.55 INFO ==> Advertised Port: 27017
mongodb 21:25:05.56 INFO ==> Pod name doesn't match initial primary pod name, configuring node as a secondary
mongodb 21:25:05.59
mongodb 21:25:05.59 Welcome to the Bitnami mongodb container
mongodb 21:25:05.60 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 21:25:05.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 21:25:05.60
mongodb 21:25:05.60 INFO ==> ** Starting MongoDB setup **
mongodb 21:25:05.64 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 21:25:05.78 INFO ==> Initializing MongoDB...
mongodb 21:25:05.82 INFO ==> Deploying MongoDB with persisted data...
mongodb 21:25:05.83 INFO ==> Writing keyfile for replica set authentication...
mongodb 21:25:05.88 INFO ==> ** MongoDB setup finished! **
mongodb 21:25:05.92 INFO ==> ** Starting MongoDB **
{"t":{"$date":"2022-10-29T21:25:05.961+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2022-10-29T21:25:05.963+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-10-29T21:25:05.969+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mongodb-1"}}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.15","gitVersion":"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9","openSSLVersion":"OpenSSL 1.1.1n 15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 5.15.0-48-generic"}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"enableMajorityReadConcern":true,"replSetName":"rs0"},"security":{"authorization":"disabled","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"setParameter":{"enableLocalhostAuthBypass":"true"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory"}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"REPL", "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
Update
I checked the syslog and before the log line Nov 14 23:07:17 k8s-worker2 kubelet[752]: E1114 23:07:17.749057 752 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mongodb\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mongodb pod=mongodb-2_mongodb(314f2776-ced4-4ba3-b90b-f927dc079770)\"" pod="mongodb/mongodb-2" podUID=314f2776-ced4-4ba3-b90b-f927dc079770
I find these logs:
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341806] sd 2:0:0:1: [sda] tag#42 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=11s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341866] sd 2:0:0:1: [sda] tag#42 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341891] sd 2:0:0:1: [sda] tag#42 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341899] sd 2:0:0:1: [sda] tag#42 CDB: Write(10) 2a 00 00 85 1f b8 00 00 40 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341912] blk_update_request: critical medium error, dev sda, sector 8724408 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.352012] Aborting journal on device sda-8.
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.354980] EXT4-fs error (device sda) in ext4_reserve_inode_write:5726: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.355103] sd 2:0:0:1: [sda] tag#40 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357056] sd 2:0:0:1: [sda] tag#40 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357061] sd 2:0:0:1: [sda] tag#40 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357066] sd 2:0:0:1: [sda] tag#40 CDB: Write(10) 2a 00 00 44 14 88 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357068] blk_update_request: critical medium error, dev sda, sector 4461704 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357088] EXT4-fs error (device sda): ext4_dirty_inode:5922: inode #131080: comm mongod: mark_inode_dirty error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.359566] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 557715)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.361432] EXT4-fs error (device sda) in ext4_dirty_inode:5923: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.362792] Buffer I/O error on device sda, logical block 557713
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.364010] Buffer I/O error on device sda, logical block 557714
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365222] sd 2:0:0:1: [sda] tag#43 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=8s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365228] sd 2:0:0:1: [sda] tag#43 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365230] sd 2:0:0:1: [sda] tag#43 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365233] sd 2:0:0:1: [sda] tag#43 CDB: Write(10) 2a 00 00 44 28 38 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365234] blk_update_request: critical medium error, dev sda, sector 4466744 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367434] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558344)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367442] Buffer I/O error on device sda, logical block 558343
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368593] sd 2:0:0:1: [sda] tag#41 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368597] sd 2:0:0:1: [sda] tag#41 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368599] sd 2:0:0:1: [sda] tag#41 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368602] sd 2:0:0:1: [sda] tag#41 CDB: Write(10) 2a 00 00 44 90 70 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368604] blk_update_request: critical medium error, dev sda, sector 4493424 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370907] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 561680)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370946] sd 2:0:0:1: [sda] tag#39 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] sd 2:0:0:1: [sda] tag#39 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370952] sd 2:0:0:1: [sda] tag#39 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] EXT4-fs error (device sda): ext4_journal_check_start:83: comm kworker/u4:0: Detected aborted journal
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370954] sd 2:0:0:1: [sda] tag#39 CDB: Write(10) 2a 00 00 10 41 98 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.372081] blk_update_request: critical medium error, dev sda, sector 1065368 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374353] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131080 starting block 133172)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374396] Buffer I/O error on device sda, logical block 133171
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.388492] EXT4-fs error (device sda) in __ext4_new_inode:1136: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.390763] EXT4-fs error (device sda) in ext4_create:2786: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.391732] sd 2:0:0:1: [sda] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392941] sd 2:0:0:1: [sda] tag#46 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392944] sd 2:0:0:1: [sda] tag#46 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392948] sd 2:0:0:1: [sda] tag#46 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392950] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.395562] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396945] sd 2:0:0:1: [sda] tag#45 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396953] sd 2:0:0:1: [sda] tag#45 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396955] sd 2:0:0:1: [sda] tag#45 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396958] sd 2:0:0:1: [sda] tag#45 CDB: Write(10) 2a 08 00 84 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396959] blk_update_request: critical medium error, dev sda, sector 8650752 op 0x1:(WRITE) flags 0x20800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396930] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.399771] Buffer I/O error on dev sda, logical block 1081344, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.403897] JBD2: Error -5 detected when updating journal superblock for sda-8.
Nov 14 23:07:01 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.w3zzIL.mount: Deactivated successfully.
Nov 14 23:07:05 k8s-worker2 kubelet[752]: E1114 23:07:05.415798 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 kubelet[752]: E1114 23:07:06.412219 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.nK23K3.mount: Deactivated successfully.
Nov 14 23:07:11 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.L5TkRU.mount: Deactivated successfully.
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411831] sd 2:0:0:1: [sda] tag#44 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411888] sd 2:0:0:1: [sda] tag#44 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411898] sd 2:0:0:1: [sda] tag#44 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411952] sd 2:0:0:1: [sda] tag#44 CDB: Write(10) 2a 00 00 44 28 40 00 00 50 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411965] blk_update_request: critical medium error, dev sda, sector 4466752 op 0x1:(WRITE) flags 0x0 phys_seg 10 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.419273] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558354)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430398] sd 2:0:0:1: [sda] tag#47 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430407] sd 2:0:0:1: [sda] tag#47 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430409] sd 2:0:0:1: [sda] tag#47 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430412] sd 2:0:0:1: [sda] tag#47 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430415] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.433686] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.436088] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444291] sd 2:0:0:1: [sda] tag#32 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=14s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444300] sd 2:0:0:1: [sda] tag#32 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444304] sd 2:0:0:1: [sda] tag#32 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444308] sd 2:0:0:1: [sda] tag#32 CDB: Write(10) 2a 00 00 41 01 18 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444313] blk_update_request: critical medium error, dev sda, sector 4260120 op 0x1:(WRITE) flags 0x3000 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.449491] Buffer I/O error on dev sda, logical block 532515, lost async page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453591] sd 2:0:0:1: [sda] tag#33 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453600] sd 2:0:0:1: [sda] tag#33 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453603] sd 2:0:0:1: [sda] tag#33 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453607] sd 2:0:0:1: [sda] tag#33 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453610] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.459072] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.461189] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.464347] EXT4-fs (sda): Remounting filesystem read-only
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.466527] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131081, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.470833] Buffer I/O error on device sda, logical block 561678
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.473548] Buffer I/O error on device sda, logical block 561679
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.477384] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131083, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.482014] Buffer I/O error on device sda, logical block 558344
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.484881] Buffer I/O error on device sda, logical block 558345
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.487224] Buffer I/O error on device sda, logical block 558346
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.488837] Buffer I/O error on device sda, logical block 558347
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.490543] Buffer I/O error on device sda, logical block 558348
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.492061] Buffer I/O error on device sda, logical block 558349
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.493494] Buffer I/O error on device sda, logical block 558350
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.494931] Buffer I/O error on device sda, logical block 558351
Not sure if this is really related to the problem.
|
|
Thank you for the replies. I just tried again and now it is working; I don't know why, as I did not make any other changes.
| ###Instruction: Since I lost data because I removed (accidentally) a namespace including a persistent volume, I am trying to enable Backup for my GKE cluster.
But after I try to enable it I get the message:
'Etcd alias ip must be enabled for Backup for GKE addon.'
I cannot find anything when searching for this, and I do not know (or understand) how to enable the Etcd alias IP. Does someone know what I can do?
### Response: Thank you for the replies. I just tried again and now it is working; I don't know why, as I did not make any other changes.
| Since I lost data because I removed (accidentally) a namespace including a persistent volume, I am trying to enable Backup for my GKE cluster.
But after I try to enable it I get the message:
'Etcd alias ip must be enabled for Backup for GKE addon.'
I cannot find anything when searching for this, and I do not know (or understand) how to enable the Etcd alias IP. Does someone know what I can do?
|
|
Some pip-installed packages are not safe to be compressed into a zip archive. For example, the Airflow v1.10.15 that was used was not zip-safe (not sure about newer versions).
| ###Instruction: I use Spark 2.4.0 + K8s cluster deployment mode + python 3.5.
I pack all libraries into a zip archive, upload it to AWS S3, and then attach it to the context:
sc = pyspark.SparkContext(appName=args.job_name, environment=environment)
sc.addPyFile('s3a://.../libs.zip')
sc.addPyFile('s3a://.../code.zip')
Imports work; I can import any package. But if I import a package that reads some files from package-related folders, I get this error:
NotADirectoryError: [Errno 20] Not a directory: '/var/data/spark-ce45d34b-8d2f-4fd0-b3d6-d53ecede8ef1/spark-6ce9d14f-3d90-4c3c-ba2d-9dd6ddf32457/userFiles-08e6e9ec-03fa-447d-930f-bf1bd520f55a/libs.zip/airflow/config_templates/default_airflow.cfg'
How could I solve it?
PS: Using sc.addFile('s3a:/..') and unzipping does not work because Spark is running in cluster mode.
UPDATE:
I've temporarily solved this by installing all the packages I need into the Docker container I'm using for the Spark workers.
### Response: Some pip-installed packages are not safe to be compressed into a zip archive. For example, the Airflow v1.10.15 that was used was not zip-safe (not sure about newer versions).
| I use Spark 2.4.0 + K8s cluster deployment mode + python 3.5.
I pack all libraries into a zip archive, upload it to AWS S3, and then attach it to the context:
sc = pyspark.SparkContext(appName=args.job_name, environment=environment)
sc.addPyFile('s3a://.../libs.zip')
sc.addPyFile('s3a://.../code.zip')
Imports work; I can import any package. But if I import a package that reads some files from package-related folders, I get this error:
NotADirectoryError: [Errno 20] Not a directory: '/var/data/spark-ce45d34b-8d2f-4fd0-b3d6-d53ecede8ef1/spark-6ce9d14f-3d90-4c3c-ba2d-9dd6ddf32457/userFiles-08e6e9ec-03fa-447d-930f-bf1bd520f55a/libs.zip/airflow/config_templates/default_airflow.cfg'
How could I solve it?
PS: Using sc.addFile('s3a:/..') and unzipping does not work because Spark is running in cluster mode.
UPDATE:
I've temporarily solved this by installing all the packages I need into the Docker container I'm using for the Spark workers.
|
|
You can use CronJob for that.
Since you have a single application with all the cron job code, you can modify the C# application to use an environment variable to decide the job type. This way you can create a CronJob resource for each job you want.
Notification Job
apiVersion: batch/v1
kind: CronJob
metadata:
name: notification-job
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: notification-job
image: org/my-csharp-cron-app:latest
imagePullPolicy: IfNotPresent
env:
- name: JOB_TYPE
value: "NotificationJob"
restartPolicy: OnFailure
Logger Job
apiVersion: batch/v1
kind: CronJob
metadata:
name: logger-job
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: logger-job
image: org/my-csharp-cron-app:latest
imagePullPolicy: IfNotPresent
env:
- name: JOB_TYPE
value: "LoggerJob"
restartPolicy: OnFailure
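One optional refinement — an assumption about your requirements, not something stated in the question: if a run can take longer than the schedule interval, overlapping runs could send duplicate SMS. The CronJob spec supports a concurrencyPolicy field for that, so a hedged fragment of either spec above could be:
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid  # skip the new run if the previous one is still running
  jobTemplate:
    # ... same jobTemplate as above ...
Forbid simply skips the overlapping run; Replace would cancel the running one instead.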
| ###Instruction: I have a program in which I have set up several jobs. According to the figure, these jobs are executed every day at a certain time and, for example, send an SMS to a group of numbers.
When I deploy this to Kubernetes, multiple copies are created.
I want to know: do all these original and replica copies run the jobs and send the SMS? Ideally one SMS should be sent to each number, not several SMS messages to the same number.
My question is, how does Kubernetes deal with these programs and how should we deploy them correctly?
I have read various articles but I don't know which is the right way.
### Response: You can use CronJob for that.
Since you have a single application with all the cron job code, you can modify the C# application to use an environment variable to decide the job type. This way you can create a CronJob resource for each job you want.
Notification Job
apiVersion: batch/v1
kind: CronJob
metadata:
name: notification-job
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: notification-job
image: org/my-csharp-cron-app:latest
imagePullPolicy: IfNotPresent
env:
- name: JOB_TYPE
value: "NotificationJob"
restartPolicy: OnFailure
Logger Job
apiVersion: batch/v1
kind: CronJob
metadata:
name: logger-job
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: logger-job
image: org/my-csharp-cron-app:latest
imagePullPolicy: IfNotPresent
env:
- name: JOB_TYPE
value: "LoggerJob"
restartPolicy: OnFailure
| I have a program in which I have set up several jobs. According to the figure, these jobs are executed every day at a certain time and, for example, send an SMS to a group of numbers.
When I deploy this to Kubernetes, multiple copies are created.
I want to know: do all these original and replica copies run the jobs and send the SMS? Ideally one SMS should be sent to each number, not several SMS messages to the same number.
My question is, how does Kubernetes deal with these programs and how should we deploy them correctly?
I have read various articles but I don't know which is the right way.
|
|
The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.
The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic not allowed by that policy is blocked.
In other words, you can't say something like "deny this traffic, allow all the rest". You have to effectively say, "allow all the rest".
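As a quick illustration of that behaviour (a generic sketch, not taken from the original answer), a policy that selects every pod but allows nothing effectively becomes a default-deny for ingress:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}     # selects every pod in the namespace
  policyTypes:
  - Ingress           # no ingress rules are listed, so no ingress is allowed
Any traffic you still want then has to be allowed by some other policy.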
The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods. This means that the traffic originates from an AWS IP address outside the cluster.
For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.
If you are trying to allow traffic from the Internet using the ALB, then it means anyone that can reach the ALB will be able to reach your pods. In that case, there's effectively no meaning to blocking traffic within the cluster, as the pods will be able to connect to the ALB, even if they can't connect directly.
My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.
Any other traffic to these pods will need to be explicitly allowed.
For example:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
spec:
podSelector:
matchLabels:
app: <your-app> # app-label
ingress:
- from: []
ports:
- port: 1234 # the port which should be Internet-accessible
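For the specific case in the question — allowing everything from pods in the same namespace plus the ALB-exposed port — a hedged sketch of a single combined policy might look like this (the namespace and labels are taken from the question; the policy name is just illustrative):
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-and-alb
  namespace: os
spec:
  podSelector:
    matchLabels:
      app: nginx
      tier: prod
      customer: os
  ingress:
  - from:
    - podSelector: {}   # any pod in the same namespace
  - ports:
    - port: 80          # the port the ALB forwards to, open to any source
The second rule still opens port 80 to any source, which, as discussed above, is hard to avoid when the ALB sits outside the cluster.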
This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.
We settled on automatically detecting whether a Service resource of type LoadBalancer or NodePort exists, or an Ingress resource, and creating a network policy that allows all traffic to those ports, as in the example above. A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but could be nginx ingress controller, for example), and only allowing traffic from the specific ingress pods.
Have a look here: https://github.com/otterize/intents-operator
And the documentation page explaining this: https://docs.otterize.com/components/intents-operator/#network-policies
If you wanna use this and add support for a specific Ingress controller you're using, hop onto the Slack or open an issue and we can work on it together.
| ###Instruction: I am trying to write a network policy on Kubernetes that works under AWS EKS. What I want to achieve is to allow traffic to pod/pods from the same Namespace and allow external traffic that is forwarded from AWS ALB Ingress.
The AWS ALB Ingress is created under the same namespace, so I was thinking that only using DENY all traffic from other namespaces would suffice, but when I use that, traffic from the ALB Ingress load balancer (whose internal IP addresses are in the same namespace as the pod/pods) is not allowed. Then if I add ALLOW traffic from external clients, it allows the Ingress but ALSO allows other namespaces too.
So my example is like: (this does not work as expected)
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces
namespace: os
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
namespace: os
spec:
podSelector:
matchLabels:
app: nginx
tier: prod
customer: os
ingress:
- ports:
- port: 80
from: []
When using the first policy, ALB Ingress traffic is blocked; after adding the second one, other namespaces are also allowed, which I don't want. I could allow only the internal IP address of the AWS ALB Ingress, but it can change over time and it is created dynamically.
### Response: The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.
The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic not allowed by that policy is blocked.
In other words, you can't say something like "deny this traffic, allow all the rest". You have to effectively say, "allow all the rest".
The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods. This means that the traffic originates from an AWS IP address outside the cluster.
For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.
If you are trying to allow traffic from the Internet using the ALB, then it means anyone that can reach the ALB will be able to reach your pods. In that case, there's effectively no meaning to blocking traffic within the cluster, as the pods will be able to connect to the ALB, even if they can't connect directly.
My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.
Any other traffic to these pods will need to be explicitly allowed.
For example:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
spec:
podSelector:
matchLabels:
app: <your-app> # app-label
ingress:
- from: []
ports:
- port: 1234 # the port which should be Internet-accessible
This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.
We settled on automatically detecting whether a Service resource of type LoadBalancer or NodePort exists, or an Ingress resource, and creating a network policy that allows all traffic to those ports, as in the example above. A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but could be nginx ingress controller, for example), and only allowing traffic from the specific ingress pods.
Have a look here: https://github.com/otterize/intents-operator
And the documentation page explaining this: https://docs.otterize.com/components/intents-operator/#network-policies
If you wanna use this and add support for a specific Ingress controller you're using, hop onto the Slack or open an issue and we can work on it together.
| I am trying to write a network policy on Kubernetes that works under AWS EKS. What I want to achieve is to allow traffic to pod/pods from the same Namespace and allow external traffic that is forwarded from AWS ALB Ingress.
The AWS ALB Ingress is created under the same namespace, so I was thinking that only using DENY all traffic from other namespaces would suffice, but when I use that, traffic from the ALB Ingress load balancer (whose internal IP addresses are in the same namespace as the pod/pods) is not allowed. Then if I add ALLOW traffic from external clients, it allows the Ingress but ALSO allows other namespaces too.
So my example is like: (this does not work as expected)
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces
namespace: os
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
namespace: os
spec:
podSelector:
matchLabels:
app: nginx
tier: prod
customer: os
ingress:
- ports:
- port: 80
from: []
When using the first policy, ALB Ingress traffic is blocked; after adding the second one, other namespaces are also allowed, which I don't want. I could allow only the internal IP address of the AWS ALB Ingress, but it can change over time and it is created dynamically.
|