<p>I was checking the Kubernetes <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">documentation</a> for pulling images. In that, I saw two policies, IfNotPresent and Always. For &quot;Always&quot;, it's stated that</p> <blockquote> <p>If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.</p> </blockquote> <p>I am unable to understand what &quot;local&quot; means here. Is it a node, pod, or cluster? What is the difference between Always and IfNotPresent if it is at node level? It's very confusing.</p>
<p>&quot;Local&quot; here refers to the image cache on the individual worker node (managed by the kubelet and its container runtime), not the pod or the cluster.</p> <p>When you use an image WITHOUT a tag, Kubernetes will assume that you want the latest version of the image, which is identified by the latest tag by default. If you have multiple versions of the same image in your repository with different tags, such as img1:1.0.0, img1:1.1.0, and img1:latest, Kubernetes will use the image with the tag specified in the pod specification.</p> <p>If you use IfNotPresent and the image with the specified tag is already present on the worker node, Kubernetes will use that image to start the container, even if there is a newer version of the image available in the repository with the same tag.</p> <p>If you use Always, however, Kubernetes will always contact the registry to resolve the specified tag, even if a cached copy of the image is already present on the worker node. This can be useful if you want to ensure that your containers are always running the latest version of the image.</p> <p>Consider a scenario where a container is running on a worker node with img1:latest, and the container then restarts or reschedules onto another worker node where the same tag points to an older cached version of the image: IfNotPresent will use the local image present on that worker node, while Always will pull the latest version of the image from the repository.</p> <p>However, it's important to note that the behavior of Always is based on the digest of the image, not the tag. The digest is a unique identifier for a specific version of an image that is based on the content of the image. When you specify Always, Kubernetes will check the digest of the image on the worker node against the digest of the latest version of the image in the repository with the same tag. If the digests match, Kubernetes will use the cached copy of the image on the worker node. If the digests differ, Kubernetes will download the latest version of the image from the repository and use it to start the container.</p>
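<p>For illustration, the policy is set per container in the pod spec; a minimal sketch (the image and names reuse the hypothetical img1 example above) could look like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: img1-demo
spec:
  containers:
    - name: app
      image: img1:1.1.0
      # Always: the kubelet re-resolves the tag to a digest on every container start
      # IfNotPresent: the image cached on this node for the tag is reused if it exists
      imagePullPolicy: Always
</code></pre>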
<p>By default, creating a managed certificate object on GKE creates a managed certificate of type &quot;Load Balancer Authorization&quot;. How can I create one with DNS authorization through GKE?</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs</a></p> <pre><code>apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert spec: domains: - DOMAIN_NAME1 - DOMAIN_NAME2 </code></pre> <p>I want to add wildcard domains and this is only possible with DNS authorization.</p> <p><a href="https://stackoverflow.com/questions/73734679/how-to-generate-google-managed-certificates-for-wildcard-hostnames-in-gcp">How to generate Google-managed certificates for wildcard hostnames in GCP?</a></p>
<p>You won't be able to use the current <code>ManagedCertificate</code> CRD to generate wildcard certificates. Wildcard certificates are only supported by <a href="https://cloud.google.com/certificate-manager/docs/overview" rel="nofollow noreferrer">Google Cloud Certificate Manager</a>. <code>ManagedCertificate</code> actually creates the older Google Managed SSL Certificates which do not support wildcards.</p> <p>Currently, the only supported ways to create wildcard certificates would be via CLI (gcloud), API or Terraform. There is currently no CRD for them available directly in GKE. Also, note that Certificate Manager only integrates with the Gateway API and not with Ingress.</p>
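<p>For reference, a rough sketch of the gcloud route (resource and domain names are placeholders; double-check the Certificate Manager documentation for the exact flags in your gcloud version):</p> <pre class="lang-bash prettyprint-override"><code># create a DNS authorization for the parent domain
gcloud certificate-manager dns-authorizations create my-dns-auth \
    --domain=&quot;example.com&quot;

# issue a Google-managed certificate covering the wildcard, validated via that DNS authorization
gcloud certificate-manager certificates create my-wildcard-cert \
    --domains=&quot;example.com,*.example.com&quot; \
    --dns-authorizations=my-dns-auth
</code></pre>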
<p>I have a Kafka Cluster on GKE, and I'm using Apache Spark on Dataproc to access the Kafka Cluster. The Dataproc cluster is a private cluster, i.e. --no-address is specified when creating the Dataproc cluster, which means it does not have a public IP (required due to compliance).</p> <p>Also, the Kafka cluster is SSL-enabled.</p> <p>Note : GKE &amp; Dataproc are in the same VPC/project &amp; region</p> <p>We have a NAT created, which allows Spark on Dataproc to access Kafka on GKE (using the public IP on the Kafka brokers). Without the NAT, Spark is not able to connect to Kafka on GKE - even though they are on the same VPC &amp; region.</p> <p>We would like to get rid of the NAT to reduce costs; any ideas on how to architect/instrument this?</p> <p>Here is the script used for creating the Dataproc cluster:</p> <pre><code># --no-address - enables it to be a private TYPE=n1-highmem-4 CNAME=&lt;cluster-name&gt; BUCKET=&lt;gcp-bucket&gt; REGION=us-east1 ZONE=us-east1-b IMG_VERSION=2.0-ubuntu18 PROJECT=versa-sml-googl NUM_WORKER=3 gcloud beta dataproc clusters create $CNAME \ --enable-component-gateway \ --bucket $BUCKET \ --region $REGION \ --zone $ZONE \ --no-address --master-machine-type $TYPE \ --master-boot-disk-size 100 \ --master-boot-disk-type pd-ssd \ --num-workers $NUM_WORKER \ --worker-machine-type $TYPE \ --worker-boot-disk-type pd-ssd \ --worker-boot-disk-size 500 \ --image-version $IMG_VERSION \ --scopes 'https://www.googleapis.com/auth/cloud-platform' \ --project $PROJECT \ --initialization-actions 'gs://dataproc-spark-configs/pip_install_versions.sh' \ --metadata 'gcs-connector-version=2.0.0' \ --properties 'dataproc:dataproc.logging.stackdriver.job.driver.enable=true,dataproc:dataproc.conscrypt.provider.enable=false,dataproc:job.history.to-gcs.enabled=true,spark:spark.dynamicAllocation.enabled=true,spark:spark.eventLog.enabled=true,spark:spark.eventLog.dir=gs://dataproc-spark-logs/eventlogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/jobhistorylogs' </code></pre>
<p>I'm not sure if you have <strong>whitelisted</strong> the <strong>IP</strong> of the <strong>Spark cluster's NAT</strong> on the <strong>Kafka GKE</strong> side; I think the request is routed out to the <strong>internet</strong> through the <strong>NAT</strong> and then back in to <strong>Kafka</strong>.</p> <p>If both are in the same <strong>VPC</strong>, what you can do is expose the <strong>Kafka</strong> <strong>service</strong> on <strong>GKE</strong> with an <strong>internal (private) load balancer</strong>. Once the <strong>SVC</strong> in <strong>GKE</strong> has a <strong>private LB</strong> attached, you will be able to access <strong>Kafka</strong> from anywhere in the <strong>VPC</strong> without using the <strong>NAT</strong>.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: ilb-svc annotations: networking.gke.io/load-balancer-type: &quot;Internal&quot; spec: type: LoadBalancer externalTrafficPolicy: Cluster selector: app: ilb-deployment ports: - name: tcp-port protocol: TCP port: 8080 targetPort: 8080 </code></pre> <p>If you are using an <strong>Ingress controller</strong> such as <strong>Nginx</strong> to expose <strong>Kafka</strong> with a <strong>TCP port proxy (static port)</strong> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">Read More</a>, you can attach the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">internal LB</a> to the <strong>Nginx ingress</strong> instead of attaching it to the <strong>Kafka</strong> <strong>service</strong>.</p>
<p>The version of Kubernetes in use is v1.26.3.</p> <p>I have cloned the Kubernetes source code to peruse the scheduler logic.</p> <p>Upon inspection of the framework.go file, I have observed the existence of a frameworkImpl struct. I am curious as to how I can ascertain which plugins have been registered.</p> <pre><code> I am curious as to how I can ascertain which plugins have been registered // NewFramework initializes plugins given the configuration and the registry. func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh &lt;-chan struct{}, opts ...Option) (framework.Framework, error) { options := defaultFrameworkOptions(stopCh) for _, opt := range opts { opt(&amp;options) } f := &amp;frameworkImpl{ registry: r, snapshotSharedLister: options.snapshotSharedLister, scorePluginWeight: make(map[string]int), waitingPods: newWaitingPodsMap(), clientSet: options.clientSet, kubeConfig: options.kubeConfig, eventRecorder: options.eventRecorder, informerFactory: options.informerFactory, metricsRecorder: options.metricsRecorder, extenders: options.extenders, PodNominator: options.podNominator, parallelizer: options.parallelizer, } if profile == nil { return f, nil } f.profileName = profile.SchedulerName f.percentageOfNodesToScore = profile.PercentageOfNodesToScore if profile.Plugins == nil { return f, nil } // get needed plugins from config pg := f.pluginsNeeded(profile.Plugins) pluginConfig := make(map[string]runtime.Object, len(profile.PluginConfig)) for i := range profile.PluginConfig { name := profile.PluginConfig[i].Name if _, ok := pluginConfig[name]; ok { return nil, fmt.Errorf(&quot;repeated config for plugin %s&quot;, name) } pluginConfig[name] = profile.PluginConfig[i].Args } outputProfile := config.KubeSchedulerProfile{ SchedulerName: f.profileName, PercentageOfNodesToScore: f.percentageOfNodesToScore, Plugins: profile.Plugins, PluginConfig: make([]config.PluginConfig, 0, len(pg)), } pluginsMap := make(map[string]framework.Plugin) for name, factory := range r { // initialize only needed plugins. if !pg.Has(name) { continue } args := pluginConfig[name] if args != nil { outputProfile.PluginConfig = append(outputProfile.PluginConfig, config.PluginConfig{ Name: name, Args: args, }) } p, err := factory(args, f) if err != nil { return nil, fmt.Errorf(&quot;initializing plugin %q: %w&quot;, name, err) } pluginsMap[name] = p // Update ClusterEventMap in place. fillEventToPluginMap(p, options.clusterEventMap) } // initialize plugins per individual extension points for _, e := range f.getExtensionPoints(profile.Plugins) { if err := updatePluginList(e.slicePtr, *e.plugins, pluginsMap); err != nil { return nil, err } } // initialize multiPoint plugins to their expanded extension points if len(profile.Plugins.MultiPoint.Enabled) &gt; 0 { if err := f.expandMultiPointPlugins(profile, pluginsMap); err != nil { return nil, err } } if len(f.queueSortPlugins) != 1 { return nil, fmt.Errorf(&quot;only one queue sort plugin required for profile with scheduler name %q, but got %d&quot;, profile.SchedulerName, len(f.queueSortPlugins)) } if len(f.bindPlugins) == 0 { return nil, fmt.Errorf(&quot;at least one bind plugin is needed for profile with scheduler name %q&quot;, profile.SchedulerName) } if err := getScoreWeights(f, pluginsMap, append(profile.Plugins.Score.Enabled, profile.Plugins.MultiPoint.Enabled...)); err != nil { return nil, err } // Verifying the score weights again since Plugin.Name() could return a different // value from the one used in the configuration. 
for _, scorePlugin := range f.scorePlugins { if f.scorePluginWeight[scorePlugin.Name()] == 0 { return nil, fmt.Errorf(&quot;score plugin %q is not configured with weight&quot;, scorePlugin.Name()) } } if options.captureProfile != nil { if len(outputProfile.PluginConfig) != 0 { sort.Slice(outputProfile.PluginConfig, func(i, j int) bool { return outputProfile.PluginConfig[i].Name &lt; outputProfile.PluginConfig[j].Name }) } else { outputProfile.PluginConfig = nil } options.captureProfile(outputProfile) } return f, nil } </code></pre> <p>I am curious as to how I can ascertain which plugins have been registered</p>
<p>Actually, I'm trying to find out what plugins are used by default too.</p> <p>The Kubernetes version I'm using is 1.20.6.</p> <p>I found it pretty hard to get the answer from the source code.</p> <p>But you can export the configuration used by the scheduler instance by adding an argument to kube-scheduler.yaml:</p> <ul> <li>--write-config-to=/path/to/hostpath/config/file</li> </ul> <p>ATTENTION:</p> <ul> <li>--v should be &gt;= 2</li> <li>if --write-config-to succeeds, the scheduler will exit with code 0, so remove this argument after you export the config file</li> </ul>
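<p>As a rough sketch, on a kubeadm cluster that means editing the static pod manifest; the dump path below is only an example and must be backed by a hostPath mount so the file actually lands on the node:</p> <pre class="lang-yaml prettyprint-override"><code># /etc/kubernetes/manifests/kube-scheduler.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-scheduler
    - --v=2
    - --write-config-to=/etc/kubernetes/scheduler-dump/config.yaml  # example path, mounted via hostPath
</code></pre>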
<p>I'm working with Prometheus alerts, and I would like to dynamically add a 'team' label to all of my alerts based on a regex pattern. I have an example alert:</p> <pre><code>expr: label_replace(label_replace(increase(kube_pod_container_status_restarts_total{job=&quot;kube-state-metrics&quot;,namespace=~&quot;.*&quot;,pod!~&quot;app-test-.*&quot;}[30m]) &gt; 2, &quot;team&quot;, &quot;data&quot;, &quot;container&quot;, &quot;.*test.*&quot;), &quot;team&quot;, &quot;data&quot;, &quot;pod&quot;, &quot;.*test.*&quot;) </code></pre> <p>This example alert adds the 'team' label with the value 'data' for metrics matching the regex pattern &quot;.*test.*&quot; in the 'container' and 'pod' labels.</p> <p>However, I want to apply this logic to all of my alerts, not just this specific one. Is there a way to do this dynamically in Prometheus or Alertmanager? Any guidance would be appreciated.</p> <p>I tried using the <strong>label_replace</strong> function in the expression of the alert, and it worked as expected for the specific alert mentioned above. I was expecting to find a way to apply this label addition to all of my alerts without having to modify each alert expression individually.</p> <p>Is there a way to achieve this? Any help or guidance would be greatly appreciated.</p>
<p>AFAIK, there is no possibility to add labels to your alerts based on a condition without rewriting all rules.</p> <p>The best solution for your exact question is to create separate alerts for all environments/teams/conditions and just add static labels.</p> <p>Something along the lines of</p> <pre class="lang-yaml prettyprint-override"><code> - alert: many_restarts_data expr: increase(kube_pod_container_status_restarts_total{job=&quot;kube-state-metrics&quot;,namespace=~&quot;.*&quot;,pod!~&quot;app-test-.*&quot;, container=~&quot;.*test.*&quot;}[30m]) &gt; 2 labels: team: data - alert: many_restarts_sre expr: increase(kube_pod_container_status_restarts_total{job=&quot;kube-state-metrics&quot;,namespace=~&quot;.*&quot;,pod!~&quot;app-test-.*&quot;, container=~&quot;.*prod.*&quot;}[30m]) &gt; 2 labels: team: sre </code></pre> <p>But it will require multiplying the number of alerts by the number of teams.</p> <p>I would argue a way easier solution is to use the routing capabilities of Alertmanager (or PagerDuty if it provides similar functionality). This way you write the criteria for which alerts with which labels should be routed to which teams in the Alertmanager configuration, and it works independently from the alert creation part.</p> <pre class="lang-yaml prettyprint-override"><code> routes: - matchers: - container =~ &quot;.*test.*&quot; - severity =~ &quot;.*test.*&quot; - alertname =~ &quot;my_alert_1|my_alert_2&quot; receiver: team-data - matchers: - container =~ &quot;.*prod.*&quot; - severity =~ &quot;.*prod.*&quot; - alertname =~ &quot;my_alert_1|my_alert_2&quot; receiver: team-sre </code></pre>
<p>Currently, network policies in Kubernetes allow you to control egress and ingress on the pod. However, if two containers run in the same pod, there is no way to have distinct network policies for each container.</p> <p>I am trying to implement a Kafka consumer, which is able to read messages from a broker which is hosted in our private subnet and then dispatch this request to the side container, which runs untrusted code designed by random users on the web. Since there is no way to restrict container communication with a policy, this untrusted code can reach our Kafka broker.</p> <p>I understand that this can be limited by enabling authentication on Kafka. However, the service would still be exposed to an untrusted container.</p> <p>Is there any way to stop this from happening? We have explored Kata containers, Istio + Envoy and Cilium, none of which seem to help solve this problem.</p>
<blockquote> <p>this can be limited by enabling authentication on Kafka</p> </blockquote> <p>You would need to enable asymmetric encryption (SSL / Kerberos) between Kafka and any client. This way, you would delegate public keys to any trusted client, while any untrusted code would be unable to connect without a valid key pair. Also, the encryption would prevent untrusted code from packet-sniffing the network data local to that container/pod/host. None of this requires/involves Kubernetes.</p> <ul> <li><a href="https://kafka.apache.org/documentation/#security" rel="nofollow noreferrer">https://kafka.apache.org/documentation/#security</a></li> <li><a href="https://docs.confluent.io/platform/current/kafka/encryption.html#encrypt-with-tls" rel="nofollow noreferrer">https://docs.confluent.io/platform/current/kafka/encryption.html#encrypt-with-tls</a></li> </ul> <p>Beyond that, run the untrusted code in the container under a limited-access (non-root) user account, and follow the regular security best practices you should be applying in containers anyway.</p> <p>If you need finer network policies in the container, install <code>iptables</code>, for example.</p>
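<p>For illustration, a trusted client would then hold key material that the untrusted container is never given; a minimal mutual-TLS client configuration sketch (paths and passwords are placeholders) might look like:</p> <pre><code># client.properties - sketch for a mutual-TLS Kafka client
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
# the client key pair that untrusted code must never receive
ssl.keystore.location=/etc/kafka/secrets/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
</code></pre>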
<p>Today, my Kubernetes (<code>v1.21</code>) cluster certificates expired (after 1 year), so I used this command to renew them:</p> <pre><code>kubeadm certs renew all </code></pre> <p>The output shows that the kube-apiserver/etcd should be restarted:</p> <pre><code>Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates. </code></pre> <p>I have tried this way to restart:</p> <pre><code>[root@k8smasterone ~]# systemctl restart kube-apiserver Failed to restart kube-apiserver.service: Unit not found. </code></pre> <p>What should I do to restart all the components properly? I also tried to find the kube-apiserver pod, but could not find any API server pods.</p>
<p>On 1.23.5, even after restarting <code>kubelet</code> through <code>systemctl restart kubelet</code>, my <code>kube-controller-manager</code> still had errors (<code>kubectl -n kube-system logs kube-controller-manager-kubeadm</code> - the name will vary according to your node name):</p> <pre><code>error retrieving resource lock kube-system/kube-controller-manager: Unauthorized </code></pre> <p>Only after following <a href="https://v1-23.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/" rel="noreferrer">this documentation</a> was I able to fix it:</p> <pre class="lang-bash prettyprint-override"><code>mkdir tmp mv /etc/kubernetes/manifests/* tmp/ # Wait until pods stop running (in my case, single master, kubectl stopped working) mv tmp/* /etc/kubernetes/manifests/ </code></pre> <p>Before running the commands above some pods worked, but my cluster was highly inconsistent: wrong endpoints for services, deleted pods from daemonsets would not restart; that is how I suspected the controller was not healthy.</p>
<p>So I was going to set up a GKE cluster and interact with it using <code>kubectl</code>. But when I tried to apply the namespace, it just threw an error.</p> <p>I've added my IP to Control plane authorized networks in the GKE dashboard.</p> <p>I'm using a Windows 10 machine. Here is my kube/config:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: [REDACTED] server: https://34.66.200.196 name: gke_project-307907_us-central1_test contexts: - context: cluster: gke_project-307907_us-central1_test user: gke_project-307907_us-central1_test name: gke_project-307907_us-central1_test current-context: gke_project-307907_us-central1_test kind: Config preferences: {} users: - name: gke_project-307907_us-central1_test user: exec: apiVersion: client.authentication.k8s.io/v1beta1 command: gke-gcloud-auth-plugin.exe installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke provideClusterInfo: true </code></pre> <p>I tried to apply the namespace but:</p> <pre class="lang-bash prettyprint-override"><code>╰─ kubectl apply -f k8s/canary/namespace.yaml Unable to connect to the server: dial tcp 34.66.200.196:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. </code></pre>
<p><strong>It looks like kubectl lost connection to the cluster. You can set the cluster context by following the official GCP troubleshooting doc&nbsp;<a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#kubectl-times-out" rel="nofollow noreferrer">GCP kubectl command times out</a>; try the two solutions below:</strong></p> <p><strong>Solution 1 : kubectl cannot communicate with the cluster control plane, or the context doesn't exist:</strong></p> <blockquote> <p>To resolve your issue, verify the context where the cluster is set:</p> <p>Go to <code>$HOME/.kube/config</code> or run the command <code>kubectl config view</code> to verify the config file contains the cluster context and the external IP address of the control plane (check whether the server mentioned there is old or not reachable).</p> <p>Set the cluster credentials:</p> <pre><code> gcloud container clusters get-credentials CLUSTER_NAME \ --region=COMPUTE_REGION \ --project=PROJECT_ID </code></pre> <p><strong>Note :</strong> For zonal clusters, use <code>--zone=COMPUTE_ZONE</code></p> </blockquote> <p>The above command will automatically update the default cluster for kubectl. In case you don’t know the correct cluster name and zone, use <code>gcloud container clusters list</code>. After completing the above steps, please try to create the namespace again and let me know the outcome.</p> <p><strong>Solution 2 : The source IP is not allowed in the &quot;<code>Control plane authorized networks</code>&quot; cluster config:</strong></p> <blockquote> <p>If the cluster is a private GKE cluster, then ensure that the outgoing IP of the machine you are attempting to connect from is included in the list of existing authorized networks.</p> <p>You can find your existing authorized networks by running the following command (placeholders as in the previous command):</p> <pre><code> gcloud container clusters describe CLUSTER_NAME \ --region=COMPUTE_REGION --project=PROJECT_ID \ --format &quot;flattened(masterAuthorizedNetworksConfig.cidrBlocks[])&quot; </code></pre> </blockquote> <p>In GKE, there is a feature called &quot;<code>Control plane authorized networks</code>&quot;. The main purpose of this parameter is to allow the user to specify CIDR ranges and allow IP addresses in those ranges to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept#overview" rel="nofollow noreferrer">access GKE cluster endpoints</a>.</p> <p>You can also use the GCP console to check the allowed IP CIDRs in the &quot;<code>Control plane authorized networks</code>&quot; section of the GKE cluster details.</p> <p><strong>Other Scenarios :</strong></p> <p><strong>a.</strong> For other common reasons, refer to <strong>Uli Köhler’s</strong> blog on <strong>TechOverflow</strong>: <a href="https://techoverflow.net/2019/04/01/how-to-fix-kubectl-unable-to-connect-to-the-server-dial-tcp-443-i-o-timeout/" rel="nofollow noreferrer">How to fix kubectl Unable to connect to the server: dial tcp …:443: i/o timeout</a>.</p> <p><strong>b.</strong> If you're using Docker's default bridge network as your GKE endpoint, disable the bridge network to avoid a network conflict.</p> <p><strong>c.</strong> Also, check if there are recent updates in <strong>Windows/Docker</strong>. Refer to <strong>Dzmitry Paulouski’s</strong> answer <a href="https://stackoverflow.com/questions/68954948/kubernetes-on-windows-error-unable-to-connect-to-the-server-dial-tcp-some-ip">Kubernetes on Windows Error: Unable to connect to the server: dial tcp</a>, which may help to resolve your issue.</p> <p><strong>d.</strong> If the issue is with the config file, restart Docker Desktop after adding/replacing the new config file in your <code>HOME/.kube/config</code> file.</p>
<p>I have made a nginx deployment which will be tagged by a ClusterIP service via a selector. Then I entered a new pod that is not related to that deployment nor service. And from within that pod I try to ping the i.p of the ClusterIP service hoping it would reach the nginx deploy, but it's not receiving the ping response.</p> <p>The nginx deployment I made was with this manifest.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx spec: selector: matchLabels: run: my-nginx replicas: 1 template: metadata: labels: run: my-nginx spec: containers: - name: my-nginx image: nginx ports: - containerPort: 80 </code></pre> <p>Then, the service I created was with this manifest:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: basicping-service labels: run: my-nginx spec: ports: - port: 80 protocol: TCP selector: run: my-nginx </code></pre> <p>I made sure the service got created by running <code>kubectl get svc</code> and it did, the i.p is <code>10.98.91.185</code></p> <p>And then I created a new pod completely unrelated to this deployment&amp;Service.</p> <p><code>kubectl run -it --rm --image=ubuntu bash</code></p> <p>From within it, I pinged a sandbox server called <code>pingtest.net</code> just to see it was able to send requests and receive response. And it did</p> <p>So finally, I tried pinging the <code>basicping-service</code> created previously by trying to ping the i.p of the service, I did this by running <code>ping 10.98.91.185</code></p> <p>And here is the problem. It does sends pings but doesn't receives the responses back, even after several minutes.</p> <p>It was my understanding that the ping should have received a response. But is my understanding of services incorrect ? Or it should have worked but there is an error?</p> <p>Just for more documentation, the my-nginx deployment is running, and the pod as well. And there seems to be nothing wrong with the nginx running in it. I checked this by running the <code>kubectl describe</code> of the deploy &amp; pod, and also by checking the pod's logs, it's running nginx correctly apparently. Also, after running <code>kubectl describe svc basicping-service</code> it does shows the nginx pod's i.p address with port 80 as the endpoint</p>
<p>ping doesn't work with a service's cluster IP, as it is a virtual IP. You should be able to ping a specific pod, but not a service.</p> <p>ping sends packets using the very-low-level ICMP protocol, but Nginx serves HTTP which uses the TCP protocol instead.</p>
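<p>To check that the service is actually reachable, test it over TCP/HTTP instead of ICMP, for example from inside the ubuntu pod (the service IP is the one from the question, and the DNS name assumes the service lives in the default namespace; the ubuntu image does not ship curl by default):</p> <pre class="lang-bash prettyprint-override"><code>apt-get update &amp;&amp; apt-get install -y curl
curl -s http://10.98.91.185:80
# or via cluster DNS
curl -s http://basicping-service.default.svc.cluster.local
</code></pre>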
<p>I created a new <code>config</code> file for Kubernetes from <code>Azure</code> in <code>Powershell</code> by <code>az aks get-credentials --resource-group &lt;RGName&gt; --name &lt;ClusterName&gt;</code>. I got a message that <code>Merged &quot;cluster_name&quot; as current context in C:\michu\.kube\config</code>. I copied this file into the default <code>.kube\config</code> location and now when I try to run any command, e.g. <code>kubectl get pods</code>, I am receiving:</p> <pre><code>Unable to connect to the server: getting credentials: exec: executable kubelogin not found It looks like you are trying to use a client-go credential plugin that is not installed. To learn more about this feature, consult the documentation available at: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins </code></pre> <p>What is wrong here?</p> <p>I just realized that when I type <code>kubectl config get-contexts</code> then I can see my <code>cluster_name</code> and I can even switch to this by <code>kubectl config use-context cluster_name</code> and the message is correct: <code>Switched to context cluster_name</code>, but still all other commands end with <code>Unable to connect to the server: getting credentials: exec: executable kubelogin not found</code></p>
<p>The error implies that the <code>kubelogin</code> executable could not be located. You need to install <code>kubelogin</code> in the Azure CLI using <code>az aks install-cli</code>; then it works as expected.</p> <p>Refer to <a href="https://azure.github.io/kubelogin/install.html" rel="nofollow noreferrer">github</a> for the installation process.</p> <p><em>I tried the same requirement in my environment, and it worked for me as follows.</em></p> <pre class="lang-bash prettyprint-override"><code>az aks get-credentials --resource-group caroline --name sampleaks1 kubectl get pods </code></pre> <p><em><strong>Output:</strong></em></p> <p><img src="https://i.imgur.com/s7340xO.png" alt="enter image description here" /></p> <p>Once you have the <code>aks</code> credentials, running <code>kubectl get pods</code> will prompt you for an <code>Azure kubernetes service authentication with AAD</code>, as shown.</p> <p><img src="https://i.imgur.com/JBI9mdo.png" alt="enter image description here" /></p> <p><em>Just run</em> <code>kubectl</code> <em>in bash</em> <em>to verify whether it is installed successfully.</em></p> <p><img src="https://i.imgur.com/IT6SHL9.png" alt="enter image description here" /></p> <p><em>If the issue still persists,</em></p> <ol> <li><p>Delete all the cache or any unused folders inside <code>~/.kube/</code> and run the aks credentials command again, adding the <code>--admin</code> flag at the end.</p> <p>Refer to this <a href="https://blog.baeke.info/2021/06/03/a-quick-look-at-azure-kubelogin/" rel="nofollow noreferrer">doc</a> by @Geert Baeke for more related information.</p> </li> <li><p>Check the kube config version and upgrade if required.</p> </li> </ol>
<p>I have an Airflow environment (v2.4.3) on Kubernetes and I want to sync it with a private git repo so that any changes I make to DAGs in my master branch get automatically picked up by my Airflow environment.</p> <p>According to <a href="https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html#mounting-dags-from-a-private-github-repo-using-git-sync-sidecar" rel="nofollow noreferrer">Airflow documentation</a>, I can use a Git-sync sidecar along with an SSH key added to my private git repo and Airflow env to make it work.</p> <p>However, given that I am constantly creating new private repos and Airflow environments, I am wondering if there is a simpler way of connecting my private git repos to their respective Airflow environments.</p> <p>If I have a webapp managing my Airflow environments and have access to an OAuth token from Github after signing into my account (or any other git service), could I use that to connect an Airflow environment and sync changes to any git repo of my choice under my account?</p>
<p>I was able to figure it out.</p> <p>One can use <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="nofollow noreferrer">personal access tokens</a> as passwords provided by whatever git service the private repo is in along with the repo's username.</p> <p>I just stored the personal access token as an Opaque secret in my Airflow K8s cluster and referenced that in my <a href="https://github.com/kubernetes/git-sync/tree/v3.6.5" rel="nofollow noreferrer">git-sync sidecar container yaml</a> definition which I included in my Airflow yaml deployment definition.</p> <pre><code> containers: - name: git-sync image: registry.k8s.io/git-sync/git-sync:v3.6.5 args: - &quot;-wait=60&quot; - &quot;-repo=&lt;repo&gt;&quot; - &quot;-branch=master&quot; - &quot;-root=/opt/airflow/dags&quot; - &quot;-username=&lt;username&gt;&quot; - &quot;-password-file=/etc/git-secret/token&quot; volumeMounts: - name: git-secret mountPath: /etc/git-secret readOnly: true - name: dags-data mountPath: /opt/airflow/dags volumes: - name: dags-data emptyDir: {} - name: git-secret secret: secretName: github-token </code></pre>
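<p>For completeness, the Opaque secret referenced above can be created along these lines (the namespace and token value are placeholders):</p> <pre class="lang-bash prettyprint-override"><code>kubectl create secret generic github-token \
  --from-literal=token=&lt;personal-access-token&gt; \
  --namespace=&lt;airflow-namespace&gt;
</code></pre>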
<p>I am trying to figure out if it is possible to use Cloud Code in VSCode to debug a TypeScript Node application in Kubernetes remotely?</p> <p>I used Cloud Code regularly to debug JS Node applications in Kubernetes, but I am having hard time to configure launch.json to do the same for TypeScript Node app. Sources for that is non-existent and at this point I am not even sure if this is possible.</p> <p>Here is the launch.json file I configured. Also you can see a setting for local debugging which works fine:</p> <pre><code>{ &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Kubernetes: Run/Debug&quot;, &quot;type&quot;: &quot;cloudcode.kubernetes&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;skaffoldConfig&quot;: &quot;${workspaceFolder}\\skaffold.yaml&quot;, &quot;watch&quot;: false, &quot;cleanUp&quot;: false, &quot;portForward&quot;: true, &quot;imageRegistry&quot;: &quot;zzz.common.repositories.zzz.zzz&quot; }, { &quot;type&quot;: &quot;node&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;name&quot;: &quot;Local Debug&quot;, &quot;runtimeExecutable&quot;: &quot;node&quot;, &quot;runtimeArgs&quot;: [&quot;--nolazy&quot;, &quot;-r&quot;, &quot;ts-node/register/transpile-only&quot;], &quot;args&quot;: [&quot;${workspaceRoot}/lcs/src/index.ts&quot;], &quot;cwd&quot;: &quot;${workspaceRoot}/lcs&quot;, &quot;internalConsoleOptions&quot;: &quot;openOnSessionStart&quot;, &quot;env&quot;: { &quot;NODE_ENV&quot;: &quot;development&quot; }, &quot;skipFiles&quot;: [&quot;&lt;node_internals&gt;/**&quot;, &quot;node_modules/**&quot;] } ]} </code></pre> <p>In my tsconfig.json I have &quot;sourceMap&quot;: true. I assume I need to map my dist folder in Kubernetes (that is where I compile TS files to JS files) to src folder (which contains original TS files). Problem is I couldn't find any documentation to do that.</p> <p>Here is my docker file so you can see that I am putting the compiled files into ./dist folder:</p> <pre><code>FROM node:19-alpine as builder WORKDIR /app COPY package*.json ./ COPY tsconfig.json ./ RUN npm install COPY . . RUN npm run build FROM node:19-alpine WORKDIR /app COPY package*.json ./ RUN npm install --production COPY --from=builder /app/dist ./dist EXPOSE 3000 CMD [ &quot;node&quot;, &quot;./dist/index.js&quot; ] </code></pre>
<p>I figured it out. It needed a bit more configuration in launch.json. Specifically debug section with sourceFileMap. Here is the final result if anybody else has trouble making it work:</p> <pre><code>{ &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Kubernetes: Run/Debug&quot;, &quot;type&quot;: &quot;cloudcode.kubernetes&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;skaffoldConfig&quot;: &quot;${workspaceFolder}\\skaffold.yaml&quot;, &quot;watch&quot;: false, &quot;cleanUp&quot;: false, &quot;portForward&quot;: true, &quot;imageRegistry&quot;: &quot;zzz.common.repositories.zzz.zzz&quot;, &quot;debug&quot;: [ { &quot;image&quot;: &quot;zzz.common.repositories.zzz.zzz/zzz-lcs/k8s&quot;, &quot;containerName&quot;: &quot;lcs&quot;, &quot;sourceFileMap&quot;: { &quot;${workspaceFolder}\\lcs\\dist&quot;: &quot;/app/dist&quot; } } ] }, { &quot;type&quot;: &quot;node&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;name&quot;: &quot;Local Debug&quot;, &quot;runtimeExecutable&quot;: &quot;node&quot;, &quot;runtimeArgs&quot;: [&quot;--nolazy&quot;, &quot;-r&quot;, &quot;ts-node/register/transpile-only&quot;], &quot;args&quot;: [&quot;${workspaceRoot}/lcs/src/index.ts&quot;], &quot;cwd&quot;: &quot;${workspaceRoot}/lcs&quot;, &quot;internalConsoleOptions&quot;: &quot;openOnSessionStart&quot;, &quot;env&quot;: { &quot;NODE_ENV&quot;: &quot;development&quot; }, &quot;skipFiles&quot;: [&quot;&lt;node_internals&gt;/**&quot;, &quot;node_modules/**&quot;] } ] } </code></pre> <p>Note that my files are located in my local inside /lcs/app/src and my build put them in ./lcs/dist</p>
<p>I have a helm chart that deploys a kube-prometheus stack (prometheus, grafana, node-exporter), there are some json files (dashboards) in the grafana configuration, they are transferred to the grafana pod via configMap (common practice). I have a task to optimize this configuration to add grafana dashboards via their id from grafana.com and not to use json files (as they are very big). I know how to create a folder in grafana and specify the prometheus resource, but I don't understand how to export dashboards by id.</p> <p>To create a folder I have a file (yaml) which is added via configmap to the directory /etc/grafana/provisioning/dashboards</p> <pre><code>- name: 'default' org_id: 1 folder: 'my-dashboards' type: 'file' options: folder: '/var/lib/grafana/dashboards' </code></pre> <p>How to do it and in which file I need to insert this configuration to make it work. I will be grateful for help.</p> <p>I tried to create configurations I found on github (dashboardProviders), but it only creates a folder (the code I specified), without dashboards</p> <p>UPD: what i have now</p> <pre><code>apiVersion: 1 providers: # &lt;string&gt; an unique provider name. Required - name: 'prometheus' # &lt;int&gt; Org id. Default to 1 orgId: 1 # &lt;string&gt; name of the dashboard folder. folder: 'my-dashboards' # &lt;string&gt; folder UID. will be automatically generated if not specified folderUid: '' # &lt;string&gt; provider type. Default to 'file' type: file # &lt;bool&gt; disable dashboard deletion disableDeletion: false # &lt;int&gt; how often Grafana will scan for changed dashboards updateIntervalSeconds: 10 # &lt;bool&gt; allow updating provisioned dashboards from the UI allowUiUpdates: false options: # &lt;string, required&gt; path to dashboard files on disk. Required when using the 'file' type path: /var/lib/grafana/dashboards # &lt;bool&gt; use folder names from filesystem to create folders in Grafana foldersFromFilesStructure: true dashboards: default: MinIO: gnetId: 13502 revision: 2 datasource: prometheus </code></pre> <p>but it still doesn't work...why?</p>
<p>It is (or can be) a part of the <a href="https://github.com/grafana/helm-charts/tree/grafana-6.52.9/charts/grafana" rel="nofollow noreferrer">grafana helm chart</a>, from what I remember from when I used it in the past. That configuration needs to go into the <code>values.yaml</code> when using that helm chart.</p> <p>Specifically, see <a href="https://github.com/grafana/helm-charts/blob/grafana-6.52.9/charts/grafana/values.yaml#L636" rel="nofollow noreferrer">here</a> to enable/configure <code>dashboardProviders</code> and <a href="https://github.com/grafana/helm-charts/blob/grafana-6.52.9/charts/grafana/values.yaml#L655" rel="nofollow noreferrer">here</a> to provision the <code>dashboard</code> using the <code>dashboard id</code> from the grafana website.</p> <p>You can also refer to some documentation <a href="https://github.com/grafana/helm-charts/tree/grafana-6.52.9/charts/grafana#import-dashboards" rel="nofollow noreferrer">here</a>.</p> <p>Hope it helps.</p> <p>Update:</p> <p>Using the below config I was able to import the MinIO Dashboard (the one OP tried to import):</p> <pre><code>dashboardProviders: dashboardproviders.yaml: apiVersion: 1 providers: - name: 'default' orgId: 1 folder: 'default' type: file disableDeletion: true editable: true options: path: /var/lib/grafana/dashboards/standard dashboards: default: MinIO: gnetId: 13502 revision: 2 datasource: prometheus </code></pre> <p><a href="https://i.stack.imgur.com/JsppQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JsppQ.png" alt="enter image description here" /></a></p> <p>Of course I don't have the prometheus data source, hence the warning sign(s).</p>
<p>How can I list all Kubernetes services along with the number of active pods associated with each service?</p> <p>Currently, I can list all services with: <code>kubectl get services</code></p> <p>I would like to add one additional column to the output, which lists active pod count for each service.</p>
<p>A service is basically a load balancer over a deployment (and a deployment is a replica set of multiple pods). So, chances are you want to be listing the deployments instead of the services in this case. Services and deployments are loosely coupled via labels and selectors, so there probably isn't a great way to go from services to pods directly.</p> <p>I recommend <code>kubectl get deploy -A</code> to view all deployments and their pod counts in all namespaces in this case.</p> <p>Example output from the <code>kube-system</code> namespace in our cluster (since it's not sensitive in any way)...</p> <pre><code>NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system cluster-autoscaler-aws-cluster-autoscaler 1/1 1 1 231d kube-system coredns 2/2 2 2 231d kube-system ebs-csi-controller 2/2 2 2 231d kube-system efs-csi-controller 2/2 2 2 231d kube-system metrics-server 1/1 1 1 231d </code></pre>
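<p>If you do need numbers per service rather than per deployment, one rough approach is to look at each service's Endpoints (which list the ready pod IPs behind it), or to count the pods matching a service's selector (the label below is hypothetical):</p> <pre class="lang-bash prettyprint-override"><code># ready pod addresses behind every service
kubectl get endpoints -A

# count pods matching a specific service's selector
kubectl get pods -l app=my-app --no-headers | wc -l
</code></pre>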
<p>When I run locally, in my laptop (using python 3.10 and pandas 1.3.5), the following code, I get 0.031s approximately (ball parking it):</p> <pre><code>profile_data = ( profiles_df[data_cols] .loc[profile_ids] .rename(columns=new_cols) .to_dict(&quot;records&quot;) ) </code></pre> <p>where data_cols and new_cols are two lists of strings, and profiles_df is a dataframe with mostly string data. However, when I run in it in a pod, using the same python and pandas versions, I get it run in 0.1s approx. The pod has still ample secondary memory (a few GBs) and never reaches its limit, nor does it reach the CPU limits (1 out of 1.5)</p> <ol> <li>Is there a way to optimize the above code?</li> <li>What could be causing this difference in performance?</li> </ol>
<p><code>df.rename()</code> introduces some overhead, but you can skip that step by constructing the result directly with the new column names:</p> <pre class="lang-py prettyprint-override"><code>profile_data = [{new_cols[col]: profiles_df.loc[ix, col] for col in new_cols} for ix in profile_ids] </code></pre> <p>I do not know the answer to your second question.</p>
<p>values.yaml</p> <pre><code>S3Catalogs: Catalogs: - CatalogName: BotVoice Url: &quot;http://sandbox5.*.com&quot; - CatalogName: WrongVoice Url: &quot;http://sandbox5.*.com&quot; </code></pre> <p>The structure specified above needs to be filled in somehow in deployment.yaml. I tried different variants; using brackets like [0] is not accepted.</p> <pre><code> - name: S3Catalogs__Catalogs value: &quot;{{ .Values.S3Catalogs.Catalogs }}&quot; - name: S3Catalogs__Catalogs__Url value: &quot;{{ .Values.S3Catalogs.Catalogs.Url}}&quot; </code></pre> <p>That's not how it works.</p>
<p>I think you were on the right path using <code>0</code> (and presumably <code>1</code>) to access elements of the <code>S3Catalogs</code> array in your <code>values.yaml</code> file.</p> <p>The following syntax works for me:</p> <p>deployment.yaml:</p> <pre><code> - name: S3Catalogs__Catalogs__CatalogName_0 value: &quot;{{ index .Values.S3Catalogs.Catalogs 0 &quot;CatalogName&quot; }}&quot; - name: S3Catalogs__Catalogs__Url_0 value: &quot;{{ index .Values.S3Catalogs.Catalogs 0 &quot;Url&quot; }}&quot; - name: S3Catalogs__Catalogs__CatalogName_1 value: &quot;{{ index .Values.S3Catalogs.Catalogs 1 &quot;CatalogName&quot; }}&quot; - name: S3Catalogs__Catalogs__Url_1 value: &quot;{{ index .Values.S3Catalogs.Catalogs 1 &quot;Url&quot; }}&quot; </code></pre> <p>values.yaml:</p> <pre><code>S3Catalogs: Catalogs: - CatalogName: BotVoice Url: &quot;http://sandbox5.foo.com&quot; - CatalogName: WrongVoice Url: &quot;http://sandbox5.bar.com&quot; </code></pre> <p>When I do a <code>helm template</code>, following is the result:</p> <pre><code>- name: S3Catalogs__Catalogs__CatalogName_0 value: &quot;BotVoice&quot; - name: S3Catalogs__Catalogs__Url_0 value: &quot;http://sandbox5.foo.com&quot; - name: S3Catalogs__Catalogs__CatalogName_1 value: &quot;WrongVoice&quot; - name: S3Catalogs__Catalogs__Url_1 value: &quot;http://sandbox5.bar.com&quot; </code></pre> <p>Is this something that'll work out for you?</p>
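<p>If the number of catalogs isn't fixed, a <code>range</code> loop over the array may be a cleaner sketch (assuming the double-underscore-with-index naming is what your application expects):</p> <pre><code>{{- range $i, $catalog := .Values.S3Catalogs.Catalogs }}
- name: S3Catalogs__Catalogs__{{ $i }}__CatalogName
  value: {{ $catalog.CatalogName | quote }}
- name: S3Catalogs__Catalogs__{{ $i }}__Url
  value: {{ $catalog.Url | quote }}
{{- end }}
</code></pre>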
<p>I have a test.txt file that has a bunch of kubernetes commands. It looks like:</p> <pre><code>kubectl get cm -o yaml | grep &quot;test&quot; kubectl get pods kubectl describe pod xxx . . </code></pre> <p>A bash script reads test.txt and loops through each line, which should be executed as a command. However, when I run the script, it gives the following error. Any ideas on how to fix this?</p> <blockquote> <p>Error from server (NotFound): configmaps &quot;|&quot; not found</p> <p>Error from server (NotFound): configmaps &quot;grep&quot; not found</p> <p>Error from server (NotFound): configmaps &quot;&quot;&quot; not found</p> </blockquote> <p>Here is the script:</p> <pre><code>in_file=test.txt # Loop over each line while read -r test_case; do &quot;kubectl -n ${namespace} ${test_case}&quot; done &lt; &quot;$in_file&quot; </code></pre>
<p>Change your bash script to this:</p> <pre><code>#!/bin/bash cat test.txt | xargs -I {} bash -c {} </code></pre> <p>Works like a treat:</p> <pre><code>{&quot;apiVersion&quot;:&quot;v1&quot;,&quot;data&quot;:{&quot;api-key&quot;:&quot;abc123&quot;,&quot;database-url&quot;:&quot;postgresql://localhost/mydatabase&quot;},&quot;kind&quot;:&quot;ConfigMap&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;example-config&quot;,&quot;namespace&quot;:&quot;default&quot;}} name: example-config {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;data&quot;:{&quot;api-key&quot;:&quot;abc123&quot;,&quot;database-url&quot;:&quot;postgresql://localhost/mydatabase&quot;},&quot;kind&quot;:&quot;ConfigMap&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;example-config2&quot;,&quot;namespace&quot;:&quot;default&quot;}} name: example-config2 {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;data&quot;:{&quot;api-key&quot;:&quot;abc123&quot;,&quot;database-url&quot;:&quot;postgresql://localhost/mydatabase&quot;},&quot;kind&quot;:&quot;ConfigMap&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;example-config3&quot;,&quot;namespace&quot;:&quot;default&quot;}} name: example-config3 </code></pre> <p>Got the idea from <a href="https://coderwall.com/p/vsx0yg/bash-xargs-as-a-for-loop" rel="nofollow noreferrer">here</a>. I wasn't sure if it'd work, but it did :)</p> <p>Hope it helps.</p>
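<p>For reference, the original <code>while</code> loop fails because the whole line is quoted as a single command name; if you prefer to keep the loop, a minimal sketch of a fix (namespace injection omitted) is to hand each line to a shell:</p> <pre><code>#!/bin/bash
in_file=test.txt
while read -r test_case; do
  # run each line through a shell so pipes and quotes are interpreted
  bash -c &quot;$test_case&quot;
done &lt; &quot;$in_file&quot;
</code></pre>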
<p>I am trying to deploy service on <code>eks</code> with <code>knative</code> installed on eks, here eks configured with <code>fargate-profile</code>. I want to mount <code>efs</code> to my service for that i have created <code>StorageClass</code>,<code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> below are the respective yml</p> <p><strong>storageclass.yml</strong></p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: my-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap fileSystemId: fs-1234 directoryPerms: &quot;775&quot; reclaimPolicy: Retain </code></pre> <p><strong>efs-pv.yml</strong></p> <pre><code>kind: PersistentVolume metadata: name: my-pv spec: capacity: storage: 100Gi volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: my-sc csi: driver: efs.csi.aws.com volumeHandle: fs-1234 </code></pre> <p><strong>efs-pvc.yml</strong></p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-pvc spec: accessModes: - ReadWriteMany storageClassName: my-sc resources: requests: storage: 100Gi </code></pre> <p>all the sc, pv and pvc get created successfully</p> <p>Now I am trying to use <code>my-pvc</code> in my Service <strong>my-service.yml</strong> like below</p> <pre><code>apiVersion: serving.knative.dev/v1 kind: Service metadata: name: myservice annotations: serving.knative.dev/efs-storage-class: my-sc spec: template: spec: containers: - env: - name: key value: val image: image:latest ports: - containerPort: 8080 volumeMounts: - mountPath: /app/data name: data readOnly: true # warning were given for not setting readOnly=true volumes: - name: data persistentVolumeClaim: claimName: my-pvc readOnly: true metadata: annotations: autoscaling.knative.dev/metric: concurrency autoscaling.knative.dev/target: '1' autoscaling.knative.dev/minScale: '1' autoscaling.knative.dev/maxScale: '5' autoscaling.knative.dev/scaleDownDelay: 60s autoscaling.knative.dev/window: 600s </code></pre> <p>when I try to run following command <code>kubectl apply -f my-service.yml</code> it gives validation error, here the complete error</p> <p>Error from server (BadRequest): error when creating &quot;my-service.yml&quot;: admission webhook &quot;<strong>validation.webhook.serving.knative.dev</strong>&quot; denied the request: validation failed: <strong>Persistent volume claim support is disabled</strong>, but found persistent volume claim my-pvc: must not set the field(s): spec.template.spec.volumes[0].persistentVolumeClaim</p> <p>Error simply means PVC is disabled, in order to use PVC I need to enable PVC.</p> <p><strong>But How and where(in configMap?)?</strong></p> <p>in In <a href="https://github.com/knative/serving/releases/download/v0.27.0/serving-core.yaml" rel="nofollow noreferrer">serving-core.yml</a> i found <code>kubernetes.podspec-persistent-volume-claim: &quot;Disabled&quot;</code> which is example.</p> <p>So I have tried to add configMap (Just guess) with <code>kubernetes.podspec-persistent-volume-claim</code> like below:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: knative-serving name: kubernetes.podspec-persistent-volume-claim data: kubernetes.podspec-persistent-volume-claim: enabled </code></pre> <p>But nothing changed only configmap got created, Any help will be appreciated</p>
<p>You need to <a href="https://knative.dev/docs/serving/configuration/feature-flags/#configuring-flags" rel="nofollow noreferrer">set the flag in the <code>config-features</code> ConfigMap</a>. That ConfigMap should already have one key, named <code>_example</code>. You'll need to add your key above or below the example. The <code>_example</code> key is ignored by Knative, but provides a place for documentation for system administrators.</p>
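<p>A minimal sketch of enabling that flag on an existing installation (assuming the default <code>knative-serving</code> namespace):</p> <pre class="lang-bash prettyprint-override"><code>kubectl patch configmap config-features \
  --namespace knative-serving \
  --type merge \
  --patch '{&quot;data&quot;:{&quot;kubernetes.podspec-persistent-volume-claim&quot;:&quot;enabled&quot;}}'
</code></pre>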
<p>I have a local k3s Kubernetes cluster created by multipass.</p> <p>I am trying to set up Kafka using Ingress way based on <a href="https://strimzi.io/blog/2019/05/23/accessing-kafka-part-5/" rel="nofollow noreferrer">this tutorial</a> to make it accessible for clients running outside of Kubernetes.</p> <p>Here are my steps:</p> <p>First get my cluster IP by</p> <pre><code>➜ kubectl get nodes NAME STATUS ROLES AGE VERSION west-master Ready control-plane,master 15m v1.26.3+k3s1 ➜ kubectl get node west-master -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME west-master Ready control-plane,master 16m v1.26.3+k3s1 192.168.205.5 &lt;none&gt; Ubuntu 22.04.2 LTS 5.15.0-67-generic containerd://1.6.19-k3s1 ➜ kubectl cluster-info Kubernetes control plane is running at https://192.168.205.5:6443 CoreDNS is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy Metrics-server is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy </code></pre> <p>which is <code>192.168.205.5</code> (Is this correct IP address for the one I used in <strong>my-kafka-persistent.yaml</strong> I used in next step?).</p> <p>Then I deploy my Kafka by:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace hm-kafka kubectl apply --filename=&quot;https://strimzi.io/install/latest?namespace=hm-kafka&quot; --namespace=hm-kafka kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka </code></pre> <p><strong>my-kafka-persistent.yaml</strong> (based on <a href="https://strimzi.io/examples/latest/kafka/kafka-persistent.yaml" rel="nofollow noreferrer">kafka-persistent.yaml</a>):</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: hm-kafka spec: kafka: version: 3.4.0 replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: ingress tls: true configuration: bootstrap: host: kafka-bootstrap.192.168.205.5.nip.io brokers: - broker: 0 host: kafka-broker-0.192.168.205.5.nip.io - broker: 1 host: kafka-broker-1.192.168.205.5.nip.io - broker: 2 host: kafka-broker-2.192.168.205.5.nip.io config: offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: &quot;3.4&quot; storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 3 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} </code></pre> <p>After deployment:</p> <p>Pods</p> <p><a href="https://i.stack.imgur.com/KaFqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KaFqX.png" alt="enter image description here" /></a></p> <p>Services</p> <p><a href="https://i.stack.imgur.com/FvlSO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FvlSO.png" alt="enter image description here" /></a></p> <p>Ingresses</p> <p><a href="https://i.stack.imgur.com/HhtuF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HhtuF.png" alt="enter image description here" /></a></p> <p>Also, for each Ingress, I can see SSL passthrough in the annotations:</p> <p><a href="https://i.stack.imgur.com/sUuQR.png" rel="nofollow noreferrer"><img 
src="https://i.stack.imgur.com/sUuQR.png" alt="enter image description here" /></a></p> <p>Then I succeed following the tutorial creating the truststore.</p> <pre class="lang-bash prettyprint-override"><code>➜ kubectl get secret hm-kafka-cluster-ca-cert \ --namespace=hm-kafka \ --output=jsonpath=&quot;{.data.ca\.crt}&quot; \ | base64 -d \ &gt; ca.crt ➜ keytool -importcert \ -trustcacerts \ -alias root \ -file ca.crt \ -keystore kafka-truststore.jks \ -storepass my_passw0rd \ -noprompt Certificate was added to keystore </code></pre> <p>However, when I try to produce data, I met this issue:</p> <pre><code>➜ kafka-console-producer \ --broker-list kafka-bootstrap.192.168.205.5.nip.io:443 \ --producer-property security.protocol=SSL \ --producer-property ssl.truststore.password=my_passw0rd \ --producer-property ssl.truststore.location=kafka-truststore.jks \ --topic my-topic &gt;[2023-04-14 15:57:06,047] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,047] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,200] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,201] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,691] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient) [2023-04-14 15:57:06,691] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient) </code></pre> <p>Any guide would be appreciate, thanks!</p> <hr /> <h2>UPDATE 1</h2> <p>Thanks @OneCricketeer pointing the issue!</p> <p>As I am using <strong>multipass</strong> on macOS, I can provide <code>INSTALL_K3S_EXEC=&quot;server --disable traefik&quot;</code>, so the updated command to create k3s cluster is:</p> <pre class="lang-bash prettyprint-override"><code>multipass launch --name=west-master --cpus=4 --memory=16g --disk=128g multipass exec west-master -- \ bash -c 'curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=&quot;server --disable traefik&quot; K3S_KUBECONFIG_MODE=&quot;644&quot; sh -' </code></pre> <p>I actually switched to Rancher Desktop as it is also using k3s and easy to disable Traefik which can be set up in the UI.</p> <p><a href="https://i.stack.imgur.com/AyRC2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AyRC2.png" alt="enter image description here" /></a></p> <h2>UPDATE 2</h2> <p>Regarding how to deploy ingress-nginx and how to resolve another issue &quot;ingress does not contain a valid IngressClass&quot; I met, I posted at <a href="https://stackoverflow.com/questions/76020537/strimzi-kafka-brokers-not-created-because-of-ingress-does-not-contain-a-valid-i">Strimzi Kafka brokers not be created because of &quot;ingress does not contain a valid IngressClass&quot;</a></p>
<p>k3s ships with Traefik, not NGINX, so those annotations aren't doing anything... The referenced blog assumes you are using the NGINX ingress controller instead.</p> <p>Recreate or restart your k3s cluster with Traefik disabled (the <code>--disable traefik</code> server flag on current k3s releases; older docs call it <code>--no-deploy-traefik</code>), and install the NGINX ingress controller.</p> <p>Otherwise, you will need to refer to the Traefik ingress docs for the matching annotations it uses for SSL passthrough.</p> <p>Keep in mind that Kafka is not an HTTP/S service, so you should not be using ports 80/443 to communicate with it.</p>
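<p>For completeness, a minimal sketch of installing the NGINX ingress controller with SSL passthrough enabled, which Strimzi's <code>type: ingress</code> listener relies on (it is off by default in ingress-nginx; the release and namespace names below are just example choices):</p> <pre class="lang-bash prettyprint-override"><code># Install ingress-nginx with the SSL passthrough feature switched on
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.extraArgs.enable-ssl-passthrough=true

# Verify the controller is running and actually started with the flag
kubectl --namespace ingress-nginx get pods
kubectl --namespace ingress-nginx logs deploy/ingress-nginx-controller | grep -i passthrough
</code></pre>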
<p>I have a local k3s Kubernetes cluster created by Rancher Desktop.</p> <p>I am trying to set up Kafka using load balancer based on <a href="https://strimzi.io/blog/2019/05/13/accessing-kafka-part-4/" rel="nofollow noreferrer">this tutorial</a> to make it accessible for clients running outside of Kubernetes.</p> <p>Here are my steps:</p> <pre class="lang-bash prettyprint-override"><code>kubectl create namespace hm-kafka kubectl apply --filename=&quot;https://strimzi.io/install/latest?namespace=hm-kafka&quot; --namespace=hm-kafka kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka </code></pre> <p><strong>my-kafka-persistent.yaml</strong> (based on <a href="https://strimzi.io/examples/latest/kafka/kafka-persistent.yaml" rel="nofollow noreferrer">kafka-persistent.yaml</a>):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: hm-kafka spec: kafka: version: 3.4.0 replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: loadbalancer tls: false configuration: brokers: - broker: 0 advertisedPort: 9095 - broker: 1 advertisedPort: 9096 - broker: 2 advertisedPort: 9097 config: offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: &quot;3.4&quot; storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 3 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} </code></pre> <p>When I deploy, I got this issue</p> <p>Pods:</p> <p><a href="https://i.stack.imgur.com/sg6mB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sg6mB.png" alt="enter image description here" /></a></p> <p>Services:</p> <p><a href="https://i.stack.imgur.com/VBC6c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VBC6c.png" alt="enter image description here" /></a></p> <p>This is because <code>advertisedPort</code> option in my YAML file doesn’t really change the port used in the load balancer itself. It changes only the port number used in the <code>advertised.listeners</code> Kafka broker configuration parameter.</p> <p>And they all try to use the port 9094.</p> <p>I am wondering is there a way I can change load balancer port so that bootstrap and brokers have different ports? Will it help make Kafka work? Thanks!</p> <p>Also, I read another similar question and its <a href="https://stackoverflow.com/a/69757166/2000548">answer</a>, does it mean which port to use depends on k3s Kubernetes cluster created by Rancher Desktop and we have no control? Thanks!</p>
<p>To allow clients to access the individual brokers, Strimzi needs to create one <code>type: LoadBalancer</code> service for every broker. In addition, it creates one more service targeting all brokers for bootstrapping (that can be disabled). This normally results in Kubernetes provisioning one load balancer per service -&gt; so there is no need to do any complicated hacks with port numbers.</p> <p>If your environment does not support load balancers like this, maybe the best thing is to try one of the other mechanisms for exposing the Kafka cluster. For example NodePorts or Kubernetes Nginx Ingress.</p> <p>You can also do this your self if you want:</p> <ul> <li>Configure a regular internal listener instead of the <code>type: loadbalancer</code> one</li> <li>Use the <code>advertisedHost</code> and <code>advertisedPort</code> options to configure the hostname and port you want to use for each broker</li> <li>Create your own services with any configuration (e.g. port) you want (make sure the services have some custom name and do not have the Strimzi labels to avoid having them deleted or overwritten by Strimzi)</li> </ul>
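<p>To illustrate the do-it-yourself route from the list above, here is a rough sketch of per-broker <code>LoadBalancer</code> Services created by hand. It assumes the cluster and namespace names from the question (<code>hm-kafka</code>) and selects each broker pod by the <code>statefulset.kubernetes.io/pod-name</code> label that Strimzi puts on broker pods (double-check with <code>kubectl get pods --show-labels</code>); the Service names and external port numbers are arbitrary choices, and <code>targetPort</code> must match the port of the listener you configured:</p> <pre class="lang-bash prettyprint-override"><code># One hand-made Service per broker, with whatever external port you like.
# The external port should match the advertisedPort you set for that broker.
for i in 0 1 2; do
  kubectl apply --namespace=hm-kafka --filename=- &lt;&lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: my-kafka-broker-${i}
spec:
  type: LoadBalancer
  selector:
    strimzi.io/cluster: hm-kafka
    statefulset.kubernetes.io/pod-name: hm-kafka-kafka-${i}
  ports:
    - name: external
      port: $((9095 + i))   # e.g. 9095, 9096, 9097 as in the question
      targetPort: 9094      # the listener port inside the broker
EOF
done
</code></pre>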
<p>I'm reading helm documentation,</p> <blockquote> <p>The templates/ directory is for template files. When Tiller evaluates a chart, it will send all of the files in the templates/ directory through the template rendering engine. Tiller then collects the results of those templates and sends them on to Kubernetes.</p> </blockquote> <p>I have lots of different templates in my template folder, I'm looking for a way to skip those templates that start with &quot;y&quot; and &quot;z&quot; and don't send it to Kubernetes, is there any way I can achieve that? I want to be flexible let's say if statementProvider is x, skip all manifests starting with y and z and do not send it to Kubernetes.</p> <p>I wrote this helper function to extract the list of resources that should be deployed in Kubernetes but I don't know how I can use it:</p> <pre><code>{{- define &quot;statement.resource&quot;}} {{- $statementProvider := lower ( $.Values.statementProvider ) -}} {{- $statementFiles := list -}} {{- range $path, $bytes := .Files.Glob &quot;templates/**&quot; }} {{- if eq $statementProvider &quot;x&quot; -}} {{- if not (or (hasPrefix $path &quot;y&quot;) (hasPrefix $path &quot;z&quot;)) -}} {{- $statementFiles = append $statementFiles $path -}} {{- end }} {{- $statementFiles -}} {{- end }} {{- end }} {{- end }} </code></pre>
<p>What I'm gonna suggest is along the lines of what Michał has already done, but with a small tweak that might work for you.</p> <p>So, your <code>template</code> can look something like this:</p> <pre><code>{{- if .Values.apply_template.true }} ... {{- end }} </code></pre> <p>and add this to your <code>values.yaml</code>:</p> <pre><code>apply_template: true: true false: false </code></pre> <p>Now, as you mentioned: &quot;to skip those templates that start with &quot;y&quot; and &quot;z&quot;&quot;, you can run this simple <code>cmd</code> that would replace <code>true</code> with <code>false</code> in the templates that start with <code>y</code> and <code>z</code>:</p> <pre><code>find . -type f \( -name &quot;y*.yaml&quot; -o -name &quot;z*.yaml&quot; \) -print0 | xargs -0 sed -i '' &quot;s/true/false/g&quot; &amp;&amp; helm template . </code></pre> <p>You will still have to change your templates (one time) to add the <code>if</code> statement, but beyond that, you can control which templates you wanna skip going forward. I have only very quickly tested this on a couple of temp templates (and it works fine), but I hope this gives you some headway to achieve what you wanna do.</p> <p>You might have to tweak the <code>cmd</code> a bit given that I'm running it on OSX - in case you are not, I guess you might have to get rid of the single quotes <code>''</code> after <code>-i</code>.</p>
<p>I'm using Terraform workload-identity module , to create Kubernetes service account in Google Cloud. When i apply the changes, I'm getting below warning.</p> <blockquote> <p>&quot;default_secret_name&quot; is no longer applicable for Kubernetes v1.24.0 and above │ │ with module.app-workload-identity.kubernetes_service_account_v1.main, │<br /> on ../../modules/workload-identity/main.tf line 57, in resource &quot;kubernetes_service_account_v1&quot; &quot;main&quot;: │ 57: resource &quot;kubernetes_service_account_v1&quot; &quot;main&quot; { │ │ Starting from version 1.24.0 Kubernetes does not automatically generate a token for service accounts, in this case, &quot;default_secret_name&quot; will be │ empty</p> </blockquote> <p><strong>Workload-Identity main.tf</strong></p> <pre><code>locals { service_account_tmp = var.google_service_account_email== &quot;&quot; ? &quot;projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com&quot; : var.google_service_account_email service_id = &quot;projects/${var.project_id}/serviceAccounts/cloudsql-sa@${var.project_id}.iam.gserviceaccount.com&quot; k8s_sa_gcp_derived_name = &quot;serviceAccount:${var.project_id}.svc.id.goog[${var.namespace}/${local.output_k8s_name}]&quot; gcp_sa_email = var.google_service_account_email # This will cause terraform to block returning outputs until the service account is created k8s_given_name = var.k8s_sa_name != null ? var.k8s_sa_name : var.name output_k8s_name = var.use_existing_k8s_sa ? local.k8s_given_name : kubernetes_service_account.main[0].metadata[0].name output_k8s_namespace = var.use_existing_k8s_sa ? var.namespace : kubernetes_service_account.main[0].metadata[0].namespace } # resource &quot;google_service_account&quot; &quot;cluster_service_account&quot; { # GCP service account ids must be &lt; 30 chars matching regex ^[a-z](?:[-a-z0-9]{4,28}[a-z0-9])$ # KSA do not have this naming restriction. # account_id = substr(var.name, 0, 30) # display_name = substr(&quot;GCP SA bound to K8S SA ${local.k8s_given_name}&quot;, 0, 100) # project = var.project_id # } resource &quot;kubernetes_namespace&quot; &quot;k8s_namespace&quot; { metadata { name = var.namespace } } # resource &quot;kubernetes_secret_v1&quot; &quot;main&quot; { # metadata { # name = var.name # namespace = var.namespace # annotations = { # &quot;kubernetes.io/service-account.name&quot; = kubernetes_service_account_v1.main.metadata.0.name # &quot;kubernetes.io/service-account.namespace&quot; = kubernetes_service_account_v1.main.metadata.0.namespace # } # generate_name = &quot;${kubernetes_service_account_v1.main.metadata.0.name}-token-&quot; # } # type = &quot;kubernetes.io/service-account-token&quot; # wait_for_service_account_token = true #} resource &quot;kubernetes_service_account&quot; &quot;main&quot; { count = var.use_existing_k8s_sa ? 
0 : 1 metadata { name = var.name namespace = var.namespace annotations = { &quot;iam.gke.io/gcp-service-account&quot; = var.google_service_account_email } } } module &quot;annotate-sa&quot; { source = &quot;terraform-google-modules/gcloud/google//modules/kubectl-wrapper&quot; version = &quot;~&gt; 2.0.2&quot; enabled = var.use_existing_k8s_sa &amp;&amp; var.annotate_k8s_sa skip_download = true cluster_name = var.cluster_name cluster_location = var.location project_id = var.project_id kubectl_create_command = &quot;kubectl annotate --overwrite sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account=${local.gcp_sa_email}&quot; kubectl_destroy_command = &quot;kubectl annotate sa -n ${local.output_k8s_namespace} ${local.k8s_given_name} iam.gke.io/gcp-service-account-&quot; } resource &quot;google_service_account_iam_member&quot; &quot;main&quot; { service_account_id = local.service_id role = &quot;roles/iam.workloadIdentityUser&quot; member = local.k8s_sa_gcp_derived_name } </code></pre> <p>As per the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/service_account" rel="nofollow noreferrer">this</a> documentation , I have tried to add the resource &quot;kubernetes_secret_v1&quot; to create a service account token. But still getting the same warning message.</p>
<p>According to this <a href="https://github.com/hashicorp/terraform-provider-kubernetes/pull/1792" rel="nofollow noreferrer">pull request</a>, the <code>kubernetes_service_account</code> issue has been fixed in the provider using this <a href="https://github.com/hashicorp/terraform-provider-kubernetes/pull/1792/files/04140ea649a2dcdaffb2da3f85dde35320fd97c8#diff-c743e045ffac6c322ed857bb5f5b6efa1b2d854c02de71996f9d937e0242dd03" rel="nofollow noreferrer">manifest</a>.</p> <p>I also found this <a href="https://github.com/yasserisa/terraform-google-kubernetes-engine/commit/a1972155e856c702c13f1196a202f65b71378bde" rel="nofollow noreferrer">alternative solution</a>, where the Terraform resource <code>kubernetes_manifest</code> is used to manually generate the service accounts along with their secret.</p> <p>Can you try that main.tf file and let me know if it works?</p> <p>For more information, follow this <a href="https://github.com/hashicorp/terraform-provider-kubernetes/issues/1724" rel="nofollow noreferrer">issue</a>.</p>
<p>I have updated the SSL certificates on my AKS service. But I am getting CORS error. New certificates are getting reflected though. Can someone provide a solution for it. Here is the TLS part of my ingress file.</p> <pre><code>spec: tls: - hosts: - &quot;prodazure.thesolarlabs.com&quot; secretName: tls-secret </code></pre>
<p>A <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="nofollow noreferrer">CORS</a> error is related to the fact that you are issuing a request (fetch, XHR, ...) from a Javascript origin to another service, the destination/backend, and the origin service is not allowed to according to the headers provided by that destination service.</p> <p>It means that your service, the origin, the one deployed in AKS, is trying contacting another service. In order to avoid the CORS errors, that service needs to provide an <code>Access-Control-Allow-Origin</code> that at least includes the host for the origin service, in your example:</p> <pre><code>Access-Control-Allow-Origin: https://prodazure.thesolarlabs.com </code></pre> <p>Changing a SSL certificate by itself shouldn't be the cause of the CORS error: please, as mentioned, adapt your destination/backend service CORS configuration to point to the new host of the origin service, if you changed it, and be sure that you are configuring <code>Access-Control-Allow-Origin</code> for the right HTTP scheme, <code>https</code> in this case, as well.</p>
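<p>If the backend/destination service is itself exposed through the NGINX ingress controller, those headers can also be added at the Ingress instead of in the application code. A hedged sketch, assuming ingress-nginx and hypothetical names (<code>api-ingress</code>, <code>api.thesolarlabs.com</code>, <code>api-backend</code>):</p> <pre class="lang-bash prettyprint-override"><code># Add CORS headers at the ingress level for the backend the frontend calls
kubectl apply --filename=- &lt;&lt;'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot;
    nginx.ingress.kubernetes.io/cors-allow-origin: &quot;https://prodazure.thesolarlabs.com&quot;
    nginx.ingress.kubernetes.io/cors-allow-methods: &quot;GET, POST, PUT, OPTIONS&quot;
    nginx.ingress.kubernetes.io/cors-allow-headers: &quot;Authorization, Content-Type&quot;
spec:
  ingressClassName: nginx
  rules:
    - host: api.thesolarlabs.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-backend
                port:
                  number: 80
EOF

# Check that a preflight request now returns the CORS headers
curl -s -i -X OPTIONS \
  -H &quot;Origin: https://prodazure.thesolarlabs.com&quot; \
  -H &quot;Access-Control-Request-Method: GET&quot; \
  https://api.thesolarlabs.com/
</code></pre>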
<p>I already have a Google-managed SSL certificate created (with the DNS verification option). I want to use the same certificate in my istio-ingress for SSL. Are there any annotations available for this?</p> <p>We can create a ManagedCertificate resource in GKE, but it uses the load balancer verification option, which does not support wildcard certificates.</p> <p>What should I do if I want to create a certificate like (*.example.com) and attach it to istio-ingress or GKE Ingress?</p>
<p>I faced a similar issue:</p> <ol> <li>have a wildcard certificate from Certificate Manager</li> <li>wanted to use it for multiple k8 services for ingress with tls (in my case all services were grpc, but it doesn't matter would work the same for https)</li> </ol> <p>First of all, GKE does not support Certificate Manager (at least not yet). Details <a href="https://github.com/kubernetes/ingress-gce/issues/1692" rel="nofollow noreferrer">here</a></p> <p>This is a workaround that was suggested in the GitHub issue and worked for me:</p> <ol> <li>Create a placeholder <code>ManagedCertificate</code> object:</li> </ol> <pre><code>apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: labels: &lt; set labels here as needed&gt; name: placeholder-certificate spec: domains: - foo.example.com </code></pre> <p>Ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: networking.gke.io/managed-certificates: &quot;placeholder-certificate&quot; # connecting to dummy ssl, switched via deployment scripts to wildcard ssl kubernetes.io/ingress.class: &quot;gce&quot; spec: defaultBackend: service: name: some-service port: number: XXXX </code></pre> <p>This will make GKE setup target https proxy, which can be updated to point to a certificate map of your certificate:</p> <ol> <li><p><code>kubectl describe ingress XXXX</code> (you will need to wait for it become available, takes little time for everything to launch)</p> </li> <li><p>find the annotation for <code>ingress.kubernetes.io/https-target-proxy</code></p> </li> <li><p>update the target proxy:</p> </li> </ol> <pre><code>gcloud compute target-https-proxies update &lt;target proxy&gt; --certificate-map=&lt;map name&gt; </code></pre>
<p>I am familiar with Kubernetes <a href="https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/" rel="nofollow noreferrer">documentation</a> that describes how to setup limits for PVC. However, what if the container is not assigned PVC?</p> <p>Suppose a Kubernetes container that simply defines:</p> <pre><code>- image: 'redis:7' name: redis </code></pre> <p>... I keep writing data to this Redis instance.</p> <ul> <li>How do I set a quota to ensure that the container does not use more than allocated storage?</li> <li>How to inspect how much storage is already used?</li> </ul> <p>I have tried setting <code>ResourceQuota</code> for ephemeral resources such as:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ResourceQuota metadata: labels: # {{ include &quot;app.resource_labels&quot; . | indent 4 }} name: '{{ .Release.Name }}' spec: hard: configmaps: 10 limits.cpu: 4 limits.ephemeral-storage: 1Gi limits.memory: 10Gi pods: 30 secrets: 5 services: 20 </code></pre> <p>However, when inspecting quota, it always says 0 for <code>ephemeral-storage</code>.</p> <pre><code>kubectl describe quota Name: gaia-review-contra-resource-quota-c79e5b3c Namespace: gaia-review-c79e5b3c Resource Used Hard -------- ---- ---- configmaps 2 10 limits.cpu 21 4 limits.ephemeral-storage 0 1Gi limits.memory 25576Mi 10Gi pods 16 30 secrets 4 5 services 8 20 </code></pre> <p>Therefore, I suspect that something else is not working as it should or I am looking at the wrong place.</p> <p>Meanwhile, the VMs that are running these pods is experiencing disk pressure.</p> <p><a href="https://i.stack.imgur.com/CUQ8B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUQ8B.png" alt="VMs" /></a></p> <p><a href="https://i.stack.imgur.com/114dZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/114dZ.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/5L8AJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5L8AJ.png" alt="enter image description here" /></a></p> <p>My next best theory is that it is actually the Docker image layers that are filling the disk space, but I am unsure how to confirm that or why resources are not being freed.</p>
<p>If the container has no PVC assigned, it writes to the node's local ephemeral storage (the writable container layer, <code>emptyDir</code> volumes, logs) by default.</p> <p>To ensure that the container does not use more than the allocated storage, set an ephemeral-storage limit via <strong><code>spec.containers[].resources.limits.ephemeral-storage</code></strong>; the kubelet evicts the pod if it exceeds that limit.</p> <p>The example below is taken from this <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">official Kubernetes</a> doc; you can set limits on your containers in the same way.</p> <pre><code>containers: - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: &quot;2Gi&quot; limits: ephemeral-storage: &quot;4Gi&quot; </code></pre> <p>As @larks suggested, that document contains a detailed explanation and more methods which will help you, and you can also use <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass" rel="nofollow noreferrer">resource quotas</a> for storage to set namespace-wide limits.</p> <p>For more information you can also refer to the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#example-1" rel="nofollow noreferrer">Resource Management for Pods and Containers</a> doc.</p>
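<p>Regarding the quota showing <code>0</code> for <code>limits.ephemeral-storage</code>: the quota can only count what the containers declare, so pods created without an <code>ephemeral-storage</code> limit contribute nothing to it. One way to give every container in the namespace a limit without editing each Deployment is a <code>LimitRange</code> with defaults; a minimal sketch, using the namespace from your output and arbitrary sizes:</p> <pre class="lang-bash prettyprint-override"><code># Inject default ephemeral-storage requests/limits into containers that declare none
kubectl apply --namespace=gaia-review-c79e5b3c --filename=- &lt;&lt;'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-defaults
spec:
  limits:
    - type: Container
      default:              # applied as the limit when a container declares none
        ephemeral-storage: 512Mi
      defaultRequest:       # applied as the request when a container declares none
        ephemeral-storage: 256Mi
EOF

# Existing pods keep their old spec; restart the deployments so the defaults are injected
kubectl --namespace=gaia-review-c79e5b3c rollout restart deployment
</code></pre>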
<p>EKS slowness: I have an EKS cluster running and I am using the NGINX ingress controller with a Classic Load Balancer. However, when accessing the LB, some requests are extremely slow (not all requests, only a couple of them). I have the LB in the public subnets and my EC2 instances are in the private subnets, and my public subnets also have the elb tag. Is there anything that I am missing?</p>
<p>Install the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/" rel="nofollow noreferrer">AWS Load Balancer Controller</a> and use it to create an NLB for your NGINX ingress service. NLB is more performant than the Classic Load Balancer.</p>
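<p>For reference, one common way to do this is to annotate the ingress-nginx controller Service so the AWS Load Balancer Controller provisions an internet-facing NLB for it. A sketch using the chart's usual release and namespace names (adjust them to your install, and install the AWS Load Balancer Controller first):</p> <pre class="lang-bash prettyprint-override"><code># Re-deploy ingress-nginx so its Service is backed by an NLB managed by the
# AWS Load Balancer Controller instead of a Classic Load Balancer
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.annotations.&quot;service\.beta\.kubernetes\.io/aws-load-balancer-type&quot;=external \
  --set controller.service.annotations.&quot;service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type&quot;=ip \
  --set controller.service.annotations.&quot;service\.beta\.kubernetes\.io/aws-load-balancer-scheme&quot;=internet-facing

# Confirm the Service now has an NLB hostname
kubectl --namespace ingress-nginx get service ingress-nginx-controller
</code></pre>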
<p>I am able to <code>list</code>, <code>get</code> pods, secrets, deployments and many other resources. I have cluster wide access for many resources. Is there some way to see the consolidated list of all permissions that I have in a kubernetes cluster?</p> <p>The result of <code>kubectl auth can-i '*' '*'</code> is <code>no</code> though.</p>
<p>I don't think it is possible to get a consolidated list of all permissions that you have in a k8s cluster.</p> <p>Looking at the <code>help</code> &amp; <code>examples</code> for the <code>kubectl auth can-i</code> command it needs a <code>VERB</code> which is a logical Kubernetes API verb like <code>'get'</code>, <code>'list'</code>, <code>'watch'</code>, <code>'delete'</code>, etc. and a <code>TYPE</code> which is a Kubernetes resource.</p> <p>The result of <code>kubectl auth can-i '*' '*'</code> is understandably <code>no</code> as it checks if you can do everything in your current namespace - which evidently is not true in your case and thus the <code>no</code> response.</p> <p>You will need to run <code>kubectl auth can-i</code> for each resource and each action that you want to check (of course you would be able to use <code>-A</code> or <code>--all-namespaces</code> flag for all namespaces) but I would imagine you can write a script to run a loop on this command for all/some resources to check permissions for all/some actions.</p> <p><strong>Update:</strong></p> <p>If you'd really want to, you could run a script to loop through certain <code>kubectl</code> cmds to achieve this.</p> <p>The script would (could) look like:</p> <pre><code>#!/bin/bash for namespace in $(kubectl get namespaces | awk '{ print $1 }' | tail -n +2); do echo &quot;Current namespace: $namespace&quot; for resource in $(kubectl api-resources --verbs=list --namespaced -o name); do for action in get list watch create update patch delete; do echo &quot;- $action $resource&quot; kubectl auth can-i &quot;$action&quot; &quot;$resource&quot; --namespace=&quot;$namespace&quot; done done done </code></pre> <p>Note: There could be variations of how you do this in <code>bash</code>.</p> <p>I'll explain a bit:</p> <ul> <li>Firstly, we're fetching all the namespaces, printing only the 1st column, and <code>-n +2</code> basically means “start passing through on the second line of output”.</li> <li>Next, we're fetching most (if not all) resource types available in your cluster. Picked up from <a href="https://stackoverflow.com/a/53016918/1722326">here</a>.</li> <li>Iterating through a few common actions supported in K8s. 
Found most (if not all) on <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/" rel="nofollow noreferrer">this</a> page.</li> <li>And then just run <code>kubectl auth can-i</code> cmd on each resource, action, and namespace we got in the earlier commands.</li> </ul> <p>While it works, it takes a lot of time as we have a lot of namespaces and you'd have to go through a lot of output to find something or you would end up redirecting the output to a file and searching for fields.</p> <p>Realistically, you would wanna run it only on a few namespaces at a time to check on a few actions on a few resources.</p> <p>Something like this:</p> <pre><code>#!/bin/bash for namespace in default calico-system; do echo &quot;Current namespace: $namespace&quot; for resource in pods deployments.apps; do for action in get create; do echo &quot;- $action $resource&quot; kubectl auth can-i &quot;$action&quot; &quot;$resource&quot; --namespace=&quot;$namespace&quot; done done done </code></pre> <p>It is a lot faster and would help you easily find what you're looking for.</p> <p>Sample output:</p> <pre><code>Current namespace: default - get pods yes - create pods no - get deployments.apps yes - create deployments.apps no Current namespace: calico-system - get pods yes - create pods no - get deployments.apps yes - create deployments.apps no </code></pre> <p>Hope it helps!</p>
<p>I'm attempting to deploy a dask application on Kubernetes/Azure. I have a Flask application server that is the client of a Dask scheduler/workers.</p> <p>I installed the Dask operator as described <a href="https://kubernetes.dask.org/en/latest/#kubecluster" rel="nofollow noreferrer">here</a>:</p> <pre><code>helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator </code></pre> <p>This created the scheduler and worker pods, I have them running on Kubernetes without errors.</p> <p>For the Flask application, I have a Docker image with the following Dockerfile:</p> <pre><code>FROM daskdev/dask RUN apt-get -y install python3-pip RUN pip3 install flask RUN pip3 install gunicorn RUN pip3 install &quot;dask[complete]&quot; RUN pip3 install &quot;dask[distributed]&quot; --upgrade RUN pip3 install &quot;dask-ml[complete]&quot; </code></pre> <p>Whenever I try to run a function in the workers using the <code>Client</code> interface, I get this error in the scheduler pod:</p> <pre><code>TypeError: update_graph() got an unexpected keyword argument 'graph_header' </code></pre> <p>It seems to me that the Dask image used to run Flask and the Dask Kubernetes that I installed are not compatible or aligned?</p> <p>How to create an image that includes Dask for the Flask server that can be integrated with the Dask Kubernetes package?</p> <p>I run in Flask <code>client.get_versions(check=True)</code> and this is what I get:</p> <p>{'scheduler': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'workers': {'tcp://10.244.0.3:40749': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.0.4:36757': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.1.7:40561': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}}, 'client': {'host': {'python': '3.8.16.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 
'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.16.final.0', 'dask': '2023.4.0', 'distributed': '2023.4.0', 'msgpack': '1.0.5', 'cloudpickle': '2.2.1', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.23.5', 'pandas': '2.0.0', 'lz4': '4.3.2'}}} @ 2023-04-20 13:33:09.921545&quot;}</p>
<p>Solved: I pinned the Dask version in the Dockerfile to 2023.1.0 so that the client matches the Dask version running in the operator-managed scheduler and workers, which fixed the problem.</p>
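<p>For anyone hitting the same mismatch: the fix is simply making the client image install the same <code>dask</code>/<code>distributed</code> versions that <code>client.get_versions()</code> reports for the scheduler and workers (2023.1.0 in the output above). Roughly, whether these lines run in the Dockerfile or elsewhere while building the Flask image:</p> <pre class="lang-bash prettyprint-override"><code># Pin the client's dask/distributed to the versions the scheduler reports
pip3 install &quot;dask[complete]==2023.1.0&quot; &quot;distributed==2023.1.0&quot;

# Double-check that the client now agrees with the scheduler and workers
python3 -c &quot;import dask, distributed; print(dask.__version__, distributed.__version__)&quot;
</code></pre>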
<p>I would like to make the result of a text classification model (finBERT pytorch model) available through an endpoint that is deployed on Kubernetes.</p> <p>The whole pipeline is working but it's super slow to process (30 seconds for one sentence) when deployed. If I time the same endpoint in local, I'm getting results in 1 or 2 seconds. Running the docker image in local, the endpoint also takes 2 seconds to return a result.</p> <p>When I'm checking the CPU usage of my kubernetes instance while the request is running, it doesn't go above 35% so I'm not sure it's related to a lack of computation power?</p> <p>Did anyone witness such performances issues when making a forward pass to a pytorch model? Any clues on what I should investigate?</p> <p>Any help is greatly appreciated, thank you!</p> <p>I am currently using</p> <p>limits: cpu: &quot;2&quot; requests: cpu: &quot;1&quot;</p> <p>Python : 3.7 Pytorch : 1.8.1</p>
<p>I had the same issue. Locally my pytorch model would return a prediction in 25 ms and then on Kubernetes it would take 5 seconds. The problem had to do with how many threads torch had available to use. I'm not 100% sure why this works, but reducing the number of threads sped up performance significantly.</p> <p>Set the following environment variable on your kubernetes pod. <code>OMP_NUM_THREADS=1</code></p> <p>After doing that it performed on kubernetes like it did running it locally ~30ms per call.</p> <p>These are my pod limits:</p> <ul> <li>cpu limits <code>1</code></li> <li>mem limits: <code>1500m</code></li> </ul> <p>I was led to discover this from this blog post: <a href="https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html" rel="nofollow noreferrer">https://www.chunyangwen.com/blog/python/pytorch-slow-inference.html</a></p>
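<p>A small sketch of how that could be wired into the deployment (the deployment name <code>finbert-api</code> is hypothetical; an equivalent <code>env:</code> entry in the pod spec achieves the same thing):</p> <pre class="lang-bash prettyprint-override"><code># Inject the variable into an existing Deployment (name is hypothetical)
kubectl set env deployment/finbert-api OMP_NUM_THREADS=1

# The equivalent entry in the pod spec would be:
#   env:
#     - name: OMP_NUM_THREADS
#       value: &quot;1&quot;

# Roll out and confirm the variable is visible inside the container
kubectl rollout status deployment/finbert-api
kubectl exec deploy/finbert-api -- printenv OMP_NUM_THREADS
</code></pre> <p>Alternatively, calling <code>torch.set_num_threads(1)</code> at application startup caps the thread count from code rather than from the environment.</p>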
<p>I ran a bash script from Java code to sort a file of tens of millions of strings of &lt;25 characters. The Java application is running in a Kubernetes pod. The script failed and the error message is</p> <pre><code>bash: line 1: 425 Killed sort /tmp/keys &gt; /tmp/keys_sorted </code></pre> <p>What does 425 actually mean here? How can I fix it?</p>
<p>This means that <code>sort</code> was sent a SIGKILL signal.</p> <p>On modern systems without something weird going on, this <em>almost</em> always means you're out of RAM.</p> <p>Linux supports something called &quot;memory overcommit&quot;. Due to language runtimes (Java among them!) habitually allocating far more virtual memory than they'll ever need matching physical memory for, the kernel allows allocation requests to succeed <em>even if it doesn't have enough memory to back them</em>. When the application actually tries to <em>access</em> those virtual memory pages, the kernel's fault handler gets invoked to try to find physical memory to map to them.</p> <p>Sometimes it can free up space by deleting unused pages -- discarding block cache contents, or memory pages that are mapped to file contents and thus can be reloaded from that file later. Sometimes, however, there's more memory outstanding than can be free'd, and the kernel invokes the &quot;OOM killer&quot; -- where OOM stands for &quot;out of memory&quot; -- to kill some processes and <em>make</em> more free RAM.</p> <hr /> <p>In the case of <code>sort</code>ing a large file specifically:</p> <ul> <li>Make sure you're using GNU sort, not a clone like busybox sort. This is because GNU sort, but not all its smaller clones, supports breaking a large stream into pieces, writing those pieces to disk, and then doing a merge sort to reassemble them later; so it can sort files larger than available RAM.</li> <li>Make sure that you have temporary space that is <em>actually</em> disk. If GNU sort tries to conserve RAM by shuffling contents off to disk <em>that is actually RAM itself</em>, that's obviously not going to go well.</li> <li>Use the GNU <code>sort</code> argument <code>-S</code> to limit the amount of memory GNU sort will allocate before shunting data to temporary files on disk. (For example, one can use <code>sort -S 32M</code> to allow 32MB of RAM to be allocated for working space).</li> </ul>
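<p>Putting those points together, a hedged example of what the invocation could look like (the buffer size and the temp directory are arbitrary choices; point <code>-T</code> at a disk-backed volume that is actually mounted in the pod, not a tmpfs):</p> <pre class="lang-bash prettyprint-override"><code># Keep sort's in-memory buffer small and spill to a real disk-backed directory;
# LC_ALL=C also speeds up sorting of short ASCII keys considerably.
mkdir -p /data/sort-tmp            # assumes /data is a disk-backed volume, not tmpfs
LC_ALL=C sort -S 64M -T /data/sort-tmp /tmp/keys &gt; /tmp/keys_sorted
</code></pre>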
<p>I've been trying to find a way to take a data collection that contains three fields where two of the fields can contain multiple items and convert/expand that data to a list where a row is added for every item in one of the fields.</p> <p>The data is about kubernetes pods and the containers and images inside them. I can get the raw data just fine I just can't transform it.</p> <p>the json data is this:</p> <pre><code> - name: condense the raw data set_fact: pod_data: &quot;{{ pod_data|default([]) + [{'pod_name':item.pod_name,'container_name':item.container_name,'image_path':item.image_path }] }}&quot; with_items: &quot;{{ podinfo | json_query('resources[*].{pod_name: metadata.name, container_name: spec.containers[*].name, image_path: spec.containers[*].image }') }}&quot; - debug: var=pod_data &quot;pod_data&quot;: [ { &quot;container_name&quot;: [ &quot;container1&quot;, &quot;container2&quot; ], &quot;image_path&quot;: [ &quot;path1&quot;, &quot;path2&quot; ], &quot;pod_name&quot;: &quot;pod1&quot; }, { &quot;container_name&quot;: [ &quot;container3&quot;, &quot;container4&quot; ], &quot;image_path&quot;: [ &quot;path3&quot;, &quot;path4&quot; ], &quot;pod_name&quot;: &quot;pod2&quot; } </code></pre> <p>I want to output the data in a 1:1 format like this:</p> <pre><code>&quot;pod_data&quot;: [ { &quot;container_name&quot;: &quot;container1&quot; , &quot;image_path&quot;: &quot;path1&quot; , &quot;pod_name&quot;: &quot;pod1&quot; }, { &quot;container_name&quot;: &quot;container2&quot; , &quot;image_path&quot;: &quot;path2&quot; ], &quot;pod_name&quot;: &quot;pod1&quot; }, { &quot;container_name&quot;: &quot;container3&quot; , &quot;image_path&quot;: &quot;path3&quot; , &quot;pod_name&quot;: &quot;pod2&quot; } </code></pre> <p>I'm not good enough with loops to know how to do a nested style loop when there are multiple sub elements in ansible. I could do it in powershell but that doesn't help me here.</p> <p>Is my only choice some sort of include_tasks loop while passing each item and container/image count so I can split it that many times?</p>
<p>Create the structure</p> <pre class="lang-yaml prettyprint-override"><code> pod_data_update: | [{% for i in pod_data %} {% for n,p in i.container_name|zip(i.image_path) %} {container_name: {{ n }}, image_path: {{ p }}, pod_name: {{ i.pod_name }}}, {% endfor %} {% endfor %}] </code></pre> <p>, convert it from YAML, and update <em>pod_data</em></p> <pre class="lang-yaml prettyprint-override"><code> - set_fact: pod_data: &quot;{{ pod_data_update|from_yaml }}&quot; </code></pre> <p>gives what you want</p> <pre class="lang-yaml prettyprint-override"><code> pod_data: - {container_name: container1, image_path: path1, pod_name: pod1} - {container_name: container2, image_path: path2, pod_name: pod1} - {container_name: container3, image_path: path3, pod_name: pod2} - {container_name: container4, image_path: path4, pod_name: pod2} </code></pre> <hr> <sup> <p>Example of a complete playbook for testing</p> <pre class="lang-yaml prettyprint-override"><code>- hosts: localhost vars: pod_data: - container_name: - container1 - container2 image_path: - path1 - path2 pod_name: pod1 - container_name: - container3 - container4 image_path: - path3 - path4 pod_name: pod2 pod_data_update: | [{% for i in pod_data %} {% for n,p in i.container_name|zip(i.image_path) %} {container_name: {{ n }}, image_path: {{ p }}, pod_name: {{ i.pod_name }}}, {% endfor %} {% endfor %}] tasks: - set_fact: pod_data: &quot;{{ pod_data_update|from_yaml }}&quot; - debug: var: pod_data|to_yaml </code></pre> </sup>
<p>When I create a dashboard in Grafana and export it in JSON, the role, user and group permissions I define are not saved with it.</p> <p>I am looking for a way to assign permissions for each dashboard in a Grafana deployment with Helm, in which I already include the dashboards to use.</p> <p>Does anyone know if this is possible? I can't find a way to do it, can it only be done from web or from API?</p> <p>Thanks.</p>
<p>Yes, you can assign permissions to dashboards in Grafana using Helm, as well as through the Grafana web UI or API.</p> <p>To assign permissions using Helm, you can define a custom Grafana dashboard provisioning configuration file in your Helm chart's values.yaml or in a separate YAML file, and specify the appropriate permissions for each dashboard using the datasources, dashboards, and users sections. Here's an example:</p> <pre><code># values.yaml or custom configuration file grafana: provisioning: datasources: - name: &lt;datasource_name&gt; type: &lt;datasource_type&gt; access: proxy &lt;datasource-specific_configurations&gt; # e.g., url, basicAuth, etc. dashboards: - name: &lt;dashboard_name&gt; uid: &lt;dashboard_uid&gt; # unique identifier for the dashboard url: &lt;dashboard_url&gt; # URL of the JSON file for the dashboard permissions: role: &lt;role_name&gt; # role to assign the dashboard to user: &lt;user_name&gt; # user to assign the dashboard to team: &lt;team_name&gt; # team to assign the dashboard to users: - username: &lt;user_name&gt; role: &lt;role_name&gt; </code></pre> <p>In this example, you can specify the datasource configuration, dashboard configuration (including permissions), and user configuration using Helm values. Once you apply the Helm chart, Grafana will provision the dashboards with the specified permissions.</p> <p>Note: Make sure to use the appropriate values for &lt;datasource_name&gt;, &lt;datasource_type&gt;, &lt;dashboard_name&gt;, &lt;dashboard_uid&gt;, &lt;dashboard_url&gt;, &lt;role_name&gt;, &lt;user_name&gt;, and &lt;team_name&gt; in your configuration.</p> <p>Alternatively, you can also assign permissions to dashboards using the Grafana web UI or API. In the web UI, you can go to the dashboard settings, navigate to the &quot;Permissions&quot; tab, and specify the roles, users, or teams that should have access to the dashboard. You can also use the Grafana API to create, update, or delete dashboards with specific permissions using the appropriate API endpoints and payload.</p> <p>Please note that in order to assign permissions to dashboards, you need to have appropriate permissions and roles configured in Grafana. Also, make sure to follow Grafana's documentation and best practices for securing your deployment and managing permissions effectively.</p>
<p><a href="https://i.stack.imgur.com/A2G76.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A2G76.png" alt="enter image description here" /></a></p> <p>The nodeport takes in 3 parameters in the service yaml.</p> <ul> <li>port</li> <li>targetPort</li> <li>nodePort</li> </ul> <p>Since all you wanted was to map a port on the node(nodeport) to the port on the container(targetPort), why do we need to provide the port of the service?</p> <p>Is this because Nodeport is internally implemented <strong>on top of</strong> ClusterIP?</p>
<p>&quot;Nodeport is internally implemented on top of ClusterIP&quot; - correct.</p> <p>The port in the Kubernetes Service definition is used to specify the port on which the service will listen for traffic within the Kubernetes cluster. This is the port that will be exposed to other pods in the cluster as an endpoint for the service. When a request is made to this port by a client within the cluster, the traffic will be routed to one of the pods selected by the Service based on its load balancing algorithm.</p> <p>The nodePort is used to expose the service on a port on the node itself, which allows the service to be accessed from outside the cluster.</p>
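<p>A minimal sketch showing the three fields side by side (the names, selector, and port numbers are arbitrary):</p> <pre class="lang-bash prettyprint-override"><code># A NodePort Service layered on top of the ClusterIP it creates internally
kubectl apply --filename=- &lt;&lt;'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # ClusterIP port, used by other pods inside the cluster
      targetPort: 8080  # containerPort on the selected pods
      nodePort: 30080   # port opened on every node for external clients
EOF
</code></pre>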
<p>I am starting to learn containers and Kubernetes. When watching tutorials on containers and Kubernetes, the instructors often say &quot;You could have thousands of containers&quot;.</p> <p>I am trying to understand how we can end up with thousands of containers.</p> <p>After deploying my single-container app, how are thousands of container instances created?</p> <p>I hope my question makes sense. Thanks!</p>
<p>Adding to GSerg's suggestion:</p> <p>A cluster is a set of nodes running Kubernetes agents, managed by the control plane. You can scale your cluster by adding or removing nodes; the way you do this depends on how your cluster is deployed. The documented limit is 300,000 containers per cluster, which you can find in this official <a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/" rel="nofollow noreferrer">doc</a>.</p> <p>As per this <a href="https://www.netapp.com/devops-solutions/what-are-containers/" rel="nofollow noreferrer">doc</a>:</p> <blockquote> <p>Containerized applications can get complicated, however. When in production, many might require hundreds to thousands of separate containers in production. This is where container runtime environments such as Docker benefit from the use of other tools to orchestrate or manage all the containers in operation.</p> <p>One of the most popular tools for this purpose is Kubernetes, a container orchestrator that recognizes multiple container runtime environments, including Docker.</p> </blockquote> <p>The software that runs containers is called the container runtime. Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).</p> <p>Usually, you can let your cluster choose a Pod's default container runtime. If you need to use more than one container runtime in your cluster, you can specify a RuntimeClass for a Pod to ensure that Kubernetes runs its containers using a specific container runtime. Using RuntimeClass, you can also run multiple Pods with different settings on the same container runtime.</p>
<p>I am creating a github actions step</p> <pre><code>. . . steps: name: export-secrets run: | echo '${{ toJSON(secrets) }}' &gt;&gt; gh-secrets.json . . . </code></pre> <p>suppose the gh-secrets.json file is as below:</p> <pre><code># gh-secrets.json {&quot;A&quot;: &quot;a1&quot;, &quot;B&quot;: &quot;b1&quot;, &quot;C&quot;: &quot;c1&quot;} </code></pre> <p>Then I am going to pull the helm charts and untar them and then move <code>gh-secrets.json</code> into the untar directory. The reason I am doing this is according to <a href="https://helm.sh/docs/chart_template_guide/accessing_files/" rel="nofollow noreferrer">helm docs</a>, &quot;Files outside of a helm application subchart, including those of the parent, cannot be accessed&quot;</p> <p>suppose I have these files in the untar directory</p> <pre><code># values.yaml secrets: - A - B </code></pre> <p>and I have <code>templates/secrets.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: {{ .Release.Name }}-secret data: {{- range .Values.secrtes }} {{ . }}: {{ $.Files.Get &quot;gh-secrets.json&quot; | toJson }} {{- end }} </code></pre> <p>this is the result:</p> <pre><code># Source: mktplc-catalog/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-secret data: A: &quot;'{\&quot;A\&quot;: \&quot;a1\&quot;, \&quot;B\&quot;: \&quot;b1\&quot;, \&quot;C\&quot;: \&quot;c1\&quot;}'&quot; B: &quot;'{\&quot;A\&quot;: \&quot;a1\&quot;, \&quot;B\&quot;: \&quot;b1\&quot;, \&quot;C\&quot;: \&quot;c1\&quot;}'&quot; </code></pre> <p>but I want it to be</p> <pre><code># Source: mktplc-catalog/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-secret data: A: &quot;a1&quot; B: &quot;b1&quot; </code></pre> <p>How can I achieve this?</p>
<p>Helm includes a <a href="https://docs.helm.sh/docs/chart_template_guide/function_list/#fromjson" rel="nofollow noreferrer"><code>fromJson</code></a> function; in fact, its documentation fairly closely matches your updated setup. So you can use this to read in the file, parse it, and then select the specific key out of it.</p> <pre class="lang-yaml prettyprint-override"><code>data: {{- $secrets := .Files.Get &quot;gh-secrets.json&quot; | fromJson -}} {{- range .Values.secrets }} {{ . }}: {{ index $secrets . }} {{- end }} </code></pre> <p>You appear to have some control over the input structure here. If you can wrap the secret values in some top-level key</p> <pre class="lang-bash prettyprint-override"><code>echo '{&quot;secretValues&quot;: ${{ toJSON(secrets) }}}' &gt; gh-secrets.json </code></pre> <p>then you can provide this as an extra values file to Helm (valid JSON files are also valid YAML)</p> <pre class="lang-bash prettyprint-override"><code>helm install -f gh-secrets.json ... </code></pre> <p>This file doesn't need to be in the chart directory. Now Helm will parse it for you, and it will be available as <code>.Values.secretValues</code> matching the injected key. (Remember that the <code>range</code> loop resets the value of <code>.</code>, so you will need to save its value outside the loop.)</p> <p>If this technique works for you, it might be possible to use it to directly inject the secret values without the layer of indirection; use something like</p> <pre class="lang-yaml prettyprint-override"><code>data: {{ .Values.secretValues | toYaml | indent 2 }} </code></pre>
<p>I'm running this command:</p> <pre><code># kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system,metadata.namespace!=monitoring,metadata.namespace!=rtf </code></pre> <p>Which gives me output like this:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE 123456-1234-1234-1234-123456789 some-app-123456 2/2 Running 0 10m 123456-1234-1234-1234-123456789 some-app-789112 1/2 Running 0 10m </code></pre> <p>I would like to be able to filter on the READY column, but I can't seem to find the right field-selector value.</p> <p>Is this possible?</p> <p>I've tried searching around for a list of available field-selectors, and haven't had any luck. It's possible that one doesn't exist.</p>
<p>I don't think <code>kubectl get pods</code> supports <code>field selectors</code> based on the <code>READY</code> column directly.</p> <p>But <code>kubectl</code> can export the resource configuration directly as JSON with <code>-o json</code>. Then, we can use <code>jq</code> to read, parse, and filter the K8s object results from <code>kubectl</code>.</p> <p>In your case, you could use a command like this to filter all pods (excluding the <code>pods</code> from namespaces <code>kube-system</code>, <code>monitoring</code> &amp; <code>rtf</code>) not in <code>ready</code> state:</p> <pre><code>kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system,metadata.namespace!=monitoring,metadata.namespace!=rtf -ojson | jq '.items[] | select(.status.containerStatuses[].ready==false) | .metadata.namespace + &quot;/&quot; + .metadata.name' </code></pre> <p>and/or change <code>ready==false</code> to <code>ready==true</code> to get the pods in <code>ready</code> state.</p> <p>Have a look at <a href="https://medium.com/geekculture/my-jq-cheatsheet-34054df5b650" rel="nofollow noreferrer">this</a> article for many more such uses of <code>jq</code> with <code>kubectl</code>.</p> <p>Hope it helps.</p>
<p>I'm having a pain point with Kubernetes and I hope someone can help me out. I'm trying to see the contents of a PVC (Persistent Volume Claim), but it seems that the only way to do so is to mount the PVC into a container and check what's inside.</p> <p>As you can imagine, this is quite cumbersome and not very practical. I was wondering if there are any tools or solutions that could make this process easier?</p> <p>I've searched around but haven't found anything yet. If anyone has any ideas or suggestions, please let me know! I'm eager to hear your thoughts and experiences.</p> <p>Thank you in advance!</p>
<p>Unfortunately, there is no built-in way to do this. To get around this, you would have to attach the disk to a container or VM and inspect its contents</p> <p>You need to add the pvc to a pod and copy the files using kubectl cp or list the files using kubectl exec to check the contents of PVC.</p> <p>A simpler way is to create an inspector POD to inspect the contents of PVC and clean up the pod easily as explained in this <a href="https://frank.sauerburger.io/2021/12/01/inspect-k8s-pvc.html" rel="nofollow noreferrer">blog</a> by Frank Sauerburger.</p>
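<p>To make the inspector-pod route concrete, here is a throwaway pod sketch (replace <code>my-claim</code> with your PVC's name; note that a ReadWriteOnce volume already mounted on another node may prevent this pod from scheduling):</p> <pre class="lang-bash prettyprint-override"><code># Spin up a minimal pod that mounts the claim read-only-ish for browsing
kubectl apply --filename=- &lt;&lt;'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  restartPolicy: Never
  containers:
    - name: inspector
      image: busybox:1.36
      command: [&quot;sleep&quot;, &quot;3600&quot;]
      volumeMounts:
        - name: data
          mountPath: /pvc
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim
EOF

# Browse or copy the contents, then clean up
kubectl exec pvc-inspector -- ls -la /pvc
kubectl cp pvc-inspector:/pvc ./pvc-contents
kubectl delete pod pvc-inspector
</code></pre>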
<p>Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo.</p> <p>So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).  </p> <p>I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo.</p> <pre><code>INFO: detected pid 1, running init handler I0411 20:50:31.009097 12 main.go:401] &quot;level&quot;=0 &quot;msg&quot;=&quot;starting up&quot; &quot;pid&quot;=12 &quot;args&quot;=[&quot;/git-sync&quot;,&quot;-wait=60&quot;,&quot;-repo=https://github.com/jorgeavelar98/AirflowProject.git&quot;,&quot;-branch=master&quot;,&quot;-root=/opt/airflow/dags&quot;,&quot;-username=jorgeavelar98&quot;,&quot;-password-file=/etc/git-secret/token&quot;] I0411 20:50:31.029064 12 main.go:950] &quot;level&quot;=0 &quot;msg&quot;=&quot;cloning repo&quot; &quot;origin&quot;=&quot;https://github.com/jorgeavelar98/AirflowProject.git&quot; &quot;path&quot;=&quot;/opt/airflow/dags&quot; I0411 20:50:31.031728 12 main.go:956] &quot;level&quot;=0 &quot;msg&quot;=&quot;git root exists and is not empty (previous crash?), cleaning up&quot; &quot;path&quot;=&quot;/opt/airflow/dags&quot; I0411 20:50:31.894074 12 main.go:760] &quot;level&quot;=0 &quot;msg&quot;=&quot;syncing git&quot; &quot;rev&quot;=&quot;HEAD&quot; &quot;hash&quot;=&quot;18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; I0411 20:50:31.907256 12 main.go:800] &quot;level&quot;=0 &quot;msg&quot;=&quot;adding worktree&quot; &quot;path&quot;=&quot;/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; &quot;branch&quot;=&quot;origin/master&quot; I0411 20:50:31.911039 12 main.go:860] &quot;level&quot;=0 &quot;msg&quot;=&quot;reset worktree to hash&quot; &quot;path&quot;=&quot;/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; &quot;hash&quot;=&quot;18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; I0411 20:50:31.911065 12 main.go:865] &quot;level&quot;=0 &quot;msg&quot;=&quot;updating submodules&quot; </code></pre> <p> </p> <p><strong>However, despite their being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI</strong></p> <p>This is our scheduler containers/volumes yaml definition for reference. 
We have something similar for workers and webserver</p> <pre><code> containers:         - name: airflow-scheduler           image: &lt;redacted&gt;           imagePullPolicy: IfNotPresent           envFrom:             - configMapRef:                 name: &quot;AIRFLOW_SERVICE_NAME-env&quot;           env:                         &lt;redacted&gt;           resources:              requests:               memory: RESOURCE_MEMORY               cpu: RESOURCE_CPU           volumeMounts:             - name: scripts               mountPath: /home/airflow/scripts             - name: dags-data               mountPath: /opt/airflow/dags               subPath: dags             - name: dags-data               mountPath: /opt/airflow/plugins               subPath: plugins             - name: variables-pools               mountPath: /home/airflow/variables-pools/             - name: airflow-log-config               mountPath: /opt/airflow/config           command:             - &quot;/usr/bin/dumb-init&quot;             - &quot;--&quot;           args:             &lt;redacted&gt;         - name: git-sync           image: registry.k8s.io/git-sync/git-sync:v3.6.5           args:             - &quot;-wait=60&quot;             - &quot;-repo=&lt;repo&gt;&quot;             - &quot;-branch=master&quot;             - &quot;-root=/opt/airflow/dags&quot;             - &quot;-username=&lt;redacted&gt;&quot;             - &quot;-password-file=/etc/git-secret/token&quot;           volumeMounts:             - name: git-secret               mountPath: /etc/git-secret               readOnly: true             - name: dags-data               mountPath: /opt/airflow/dags       volumes:         - name: scripts           configMap:             name: AIRFLOW_SERVICE_NAME-scripts             defaultMode: 493         - name: dags-data           emptyDir: {}         - name: variables-pools           configMap:             name: AIRFLOW_SERVICE_NAME-variables-pools             defaultMode: 493         - name: airflow-log-config           configMap:             name: airflow-log-configmap             defaultMode: 493         - name: git-secret           secret:             secretName: github-token </code></pre> <p>What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated!</p>
<p>Looks like my issue was that my worker, scheduler, and web server container had different dag volume mounts from the ones I defined for my git-sync container.</p> <p>This is what I had:</p> <pre><code>containers: - name: airflow-scheduler image: &lt;redacted&gt; imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: &quot;AIRFLOW_SERVICE_NAME-env&quot; env: &lt;redacted&gt; resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags subPath: dags - name: dags-data mountPath: /opt/airflow/plugins subPath: plugins - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config </code></pre> <p>And the following edits made it work. I removed the dag subpath and plugins volume mount:</p> <pre><code>containers: - name: airflow-scheduler image: &lt;redacted&gt; imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: &quot;AIRFLOW_SERVICE_NAME-env&quot; env: &lt;redacted&gt; resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config </code></pre>
<p>In a Deployment of Kubernetes, what is the difference between <code>spec.containers.name</code> and <code>spec.hostname</code>?</p> <pre><code>spec: containers: - env: - name: CONFIG_HOME value: /config image: my-image:1.0.0.0-SNAPSHOT name: name-a ports: - containerPort: 8080 ... hostname: name-b </code></pre>
<p><strong>Container name</strong></p> <p><code>spec.containers.name</code> is just the <strong>name</strong> you give the container to identify it.</p> <p>It is useful for debugging a <strong>Pod</strong> when it is failing, or when running <strong>multiple containers</strong> inside a <strong>single Pod</strong>.</p> <p>Get the logs of a <strong>specific</strong> container:</p> <pre><code>kubectl logs &lt;pod-name&gt; -c &lt;container-name&gt; </code></pre> <p><strong>Hostname</strong></p> <p>Only a <strong>Service</strong> gets a <strong>DNS</strong> record, not a Pod, so you can't reach a specific Pod by <strong>DNS</strong> name unless you are using a <strong>StatefulSet</strong> (or give the Pod a hostname/subdomain as below).</p> <pre><code>hostname: busybox-2 subdomain: busybox-subdomain </code></pre> <p>With <strong>hostname</strong> and <strong>subdomain</strong> you can give a <strong>DNS</strong> mapping (an FQDN, fully qualified domain name) to the Pod.</p> <p>You will be able to reach that <strong>Pod</strong> at</p> <pre><code>hostname.subdomain.namespace-name.svc.cluster.local </code></pre> <p>i.e. you can <strong>curl</strong> <code>hostname.subdomain.namespace-name.svc.cluster.local</code>.</p>
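<p>One detail worth adding: for the <code>hostname.subdomain</code> FQDN to actually resolve, a headless Service with the same name as the subdomain must exist in the same namespace and select the Pod. A minimal sketch (names follow the snippet above; the image, label, and port are arbitrary choices):</p> <pre class="lang-bash prettyprint-override"><code># Headless Service named after the subdomain, plus a Pod using hostname/subdomain
kubectl apply --filename=- &lt;&lt;'EOF'
apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain   # must equal the pod's spec.subdomain
spec:
  clusterIP: None           # headless
  selector:
    app: busybox
  ports:
    - name: placeholder
      port: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-2
  labels:
    app: busybox
spec:
  hostname: busybox-2
  subdomain: busybox-subdomain
  containers:
    - name: busybox
      image: busybox:1.36
      command: [&quot;sleep&quot;, &quot;3600&quot;]
EOF

# From another pod in the same (default) namespace:
#   nslookup busybox-2.busybox-subdomain.default.svc.cluster.local
</code></pre>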
<p>This doc <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity</a> explains that if you enable workload identity on a running cluster and create a new node on that cluster which would have workload identity automatically enabled ,then you should configure your application to use workload identity before moving the pods over to the new node pool.</p> <p>How do you know if your application needs to be configured to use workload identity or not? I'm guessing it depends on what services your application currently makes use of and how they currently authenticate? What would the consequences be of not configuring them to use workload identity before moving them to the new node pool where workload identity is enabled?</p> <p>Some context. I'm busy enabling GKE backup, but to get this running I need to enable workload identity on the cluster and then obviously on the node pool too.</p>
<blockquote> <p>How do you know if your application needs to be configured to use workload identity or not?</p> </blockquote> <p>That is hard to say in general; it depends on whether your application calls any <strong>GCP</strong> service and how it <strong>authenticates</strong>.</p> <p>If it doesn't call GCP services and just runs simple <strong>APIs/tasks</strong>, you can run it without <strong>Workload Identity</strong>: just drain the old nodes and migrate the pods to the new pool.</p> <p>If you authenticate to <strong>Google</strong> services with <strong>OAuth</strong>, a service account <strong>key</strong> or another method, for example a credential stored in an <strong>environment variable</strong>, that will keep working after the move; however, the best practice is to switch to <strong>Workload Identity</strong> for security reasons.</p>
<p>There are a couple of things that decide the impact, depending on how the applications authenticate to Google:</p> <ol> <li>No authentication: can just be moved</li> <li>Authenticating with service account JSON key files: can be moved, but it is recommended to switch to workload identity for security reasons</li> <li>Authenticating with the default Compute Engine service account: not possible to move as-is, you will probably need to set up workload identity first to provide a more suitable SA for the pods (see the sketch below)</li> </ol>
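<p>For case 3, a rough sketch of the usual setup (project, namespace and account names are placeholders, not taken from the question):</p> <pre><code># Allow the Kubernetes ServiceAccount (KSA) to impersonate a Google ServiceAccount (GSA)
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member &quot;serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]&quot;

# Annotate the KSA so pods that use it authenticate as the GSA
kubectl annotate serviceaccount KSA_NAME \
  --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
</code></pre> <p>The pods then need <code>serviceAccountName: KSA_NAME</code> in their spec.</p>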
<p>Im attempting to incorporate git-sync sidecar container into my Airflow deployment yaml so my private Github repo gets synced to my Airflow Kubernetes env every time I make a change in the repo.</p> <p>So far, it successfully creates a git-sync container along with our scheduler, worker, and web server pods, each in their respective pod (ex: scheduler pod contains a scheduler container and gitsync container).  </p> <p>I looked at the git-sync container logs and it looks like it successfully connects with my private repo (using a personal access token) and prints success logs every time I make a change to my repo.</p> <pre><code>INFO: detected pid 1, running init handler I0411 20:50:31.009097 12 main.go:401] &quot;level&quot;=0 &quot;msg&quot;=&quot;starting up&quot; &quot;pid&quot;=12 &quot;args&quot;=[&quot;/git-sync&quot;,&quot;-wait=60&quot;,&quot;-repo=https://github.com/jorgeavelar98/AirflowProject.git&quot;,&quot;-branch=master&quot;,&quot;-root=/opt/airflow/dags&quot;,&quot;-username=jorgeavelar98&quot;,&quot;-password-file=/etc/git-secret/token&quot;] I0411 20:50:31.029064 12 main.go:950] &quot;level&quot;=0 &quot;msg&quot;=&quot;cloning repo&quot; &quot;origin&quot;=&quot;https://github.com/jorgeavelar98/AirflowProject.git&quot; &quot;path&quot;=&quot;/opt/airflow/dags&quot; I0411 20:50:31.031728 12 main.go:956] &quot;level&quot;=0 &quot;msg&quot;=&quot;git root exists and is not empty (previous crash?), cleaning up&quot; &quot;path&quot;=&quot;/opt/airflow/dags&quot; I0411 20:50:31.894074 12 main.go:760] &quot;level&quot;=0 &quot;msg&quot;=&quot;syncing git&quot; &quot;rev&quot;=&quot;HEAD&quot; &quot;hash&quot;=&quot;18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; I0411 20:50:31.907256 12 main.go:800] &quot;level&quot;=0 &quot;msg&quot;=&quot;adding worktree&quot; &quot;path&quot;=&quot;/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; &quot;branch&quot;=&quot;origin/master&quot; I0411 20:50:31.911039 12 main.go:860] &quot;level&quot;=0 &quot;msg&quot;=&quot;reset worktree to hash&quot; &quot;path&quot;=&quot;/opt/airflow/dags/18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; &quot;hash&quot;=&quot;18d3c8e19fb9049b7bfca9cfd8fbadc032507e03&quot; I0411 20:50:31.911065 12 main.go:865] &quot;level&quot;=0 &quot;msg&quot;=&quot;updating submodules&quot; </code></pre> <p> </p> <p><strong>However, despite their being no error logs in my git-sync container logs, I could not find any of the files in the destination directory where my repo is supposed to be synced into (/opt/airflow/dags). Therefore, no DAGs are appearing in the Airflow UI</strong></p> <p>This is our scheduler containers/volumes yaml definition for reference. 
We have something similar for workers and webserver</p> <pre><code>containers:
  - name: airflow-scheduler
    image: &lt;redacted&gt;
    imagePullPolicy: IfNotPresent
    envFrom:
      - configMapRef:
          name: &quot;AIRFLOW_SERVICE_NAME-env&quot;
    env:
      &lt;redacted&gt;
    resources:
      requests:
        memory: RESOURCE_MEMORY
        cpu: RESOURCE_CPU
    volumeMounts:
      - name: scripts
        mountPath: /home/airflow/scripts
      - name: dags-data
        mountPath: /opt/airflow/dags
        subPath: dags
      - name: dags-data
        mountPath: /opt/airflow/plugins
        subPath: plugins
      - name: variables-pools
        mountPath: /home/airflow/variables-pools/
      - name: airflow-log-config
        mountPath: /opt/airflow/config
    command:
      - &quot;/usr/bin/dumb-init&quot;
      - &quot;--&quot;
    args:
      &lt;redacted&gt;
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v3.6.5
    args:
      - &quot;-wait=60&quot;
      - &quot;-repo=&lt;repo&gt;&quot;
      - &quot;-branch=master&quot;
      - &quot;-root=/opt/airflow/dags&quot;
      - &quot;-username=&lt;redacted&gt;&quot;
      - &quot;-password-file=/etc/git-secret/token&quot;
    volumeMounts:
      - name: git-secret
        mountPath: /etc/git-secret
        readOnly: true
      - name: dags-data
        mountPath: /opt/airflow/dags
volumes:
  - name: scripts
    configMap:
      name: AIRFLOW_SERVICE_NAME-scripts
      defaultMode: 493
  - name: dags-data
    emptyDir: {}
  - name: variables-pools
    configMap:
      name: AIRFLOW_SERVICE_NAME-variables-pools
      defaultMode: 493
  - name: airflow-log-config
    configMap:
      name: airflow-log-configmap
      defaultMode: 493
  - name: git-secret
    secret:
      secretName: github-token
</code></pre> <p>What can be the issue? I couldn't find much documentation that could help me further investigate. Any help and guidance would be greatly appreciated!</p>
<p>Your problem could be probably related to the directory structure you are defining across the different containers.</p> <p>It is unclear in your question but, according to your containers definitions, your git repository should contain at least <code>dags</code> and <code>plugins</code> as top level directories:</p> <pre><code>/ ├─ dags/ ├─ plugins/ </code></pre> <p>This structure resembles a typical <code>airflow</code> folder structure: I assume, that is the one you configured.</p> <p>Then, please, try using this slightly modified version of your Kubernetes configuration:</p> <pre class="lang-yaml prettyprint-override"><code> containers: - name: airflow-scheduler image: &lt;redacted&gt; imagePullPolicy: IfNotPresent envFrom: - configMapRef: name: &quot;AIRFLOW_SERVICE_NAME-env&quot; env: &lt;redacted&gt; resources: requests: memory: RESOURCE_MEMORY cpu: RESOURCE_CPU volumeMounts: - name: scripts mountPath: /home/airflow/scripts - name: dags-data mountPath: /opt/airflow/dags subPath: dags - name: dags-data mountPath: /opt/airflow/plugins subPath: plugins - name: variables-pools mountPath: /home/airflow/variables-pools/ - name: airflow-log-config mountPath: /opt/airflow/config command: - &quot;/usr/bin/dumb-init&quot; - &quot;--&quot; args: &lt;redacted&gt; - name: git-sync image: registry.k8s.io/git-sync/git-sync:v3.6.5 args: - &quot;-wait=60&quot; - &quot;-repo=&lt;repo&gt;&quot; - &quot;-branch=master&quot; - &quot;-root=/opt/airflow&quot; - &quot;-username=&lt;redacted&gt;&quot; - &quot;-password-file=/etc/git-secret/token&quot; volumeMounts: - name: git-secret mountPath: /etc/git-secret readOnly: true - name: dags-data mountPath: /opt volumes: - name: scripts configMap: name: AIRFLOW_SERVICE_NAME-scripts defaultMode: 493 - name: dags-data emptyDir: {} - name: variables-pools configMap: name: AIRFLOW_SERVICE_NAME-variables-pools defaultMode: 493 - name: airflow-log-config configMap: name: airflow-log-configmap defaultMode: 493 - name: git-secret secret: secretName: github-token </code></pre> <p>Note that we basically changed the <code>root</code> argument of the <code>git-sync</code> container removing <code>/dags</code>.</p> <p>If it doesn't work, please, try including and tweaking the value of the <a href="https://github.com/kubernetes/git-sync/tree/release-3.x#primary-flags" rel="nofollow noreferrer"><code>--dest</code></a> <code>git-sync</code> flag, I think it could be of help as well.</p>
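<p>If you end up experimenting with <code>--dest</code>, here is a rough sketch of the sidecar args (this is an assumption about your repo layout, not a verified fix): git-sync checks the repo out under <code>root</code> and publishes it through a symlink named by <code>dest</code>, so with the values below the synced tree would appear at <code>/opt/airflow/dags/repo</code> and Airflow's <code>dags_folder</code> would have to point there.</p> <pre><code>        - name: git-sync
          image: registry.k8s.io/git-sync/git-sync:v3.6.5
          args:
            - &quot;-wait=60&quot;
            - &quot;-repo=&lt;repo&gt;&quot;
            - &quot;-branch=master&quot;
            - &quot;-root=/opt/airflow/dags&quot;
            - &quot;-dest=repo&quot;   # publishes the checked-out tree as /opt/airflow/dags/repo
            - &quot;-username=&lt;redacted&gt;&quot;
            - &quot;-password-file=/etc/git-secret/token&quot;
</code></pre>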
<p>i wan't to send pod log to ELK, buat after deploying fluentd i get Error, i got from tutorial Official Fluentd documentation</p> <p>EKS Version 1.22</p> <p>i put Suppress_Type_Name On, it's not solved this issue</p> <pre><code>[2022/06/20 16:23:07] [error] [output:es:es.0] HTTP status=400 URI=/_bulk, response: {&quot;error&quot;:{&quot;root_cause&quot;:[{&quot;type&quot;:&quot;illegal_argument_exception&quot;,&quot;reason&quot;:&quot;Action/metadata line [1] contains an unknown parameter [_type]&quot;}],&quot;type&quot;:&quot;illegal_argument_exception&quot;,&quot;reason&quot;:&quot;Action/metadata line [1] contains an unknown parameter [_type]&quot;},&quot;status&quot;:400} </code></pre> <p>my configmap</p> <pre><code> fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log.containers. Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On Replace_Dots On Retry_Limit False </code></pre>
<p>I was able to resolve the issue by these 3 steps,</p> <p>step 1. You need to update your fluentbit image to latest</p> <blockquote> <pre><code> image: fluent/fluent-bit:2.1.1 </code></pre> </blockquote> <p>You can get the deployment file from <a href="https://docs.fluentbit.io/manual/v/1.5/installation/kubernetes" rel="nofollow noreferrer">here</a></p> <p>step 2: Add &quot;Suppress_Type_Name On&quot; to output-elasticsearch.conf</p> <pre><code> output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} HTTP_User ${FLUENT_ELASTICSEARCH_USER} HTTP_Passwd ${FLUENT_ELASTICSEARCH_PASSWORD} Logstash_Format On Replace_Dots On Retry_Limit False Suppress_Type_Name On </code></pre> <p>Step 3. Delete the fluentbit pods and reapply it</p> <pre><code>kubectl delete -f fluentbit-ds.yaml kubectl apply -f fluentbit-ds.yaml </code></pre>
<p>I'm invoking Kaniko (I Docker image that can build Docker images) successfully in this way (EKS environment):</p> <pre><code>cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --stdin --rm --restart=Never --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --env=AWS_SDK_LOAD_CONFIG=true -- --destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-54-DV-6525-kube --context tar://stdin --label commit=8e3a236f702c689891a50a60acf7e05658fa3939 --label build_url=Sin-Jenkins </code></pre> <p>This works ok, except when there is no enough ephemeral storage available.</p> <p>Now I want to specify limits, like <code>ephemereal-storage</code>. As the <code>--limits</code> option has been removed in recent versions of Kubernetes, I have to use the <code>--overrides</code>, and I have to change many things.</p> <p>Here is how I do it:</p> <pre><code>cat build.tar.gz | kubectl run kaniko-httpd-ex --quiet --restart=Never -i --rm --image=748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0 --overrides='{&quot;apiVersion&quot;:&quot;v1&quot;, &quot;spec&quot;: {&quot;containers&quot;:[{ &quot;name&quot;:&quot;kaniko&quot;, &quot;stdin&quot;: true, &quot;restartPolicy&quot;:&quot;Never&quot;, &quot;image&quot;:&quot;748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0&quot;, &quot;env&quot;:[{&quot;name&quot;:&quot;AWS_SDK_LOAD_CONFIG&quot;,&quot;value&quot;:&quot;true&quot;}], &quot;resources&quot;:{&quot;requests&quot;:{ &quot;ephemeral-storage&quot;:&quot;116Mi&quot;, &quot;memory&quot;: &quot;50Mi&quot;, &quot;cpu&quot;: &quot;50m&quot; }}, &quot;args&quot;:[ &quot;--destination=748960220740.dkr.ecr.eu-west-1.amazonaws.com/httpd-ex:23-04-26_08-57-DV-6525-kube&quot;, &quot;--context&quot;,&quot;tar://stdin&quot; ,&quot;--label&quot;,&quot;commit=8e3a236f702c689891a50a60acf7e05658fa3939&quot;,&quot;--label&quot;,&quot;build_url=Sin-Jenkins&quot;] }]}' </code></pre> <p>It launches correctly kaniko, but after 65 seconds, kaniko is killed and the only error message is:</p> <pre><code>error: timed out waiting for the condition </code></pre> <pre><code>$ kubectl get ev | grep kaniko 19m Normal Scheduled pod/kaniko-httpd-ex Successfully assigned sbt-david/kaniko-httpd-ex to ip-10-120-134-171.eu-west-1.compute.internal 19m Normal Pulled pod/kaniko-httpd-ex Container image &quot;748960220740.dkr.ecr.eu-west-1.amazonaws.com/kaniko:0&quot; already present on machine 19m Normal Created pod/kaniko-httpd-ex Created container kaniko 19m Normal Started pod/kaniko-httpd-ex Started container kaniko 17m Normal Killing pod/kaniko-httpd-ex Stopping container kaniko </code></pre> <p>How to specify limits on resources when using <code>stdin</code>?</p> <p>Related info:</p> <ul> <li><a href="https://github.com/kubernetes/kubectl/issues/1101" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/issues/1101</a></li> </ul> <p>Notes:</p> <ul> <li>I cannot use the <code>--limits</code> flag from <code>kubectl run</code> as it has been removed recently.</li> </ul>
<p>Your sample:</p> <pre><code>... &quot;resources&quot;:{&quot;requests&quot;:{ &quot;ephemeral-storage&quot;:&quot;116Mi&quot;}, # &lt;-- incorrect ... </code></pre> <p>Try:</p> <pre><code>... &quot;resources&quot;:{ &quot;requests&quot;:{ &quot;ephemeral-storage&quot;: &quot;116Mi&quot;, &quot;memory&quot;: &quot;50Mi&quot;, &quot;cpu&quot;: &quot;50m&quot; }, ... </code></pre> <p>Here's a working example that runs a pod for 5m. Do <code>kubectl describe pod busybox</code> and check the <code>Requests</code> section:</p> <pre><code>kubectl run busybox --image busybox --overrides ' { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;spec&quot;: { &quot;restartPolicy&quot;: &quot;Never&quot;, &quot;containers&quot;: [ { &quot;name&quot;: &quot;busybox&quot;, &quot;image&quot;: &quot;busybox&quot;, &quot;command&quot;: [&quot;ash&quot;, &quot;-c&quot;, &quot;sleep 300&quot;], &quot;resources&quot;: { &quot;requests&quot;:{ &quot;ephemeral-storage&quot;: &quot;116Mi&quot;, &quot;memory&quot;: &quot;50Mi&quot;, &quot;cpu&quot;: &quot;50m&quot; }}}]}}' </code></pre>
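<p>Since the original goal was to cap ephemeral storage (not only request it), a <code>limits</code> block can sit next to <code>requests</code> inside the same override; the values below are purely illustrative:</p> <pre><code>&quot;resources&quot;: {
  &quot;requests&quot;: {
    &quot;ephemeral-storage&quot;: &quot;116Mi&quot;,
    &quot;memory&quot;: &quot;50Mi&quot;,
    &quot;cpu&quot;: &quot;50m&quot;
  },
  &quot;limits&quot;: {
    &quot;ephemeral-storage&quot;: &quot;1Gi&quot;,
    &quot;memory&quot;: &quot;200Mi&quot;
  }
}
</code></pre>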
<p>Getting below error while trying to create an ingress endpoint on k8s v1.26.3. i have created a k8s cluster using kops.</p> <p>Error from server</p> <blockquote> <p>(BadRequest): error when creating &quot;ingress.yaml&quot;: Ingress in version &quot;v1&quot; cannot be handled as a Ingress: strict decoding error: unknown field &quot;spec.rules[0].http.paths[0].backend.serviceName&quot;, unknown field &quot;spec.rules[0].http.paths[0].backend.servicePort&quot;</p> </blockquote> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: labels: env: prod app: demo-ms layer: frontend name: prod-demo-ingress namespace: prod annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: demo.abc.in http: paths: - backend: serviceName: prod-demo-service servicePort: 8080 path: / ~~~ </code></pre>
<pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: prod-demo-ingress namespace: prod labels: env: prod app: demo-ms layer: frontend annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: demo.abc.in http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: prod-demo-service port: number: 8080 </code></pre>
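<p>For context: the error comes from mixing the old <code>v1beta1</code>-style backend fields (<code>serviceName</code>/<code>servicePort</code>) with the <code>networking.k8s.io/v1</code> API, which expects <code>backend.service.name</code> and <code>backend.service.port</code> as shown above. If in doubt, you can inspect the schema your cluster expects with:</p> <pre><code>kubectl explain ingress.spec.rules.http.paths.backend.service
</code></pre>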
<p>I'm fairly new to Kubernetes and having some problems understanding some things.</p> <p>I'm setting up a cluster on Hetzner Cloud for client applications. In order to make the ingresses of the applications work, I deployed Hetzner Cloud Controller Manager. My question is, how do I expose the applications so i can access them from the outside on client1.<strong>cluster.example.com</strong>, client2.<strong>cluster.example.com</strong>, etc... The Hetzner loadbalancer that the cloud control manager manages, forwards incoming requests from ports 443 (on the loadbalancer) to 30500 (on the ingress controller) and 80 to 30379.</p> <ol> <li>Is this possible?</li> <li>If so, do i need to set up a DNS record for *.<strong>cluster.example.com</strong>?</li> <li>How do i do this with ssl certificates, how does this work?</li> </ol> <p>any help will be appreciated Thank you :)</p> <p>I tried searching for these things on the internet, but since my lack of experience I couldn't find out whether the situations were applicable to my situation.</p>
<p>First you create a Service for your application, using the default Service type (ClusterIP).</p> <p>Then you deploy ingress-nginx and expose its controller through a Service of type LoadBalancer; the Hetzner Cloud Controller Manager turns that into the Hetzner load balancer you described.</p> <p>Next you create Ingress rules that map the hostnames (client1.cluster.example.com, client2.cluster.example.com, ...) or URL paths to the Services of your applications.</p> <p>You also need to set the annotation on that ingress-nginx Service for your Hetzner location (for example fsn1), as sketched below.</p> <p>If you are using Cluster API Provider Hetzner from Syself (which I recommend), then the docs should guide you. Please create an issue if the docs are missing something.</p> <p>(I work for Syself)</p>
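<p>A rough sketch of what that could look like if you install ingress-nginx with its Helm chart (the annotation keys come from the Hetzner Cloud Controller Manager; the location and the private-IP setting are assumptions you should adapt):</p> <pre><code># values.yaml fragment for the ingress-nginx Helm chart (sketch)
controller:
  service:
    annotations:
      load-balancer.hetzner.cloud/location: fsn1
      load-balancer.hetzner.cloud/use-private-ip: &quot;true&quot;
</code></pre> <p>The resulting LoadBalancer Service gets a public IP from Hetzner; a wildcard DNS record for *.cluster.example.com pointing at that IP then lets the host-based Ingress rules do the per-client routing.</p>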
<p>I was wondering if the ingress controller of GKE comes either any additional costs, like a Google Cloud LoadBalancer, and if so is it possible to expose services with an ingress controller without the Google Cloud LoadBalancer. The same question is for GKE Gateway API.</p> <p>I have tried to look at several pieces of documentation online and it is very contradicting.</p>
<p>There is <strong>no additional</strong> cost for the <strong>Ingress controller</strong> itself on <strong>GKE</strong>; however, you do have to pay for the <strong>HTTP(S) Load Balancer</strong> that the <strong>Ingress controller</strong> creates.</p> <p>The same applies to whichever <strong>ingress controller</strong> you use on <strong>GCP</strong> (GKE's own, <strong>Nginx</strong>, <strong>Apache</strong>, ...): exposing it externally ends up creating a Google Cloud load balancer that you pay for.</p> <p>It is the same story with the <strong>Gateway API</strong>: its <strong>controller</strong> only provides more flexibility, and if you are additionally leveraging <strong>managed SSL certificates</strong> or similar services with the <strong>Gateway</strong>, you might have to pay for those <strong>services</strong> as well.</p> <p>In case you would like to read more about <strong>Ingress vs Gateway</strong>, check out my article: <a href="https://medium.com/google-cloud/kubernetes-ingress-vs-gateway-api-647ee233693d" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-ingress-vs-gateway-api-647ee233693d</a></p>
<p>When low on resources kubernetes starts to re-create pods but newer pods also fail, so they keep growing in number. The cluster becomes unusable. This seems an illogical behaviour. Is it possible to prevent it ? Is it possible to recover without deleting everything ?</p> <pre><code>light@o-node0:~/lh-orchestrator$ k get pod NAME READY STATUS RESTARTS AGE aa344-detect-5cd757f65d-8kz2n 0/1 ContainerStatusUnknown 536 (62m ago) 46h bb756-detect-855f6bcc78-jnfzd 0/1 ContainerStatusUnknown 8 (59m ago) 75m aa344-analyz-5cc6c59d6c-rchkm 0/1 ContainerStatusUnknown 1 46h lh-graphql-77fc996db5-8qcxl 0/1 ContainerStatusUnknown 1 (2d ago) 2d lh-pgadmin-5b598d4d4-shjbz 0/1 ContainerStatusUnknown 1 2d bb756-analyz-8cd7c48f7-k2xh9 0/1 ContainerStatusUnknown 1 75m lh-postgres-698bc448bd-9vkqp 0/1 ContainerStatusUnknown 1 2d lh-pgadmin-5b598d4d4-c4ts4 0/1 ContainerStatusUnknown 1 54m lh-graphql-77fc996db5-btvzx 0/1 ContainerStatusUnknown 1 54m lh-postgres-698bc448bd-99m55 0/1 ContainerStatusUnknown 1 54m aa344-detect-5cd757f65d-qmvcc 0/1 ContainerStatusUnknown 1 58m bb756-detect-855f6bcc78-7lc7g 0/1 ContainerStatusUnknown 1 56m lh-graphql-77fc996db5-7lbms 1/1 Running 0 34m lh-pgadmin-5b598d4d4-l6f7s 0/1 ContainerStatusUnknown 1 34m aa344-analyz-5cc6c59d6c-78ltt 0/1 ContainerStatusUnknown 1 (17m ago) 55m lh-postgres-698bc448bd-gjbf2 0/1 ContainerStatusUnknown 1 34m aa344-detect-5cd757f65d-cbspd 0/1 ContainerStatusUnknown 1 33m bb756-detect-855f6bcc78-qvqsf 0/1 ContainerStatusUnknown 1 32m lh-pgadmin-5b598d4d4-4znww 1/1 Running 0 17m lh-postgres-698bc448bd-xxm28 1/1 Running 0 16m aa344-analyz-5cc6c59d6c-h7vfc 1/1 Running 3 (9m41s ago) 16m bb756-analyz-8cd7c48f7-4tdcp 1/1 Running 7 (10m ago) 54m bb756-detect-855f6bcc78-fgpzx 0/1 Pending 0 2s bb756-detect-855f6bcc78-t4p4q 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-cd6gl 0/1 ContainerStatusUnknown 1 16m aa344-detect-5cd757f65d-dwhf6 0/1 Pending 0 1s </code></pre>
<p>Before deleting the pods, check why they are failing to be created. If it is due to disk or memory pressure, follow the steps below:</p> <ol> <li><p>If you are using Docker, run <code>docker system prune -a</code> to clean up some of the space taken by Docker, then drain the node and restart Docker.</p> </li> <li><p>A container can write any amount of data to the node filesystem. Set a <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">quota</a> (<code>limits.ephemeral-storage, requests.ephemeral-storage</code>) to limit this (see the sketch below).</p> </li> <li><p>You may need to add storage, as Kubernetes needs more space.</p> </li> </ol> <p><strong>For a Deployment:</strong> scale the deployment down so that no new pods are created while the failed ones are removed; when you scale down, Kubernetes deletes the excess pods. Then scale the deployment back up so Kubernetes creates fresh replicas.</p> <p>You can also delete all the pods that are in the Failed phase, without scaling the deployment, by running:</p> <pre><code>kubectl delete pod --field-selector=status.phase==Failed </code></pre> <p>You can find more methods for deleting pods in this <a href="https://komodor.com/learn/kubectl-restart-pod/" rel="nofollow noreferrer">blog</a> by Oren Ninio.</p> <p>Generally it is recommended to use an <a href="https://www.kubecost.com/kubernetes-autoscaling/kubernetes-cluster-autoscaler/" rel="nofollow noreferrer">autoscaler</a> to manage cluster capacity.</p>
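<p>A minimal sketch of such a per-container ephemeral-storage setting (names and sizes are illustrative only):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage-limits   # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          ephemeral-storage: &quot;1Gi&quot;
        limits:
          ephemeral-storage: &quot;2Gi&quot;   # the pod is evicted if it writes more than this
</code></pre>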
<p>While using <code>Kubernetes API</code>, I came across this issue where I am not able to <code>pod exec</code> into another pod using the API.</p> <p>I made sure I am running a <code>pod</code> with a <code>serviceAccount</code> that has correct <code>RBAC</code> permissions (and both <code>pods</code> are in the same namespace). I tried with <code>curl</code> (<code>7.74.0</code>) but I stumbled upon <a href="https://stackoverflow.com/a/49254652/1722326">this</a> SO answer that <code>curl</code> may not be the right utility to do this. I have already tried other possible options (<a href="https://github.com/kubernetes-client/python/issues/409#issuecomment-1241425302" rel="nofollow noreferrer">this</a> and <a href="https://stackoverflow.com/a/37396806/1722326">this</a>) mentioned in the comments in the same post, but no luck.</p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace) curl -k -H &quot;Upgrade: SPDY/3.1&quot; -H &quot;Authorization: Bearer $TOKEN&quot; -H &quot;Connection: Upgrade&quot; -H &quot;Sec-WebSocket-Version: 13&quot; -H &quot;Accept: application/json&quot; &quot;https://kuberne tes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&quot; { &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: {}, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;Upgrade request required&quot;, &quot;reason&quot;: &quot;BadRequest&quot;, &quot;code&quot;: 400 } </code></pre> <p>Have already tried adding/removing few of those headers.</p> <p>The next thing I tried was <code>wscat</code> (<code>5.2.0</code>). Tried a couple of variations, but needless to say, &quot;no luck&quot; again.</p> <pre><code>wscat -n -H &quot;Authorization: Bearer $TOKEN&quot; -c wss://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls error: Unexpected server response: 403 </code></pre> <p>This command outputs nothing (at all):</p> <pre><code>wscat -c wss://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&amp;stdin=true&amp;stdout=true&amp;tty=true&amp;token=$TOKEN&amp;namespace=$NAMESPACE </code></pre> <p>I looked at <a href="https://stackoverflow.com/a/64316721/1722326">this</a> SO post, but it doesn't work for me.</p> <pre><code>wscat -n -H &quot;Authorization: Bearer $TOKEN&quot; -c 'https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/nginx-deployment-5c547569b4-jgm85/exec?command=ls&amp;stdin=true&amp;stdout=true&amp;tty=true' ... SyntaxError: The URL's protocol must be one of &quot;ws:&quot;, &quot;wss:&quot;, or &quot;ws+unix:&quot; ... </code></pre> <p>Not sure how the fella even managed to use <code>http(s)</code> in the first place.</p> <p>Can someone kindly advise if there's something wrong with what I doing or an alternative (can't use <code>kubectl</code>) to do the same? TIA</p>
<p>Curl Magic and trial and error. Make sure that the user has the right permissions so you don't get the 403.</p> <pre><code>curl -k \ --http1.1 \ -H &quot;Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==&quot; \ -H &quot;Sec-WebSocket-Version: 13&quot; \ -i \ -N \ -L \ -k \ -H &quot;Connection: Upgrade&quot; \ -H &quot;Upgrade: websocket&quot; \ -H &quot;Authorization: Bearer $TOKEN&quot; \ &quot;https://127.0.0.1:53329/api/v1/namespaces/default/pods/single-wrong-arch/exec?command=ls&amp;command=-l&amp;stdin=true&amp;stdout=true&amp;stderr=true&quot; HTTP/1.1 101 Switching Protocols Upgrade: websocket Connection: Upgrade Sec-WebSocket-Accept: qGEgH3En71di5rrssAZTmtRTyFk= Sec-WebSocket-Protocol: total 56 ~drwxr-xr-x 2 root root 4096 Mar 29 14:45 bin drwxr-xr-x 5 root root 360 Apr 27 11:03 dev drwxr-xr-x 1 root root 4096 Apr 27 11:03 etc drwxr-xr-x 2 root root 4096 Mar 29 14:45 home drwxr-xr-x 7 root root 4096 Mar 29 14:45 lib drwxr-xr-x 5 root root 4096 Mar 29 14:45 media drwxr-xr-x 2 root root 4096 Mar 29 14:45 mnt drwxr-xr-x 2 root root 4096 Mar 29 14:45 opt dr-xr-xr-x 248 root root 0 Apr 27 11:03 proc drwx------ 2 root root 4096 Mar 29 14:45 root drwxr-xr-x 1 root root 4096 Apr 27 11:03 run drwxr-xr-x 2 root root 4096 Mar 29 14:45 sbin drwxr-xr-x 2 root root 4096 Mar 29 14:45 srv dr-xr-xr-x 13 root root 0 Apr 27 11:03 sys drwxrwxrwt 2 root root 4096 Mar 29 14:45 tmp drwxr-xr-x 7 root root 4096 Mar 29 14:45 usr drwxr-xr-x 12 root root 4096 Mar 29 14:45 var </code></pre> <p>That's against a minikube cluster, which should explain the port choice and here's the rest of the environemt:</p> <pre><code>$ curl --version curl 7.87.0 (x86_64-apple-darwin22.0) libcurl/7.87.0 (SecureTransport) LibreSSL/3.3.6 zlib/1.2.11 nghttp2/1.51.0 Release-Date: 2022-12-21 Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp smb smbs smtp smtps telnet tftp Features: alt-svc AsynchDNS GSS-API HSTS HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL threadsafe UnixSockets $ minikube version minikube version: v1.30.1 commit: 08896fd1dc362c097c925146c4a0d0dac715ace0 $ kubectl version WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;27&quot;, GitVersion:&quot;v1.27.0&quot;, GitCommit:&quot;1b4df30b3cdfeaba6024e81e559a6cd09a089d65&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2023-04-11T17:04:23Z&quot;, GoVersion:&quot;go1.20.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/arm64&quot;} Kustomize Version: v5.0.1 Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;26&quot;, GitVersion:&quot;v1.26.3&quot;, GitCommit:&quot;9e644106593f3f4aa98f8a84b23db5fa378900bd&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2023-03-15T13:33:12Z&quot;, GoVersion:&quot;go1.19.7&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/arm64&quot;} $ uname -a Darwin 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103 arm64 </code></pre>
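<p>Regarding the permissions bit: the 403 from the earlier attempts typically means the ServiceAccount is missing <code>create</code> on <code>pods/exec</code>. A minimal Role/RoleBinding sketch (names are placeholders):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec            # placeholder name
  namespace: default
rules:
  - apiGroups: [&quot;&quot;]
    resources: [&quot;pods&quot;]
    verbs: [&quot;get&quot;, &quot;list&quot;]
  - apiGroups: [&quot;&quot;]
    resources: [&quot;pods/exec&quot;]
    verbs: [&quot;create&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exec
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-serviceaccount   # placeholder, the SA whose token you use
    namespace: default
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io
</code></pre>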
<p>I have a grpc service, here is the yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: staging labels: app: staging spec: replicas: 4 selector: matchLabels: app: staging template: metadata: labels: app: staging spec: containers: - name: staging image: ... imagePullPolicy: Always ports: - containerPort: 5274 - containerPort: 5900 --- apiVersion: v1 kind: Service metadata: name: staging-service spec: type: NodePort selector: app: staging ports: - name: staging protocol: TCP port: 5274 targetPort: 5274 nodePort: 30277 - name : staging protocol: TCP port: 5900 targetPort: 5900 nodePort: 30278 </code></pre> <p>As you can see, the grpc is on 5900 port, now I have a ingres yaml:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: k8s-rpc-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/backend-protocol: &quot;GRPC&quot; spec: ingressClassName: nginx rules: -http: paths: - path: /st(/|$)(.*) pathType: Prefix backend: service: name: staging-service port: number: 5900 </code></pre> <p>now, the k8s master is 192.168.70.1, so if I access 192.168.70.1/st in my nestjs project like:</p> <pre><code> url: ‘192.168.70.1/st’, package: ‘test’, </code></pre> <p>I will get the error:</p> <pre><code>details: ‘Name resolution failed for target dns:192.168.70.1/st’, </code></pre> <p>if I access the grpc service via 192.168.70.1:30378 everything is fine.</p> <p>Am I missing something here?</p> <p>Thank you</p>
<p>gRPC uses DNS as its default name resolver, and the target you pass to the client is a name, not a URL with a path, so something like <code>192.168.70.1/st</code> cannot be resolved; that is what the &quot;Name resolution failed&quot; error is telling you.</p> <p>For plain IPv4 addresses the supported target format is:</p> <p><strong>ipv4:address[:port][,address[:port],...]</strong></p> <p>Here you can specify multiple comma-delimited addresses of the form <strong>address[:port]</strong>:</p> <p><strong>address</strong> is the IPv4 address to use.</p> <p><strong>port</strong> is the port to use. If not specified, 443 is used.</p> <p>This is the reason you are able to access the gRPC service via 192.168.70.1:30378 but not via 192.168.70.1/st.</p> <p>You can refer to this <a href="https://github.com/grpc/grpc/blob/master/doc/naming.md" rel="nofollow noreferrer">link</a> for more useful information.</p>
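<p>Not from the naming doc above, but worth noting: if you want to keep going through the NGINX ingress, host-based routing (rather than a path prefix plus rewrite) is the usual pattern for gRPC, and ingress-nginx expects TLS on the ingress when using the <code>GRPC</code> backend protocol. A rough sketch, with the hostname and TLS secret name being placeholders:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-rpc-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: &quot;GRPC&quot;
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - grpc.example.com          # placeholder hostname
      secretName: grpc-example-tls  # placeholder TLS secret
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: staging-service
                port:
                  number: 5900
</code></pre> <p>The client would then dial <code>grpc.example.com:443</code> instead of an IP plus path.</p>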
<p>I'm looking at the Kubernetes logs we see in our log aggregation tool (Splunk, but not important), and I'm trying to visualise image pull speeds over the day. message that appears in the logs is something like:</p> <p>Successfully pulled image &quot;registry.redhat.io/redhat/redhat-operator-index:v4.10&quot; in 1.100152244s</p> <p>The value 1.100152244s is a string so obviously doesn't work for visualisation. The string itself can appear in multiple formats:</p> <p>1.100152244s 4m4.004131504s 64.10064ms</p> <p>Silly question but what units do we see here after the period? Are these nanoseconds?</p> <p>Many thanks in advance</p>
<p><strong>Disclaimer</strong>: At the time of writing I am not an expert on Kubernetes internals nor do I have any knowledge about Go. (However, I am familiar with the general functioning of K8s components and also with high/low level programming languages...) After about 10 min of research I've gained satisfactory answers:</p> <p><strong>Question</strong></p> <blockquote> <p>what units do we see here after the period? Are these nanoseconds?</p> </blockquote> <p><strong>Short answer</strong></p> <p>The short answer lies in <a href="https://github.com/golang/go/blob/master/src/time/time.go#L644" rel="nofollow noreferrer">time.go</a>.</p> <p><strong>Long answer</strong></p> <p>The Kubernetes event regarding &quot;<em>Successfully pulled image &lt;image-name&gt; in &lt;time&gt;</em>&quot; is generated by the <code>kubelet</code> component. <em>Kubelet</em> is basically responsible for managing (the life cycle of) pods and their containers. It generates <a href="https://github.com/kubernetes/kubernetes/blob/release-1.24/pkg/kubelet/events/event.go#L40" rel="nofollow noreferrer">events</a> for various stages of the container's execution, including the <code>image pulling</code>. Thus the event you mentioned is generated by the <em>kubelet</em> during the image pulling stage of container execution. (At least I knew that part already).</p> <p>You seem to have a kubelet &lt;= release-1.24 in use. The corresponding event is generated <a href="https://github.com/kubernetes/kubernetes/blob/release-1.24/pkg/kubelet/images/image_manager.go#L156" rel="nofollow noreferrer">here</a>:</p> <pre class="lang-golang prettyprint-override"><code>..., fmt.Sprintf(&quot;Successfully pulled image %q in %v&quot;, container.Image, time.Since(startTime)), ... </code></pre> <p>The duration string comes from the <code>time.Since(startTime)</code> (<code>Duration</code>) stament. The <code>fmt.Sprintf</code> calls the <a href="https://github.com/golang/go/blob/master/src/time/time.go#L644" rel="nofollow noreferrer"><code>func (d Duration) String() string</code></a> method on that Duration value, which appears to produce a string output according to the following criteria:</p> <ul> <li><p>If the duration is less than one second, it is displayed with smaller units such as &quot;12.345µs&quot; or &quot;678ns&quot;.</p> </li> <li><p>If the duration is between one second and one minute, it is displayed in seconds, such as &quot;5s&quot; or &quot;42.123456s&quot;.</p> </li> <li><p>If the duration is between one minute and one hour, it is displayed in minutes and seconds, such as &quot;3m45s&quot; or &quot;59m59s&quot;.</p> </li> <li><p>If the duration is more than one hour, it is displayed in hours, minutes, and seconds, such as &quot;1h42m&quot; or &quot;12h34m56s&quot;.</p> </li> </ul> <p>Maybe you can develop suitable patterns in Splunk to transform this format into the desired time unit for visualization (using if-else conditions may also help, e.g. value includes &quot;h&quot;, &quot;m&quot; and &quot;s&quot;? then use the regex <code>(\d+)h(\d+)m(\d+\.?\d*)s</code> to extract values from e.g. 
&quot;2h10m10.100152244s&quot;).</p> <p><strong>Side note</strong></p> <p>Newer versions of kubelet (&gt; release-1.24) seem to use a slightly different <a href="https://github.com/kubernetes/kubernetes/blob/release-1.25/pkg/kubelet/images/image_manager.go" rel="nofollow noreferrer">log</a></p> <pre class="lang-golang prettyprint-override"><code>..., fmt.Sprintf(&quot;Successfully pulled image %q in %v (%v including waiting)&quot;, container.Image, imagePullResult.pullDuration, time.Since(startTime)), ... </code></pre> <p>e.g.</p> <blockquote> <p>Successfully pulled image &quot;xyz&quot; in 8.831719579s (8.831722421s including waiting)</p> </blockquote>
<p>I want to use github actions to deploy a kubernetes secret to my cluster but I want to leverage the secrets in github actions to store the secret value and inject into my manifest file. I'm new to github actions so not sure how it can be done</p> <p>This is my github actions file</p> <pre><code> on: push: branches: [&quot;main&quot;] workflow_dispatch: env: RESOURCE_GROUP: &quot;rg&quot; CLUSTER_NAME: &quot;cluster&quot; DEPLOYMENT_MANIFEST_PATH: &quot;kustomize/overlay/uat&quot; jobs: deploy: permissions: actions: read contents: read id-token: write runs-on: ubuntu-latest steps: # Checks out the repository this file is in - uses: actions/checkout@v3 # Logs in with your Azure credentials - name: Log in with Azure uses: azure/login@v1 with: creds: '${{ secrets.AZURE_CREDENTIALS_UAT }}' # Use kubelogin to configure your kubeconfig for Azure auth - name: Set up kubelogin for non-interactive login uses: azure/use-kubelogin@v1 with: kubelogin-version: 'v0.0.25' # Retrieves your Azure Kubernetes Service cluster's kubeconfig file - name: Get K8s context uses: azure/aks-set-context@v3 with: resource-group: ${{ env.RESOURCE_GROUP }} cluster-name: ${{ env.CLUSTER_NAME }} admin: 'false' use-kubelogin: 'true' # Deploys application based on given manifest file - name: Deploys application uses: Azure/k8s-deploy@v4 with: action: deploy manifests: ${{ env.DEPLOYMENT_MANIFEST_PATH }} </code></pre> <p>This is my manifest</p> <pre><code> apiVersion: v1 data: api-key: kind: Secret metadata: name: datadog-api-key type: Opaque </code></pre> <p>I've created the secret in github as API_KEY_DEV_QA but unsure how to get github actions to inject that into the above yaml during the workflow.</p>
<p>You could use the <a href="https://github.com/marketplace/actions/yaml-update-action" rel="nofollow noreferrer">yaml-update-action</a>, for example by adding this step before the deploy:</p> <pre><code>      - name: Update values.yaml
        uses: fjogeleit/yaml-update-action@main
        with:
          valueFile: '&lt;the-manifest-filename&gt;'
          propertyPath: data['api-key']
          value: ${{ secrets.API_KEY_DEV_QA }}
          commitChange: false
</code></pre> <p>Tested here: <a href="https://github.com/mbiagetti/github-action-poc/pull/3" rel="nofollow noreferrer">https://github.com/mbiagetti/github-action-poc/pull/3</a></p>
<p>I have one deployment which has two containers.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ol000dep spec: selector: matchLabels: app: ol000kube replicas : 2 template: metadata: labels: app: ol000kube spec: containers: - name: django image: django:local ports: - containerPort: 8011 containers: - name: nginx image: nginx:local ports: - containerPort: 80 </code></pre> <p>This makes two replicad pods, and on my understanding, each pod has two containers.</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE ol000dep-bc96bfc98-r9lrj 1/1 Running 0 21m ol000dep-bc96bfc98-t6flw 1/1 Running 0 21m </code></pre> <p>Now I can login</p> <pre><code>kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash </code></pre> <p>Then, I wonder,</p> <p>Is this login in a Pod not a Container??</p> <p>If so ,how can I select the container I want to login?</p>
<p>Interesting. Even though the yaml you provided is incorrect (<code>.spec.containers</code> map key must be <strong>unique</strong>), k8s seems to be fine with that.</p> <p>In your case the pod is started with <strong>only</strong> the second container (name: nginx) in it though.</p> <blockquote> <p>Is this login in a Pod not a Container??</p> </blockquote> <p><strong>Container.</strong></p> <p>So, with <code>kubectl exec -it ol000dep-bc96bfc98-r9lrj /bin/bash</code>, you login/exec into the nginx container.</p> <p>After correcting the yaml, two containers would be started in the pod and you can log into the desired container via its name (e.g. <code>name: django</code>) using the <code>-c / --container</code> parameter.</p> <pre class="lang-yaml prettyprint-override"><code>... containers: - name: django image: django:local ports: - containerPort: 8011 - name: nginx image: nginx:local ports: - containerPort: 80 </code></pre> <p>login:</p> <pre><code>kubectl exec -it POD_NAME -c CONTAINER_NAME -- /bin/bash </code></pre> <p>Note that if you do not specify the name of the container (by omitting <code>-c CONTAINER_NAME</code>), you will login into the first defined container by default (in your case <code>django</code>).</p>
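<p>To quickly check which containers a pod actually ended up with (which would have surfaced the problem above), you can run:</p> <pre><code>kubectl get pod ol000dep-bc96bfc98-r9lrj -o jsonpath='{.spec.containers[*].name}'
</code></pre>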
<p>I'm setting up a GitHub action for deploying to Azure kubernetes. The deploy job looks as follows:</p> <pre><code>deploy: name: Deploy to kubernetes runs-on: ubuntu-latest needs: docker steps: - name: Checkout uses: actions/checkout@v3 - name: Setup kubectl uses: Azure/setup-kubectl@v3 - name: Setup cluster context uses: Azure/k8s-set-context@v3 with: method: kubeconfig kubeconfig: ${{ secrets.AZURE_K8_KUBECONFIG }} - name: Deploy to Azure uses: Azure/k8s-deploy@v4 with: manifests: manifests/test/manifest.yml images: ghcr.io/${{ env.image_name }}:${{ needs.release-on-push.outputs.version }} pull-images: false annotate-namespace: false </code></pre> <p>When it gets to the deploy phase it runs through a manifest, which looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: localizer spec: selector: app: localizer ports: - name: std-api-ports port: 8080 targetPort: http-in --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: localizer labels: app: localizer annotations: nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTP&quot; nginx.ingress.kubernetes.io/proxy-body-size: &quot;300M&quot; spec: ingressClassName: nginx-default rules: - host: localizer.UUID.LOCATION.aksapp.io http: paths: - backend: service: name: localizer port: name: std-api-ports path: / pathType: Prefix --- apiVersion: apps/v1 kind: Deployment metadata: name: localizer labels: app: localizer domain: frontend spec: replicas: 2 selector: matchLabels: app: localizer strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 template: metadata: labels: app: localizer spec: containers: - name: localizer image: ghcr.io/urltorepo resources: requests: cpu: 0.5 memory: 2G limits: memory: 2G ports: - name: http-in containerPort: 8080 env: - name: NODE_ENV value: &quot;test&quot; imagePullPolicy: Always livenessProbe: httpGet: path: /ping port: http-in failureThreshold: 3 periodSeconds: 30 startupProbe: httpGet: path: /ping port: http-in failureThreshold: 12 periodSeconds: 10 </code></pre> <p>The workflow then fails due to an error which is as follows:</p> <pre><code>Run Azure/k8s-deploy@v4 with: manifests: manifests/test/manifest.yml images: ghcr.io/***/localizer: pull-images: false annotate-namespace: false namespace: default strategy: basic route-method: service version-switch-buffer: 0 traffic-split-method: pod percentage: 0 action: deploy force: false token: *** private-cluster: false skip-tls-verify: false env: image_name: ***/localizer AZURE_WEBAPP_NAME: localizer-app KUBECONFIG: /home/runner/work/_temp/kubeconfig_1682630327009 Deploying manifests /opt/hostedtoolcache/kubectl/1.27.1/x64/kubectl apply -f /tmp/manifest.yml --namespace default service/localizer unchanged ingress.networking.k8s.io/localizer configured error: error parsing /tmp/manifest.yml: error converting YAML to JSON: yaml: line 25: mapping values are not allowed in this context Error: Error: undefined </code></pre> <p>From the console it would seem that the Service is setup and the Ingress is setup, which leads me to believe that the Deployment is the one with the error. I've run the manifest through a YAML validator and it passes. I've tried commenting out values from the manifest, hence the console showing that the Service is unchanged and Ingress is configured instead of created. I seem to be blind as to what the error could be now.</p> <p>UPDATE: I've now tried to split the manifest file into three files(Deployment, Ingress and Service). 
This has allowed me to figure out that it definitely is the Deployment manifest that is causing the error. I've also found that the error is related to &quot;image&quot; element. I've tried to follow a course from Microsoft on Azure kubernetes and if I change out my image with the one in the example, then there isn't a problem.</p>
<p>After the action kept failing, I decided to apply the manifests directly in Azure through the Cloud Shell. They worked as intended, which confirmed that the problem was related to the deploy action. Finally, running the action with debug logging enabled exposed the problem: the image wasn't receiving the version from the earlier job, so it was rendered as &quot;ghcr.io/*****/****:&quot; with an empty tag, and that value was then perceived as an object when the Deployment manifest was processed.</p>
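<p>One thing worth checking in workflows like the one in the question (this is general GitHub Actions behaviour, not something confirmed from the debug output): a job can only read <code>needs.&lt;job&gt;.outputs</code> for jobs listed under its own <code>needs</code> key, so referencing <code>needs.release-on-push.outputs.version</code> from a job that only declares <code>needs: docker</code> evaluates to an empty string. A sketch of the fix plus a debug step:</p> <pre><code>  deploy:
    name: Deploy to kubernetes
    runs-on: ubuntu-latest
    needs: [docker, release-on-push]   # expose release-on-push outputs to this job
    steps:
      - name: Debug computed image tag
        run: echo &quot;tag=${{ needs.release-on-push.outputs.version }}&quot;
</code></pre>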
<h2>Is it possible to have Knative automatically create K8s Ingress resources?</h2> <p>Hello all, Based on the following lines from <a href="https://knative.dev/docs/serving/services/creating-services/" rel="nofollow noreferrer">documentation</a>, I was wondering if I can have Knative automatically create the Ingress resources for my service? I haven't found details on this in the documentation.</p> <pre><code>After the service has been created, Knative performs the following tasks: - Creates a new immutable revision for this version of the app. - Performs network programming to create a route, ingress, service, and load balancer for your app. - Automatically scales your pods up and down based on traffic, including to zero active pods. </code></pre> <p>Example: Taking the Service and Ingress definition below, would it be possible to abstract away the Ingress yaml and have knative take care of its creation automatically for services?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello namespace: knative spec: template: metadata: labels: app: nonprofit annotations: queue.sidecar.serving.knative.dev/resourcePercentage: &quot;10&quot; autoscaling.knative.dev/class: &quot;kpa.autoscaling.knative.dev&quot; autoscaling.knative.dev/target: &quot;40&quot; autoscaling.knative.dev/min-scale: &quot;1&quot; autoscaling.knative.dev/max-scale: &quot;3&quot; spec: containers: - image: gcr.io/knative-samples/helloworld-java resources: requests: cpu: 50m memory: 100M limits: cpu: 200m memory: 200M ports: - containerPort: 8080 env: - name: TARGET value: &quot;Sunny Day&quot; traffic: - tag: latest latestRevision: true percent: 100 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: knative-hello-ingress namespace: knative annotations: nginx.ingress.kubernetes.io/upstream-vhost: &quot;hello.knative&quot; spec: ingressClassName: &quot;ingress-generic&quot; rules: - host: &quot;hello-knative.com&quot; http: paths: - pathType: Prefix path: &quot;/&quot; backend: service: name: hello port: number: 80 </code></pre> <p>Thank you,</p> <p>Haven't tried anything as I haven't found details in the documentation regarding this.</p>
<p>Unfortunately, the v1 Ingress API in Kubernetes does not have sufficient capabilities to express Knative's routing requirements. Knative <em>does</em> support several ingress implementations (including Istio, Contour, and the Gateway API), and which one is used is selected in Knative's networking configuration (sketched below), but no one has written a plugin for the Nginx Ingress annotations.</p> <p>Some of the capabilities that Knative needs but that are missing from the Kubernetes Ingress API include:</p> <ul> <li>Backend traffic splits / weights</li> <li>Setting request headers on requests to the backend server</li> <li>Requesting HTTP/2 or websockets protocol support</li> </ul> <p>If you're willing to use beta software, the Gateway API plugin is mostly feature complete and should plug into a <a href="https://gateway-api.sigs.k8s.io/implementations/" rel="nofollow noreferrer">variety of ingress providers</a>. Unfortunately, Nginx does not appear to be on that list.</p>
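<p>For reference, a hedged sketch of how the networking layer is typically selected in Knative Serving (the key name and the Contour class value may differ slightly between Knative releases; treat them as assumptions and check the docs of the layer you install):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # which ingress implementation Knative should program, e.g. Contour:
  ingress-class: &quot;contour.ingress.networking.knative.dev&quot;
</code></pre>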
<p>I am using the docker dind image and creating a pod. When I try to run docker build inside the pod, I get an error.</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: v1 kind: Pod metadata: name: dockercontainer namespace: default spec: containers: - image: docker:24.0.0-rc.1-dind name: dockercontainer securityContext: runAsUser: 0 </code></pre> <p>The pod is getting created, but when I execute a docker build inside the pod, I get the following error: <code>ERROR: Cannot connect to the docker daemon at unix://var/run/docker.sock</code></p> <p><a href="https://i.stack.imgur.com/4dava.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4dava.png" alt="enter image description here" /></a></p>
<p><strong>NOTE</strong>: this assumes that you're using a docker runtime in K8s! If not, use <a href="https://github.com/agavitalis/kaniko-kubernetes" rel="nofollow noreferrer">kaniko</a>.</p> <p>The docker daemon isn't mounted into the pod - this is the reason for your error. In order to build images, you will need one of these:</p> <ul> <li>get the docker daemon mounted into each container of the pod</li> <li>run the containers in the pod in <code>privileged</code> mode</li> </ul> <pre><code>apiVersion: v1 kind: Pod metadata: name: dockercontainer namespace: default spec: containers: - image: docker:24.0.0-rc.1-dind name: dockercontainer securityContext: privileged: true # this should do the trick </code></pre> <p>There are some issues w/ building/running containers w/in a K8s pod, <strong>from a security standpoint</strong> (more on this <a href="https://kurtmadel.com/posts/native-kubernetes-continuous-delivery/building-container-images-with-kubernetes/" rel="nofollow noreferrer">here</a> and <a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/" rel="nofollow noreferrer">here</a>). Much safer approach is to use <a href="https://github.com/nestybox/sysbox" rel="nofollow noreferrer">sysbox</a> but that's perhaps more detailed than we want to get here.</p> <p>I'm assuming that it's not possible for you to be able to build this image outside of K8s.</p>
<p>I am trying to get a default Grafana dashboard and admin username/password set up on deploy.</p> <p>I tried creating a <code>values.yaml</code> but it doesn't seem to be working:</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: grafana namespace: monitoring labels: grafana_dashboard: &quot;1&quot; data: dashboard-k8.json: | (3) ${indent(4, data)} </code></pre> <p>Is there a better way to do this? I used the same method for the default username and password for Grafana.</p> <p>I am working on a maintenance project, so it's not a new installation of Grafana, but the requirement is to have a default dashboard set with a new admin username and password when the service is deployed.</p>
<p>As per the grafana docker <a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-docker/#configure-grafana-with-docker-secrets" rel="nofollow noreferrer">documentation</a>, any config in the <code>grafana.ini</code> file can be set at runtime using environment variables of the format <code>GF_&lt;SectionName&gt;_&lt;KeyName&gt;__FILE</code>, which represent paths to files containing the actual value. So we can use the environment variables <code>GF_SECURITY_ADMIN_USER__FILE</code>, <code>GF_SECURITY_ADMIN_PASSWORD__FILE</code> and <code>GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE</code>, to set the admin username, password and the dashboard that is opened at launch.</p> <p>According to the instructions in <a href="https://stackoverflow.com/questions/63518460/grafana-import-dashboard-as-part-of-docker-compose/74995091#74995091">this answer</a>, we can use a dashboard provider of type <code>file</code> to import dashboards from files.</p> <p>Using both these techniques together, I think I was able to get what you wanted.</p> <p><strong>Steps performed:</strong></p> <ul> <li>Created a secret that contains the admin password. At runtime, the password gets mounted into the grafana container at <code>/etc/secrets/admin_password</code> and environment variable <code>GF_SECURITY_ADMIN_PASSWORD__FILE</code> gets set to <code>/etc/secrets/admin_password</code></li> <li>Created a config map that contains the dashboard provider config which gets mounted as <code>/etc/grafana/provisioning/dashboards/main.yaml</code></li> <li>Created another config map that contains the admin user and default dashboard path. These values are exposed to the container as files <code>/etc/config/admin_user</code> and <code>/etc/config/default_home_dashboard_path</code>, with those file paths set to the <code>GF_SECURITY_ADMIN_PASSWORD__FILE</code> and <code>GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE</code> environment variables</li> <li>Created another config map which contains the JSON for the default dashboard. The JSON file gets mounted as <code>/var/lib/grafana/dashboards/test-dashboard.json</code>, with the text <code>/var/lib/grafana/dashboards/test-dashboard.json</code> exposed as <code>/etc/config/default_home_dashboard_path</code>(set in the previous step)</li> </ul> <p><strong>Config map for the dashboard JSON:</strong></p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-dashboards data: test-dashboard.json: |- { &quot;annotations&quot;: { &quot;list&quot;: [ ...truncated... 
} </code></pre> <p><strong>Other config maps and deployment spec:</strong></p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: v1 kind: Secret metadata: namespace: default labels: app: grafana name: grafana-secrets type: Opaque stringData: admin_password: testpwd --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-dashboard-provider data: dashboard.yml: |- --- apiVersion: 1 providers: - name: &quot;Dashboard provider&quot; orgId: 1 type: file disableDeletion: false updateIntervalSeconds: 10 allowUiUpdates: false options: path: /var/lib/grafana/dashboards foldersFromFilesStructure: true --- apiVersion: v1 kind: ConfigMap metadata: namespace: default labels: app: grafana name: grafana-config data: admin_user: testuser default_home_dashboard_path: /var/lib/grafana/dashboards/test-dashboard.json --- apiVersion: apps/v1 kind: Deployment metadata: namespace: default labels: app: grafana name: grafana spec: selector: matchLabels: app: grafana template: metadata: labels: app: grafana spec: volumes: - name: secret-volume secret: secretName: grafana-secrets items: - key: admin_password path: admin_password - name: dashboard-provider-volume configMap: name: grafana-dashboard-provider items: - key: dashboard.yml path: main.yaml - name: dashboards-volume configMap: name: grafana-dashboards items: - key: test-dashboard.json path: test-dashboard.json - name: config-volume configMap: name: grafana-config items: - key: admin_user path: admin_user - key: default_home_dashboard_path path: default_home_dashboard_path containers: - name: grafana image: grafana/grafana-oss:9.5.1 imagePullPolicy: IfNotPresent ports: - containerPort: 3000 volumeMounts: - name: secret-volume mountPath: /etc/secrets - name: dashboard-provider-volume mountPath: /etc/grafana/provisioning/dashboards - name: dashboards-volume mountPath: /var/lib/grafana/dashboards - name: config-volume mountPath: /etc/config env: - name: GF_SECURITY_ADMIN_PASSWORD__FILE value: /etc/secrets/admin_password - name: GF_SECURITY_ADMIN_USER__FILE value: /etc/config/admin_user - name: GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH__FILE value: /etc/config/default_home_dashboard_path </code></pre> <p>The complete config files can be found on <a href="https://github.com/devatherock/random-demo/commit/fd868fa7f488fcb77091a0202499da8684c905df" rel="nofollow noreferrer">github</a></p>
<p>I installed pods and services with <code>deployment.yml</code>,<code>service.yml</code> and <code>ingress.yml</code> like this below.</p> <p><code>deployment.yml</code></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: apa000dep spec: selector: matchLabels: app: apa000kube replicas : 3 template: metadata: labels: app: apa000kube spec: containers: - name: apa000ex91 image: httpd ports: - containerPort: 80 </code></pre> <p><code>service.yml</code></p> <pre><code>apiVersion: v1 kind: Service metadata: name: apa000ser spec: type: NodePort ports: - port: 8099 targetPort: 80 protocol: TCP nodePort: 30080 selector: app: apa000kube </code></pre> <p>Both works well and, I can access pod directly <code>localhost:30080</code></p> <p>then i installed ingress.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: rules: - host: demo.localdev.me http: paths: - pathType: Prefix path: /* backend: service: name: apa000ser port: number: 8099 </code></pre> <p>The hostname is set correctly</p> <pre><code>$kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE example-ingress &lt;none&gt; demo.localdev.me 80 95m </code></pre> <p>However when I access <code>http://demo.localdev.me</code> it returns 404 error.</p> <p><code>http://localhost:30080/</code> returns <code>&lt;span&gt;it works!&lt;/span&gt;</code></p> <p>So, I guess some ingress setting is wrong. where should I check?</p> <hr /> <p>Even I stop <code>example-ingress</code>, result is same.</p> <p>So it means.</p> <pre><code>nginx-ingress -&gt; ingress -&gt; service -&gt; pods. </code></pre> <p>Every request is fetchd in first <code>nginx-ingress</code>, and not sent to <code>ingress</code>?</p>
<p>Two things that caught my eye:</p> <ol> <li><p>The Ingress controller you are using <del>seems to</del> might require an <code>ingressClassName</code> defined in the Ingress resource, so that the ingress controller takes this resource (and its configs) into account.</p> <p>More about <code>Ingress class</code>:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class" rel="nofollow noreferrer">Ingress class</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#default-ingress-class" rel="nofollow noreferrer">Default IngressClass</a></li> </ul> </li> <li><p>You should change the <code>path: /*</code> to <code>path: /</code>.</p> <ul> <li><p><code>path: /*</code> makes the Ingress controller create a location block à la</p> <pre><code>location /* { ... } </code></pre> <p><code>/*</code> is interpreted as a normal prefix in this case and has nothing to do with regex (probably not what you assumed).</p> <p>In your case: the <em>404</em> comes rather from the ingress itself (since request URI <code>/</code> not found - must be '/*' instead). In order for a request to be proxied to the httpd server at all, the request must be as follows: &quot;http://demo.localdev.me/*&quot;, to which the httpd would again respond with 404 (since the resource '/*' also doesn't exist on httpd by default).</p> </li> <li><p>Whereas <code>path: /</code> does the following:</p> <pre><code>location / { ... } </code></pre> <p>The <code>location /</code> block is a special case that matches any URI that starts with a slash (/), which includes all URIs. (This is also the default location block that Nginx uses if no other location block matches the request's URI.)</p> </li> </ul> <p>More about nginx <code>location</code>:</p> <ul> <li><a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="nofollow noreferrer">nginx doc</a></li> <li><a href="https://stackoverflow.com/questions/59846238/guide-on-how-to-use-regex-in-nginx-location-block-section">Guide on how to use regex in Nginx location block section?</a></li> </ul> </li> </ol> <p><strong>Final result:</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress spec: ingressClassName: nginx # add rules: - host: demo.localdev.me http: paths: - pathType: Prefix path: / # change backend: service: name: apa000ser port: number: 8099 </code></pre>
<p>i want to ingested containers json log data using filebeat deployed on kubernetes, i am able to ingest the logs to but i am unable to format the json logs in to fields</p> <p>following is the logs visible in kibana</p> <p><a href="https://i.stack.imgur.com/xd22X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xd22X.png" alt="enter image description here" /></a></p> <p>I want to take out the fields from messages above e.g. field for log.level, message, service.name and so on</p> <p>Following are the filebeat configuration we are using</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: filebeat-config namespace: kube-system labels: k8s-app: filebeat data: filebeat.yml: |- filebeat.inputs: - type: container paths: - /var/log/containers/*.log - /var/log/containers/*.json processors: - add_kubernetes_metadata: host: ${NODE_NAME} matchers: - logs_path: logs_path: &quot;/var/log/containers/&quot; # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this: filebeat.autodiscover: providers: - type: kubernetes node: ${NODE_NAME} templates: - condition: contains: kubernetes.container.name: &quot;no-json-logging&quot; config: - type: container paths: - &quot;/var/log/containers/*-${data.kubernetes.container.id}.log&quot; - condition: contains: kubernetes.container.name: &quot;json-logging&quot; config: - type: container paths: - &quot;/var/log/containers/*-${data.kubernetes.container.id}.log&quot; json.keys_under_root: true json.add_error_key: true json.message_key: message processors: - add_cloud_metadata: - add_host_metadata: cloud.id: ${ELASTIC_CLOUD_ID} cloud.auth: ${ELASTIC_CLOUD_AUTH} output.elasticsearch: hosts: ['${ELASTICSEARCH_HOST:XX.XX.XX.XX}:${ELASTICSEARCH_PORT:9201}'] username: ${ELASTICSEARCH_USERNAME} password: ${ELASTICSEARCH_PASSWORD} --- apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat spec: selector: matchLabels: k8s-app: filebeat template: metadata: labels: k8s-app: filebeat spec: serviceAccountName: filebeat terminationGracePeriodSeconds: 30 hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: filebeat image: docker.elastic.co/beats/filebeat:8.5.3 args: [ &quot;-c&quot;, &quot;/etc/filebeat.yml&quot;, &quot;-e&quot;, ] env: - name: ELASTICSEARCH_HOST value: XX.XX.XX.XX - name: ELASTICSEARCH_PORT value: &quot;9201&quot; - name: ELASTICSEARCH_USERNAME value: elastic - name: ELASTICSEARCH_PASSWORD value: elastic - name: ELASTIC_CLOUD_ID value: - name: ELASTIC_CLOUD_AUTH value: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName securityContext: runAsUser: 0 # If using Red Hat OpenShift uncomment this: #privileged: true resources: limits: memory: 200Mi requests: cpu: 100m memory: 100Mi volumeMounts: - name: config mountPath: /etc/filebeat.yml readOnly: true subPath: filebeat.yml - name: data mountPath: /usr/share/filebeat/data - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true - name: varlog mountPath: /var/log readOnly: true volumes: - name: config configMap: defaultMode: 0640 name: filebeat-config - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers - name: varlog hostPath: path: /var/log # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart - name: data hostPath: # When filebeat runs as non-root user, this directory needs to be writable by group (g+w). 
path: /var/lib/filebeat-data type: DirectoryOrCreate --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: filebeat subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: ClusterRole name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: filebeat-kubeadm-config namespace: kube-system subjects: - kind: ServiceAccount name: filebeat namespace: kube-system roleRef: kind: Role name: filebeat-kubeadm-config apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: filebeat labels: k8s-app: filebeat rules: - apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group resources: - namespaces - pods - nodes verbs: - get - watch - list - apiGroups: [&quot;apps&quot;] resources: - replicasets verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] - apiGroups: [&quot;batch&quot;] resources: - jobs verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat # should be the namespace where filebeat is running namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: - coordination.k8s.io resources: - leases verbs: [&quot;get&quot;, &quot;create&quot;, &quot;update&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: filebeat-kubeadm-config namespace: kube-system labels: k8s-app: filebeat rules: - apiGroups: [&quot;&quot;] resources: - configmaps resourceNames: - kubeadm-config verbs: [&quot;get&quot;] --- apiVersion: v1 kind: ServiceAccount metadata: name: filebeat namespace: kube-system labels: k8s-app: filebeat --- </code></pre> <p>How can i take out the fields from json message?</p>
<p>The issue is in the configuration. One possible workaround is to reinstall Filebeat and resend the logs to Elasticsearch.</p> <p>Follow the blog post by <a href="https://medium.com/@semih.sezer/how-to-send-airflow-logs-to-elasticsearch-using-filebeat-and-logstash-250c074e7575" rel="nofollow noreferrer">Semih Sezer</a>, which walks through sending Airflow logs to Elasticsearch using Filebeat and Logstash.</p>
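<p>If reinstalling is not an option, another approach that is often used is decoding the JSON stored in the <code>message</code> field with the <code>decode_json_fields</code> processor. The sketch below assumes your application writes one JSON object per log line into <code>message</code>; adjust the fields and placement (global or per-input) to your setup:</p> <pre class="lang-yaml prettyprint-override"><code>processors:
  # Decode the JSON string stored in the &quot;message&quot; field
  # and place the resulting keys at the root of the event
  - decode_json_fields:
      fields: [&quot;message&quot;]
      target: &quot;&quot;
      overwrite_keys: true
      add_error_key: true
      max_depth: 2
</code></pre> <p>This is only a sketch; the autodiscover hints from the question (<code>json.keys_under_root</code>, <code>json.message_key</code>) should achieve a similar result once the container-name condition actually matches your pods.</p>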
<p>My pod is running in k8s and restarts every 20-30 hours, caused by an <code>OOMKilled</code> error, here are the configured limits:</p> <pre><code> limits: cpu: 600m memory: 1536Mi requests: cpu: 150m memory: 1536Mi </code></pre> <p>In the container is a JVM (Spring Boot) running with the following Options:</p> <pre><code>-Xms256m -Xmx1G -Xdebug -XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=26 </code></pre> <p><strong>Analysis of the JVM Metrics</strong></p> <p>From my point of view, I can see, that there is a load peak, but not high enough to reach the memory limits, however the CPU Usage increases rapidly: <a href="https://i.stack.imgur.com/SRbds.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SRbds.png" alt="enter image description here" /></a></p> <p><strong>Analysis of the Pod Metrics</strong></p> <p>Let me show you the metrics of the pod provided by k8s: <a href="https://i.stack.imgur.com/D8ZqM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D8ZqM.png" alt="enter image description here" /></a></p> <p>Memory increases much more than in the JVM shows. CPU Throttling is active. But I do not find the root cause. It seems that the container reaches the Limits and not the JVM, but why? Can CPU throttling causes Memory issues? Throttling is an expected behavior for peaks to slow down. I do not expect any memory issues in that case.</p>
<p>CPU throttling can indirectly cause memory issues: it makes garbage collection in the JVM less efficient and can amplify already inefficient memory usage. Slower response times mean more requests get queued up, so the application holds more objects in memory while waiting for those requests to be processed.</p> <p>The JVM flags that you have set are a good starting point. To further investigate any memory leaks, you might want to add the following flags to dump the heap on an OOM, then analyze the dump with a tool like Java VisualVM to find the root cause.</p> <pre><code>-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/heapdump.bin
</code></pre> <blockquote> <p>Throttling is an expected behavior for peaks to slow down.</p> </blockquote> <p>Yes, but I would consider CPU throttling more of a handbrake here and not the only <em>solution</em>. Instead I would implement an appropriate mechanism (like rate limiting, request queuing, circuit breakers, or backpressure throttling) either in the application or at the load balancer/reverse proxy level to prevent queues from forming.</p>
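<p>To capture that heap dump in Kubernetes without rebuilding the image, the flags can be passed via <code>JAVA_TOOL_OPTIONS</code> and the dump path pointed at a volume so it survives the container being killed. This is only a sketch; the container and volume names are illustrative:</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
    - name: app
      env:
        # Picked up automatically by the JVM in addition to your existing options
        - name: JAVA_TOOL_OPTIONS
          value: &quot;-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps/heapdump.hprof&quot;
      volumeMounts:
        - name: dumps
          mountPath: /dumps
  volumes:
    - name: dumps
      emptyDir:
        sizeLimit: 2Gi
</code></pre> <p>Keep in mind that an <code>emptyDir</code> is lost when the pod (not just the container) is deleted, so use a persistent volume if you need the dump after rescheduling. Also note that the kernel OOM killer counts all memory in the cgroup (heap, metaspace, thread stacks, direct buffers), so <code>-Xmx1G</code> inside a 1536Mi limit can still be tight.</p>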
<p>I have been trying to expose MySQL InnoDB Cluster setup on OCI OKE to Internet. This cluster was installed using MySQL Operator and the pods are listening on to port 6446 and 6447. I have ClusterIP to connect to pods by <code>exec</code>'ing into the pod. However, I have failed to find any best way to expose this cluster to internet using OCI Loadbalancer. As I am pretty new to OCI, trying to get some understanding on how to get this to work.</p> <p>Cluster was installed using this <a href="https://github.com/mysql/mysql-operator/tree/trunk/helm" rel="nofollow noreferrer">repo</a></p>
<p>You have two options: install an ingress controller and open a <strong>TCP</strong> port through it, or change the service type from <strong>ClusterIP</strong> to <strong>LoadBalancer</strong>, which exposes the service directly. In the latter case, pay attention to the type of <strong>LB</strong>: <strong>MySQL</strong> speaks plain <strong>TCP</strong>, so an <strong>HTTP LB</strong> will not work.</p> <p>Doc for reference: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a></p> <p>That doc shows how to expose a <strong>TCP</strong> or <strong>UDP</strong> service through the <strong>Nginx ingress controller</strong>, which essentially amounts to opening a port and forwarding it to the service.</p>
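<p>For the ingress-nginx option, the linked doc boils down to a <code>tcp-services</code> ConfigMap plus exposing the same ports on the controller's Service. A rough sketch (the namespace/service name <code>mysql/mycluster</code> is only a placeholder for whatever the MySQL operator created in your cluster):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # &lt;external port&gt;: &quot;&lt;namespace&gt;/&lt;service&gt;:&lt;service port&gt;&quot;
  &quot;6446&quot;: &quot;mysql/mycluster:6446&quot;   # router read-write port
  &quot;6447&quot;: &quot;mysql/mycluster:6447&quot;   # router read-only port
</code></pre> <p>The controller also has to be started with <code>--tcp-services-configmap=ingress-nginx/tcp-services</code>, and ports 6446/6447 must be added to the controller's LoadBalancer Service; on OCI that load balancer should be a TCP (network) load balancer, not an HTTP one.</p>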
<p>I have a Kubernetes app that is constantly logging ServiceBusReceiver.Receive Dependency calls. It is creating 2000 logs per hour, per instance. In the TelemtryClient there are only custom methods for TrackEvent and TrackException so these look like they are coming from somewhere else and I haven't been able to trace it to disable or find out why its logging so much. The TrackDependency method is part of the built in Microsoft.ApplicationInsights.TelemetryClient package. I have changed versions of packages to match another messaging app I have with no luck, and also updated packages to latest versions also with no luck. There isn't much other info in the logs to trace it.</p> <p>SERVICEBUS ServiceBusReceiver.Receive</p> <ul> <li>Dependency Properties</li> </ul> <p>Type: servicebus<br /> Call status: true<br /> Duration: 1.0 mins<br /> Name: ServiceBusReceiver.Receive<br /> Telemetry type: dependency<br /> Application version: 4.19.0.0<br /> SDK version dotnetc:2.21.0-429<br /> Sample rate: 1<br /> Performance: 1min-2min<br /> Base name: ServiceBusReceiver.Receive</p> <p>Other info about packages and versions installed:</p> <ul> <li><p>Sdk=&quot;Microsoft.NET.Sdk&quot;</p> </li> <li><p>net6.0</p> </li> <li><p>AzureFunctionsVersion v4</p> </li> <li><p>&quot;AutoMapper.Extensions.Microsoft.DependencyInjection&quot; Version=&quot;4.0.1&quot;</p> </li> <li><p>&quot;Azure.Messaging.ServiceBus&quot; Version=&quot;7.10.0&quot;</p> </li> <li><p>&quot;Microsoft.Azure.WebJobs.Extensions.ServiceBus&quot; Version=&quot;5.4.0&quot;</p> </li> <li><p>&quot;Microsoft.Azure.WebJobs.Logging.ApplicationInsights&quot; Version=&quot;3.0.33&quot;</p> </li> <li><p>&quot;Microsoft.NET.Sdk.Functions&quot; Version=&quot;4.0.1&quot;</p> </li> <li><p>&quot;Microsoft.Azure.Functions.Extensions&quot; Version=&quot;1.1.0&quot;</p> </li> <li><p>&quot;Microsoft.Extensions.Azure&quot; Version=&quot;1.2.0&quot;</p> </li> <li><p>&quot;Microsoft.Extensions.Configuration.AzureAppConfiguration&quot; Version=&quot;5.1.0&quot;</p> </li> <li><p>&quot;Microsoft.Extensions.Caching.Memory&quot; Version=&quot;6.0.1&quot;</p> </li> <li><p>&quot;Polly&quot; Version=&quot;7.1.0&quot;</p> </li> <li><p>&quot;Scrutor&quot; Version=&quot;4.1.0&quot;</p> </li> </ul>
<p>For that, you can write a <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling#itelemetryprocessor-and-itelemetryinitializer" rel="nofollow noreferrer">TelemetryProcessor</a>:</p> <blockquote> <p>Telemetry processors allow you to completely replace or discard a telemetry item.</p> </blockquote> <p>It could look like this:</p> <pre class="lang-cs prettyprint-override"><code>public class ServiceBusTelemetryReducer : ITelemetryProcessor { private readonly ITelemetryProcessor _next; public ServiceBusTelemetryReducer(ITelemetryProcessor next) { _next = next; } public void Process(ITelemetry item) { var isServiceBusReceiveTelemetry = item is DependencyTelemetry telemetry &amp;&amp; telemetry.Type == &quot;Azure Service Bus&quot; &amp;&amp; telemetry.Name == &quot;ServiceBusReceiver.Receive&quot;; // Only process telemetry that is relevant if (!isServiceBusReceiveTelemetry) _next.Process(item); } } </code></pre> <p>Do not forget to register the processor:</p> <pre class="lang-cs prettyprint-override"><code>services.AddApplicationInsightsTelemetryProcessor&lt;ServiceBusTelemetryReducer&gt;(); </code></pre>
<p>This is the first time I am using StatefulSets in OpenShift and I would like to understand how you expose the services to the outside world. Do you create two services, a headless one and a normal one? How do I get this working?</p>
<p>It depends on your use case and what you are planning for.</p> <p>If you just want the service to forward a request to any available <strong>pod</strong> of the <strong>StatefulSet</strong>, a normal <strong>ClusterIP</strong> service is enough.</p> <p>If you need per-pod <strong>DNS</strong> records or the <strong>IP</strong> of each individual <strong>pod</strong>, a <strong>headless</strong> service (with a selector) will do that for you.</p> <p>There is also the <strong>headless service</strong> without a <strong>selector</strong>, which can be combined with an <strong>EndpointSlice</strong> to forward traffic to external or third-party services.</p> <p>Ref: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors</a></p>
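<p>As a rough sketch of the two variants (names are placeholders): the first Service below is the headless one you would reference as <code>serviceName</code> in the StatefulSet for stable per-pod DNS, the second is a regular Service for traffic that can land on any pod:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None        # headless: resolves to the individual pod IPs / per-pod DNS records
  selector:
    app: myapp
  ports:
    - port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer     # or ClusterIP with an OpenShift Route in front
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
</code></pre> <p>On OpenShift the usual pattern is the headless Service for the StatefulSet itself, plus a Route (or LoadBalancer Service) on top of a normal ClusterIP Service for outside access.</p>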
<p>I am trying to install telepresence version 1 in ubuntu 22.04 but I don't have the download links to it.</p>
<p>Telepresence version 1 is no longer supported but you can install the new version <a href="https://www.getambassador.io/docs/telepresence/latest/install" rel="nofollow noreferrer">here</a>. Feel free also to join the <a href="https://a8r.io/slack" rel="nofollow noreferrer">Telepresence open source slack channel</a> if you have further questions or want to join our weekly help session for further assistance.</p>
<p>I have a helm chart which is using aws ecr to pull images and deploy them into native Kubernetes:</p> <p><em><strong>Chart.yml:</strong></em></p> <pre><code>apiVersion: v2 description: helm chart for microservice-registry name: microservice-registry type: application version: 0.0.1 </code></pre> <p><em><strong>values.yaml</strong></em></p> <pre><code>namespace: replica: 1 labels: app: microservice-registry imagePullSecrets: microservice-registry-secret image: repository: 123456789.dkr.ecr.us-east-1.amazonaws.com/microservice-registry tag: 0.0.1 imagePullPolicy: Always strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate service: containerPort: 8761 port: 80 type: NodePort nodePort: 30061 secrets: data: .dockerconfigjson: 123456789 type: kubernetes.io/dockerconfigjson awscreds: data: SECRET_KEY: 123456789 ACCESS_KEY: 123456789 name: awscreds-registry region: us-east-1 cronjob: name: microservice-registry-job image: gtsopour/awscli-kubectl:latest </code></pre> <p>When I deploy the helm chart I get:</p> <pre><code>root@node1:~# kubectl get pods NAME READY STATUS RESTARTS AGE microservice-registry-7c4654467c-q292p 0/1 ImagePullBackOff 0 17m cicd 1/1 Running 1 (22d ago) 44d grafana-f6d989bbf-hrr6c 1/1 Running 1 (22d ago) 45d ingress-nginx-controller-6b8bfd7f69-cwd2b 1/1 Running 1 (22d ago) 35d nfs-subdir-external-provisioner-5f7dff96d9-rj75j 1/1 Running 39 (2d3h ago) 38d zookeeper-0 1/1 Running 1 (22d ago) 38d root@node1:~# </code></pre> <p><em><strong>log:</strong></em></p> <pre><code>root@node1:~# kubectl logs microservice-registry-7c4654467c-q292p Error from server (BadRequest): container &quot;microservice-registry&quot; in pod &quot;microservice-registry-7c4654467c-q292p&quot; is waiting to start: trying and failing to pull image root@node1:~# </code></pre> <p>I have a similar service which is using the same aws ecr credentials. It's deployed successfully. Do you know how I can fix this issue?</p> <p><em><strong>Policy JSON:</strong></em></p> <pre><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;ecr:BatchCheckLayerAvailability&quot;, &quot;ecr:BatchGetImage&quot;, &quot;ecr:GetDownloadUrlForLayer&quot;, &quot;ecr:GetAuthorizationToken&quot; ], &quot;Resource&quot;: &quot;*&quot; } ] } </code></pre>
<p>According to your <a href="https://pastebin.com/7x3NjD8s" rel="nofollow noreferrer">https://pastebin.com/7x3NjD8s</a>, you got a <code>401 Unauthorized</code> from ECR. Since you run an on-premises Kubernetes cluster, you need some process that keeps the AWS credentials in the <code>imagePullSecrets</code> of your pods up to date (ECR tokens expire after 12 hours). I would use <a href="https://github.com/upmc-enterprises/registry-creds" rel="nofollow noreferrer">https://github.com/upmc-enterprises/registry-creds</a>: it takes your AWS credentials and automatically refreshes the <code>imagePullSecrets</code> with a valid ECR login token.</p>
<p>I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service.</p> <p>My testing setup is a single Kubernetes node, with a deployment of two nginx pods. The deployment is then exposed with a NodePort service.</p> <p>I've then deployed an ingress contoller (I've tried both <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Kubernetes Nginx Ingress Controller</a> and <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Kubernetes Ingress Controller</a>, separately) and created an ingress rule for the NodePort service.</p> <p>I edited index.html on each of the nginx pods, so that one shows &quot;SERVER A&quot; and the other &quot;SERVER B&quot;, and ran a script that then <code>curl</code>s the NodePort service 100 times. It <code>grep</code>s &quot;SERVER x&quot; each time, appends it to an output file, and then tallies the number of each at the end.</p> <p>As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods.</p> <p>Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well.</p> <p><strong>However</strong>, looking at the results, I can see that I've curled the same server up to 4 times in a row, but I need to enforce a strict alternation A-B-A-B. I've spent quite a researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please?</p> <p>I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need.</p>
<p>It turns out there are two different nginx-based ingress controllers:</p> <ol> <li><p>The one maintained by the Kubernetes community: <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p> </li> <li><p>The one maintained by NGINX Inc (open source &amp; paid): <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress</a></p> </li> </ol> <p>The second one appears to do strict round robin (still testing) after adding <code>nginx.org/lb-method: &quot;round_robin&quot;</code>, while the first one only achieves an aggregate 50:50 distribution between replicas.</p> <p>In my opinion this is an important difference, but the naming causes a lot of confusion; the differences between the two are explained <a href="https://www.nginx.com/blog/guide-to-choosing-ingress-controller-part-4-nginx-ingress-controller-options/" rel="nofollow noreferrer">here</a>.</p> <p>I composed this answer with the help of comments from @hiiamelliott...</p>
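<p>For reference, with the NGINX Inc controller the load-balancing method is set per Ingress via an annotation; a minimal sketch (host and service names are placeholders, and it is worth double-checking the annotation value against the controller version you deploy):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.org/lb-method: &quot;round_robin&quot;
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
</code></pre>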
<p>In my terraform project, I have my own module which creates an AKS cluster with RBAC enabled. This module has an output <strong>aks_public_fqdn</strong> which holds the FQDN of the cluster.</p> <pre><code>module &quot;aks&quot; { source = &quot;./aks&quot; environment = local.environment region = local.region instance_id = local.workload_id application = local.workload_name resource_group = local.rg_name kubernetes_version = local.kubernetes_version } </code></pre> <p>Once the cluster is up and running, I would like to create some K8S resources through Terraform with the Kubernetes provider.</p> <p>To authenticate I am using the following code snippet:</p> <pre><code>provider &quot;kubernetes&quot; { host = &quot;https://${module.aks.aks_public_fqdn}&quot; insecure = true exec { api_version = &quot;client.authentication.k8s.io/v1beta1&quot; command = &quot;kubelogin&quot; args = [ &quot;get-token&quot;, &quot;--environment&quot;, &quot;AzurePublicCloud&quot;, &quot;--server-id&quot;, &quot;3333333-3333333-3333333&quot;, &quot;--login&quot;, &quot;azurecli&quot; ] } } </code></pre> <p>When I try to run terraform apply I get:</p> <pre><code>Error: Provider configuration: cannot load Kubernetes client config invalid configuration: default cluster has no server defined </code></pre> <p>The only entry I have in my kubeconfig file is the context for my local kind cluster. What is missing here?</p> <p>Am I hit by the following github issue? <a href="https://github.com/hashicorp/terraform/issues/2430" rel="nofollow noreferrer">Provider Issue</a></p>
<p>The <code>kubernetes</code> provider configuration contains an output from the module declared as <code>aks</code>. Prior to version <code>2.4.0</code> of the Kubernetes provider it was possible to simultaneously manage a Kubernetes cluster backing infrastructure and the cluster itself at initial provisioning (or subsequent Delete/Create) by configuring the provider with either resource attributes, or with <code>data</code> attributes. At version <code>2.4.0</code> of the Kubernetes provider, the new experimental Kubernetes provider with the latest Terraform SDK and Kubernetes Go SDK bindings (and consequently Kubernetes API) stabilized and replaced the former Kubernetes provider (ergo why some resources are marked <code>v1</code> and <code>v2</code> as the provider still contained some of the legacy code for backwards support, and only the minor version of the provider was iterated and not the major version according to semantic versioning rules). With this new provider and its use of the modern Kubernetes API it became no longer possible for this simultaneous management at initial provisioning. Therefore the error in the question is observed as Terraform's Kubernetes provider is attempting to configure with a non-existent cluster instead of charting a dependency hierarchy that would imply the provider is dependent upon the <code>aks</code> module.</p> <p>With all this in mind it becomes clear there are two workarounds. The first is to downgrade the Kubernetes provider to the last version of the old provider:</p> <pre><code>terraform { required_providers { kubernetes = { source = &quot;hashicorp/kubernetes&quot; version = &quot;&gt;= 2.3.2&quot; } } } </code></pre> <p>The other solution would be to <a href="https://developer.hashicorp.com/terraform/cli/commands/plan#resource-targeting" rel="nofollow noreferrer"><code>-target</code></a> the <code>aks</code> module first to manually enforce the dependency and populate the outputs, and then subsequently manage the entire Terraform config:</p> <pre><code>terraform plan -target=module.aks </code></pre>
<p>We currently have a 3 node redis cluster running on OKE which has 3 Master nodes only( no replica nodes, each running on their own pod ) with no persistent storage . As an enhancement we are adding 3 replica nodes and Block volumes for persistent storage. We will have 6 pods ( one for each redis cluster node with 3 master and 3 replica nodes) each pod will have its own block volume.</p> <pre><code>Master1 , Master2 ,Master3 ,Replica1, Replica2, Replica3 BV1 , BV2 ,BV3 ,BV4 ,BV5 ,BV6 </code></pre> <p>I wanted to know if Master pod goes down and Replica pod becomes the new Master then how does the data from Block volume of master will get replicated to Replica's block volume ? or do we only need to have 3 block volumes ( only for master nodes) so that when master node goes down and replica becomes the new master then it can start using Master pods block volume.</p>
<p>I am not sure how you are setting up the cluster, but if you are using a Helm chart (or an operator) it will most likely create StatefulSets.</p> <p><strong>StatefulSets</strong> get a separate <strong>PVC</strong> per pod, unless you deliberately set up shared block storage.</p> <p>So with <strong>3 masters</strong> and <strong>3 replicas</strong> you end up with <strong>6 PVCs</strong> in total, and <strong>Redis</strong> replication itself takes care of keeping master and replica data in sync — the block volumes are never swapped between pods.</p> <p>Using some <strong>NFS</strong> or shared-volume option that is backed by a couple of <strong>block volumes</strong> underneath would be a different story.</p> <p>With a normal <strong>Redis</strong> cluster setup via <strong>Helm</strong> or the <strong>Operator</strong>, the scenario will be as described above.</p>
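<p>To illustrate the &quot;one PVC per pod&quot; point, a StatefulSet typically declares a <code>volumeClaimTemplates</code> section like the sketch below (names and sizes are placeholders). Kubernetes then stamps out one block-volume-backed PVC per replica (<code>redis-data-redis-0</code>, <code>redis-data-redis-1</code>, ...), and Redis replication — not the storage layer — keeps master and replica data in sync:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis-headless
  replicas: 6            # e.g. 3 masters + 3 replicas
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          volumeMounts:
            - name: redis-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: redis-data
      spec:
        accessModes: [&quot;ReadWriteOnce&quot;]
        resources:
          requests:
            storage: 50Gi
</code></pre> <p>So when a master pod fails, the promoted replica keeps using its own volume; when the old master pod comes back, its original volume is reattached to it and it resynchronizes from the new master.</p>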
<p>While this question might seem duplicate at first, I would ask people to go through it once. I have checked SO, for all similar questions before posting this.</p> <p>I have an ALB ingress controller which has a registered Target Group for an application that I am trying to access via the ALB. However the target group binding is not getting created for the application due to which the &quot;registered targets&quot; under the target group always comes as 0. Also the LoadBalancerAssociated also comes as None. This can be seen from the image below.</p> <p>I have the checked the ALB pod logs and there is no error w.r.t creating the targetgroupbinding.</p> <p>Based on some documentation here :</p> <p><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/targetgroupbinding/targetgroupbinding/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/targetgroupbinding/targetgroupbinding/</a></p> <p>I see that the ALB is supposed to create the targetgroupbinding itself:</p> <p><code>The AWS LoadBalancer controller internally used TargetGroupBinding to support the functionality for Ingress and Service resource as well. It automatically creates TargetGroupBinding in the same namespace of the Service used.</code></p> <p>Since there is no error in the pod logs, I am wondering how can I debug this issue?</p> <p><a href="https://i.stack.imgur.com/yz4g2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yz4g2.png" alt="enter image description here" /></a>Any help would be appreciated.</p> <p>Update 1 (Current scenario) : The ALB is supposed to load balance a number of applications. So the ingress has many services under it. The targetgroupbindings have been created for all the service except the one mentioned above.</p>
<p>I was able to fix the issue by adding the annotation below:</p> <pre><code>&quot;alb.ingress.kubernetes.io/target-type&quot;: ip </code></pre> <p>Reference - <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1695" rel="nofollow noreferrer">1695</a></p>
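<p>For completeness, a minimal sketch of where the annotation goes (names are placeholders); the controller also reads annotations from the backing Service, so it can be set there per-service as well:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/target-type: ip   # register pod IPs instead of instance NodePorts
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
</code></pre> <p>With <code>target-type: instance</code> (the default) the backing Service must be of type NodePort or LoadBalancer, which is a common reason a single service ends up with no registered targets while the others work.</p>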
<p>I'm using kustomize to manage a rather standard deployment. I have a <code>namePrefix</code> to modify the resource names.</p> <p>I need to add a custom resource to my configuration which itself autogenerates a secret after creation. The secret name consists of a fixed prefix and the name of the custom resource. I want to reference this secret in my deployment.</p> <pre class="lang-yaml prettyprint-override"><code># kustomization.yaml resources: - deployment.yaml - custom-resource.yaml namePrefix: my-prefix- secretGenerator: - name: my-secret files: - password.txt </code></pre> <pre class="lang-yaml prettyprint-override"><code># deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-image image: my-image:latest envFrom: - secretRef: name: my-secret - secretRef: name: prefix-my-custom-resource &lt;-- does not work </code></pre> <pre class="lang-yaml prettyprint-override"><code># custom-resource.yaml apiVersion: some.crd.io/v1 kind: CustomResource metadata: name: my-custom-resource </code></pre> <p>The custom resource will autogenerate: (not result of <code>kubectl kustomize .</code>)</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Secret metadata: name: prefix-my-custom-resource </code></pre> <p>Due to the use of the PrefixTransformer, the name of the custom resource is changed to <code>my-prefix-my-custom-resource</code>. Therefore, the <code>secretRef</code> in the deployment yaml needs to be updated to <code>prefix-my-prefix-my-custom-resource</code>. I tried to solve this with a nameReference configuration, but I don't think the <code>fieldSpec</code> allows for a substring. Is there any solution to this?</p>
<p>With the help of a colleague, we were able to solve this with vars:</p> <pre class="lang-yaml prettyprint-override"><code># kustomization.yaml resources: - deployment.yaml - custom-resource.yaml namePrefix: my-prefix- secretGenerator: - name: my-secret files: - password.txt configurations: - configurations/var-reference.yaml vars: - name: MY-VARIABLE objref: kind: CustomResource name: my-custom-resource apiVersion: some.crd.io/v1 fieldref: fieldpath: metadata.name </code></pre> <pre class="lang-yaml prettyprint-override"><code># configurations/var-reference.yaml varReference: - kind: Deployment path: spec/template/spec/containers/envFrom/secretRef/name </code></pre> <pre class="lang-yaml prettyprint-override"><code># deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-app spec: replicas: 3 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-image image: my-image:latest envFrom: - secretRef: name: my-secret - secretRef: name: prefix-$(MY-VARIABLE) </code></pre> <p>I'm aware that vars are deprecated and might try to find a solution with replacements, but for now I'm good with this solution.</p>
<p>I want to resize Kubernetes Persistent Volume. So I tried by editing</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pvc labels: app: postgres spec: storageClassName: aws-gp2 accessModes: - ReadWriteOnce resources: requests: storage: 50Gi </code></pre> <p>But I got error following error</p> <pre><code>error: persistentvolumeclaims &quot;postgres-pvc&quot; could not be patched: persistentvolumeclaims &quot;postgres-pvc&quot; is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize </code></pre> <p>Then I went to AWS console and resized the PVC to 50GB. Now I see volume size to 50GB</p> <pre><code>root@postgres-8654848bb4-6897j:/# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 259:0 0 100G 0 disk ├─nvme0n1p1 259:1 0 100G 0 part /etc/hosts └─nvme0n1p128 259:2 0 1M 0 part nvme1n1 259:3 0 5G 0 disk nvme2n1 259:4 0 5G 0 disk nvme3n1 259:5 0 5G 0 disk nvme4n1 259:6 0 1G 0 disk nvme5n1 259:7 0 5G 0 disk nvme6n1 259:8 0 50G 0 disk /var/lib/postgresql/data nvme7n1 259:9 0 1G 0 disk nvme8n1 259:10 0 1G 0 disk nvme9n1 259:11 0 200G 0 disk </code></pre> <p>But its not reflecting in <code>df -h</code></p> <pre><code>root@postgres-8654848bb4-6897j:/# df -h Filesystem Size Used Avail Use% Mounted on overlay 100G 30G 71G 30% / tmpfs 64M 0 64M 0% /dev tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/nvme0n1p1 100G 30G 71G 30% /etc/hosts shm 64M 236K 64M 1% /dev/shm /dev/nvme6n1 9.7G 8.8G 930M 91% /var/lib/postgresql/data tmpfs 7.0G 12K 7.0G 1% /run/secrets/kubernetes.io/serviceaccount tmpfs 3.9G 0 3.9G 0% /proc/acpi tmpfs 3.9G 0 3.9G 0% /sys/firmware </code></pre> <p>Then I tried <code>resize2fs</code> command but got following error</p> <pre><code>root@postgres-8654848bb4-6897j:/# resize2fs /dev/nvme6n1 resize2fs 1.46.2 (28-Feb-2021) open: No such file or directory while opening /dev/nvme6n1 root@postgres-8654848bb4-6897j:/# resize2fs /var/lib/postgresql/data resize2fs 1.46.2 (28-Feb-2021) open: Is a directory while opening /var/lib/postgresql/data </code></pre> <p>Any help with this here ?</p>
<p>The easiest way is to enable <strong>allowVolumeExpansion</strong> on the <strong>StorageClass</strong>; I did this for an <strong>Elasticsearch</strong> database.</p> <p>Check out my article on that: <a href="https://medium.com/@harsh.manvar111/resizing-pvc-disk-in-gke-c5b882c90f7b" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/resizing-pvc-disk-in-gke-c5b882c90f7b</a></p> <p><strong>StorageClass</strong></p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  …
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
</code></pre> <p>After this, update the <strong>PVC</strong> with the <strong>desired size</strong> and <strong>restart</strong> the <strong>pod</strong>.</p>
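<p>A sketch of what that could look like for the <code>aws-gp2</code> class from the question (<code>allowVolumeExpansion</code> is one of the few fields that can normally be changed on an existing StorageClass, otherwise recreate the class):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-gp2
provisioner: kubernetes.io/aws-ebs   # or ebs.csi.aws.com with the EBS CSI driver
parameters:
  type: gp2
allowVolumeExpansion: true
</code></pre> <p>After that, increasing <code>spec.resources.requests.storage</code> on the PVC lets Kubernetes expand both the EBS volume and the filesystem (a pod restart may be needed for the filesystem step). That is usually simpler than resizing the disk out-of-band in the AWS console, where the filesystem still has to be grown manually on the node — which is why <code>resize2fs</code> fails inside the container: the pod only sees the mount, not the block device.</p>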
<p>Hi, I was just trying to install Argo CD on my local machine. I have minikube installed and running.</p> <p>After creating the argocd namespace, I ran this command:</p> <pre><code>kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml </code></pre> <p>This error persists:</p> <pre><code>Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout </code></pre> <p>Could I get some help? Btw, I'm new to Argo...</p>
<p>The error “<code>Unable to connect to the Server TCP I/O timeout</code>” usually has a few common causes; you can troubleshoot with the steps below:</p> <p>1) Your Kubernetes cluster is not running. Verify that your cluster has been started, e.g. by pinging the IP address.</p> <p>2) There are networking issues that prevent you from accessing the cluster. Verify that you can ping the IP and try to track down whether there is a firewall in place which is preventing the access.</p> <p>3) You have configured a cluster that does not exist any more. The error can also be caused by an IP address mismatch in the kubelet configuration.</p> <p>4) Refer to this official <a href="https://argo-cd.readthedocs.io/en/stable/developer-guide/running-locally/" rel="nofollow noreferrer">doc</a> about how to install Argo CD on a local machine; as mentioned in the doc, you need to run the commands in the same namespace where Argo CD is installed. Try setting the current context's default namespace with:</p> <pre><code>kubectl config set-context --current --namespace=argocd </code></pre> <p><strong>To see your current context</strong>:</p> <p><code>kubectl config current-context</code></p> <p><strong>To see the contexts you have</strong>:</p> <p><code>kubectl config view</code></p> <p><strong>To switch context</strong>:</p> <pre><code>kubectl config use-context context-cluster-name </code></pre> <p>Make sure you are using the correct kubectl context.</p> <p>You can also refer to this <a href="https://www.techcrumble.net/2019/06/kubernetes-error-unable-to-connect-to-the-server-tcp-i-o-timeout/" rel="nofollow noreferrer">doc</a> authored by Aruna Lakmal for more information about this error.</p>
<p>I have 2 templates in my Ansible playbook:</p> <p><em>template-1.yaml.j2</em>:</p> <pre><code>{ &quot;auths&quot;: { &quot;https://index.docker.io/v1/&quot;: { &quot;auth&quot;: {{ (user.name + ':' + user.pass) | b64encode }} } } </code></pre> <p><em>template-2.yaml.j2</em>:</p> <pre><code>--- apiVersion: v1 data: .dockerconfigjson: '{% include &quot;template-1.yaml.j2&quot; %}' kind: Secret metadata: name: pull-secret namespace: default type: kubernetes.io/dockerconfigjson </code></pre> <p>The nested template works great, but I want <code>template-2.yaml.j2</code> to encode <code>template-1.yaml.j2</code> in <strong>base64</strong>, I tried adding a filter like so:</p> <p><code>{% include &quot;template-1.yaml.j2&quot; | b64encode %}</code></p> <p>And it fails with the following error:</p> <p><code>An unhandled exception occured while running the lookup plugin 'template'. Error was a &lt;class 'jinja2.exceptions.TemplateNotFound'&gt;, original message: dGVtcGxhdGUtMi55YW1sLmoy</code></p> <p>What actually happens is that the literal string <code>template-2.yaml.j2</code> is encoded and fails.</p> <p>Is there any solution beside separating the templates to different tasks in my Ansible playbook?</p>
<p>Simply replace the following line (wrong):</p> <pre class="lang-yaml prettyprint-override"><code>.dockerconfigjson: '{% include &quot;template-1.yaml.j2&quot; | b64encode %}' </code></pre> <p>with (right):</p> <pre class="lang-yaml prettyprint-override"><code>.dockerconfigjson: '{{ lookup(&quot;template&quot;, &quot;template-1.yaml.j2&quot;) | b64encode }}' </code></pre> <p>The <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_lookup.html" rel="nofollow noreferrer"><code>template</code> lookup</a> fetches the result of the given template as a string, which can then be passed to the <code>b64encode</code> filter.</p>
<p>Im using the following code inside kubebuilder controller to read before update for k8s custom resource, im checking if the object exist if yes check if need to update, if not create it , as I need to use it in several places I want to ask:</p> <ul> <li><p>if there is some helper that can help me to reduce this boilarplate code ? something like <code>createOrUpdate</code> func</p> </li> <li><p>am I doing it right ?</p> </li> </ul> <pre><code>if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &amp;eCmp); err != nil { if apierrors.IsNotFound(err) { // If the object does not exist, create a new one if err := r.Create(ctx, &amp;eCmp); err != nil { return ctrl.Result{}, err } } else { // If there was an error other than 'not found', return the error return ctrl.Result{}, err } } else { // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &amp;eCmp, patch); err != nil { return ctrl.Result{}, err } } </code></pre> <p><strong>if enverything is as recomended please let me know.</strong> I need also to do the stratgic merge but the code doesnt support it</p> <p>I found the following <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate</a> but I dont want it to be related to timestamp, just <code>if something was change -&gt; update it</code> or something <code>doesnt exist -&gt; create it</code></p> <p><strong>when trying to update the CR with the following code it doesnt works , any idea?</strong></p> <pre><code>// If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &amp;eCmp, patch); err != nil { return ctrl.Result{}, err } </code></pre>
<p>There isn't anything wrong with how you've done it; you could potentially restructure it a little to reduce the if nesting.</p> <pre><code>err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &amp;eCmp)
// Object exists, patch it
if err == nil {
    patch := client.MergeFrom(eCmp.DeepCopy())
    if err := r.Patch(ctx, &amp;eCmp, patch); err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil // patched successfully
} else if apierrors.IsNotFound(err) {
    // Object does not exist, create a new one
    if err := r.Create(ctx, &amp;eCmp); err != nil {
        return ctrl.Result{}, err
    }
    return ctrl.Result{}, nil // created successfully
}
return ctrl.Result{}, err
</code></pre> <p>That <code>CreateOrUpdate</code> function looks like it should also do what you want. It doesn't do any comparison with a timestamp. The example there happens to do that, but if you look at the underlying function <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/v0.14.6/pkg/controller/controllerutil/controllerutil.go#L195" rel="nofollow noreferrer">controllerutil.go#L195</a> you will see that there is no timestamp comparison in it.</p> <p>What you will need to do is either define your mutate function (a function with the patch logic) and pass that as a parameter, or create an anonymous function and pass that in.</p> <p>For example:</p> <pre><code>op, err := controllerutil.CreateOrUpdate(context.TODO(), client, object, func() error {
    // Your mutate logic goes in here
    return nil
})
</code></pre> <p>If you look at the <code>CreateOrUpdate</code> function, it's essentially doing the same thing you are in your code except for the mutate (patching) of the resource.</p>
<p>Im using the following code inside kubebuilder controller to read before update for k8s custom resource, im checking if the object exist if yes check if need to update, if not create it , as I need to use it in several places I want to ask:</p> <ul> <li><p>if there is some helper that can help me to reduce this boilarplate code ? something like <code>createOrUpdate</code> func</p> </li> <li><p>am I doing it right ?</p> </li> </ul> <pre><code>if err := r.Get(ctx, client.ObjectKey{Name: sCrName, Namespace: sCrNs}, &amp;eCmp); err != nil { if apierrors.IsNotFound(err) { // If the object does not exist, create a new one if err := r.Create(ctx, &amp;eCmp); err != nil { return ctrl.Result{}, err } } else { // If there was an error other than 'not found', return the error return ctrl.Result{}, err } } else { // If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &amp;eCmp, patch); err != nil { return ctrl.Result{}, err } } </code></pre> <p><strong>if enverything is as recomended please let me know.</strong> I need also to do the stratgic merge but the code doesnt support it</p> <p>I found the following <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller/controllerutil#CreateOrUpdate</a> but I dont want it to be related to timestamp, just <code>if something was change -&gt; update it</code> or something <code>doesnt exist -&gt; create it</code></p> <p><strong>when trying to update the CR with the following code it doesnt works , any idea?</strong></p> <pre><code>// If the object exists, patch it patch := client.MergeFrom(eCmp.DeepCopy()) if err := r.Patch(ctx, &amp;eCmp, patch); err != nil { return ctrl.Result{}, err } </code></pre>
<p>You can use the <code>controllerutil.CreateOrUpdate()</code> function from the <code>sigs.k8s.io/controller-runtime/pkg/controller/controllerutil</code> package to reduce boilerplate code.</p> <p>Using the <code>controllerutil.CreateOrUpdate()</code> function (note it returns two values, an operation result and an error; normally the callback only mutates the object in memory and <code>CreateOrUpdate</code> issues the create/update itself):</p> <pre><code>if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, &amp;eCmp, func() error {
    return r.Patch(ctx, &amp;eCmp, client.MergeFrom(eCmp.DeepCopy()))
}); err != nil {
    return ctrl.Result{}, err
}
</code></pre> <p>For a strategic merge, you can add a strategic merge patch to the callback function to patch the object strategically:</p> <pre><code>if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, &amp;eCmp, func() error {
    // Create a strategic merge patch
    strategicMergePatch, err := strategicpatch.CreateTwoWayMergePatch(eCmp, &amp;newECmp, eCmp)
    if err != nil {
        return err
    }
    // Patch the object strategically
    return r.Patch(ctx, &amp;eCmp, client.ConstantPatch(types.StrategicMergePatchType, strategicMergePatch))
}); err != nil {
    return ctrl.Result{}, err
}
</code></pre>
<p>I have followed the directives of this web site <a href="https://dzone.com/articles/how-to-deploy-apache-kafka-with-kubernetes" rel="nofollow noreferrer">How To Deploy Apache Kafka With Kubernetes (DZone)</a> to deploy a <em>Kafka</em> broker with its <em>Zookeeper</em>. (I've only changed the namespaces names).</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service namespace: ecoemploi spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper namespace: ecoemploi spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper:latest imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 </code></pre> <p>But to make it working, the writers of that solution do wait for the cluster to start a first time,<br /> then copy the IP address of <code>zookeeper-service</code></p> <pre class="lang-bash prettyprint-override"><code>kubectl get services -n kafka NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE zookeeper-service  NodePort  10.100.69.243  &lt;none&gt;      2181:30181/TCP 3m4s </code></pre> <p>and paste it into the <code>kafka-broker</code> deployment file:</p> <pre class="lang-yaml prettyprint-override"><code>[...] spec: hostname: kafka-broker containers: - env: - name: KAFKA_BROKER_ID value: &quot;1&quot; - name: KAFKA_ZOOKEEPER_CONNECT value: 10.100.69.243:2181 - name: KAFKA_LISTENERS value: PLAINTEXT://:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://kafka-broker:9092 [...] </code></pre> <p>Then re-applying <code>kafka-broker</code> makes it working.</p> <p>I'm using this solution, but in a <em>Vagrant</em> box it isn't convenient: each time I'm recreating my vagrant, that IP changes...</p> <p>Is there a solution to get the Cluster IP of that <code>zookeeper-service</code> and prepare the correct <code>KAFKA_ZOOKEEPER_CONNECT</code> environment variable for <code>kafka-broker</code> automatically?</p>
<p>You can use the service name, which stays the same each time — Kubernetes DNS takes care of the resolution.</p> <pre><code>- name: KAFKA_ZOOKEEPER_CONNECT
  value: zookeeper-service
</code></pre> <p><strong>K8s</strong> works that way: internal communication happens over the <strong>service name</strong>. Avoid hard-coding the <strong>IP</strong> unless you really need it, because a <strong>ClusterIP</strong> may change when the <strong>service</strong> is recreated or redeployed.</p> <p>DNS resolution is taken care of by Kubernetes (CoreDNS/kube-dns), so we just reference the <strong>service name</strong>.</p> <p>Here is an example for reference: <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</a></p> <p>There, WordPress connects to the MySQL database via the <strong>MySQL</strong> service name <strong>wordpress-mysql</strong> in the <strong>WORDPRESS_DB_HOST</strong> variable.</p>
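<p>Applied to the manifests in the question (both in the <code>ecoemploi</code> namespace), the broker env could simply reference the service name — or the fully qualified form if the broker ever runs in another namespace:</p> <pre class="lang-yaml prettyprint-override"><code>- name: KAFKA_ZOOKEEPER_CONNECT
  value: zookeeper-service:2181
  # or, fully qualified:
  # value: zookeeper-service.ecoemploi.svc.cluster.local:2181
</code></pre> <p>That way nothing has to be copy-pasted after each <code>vagrant destroy</code>/<code>vagrant up</code>, because the DNS name stays stable even though the ClusterIP changes.</p>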
<p>In a Azure AKS kubernetes cluster, after a cluster version upgrade the nodepool nodes, I have a PV that has this node affinity:</p> <pre><code>Node Affinity: Required Terms: Term 0: failure-domain.beta.kubernetes.io/region in [westeurope] </code></pre> <p>The nodes don't have the label so the Deployment creates a Pod that cannot be scheduled for the corresponding PVC for this PV. The Pod is never started:</p> <pre><code> Warning FailedScheduling 15m default-scheduler 0/3 nodes are available: 3 node(s) had volume node affinity conflict. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. </code></pre> <p>How can I add the label to the node or remove the label from the PV? I have tried to add the label to the node but I get:</p> <pre><code>Error from server: admission webhook &quot;aks-node-validating-webhook.azmk8s.io&quot; denied the request: (UID: 931bf139-1579-4e96-b164-e4e6e2fdae65) User is trying to add or update a non-changeable system label (failure-domain.beta.kubernetes.io/region:westeurope). This action is denied.. </code></pre> <p>Is the only solution to backup and restore the PV into a new one that does not have that deprecated label? What would the best process to do it (or any alternative solution)</p>
<p>We had the same problem. How we resolved it:</p> <ol> <li>Copy the output of &quot;kubectl get pvc&quot; to get the link between the PVC and the PV.</li> <li>Locate the disk in the Azure portal and create a snapshot of it (in the MC_ resource group of the AKS cluster).</li> <li>Edit the deployment in Kubernetes and set the replica count to 0. Save and check that the pods are stopped and removed.</li> <li>Delete the PVC for this pod.</li> <li>Edit the deployment and set the replica count to 1. Save and check that a new PVC and a new PV are created.</li> <li>Edit the deployment again and set the replica count to 0.</li> <li>Locate the new disk in the Azure portal. Use &quot;kubectl get pvc&quot; to find it.</li> <li>Delete the new disk in the Azure portal.</li> <li>Locate the snapshot created in step 2.</li> <li>Create a new disk from the snapshot. The new disk should have the same name as the disk deleted in step 8.</li> <li>Edit the deployment and set the replica count to 1. It should now start using the old disk (restored from the snapshot) with the new PVC and PV.</li> </ol> <p>Take a backup of whatever references and disks you can before starting.</p>
<p>Running into the following error when use custom UDF</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.8/dist-packages/pyspark/worker.py&quot;, line 603, in main func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type) File &quot;/usr/local/lib/python3.8/dist-packages/pyspark/worker.py&quot;, line 449, in read_udfs udfs.append(read_single_udf(pickleSer, infile, eval_type, runner_conf, udf_index=i)) File &quot;/usr/local/lib/python3.8/dist-packages/pyspark/worker.py&quot;, line 251, in read_single_udf f, return_type = read_command(pickleSer, infile) File &quot;/usr/local/lib/python3.8/dist-packages/pyspark/worker.py&quot;, line 71, in read_command command = serializer._read_with_length(file) File &quot;/usr/local/lib/python3.8/dist-packages/pyspark/serializers.py&quot;, line 160, in _read_with_length return self.loads(obj) File &quot;/usr/local/lib/python3.8/dist-packages/pyspark/serializers.py&quot;, line 430, in loads return pickle.loads(obj, encoding=encoding) ModuleNotFoundError: No module named 'jobs' </code></pre> <p>The import spark scripts looks something like this</p> <pre><code>from jobs.lib_a import a from jobs.udf import udf_function #This is a UDF </code></pre> <p>The scripts itself is located in <code>jobs/scripts/test_script.py</code>, the entire jobs folder is zipped and then added to spark using pyFiles.</p> <p>The weird thing is that the other import from jobs module works, only fail for udf.</p> <p>I have tried approach in this <a href="https://stackoverflow.com/questions/59741832/pyspark-custom-udf-modulenotfounderror-no-module-named">post</a>, creating a separate zip file called udf.zip, putting udf at top level and then add it to spark via pyFiles, but still run into ModuleNotFoundError when I try to import <code>udf</code>.</p> <p>I have also tried <code>sys.path.append(&lt;the udf path&gt;)</code></p> <p>The only approach works is when I copy the udf_function into the spark script <code>test_script.py</code>. This wouldn't work in reality as the udf_function can be shared by other spark script.</p> <p>The underlying system is: Python 3.8 Spark 3.2 Spark is running in kubernetes</p>
<p>I was able to make it work.</p> <p>For context: we are using the Spark on K8s operator (<a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/spark-on-k8s-operator</a>), so we pass in the zip file via <code>pyFiles</code>.</p> <p>This works the same as setting it up in the Spark script:</p> <pre><code>spark_session.sparkContext.addPyFile('/opt/spark/pyfiles/python.zip') </code></pre> <p>Initially, we passed it in as</p> <pre><code>pyFiles:
  - local:///opt/spark/pyfiles/spinner-python.zip
</code></pre> <p>But <code>local:///</code> resolved to the working directory, so we had to add an extra slash (/) to point to the absolute path:</p> <pre><code>pyFiles:
  - local:////opt/spark/pyfiles/spinner-python.zip
</code></pre> <p>When PySpark serializes a UDF, it sends a copy of the UDF code to all the worker nodes, so <code>pyFiles</code> has to point to an absolute path instead of a relative one.</p>
<p>I'm currently working on a Kubernetes deployment file and I need to set the environment variables that the container will have.</p> <p>Is there a way to set Kubernetes secrets as environment variables in the deployment file?</p> <p>Here is a dummy file to help explain:</p> <p><a href="https://i.stack.imgur.com/mv4Rr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mv4Rr.png" alt="dummy deployment file" /></a></p>
<p>The syntax that worked for me was:</p> <pre><code>spec: ... template: ... spec ... containers: env: - name: PORT valueFrom: configMapKeyRef: name: env-configmap key: PORT - name: ELASTIC_PASSWORD valueFrom: secretKeyRef: name: env-secret key: ELASTIC_PASSWORD </code></pre> <p>The other syntax may also work and be acceptable, I'm just offering this answer as another alternative syntax for <em>mapping</em> Kubernetes configuration and secret values into environment variables.</p> <p>When I checked the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">URL referenced in the answer from @Valentin BOUSSON</a>. The page now (2023-05-16) says:</p> <blockquote> <p>To use a Secret in an environment variable in a Pod:</p> <p>For each container in your Pod specification, add an environment variable for each Secret key that you want to use to the env[].valueFrom.secretKeyRef field. Modify your image and/or command line so that the program looks for values in the specified environment variables.</p> </blockquote>
<p>I can still access kubectl after disabling Kubernetes from Docker Desktop v4.18.0.</p> <p><a href="https://i.stack.imgur.com/LIubu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LIubu.png" alt="Disabled Kubernetes service in Docker Desktop" /></a></p> <p>On running the <code>kubectl version</code> command, the following output is displayed:</p> <p><a href="https://i.stack.imgur.com/270LG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/270LG.png" alt="kubectl version command in terminal" /></a></p> <pre><code>Environment details: - OS: MacOS Ventura 13.3.1 (M1) - Docker Desktop: v4.18.0 </code></pre> <p>I want to entirely remove kubectl and related services so that I can do a fresh install of minikube for local development. Please suggest the required steps for the same.</p>
<p>As per the official <a href="https://docs.docker.com/desktop/kubernetes/#disable-kubernetes" rel="nofollow noreferrer">doc</a>:</p> <blockquote> <p>After disabling Kubernetes, select Apply &amp; Restart to save the settings. This stops and removes Kubernetes containers, and also removes the /usr/local/bin/kubectl command.</p> </blockquote> <p>The troubleshooting steps below can help you:</p> <ul> <li><p>If you don't have kubectl installed in <code>/usr/local/bin</code>, Docker will install it for you on startup, so check where kubectl is located and then remove the kubectl binary.</p> </li> <li><p>Check if Kubernetes is still in the starting state.</p> </li> <li><p>Check if any existing repositories are there using <code>brew search kubectl</code>.</p> </li> <li><p>Try clean / purge data or reset to factory defaults in the <a href="https://docs.docker.com/desktop/troubleshoot/overview/#troubleshoot-menu" rel="nofollow noreferrer">troubleshooting menu</a>.</p> </li> </ul> <p>Also, as @David Maze suggested, you can install minikube using <code>brew install minikube</code> without removing kubectl.</p>
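<p>For example, the cleanup and fresh minikube install could look like this (a sketch; the binary path may differ on your machine):</p> <pre class="lang-bash prettyprint-override"><code># find every kubectl on the PATH and see whether it is the Docker Desktop leftover
which -a kubectl
ls -l /usr/local/bin/kubectl
# remove the leftover binary/symlink if that is where it lives
sudo rm /usr/local/bin/kubectl
# install minikube for local development (it can also provide its own kubectl via &quot;minikube kubectl&quot;)
brew install minikube
</code></pre>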
<p>Hi, I was just trying to install Argo CD on my local machine. I have installed minikube and it is running.</p> <p>After creating the argocd namespace, I just try this command:</p> <pre><code>kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml </code></pre> <p>This error persists:</p> <pre><code>Unable to connect to the server: dial tcp [2405:200:1607:2820:41::36]:443: i/o timeout </code></pre> <p>Could I get some help? Btw, I'm new to Argo...</p>
<p>Play with argocd in the killercoda playground. This is probably the easiest way to get started learning and messing around with it. <a href="https://killercoda.com/mabusaa/course/argocd-endusers-scenarios/00-argocd-playground" rel="nofollow noreferrer">https://killercoda.com/mabusaa/course/argocd-endusers-scenarios/00-argocd-playground</a></p> <p>Or follow a guide like this: <a href="https://dev.to/abhinavd26/all-about-argocd-a-beginners-guide-33c9" rel="nofollow noreferrer">https://dev.to/abhinavd26/all-about-argocd-a-beginners-guide-33c9</a></p> <p>Make sure your endpoints and services are up and you can curl the endpoint. Make sure to read the sections about ingresses and where to curl or connect to.</p> <p>You may also need to ensure your network is set up properly to allow you to access the endpoint.</p> <p>Here is a quick example from running the killercoda example...</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl create namespace argocd namespace/argocd created $ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created ... networkpolicy.networking.k8s.io/argocd-server-network-policy created </code></pre> <p>Look for the <code>argocd-server</code> pod to be running (bottom one)</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl get pods --namespace argocd NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 55s argocd-applicationset-controller-84c8d6fc9b-nstm5 1/1 Running 0 55s argocd-dex-server-59dc6fc75b-d8pnz 1/1 Running 0 55s argocd-notifications-controller-5d7dd8c5b7-j2hb4 1/1 Running 0 55s argocd-redis-74cb89f466-52d22 1/1 Running 0 55s argocd-repo-server-c456f6997-4qcww 1/1 Running 0 55s argocd-server-7555f5c778-fx2s5 1/1 Running 0 55s </code></pre> <p>Look at the endpoints - note the <code>argocd-server</code> endpoint on <code>192.168.1.21:8080</code></p> <pre class="lang-bash prettyprint-override"><code>$ kubectl --namespace argocd get endpoints NAME ENDPOINTS AGE argocd-applicationset-controller 192.168.1.18:8080,192.168.1.18:7000 85s argocd-dex-server 192.168.1.20:5558,192.168.1.20:5557,192.168.1.20:5556 85s argocd-metrics 192.168.1.23:8082 85s argocd-notifications-controller-metrics 192.168.1.19:9001 85s argocd-redis 192.168.1.22:6379 85s argocd-repo-server 192.168.1.17:8084,192.168.1.17:8081 85s argocd-server 192.168.1.21:8080,192.168.1.21:8080 85s argocd-server-metrics 192.168.1.21:8083 85s </code></pre> <p>Look at the services - note the <code>argocd-server</code> service with <a href="https://kubernetes.io/docs/concepts/services-networking/cluster-ip-allocation/" rel="nofollow noreferrer">ClusterIP</a> 10.105.38.254 port(s) 80/TCP,443/TCP</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl --namespace argocd get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argocd-applicationset-controller ClusterIP 10.107.62.45 &lt;none&gt; 7000/TCP,8080/TCP 113s argocd-dex-server ClusterIP 10.102.41.206 &lt;none&gt; 5556/TCP,5557/TCP,5558/TCP 113s argocd-metrics ClusterIP 10.111.77.173 &lt;none&gt; 8082/TCP 113s argocd-notifications-controller-metrics ClusterIP 10.100.24.197 &lt;none&gt; 9001/TCP 113s argocd-redis ClusterIP 10.100.133.38 &lt;none&gt; 6379/TCP 113s argocd-repo-server ClusterIP 10.106.117.208 &lt;none&gt; 8081/TCP,8084/TCP 113s argocd-server ClusterIP 10.105.38.254 &lt;none&gt; 80/TCP,443/TCP 113s argocd-server-metrics ClusterIP 10.98.252.11 &lt;none&gt; 8083/TCP 
113s </code></pre> <p>If you curl the argocd server cluster ip and port (10.105.38.254:443) or endpoint (192.168.1.21:8080), you should get something like this:</p> <pre class="lang-bash prettyprint-override"><code>$ curl -k https://192.168.1.21:8080/swagger-ui &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;API documentation&lt;/title&gt; &lt;meta charset=&quot;utf-8&quot;/&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot;&gt; &lt;link href=&quot;https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700&quot; rel=&quot;stylesheet&quot;&gt; &lt;style&gt; body { margin: 0; padding: 0; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;redoc spec-url='/swagger.json'&gt;&lt;/redoc&gt; &lt;script src=&quot;/assets/scripts/redoc.standalone.js&quot;&gt; &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>If the curl responds, you can try to put the url in your browser and see if it connects. Otherwise look at the options in step 3 of this url for exposing the service (<a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="nofollow noreferrer">https://argo-cd.readthedocs.io/en/stable/getting_started/</a>)</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl port-forward svc/argocd-server -n argocd 8080:443 </code></pre>
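<p>Once the port-forward is running, you can fetch the initial admin password and log in (these commands come from the getting-started guide linked above; localhost:8080 is the forwarded address from the previous command):</p> <pre class="lang-bash prettyprint-override"><code># print the auto-generated admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo
# then open https://localhost:8080 in a browser and log in as admin,
# or use the CLI: argocd login localhost:8080 --username admin --insecure
</code></pre>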
<p>I'm having trouble understanding what pod Eviction means mechanically in terms of K8s's actions -- what exactly does K8s do with the pod during eviction?</p> <p>Specifically, my main question is this: Under what conditions is an Evicted pod actually deleted from ETCD? Under what conditions is an Evicted pod just killed without being deleted from the API server?</p> <p>If I Evict a pod directly using the Eviction API, the pod object is actually deleted. On the other hand, I've definitely seen pods hang in &quot;Evicted&quot; in the status column after I run &quot;kubectl get pod&quot;.</p> <p>Edit: Removed follow-up questions about Preemption and OOM-Killing to conform to the guideline of one question per post. Might post a separate question about OOM management later.</p>
<p>I'm also confused by this lately. Here are some findings after a while of digging in the source code and the docs.</p> <p>'Eviction' here actually means two slightly different concepts, which are both documented in the official docs: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction" rel="nofollow noreferrer">Node-Pressure Eviction</a> and <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction" rel="nofollow noreferrer">API-Initiated Eviction</a>. They can really be mixed up when we just talk about 'Eviction' because they both do the same thing: evict pods from nodes.</p> <p>Actually the doc of 'Node-Pressure Eviction' states:</p> <blockquote> <p>Node-pressure eviction is not the same as API-initiated eviction.</p> </blockquote> <p>The difference between these two is that 'API-Initiated Eviction' is, as the doc said:</p> <blockquote> <p>performing a policy-controlled DELETE operation on the Pod.</p> </blockquote> <p>So it will eventually delete the object stored in API server if the pod is evicted.</p> <p>But 'Node-Pressure Eviction' is issued directly by the kubelet and what it does is set the PodPhase in pod's status to 'Failed' and the Reason to 'Evicted'</p> <blockquote> <p>During a node-pressure eviction, the kubelet sets the PodPhase for the selected pods to Failed. This terminates the pods.</p> </blockquote> <p>This will result in the <code>Evicted</code> pods shown when running <code>kubectl get pod</code>.</p> <p>So the direct answer to your question is: If the pod is evicted using the Eviction API, the pod object will be deleted. If the pod is evicted by kubelet due to node pressure, the pod object will remain and will be in Failed status.</p>
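<p>One practical consequence of this difference: pods left behind by node-pressure eviction stay in the API server (and keep showing up as <code>Evicted</code>) until something deletes them. A small sketch for finding and cleaning them up, if you no longer need them for debugging:</p> <pre class="lang-bash prettyprint-override"><code># list pods the kubelet marked Failed (node-pressure evictions show Reason=Evicted)
kubectl get pods --all-namespaces --field-selector=status.phase=Failed
# remove them in a given namespace once you are done inspecting them
kubectl delete pods -n &lt;namespace&gt; --field-selector=status.phase=Failed
</code></pre>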
<p>when i use docker as CRI:</p> <pre><code>{&quot;log&quot;:&quot;I0421 14:23:18.944348 1 node.go:172] Successfully retrieved node IP: 192.168.49.2\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:18.944635198Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:18.944724 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:18.944838628Z&quot;} {&quot;log&quot;:&quot;W0421 14:23:19.008388 1 server_others.go:578] Unknown proxy mode \&quot;\&quot;, assuming iptables proxy\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.008544314Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.008581 1 server_others.go:185] Using iptables Proxier.\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.008653777Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.008904 1 server.go:650] Version: v1.20.0\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.008963124Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009762 1 config.go:315] Starting service config controller\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.009986673Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009867 1 shared_informer.go:240] Waiting for caches to sync for service config\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.009999075Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009973 1 config.go:224] Starting endpoint slice config controller\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.010041688Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009979 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.01004961Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.110110 1 shared_informer.go:247] Caches are synced for endpoint slice config \n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.110273306Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.110120 1 shared_informer.go:247] Caches are synced for service config \n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.110525274Z&quot;} </code></pre> <p>And</p> <p>when i use containerd as CRI:</p> <pre><code>2022-07-19T11:13:32.228206987+08:00 stderr F I0719 11:13:32.228176 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288 2022-07-19T11:13:32.228229469+08:00 stderr F I0719 11:13:32.228203 1 conntrack.go:52] Setting nf_conntrack_max to 524288 2022-07-19T11:13:32.228460095+08:00 stderr F I0719 11:13:32.228431 1 conntrack.go:83] Setting conntrack hashsize to 131072 2022-07-19T11:13:32.237601129+08:00 stderr F I0719 11:13:32.237568 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 2022-07-19T11:13:32.237638594+08:00 stderr F I0719 11:13:32.237613 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 300 2022-07-19T11:13:32.237787689+08:00 stderr F I0719 11:13:32.237756 1 config.go:315] Starting service config controller </code></pre> <p>Any body know why the stream of log file is always stderr?</p> <p>i expect that info messages is writing to stdout and err messages is writing to stderr.</p>
<p>STDOUT is usually a command’s normal output, and STDERR is typically used for error messages. By default, docker/container logs show the command’s STDOUT and STDERR. The information that is logged and the format of the log depend almost entirely on the container’s endpoint command. <em><strong>When you use Docker/containerd as the CRI, the logs are written in the JSON/CRI format shown above, where each line records the stream it came from (here STDERR), and this needs to be parsed before it is shipped to the logging agent.</strong></em></p> <p>Refer to this <a href="https://github.com/fluent/fluent-bit/issues/522" rel="nofollow noreferrer">GIT LINK</a> by stevenAcoffman for more information; there they used Fluent Bit as the log parser.</p> <p>You can also use <a href="https://sematext.com/docs/logagent/installation-docker/#kubernetes-with-containerd-and-ibm-cloud" rel="nofollow noreferrer">sematext</a> as a log agent, which supports the CRI-O log format as well, and install its <a href="https://sematext.com/docs/logagent/input-filter-containerd/" rel="nofollow noreferrer">Plugin</a> to parse the logs.</p> <blockquote> <p>By default, Docker captures the standard output (and standard error) of all your containers, and writes them in files using the JSON format. The JSON format annotates each line with its origin (stdout or stderr) and its timestamp. Each log file contains information about only one container. <code>{&quot;log&quot;:&quot;Log line is here\n&quot;,&quot;stream&quot;:&quot;stdout&quot;,&quot;time&quot;:&quot;2019-01-01T11:11:11.111111111Z&quot;}</code></p> </blockquote> <p>Along with the above, you need to use the <a href="https://docs.docker.com/config/containers/logging/json-file/" rel="nofollow noreferrer">JSON logging file driver</a>.</p>
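<p>You can also verify which stream the process itself writes to by splitting the streams that <code>docker logs</code> reproduces (a quick check, assuming Docker is the runtime on that node). Note that klog, the logging library used by kube-proxy and most Kubernetes components, writes to stderr by default, which is why these lines are tagged <code>stderr</code> even for informational messages.</p> <pre class="lang-bash prettyprint-override"><code># show only what the container process wrote to stdout
docker logs &lt;container-id&gt; 2&gt;/dev/null
# show only what it wrote to stderr
docker logs &lt;container-id&gt; 1&gt;/dev/null
</code></pre>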
<p>I am using the 'crictl' tool to work with containerd runtime containers (under Kubernetes) in a managed cluster.</p> <p>I'm trying to set the memory limit (in bytes) to 16MB with the command: <code>crictl -r unix:///run/containerd/containerd.sock update --memory 16777216 c60df9ef3381e</code></p> <p>And I get the following error:</p> <pre><code>E1219 11:10:11.616194 1241 remote_runtime.go:640] &quot;UpdateContainerResources from runtime service failed&quot; err=&lt; rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248) : unknown &gt; containerID=&quot;c60df9ef3381e&quot; FATA[0000] updating container resources for &quot;c60df9ef3381e&quot;: rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248) : unknown </code></pre> <p>At first I thought that maybe I cannot set a memory limit directly on a running container that is lower than the limit that appears in the Kubernetes yaml. Here are the limits from K8s: <code>Requests:{&quot;cpu&quot;:&quot;100m&quot;,&quot;memory&quot;:&quot;64Mi&quot;} Limits:{&quot;cpu&quot;:&quot;200m&quot;,&quot;memory&quot;:&quot;128Mi&quot;} </code> But no: even setting a memory limit above the K8s request (e.g. 65MB) gives this same error!</p> <p><strong>This works on the Docker runtime</strong> - I'm able to limit the memory of the container. Yes, it might crash, but the operation works.</p> <p>Then, I tried to set a memory limit higher than the current usage, and it succeeded...</p> <p>Can anyone help me understand this error and what might be causing it on the containerd runtime? Is it indeed a limitation that I cannot set a limit lower than the memory currently used by the container? Is there a way to overcome that?</p> <p>Thanks a lot for your time!!!</p>
<p>The error suggests that the container is already using more memory than the limit you're trying to impose. If the limit were to be imposed in this case, then any further attempts to allocate memory would fail. That would usually cause processes to abort, which is bad. Also, the container would already be violating the limit you just set.</p> <p>The limit isn't allowed to be imposed in this case because data loss is highly likely to occur, and the choice of which data to lose should not be left to chance.</p> <p>I suggest if you want to lower the limit, you need to lower the current usage to below that limit before you do so, to retain control of what memory is freed.</p>
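<p>In practice that means: check the container's current usage first and only lower the limit once actual usage is below the new value. A sketch using the same container ID and socket from the question:</p> <pre class="lang-bash prettyprint-override"><code># show current memory usage as seen by the runtime
crictl -r unix:///run/containerd/containerd.sock stats c60df9ef3381e
# once usage is below the target, the update succeeds
crictl -r unix:///run/containerd/containerd.sock update --memory 16777216 c60df9ef3381e
</code></pre>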
<p>I am trying to create EKS Fargate cluster and deploy example Spring Boot application with 1 endpoint, I successfully create stack with following CloudFormation script:</p> <pre><code>--- AWSTemplateFormatVersion: '2010-09-09' Description: 'AWS CloudFormation template for EKS Fargate managed Kubernetes cluster with exposed endpoints' Resources: VPC: Type: AWS::EC2::VPC Properties: CidrBlock: 10.0.0.0/16 EnableDnsSupport: true EnableDnsHostnames: true InternetGateway: Type: AWS::EC2::InternetGateway VPCGatewayAttachment: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicSubnet: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.2.0/24 MapPublicIpOnLaunch: true AvailabilityZone: !Select [ 0, !GetAZs '' ] PrivateSubnetA: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.0.0/24 AvailabilityZone: !Select [ 0, !GetAZs '' ] PrivateSubnetB: Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: 10.0.1.0/24 AvailabilityZone: !Select [ 1, !GetAZs '' ] PublicRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC PublicRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway SubnetRouteTableAssociationA: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable EIP: Type: AWS::EC2::EIP NatGateway: Type: AWS::EC2::NatGateway Properties: SubnetId: !Ref PublicSubnet AllocationId: !GetAtt EIP.AllocationId PrivateRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC PrivateRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: !Ref NatGateway PrivateSubnetRouteTableAssociationA: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PrivateSubnetA RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociationB: Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref PrivateSubnetB RouteTableId: !Ref PrivateRouteTable EKSCluster: Type: AWS::EKS::Cluster Properties: Name: EKSFargateCluster Version: '1.26' ResourcesVpcConfig: SubnetIds: - !Ref PrivateSubnetA - !Ref PrivateSubnetB RoleArn: !GetAtt EKSClusterRole.Arn FargateProfile: Type: AWS::EKS::FargateProfile Properties: ClusterName: !Ref EKSCluster FargateProfileName: FargateProfile PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn Selectors: - Namespace: default Subnets: - !Ref PrivateSubnetA - !Ref PrivateSubnetB FargateProfileCoredns: Type: AWS::EKS::FargateProfile Properties: ClusterName: !Ref EKSCluster FargateProfileName: CorednsProfile PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn Selectors: - Namespace: kube-system Labels: - Key: k8s-app Value: kube-dns Subnets: - !Ref PrivateSubnetA - !Ref PrivateSubnetB FargatePodExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: - eks-fargate-pods.amazonaws.com Action: - sts:AssumeRole ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy EKSClusterRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Principal: Service: - eks.amazonaws.com Action: - sts:AssumeRole ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy - arn:aws:iam::aws:policy/AmazonEKSVPCResourceController </code></pre> <p>I run following command to path the 
CoreDNS for Fargate:</p> <pre><code>kubectl patch deployment coredns \ -n kube-system \ --type json \ -p='[{&quot;op&quot;: &quot;remove&quot;, &quot;path&quot;: &quot;/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type&quot;}]' </code></pre> <p>Then I deploy my example application image from public ECR with following kubernetes manifest:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: example-app spec: replicas: 2 selector: matchLabels: app: example-app template: metadata: labels: app: example-app spec: containers: - name: ventu image: public.ecr.aws/not_real_url/public_ecr_name:latest ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: example-service spec: type: LoadBalancer selector: app: example-app ports: - protocol: TCP port: 80 targetPort: 8080 </code></pre> <p>Then when I run:</p> <pre><code>kubectl get svc </code></pre> <p>I see result:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-service LoadBalancer 172.20.228.77 aa0116829ac2647a7bf39a97bffb0183-1208408433.eu-central-1.elb.amazonaws.com 80:31915/TCP 16m kubernetes ClusterIP 172.20.0.1 &lt;none&gt; 443/TCP 29m </code></pre> <p>However when I try to reach the EXTERNAL-IP on my LoadBalancer example-service, I get empty response, I can't reach my application on only path defined in my Spring Boot application: /api/v1/info</p> <pre><code>server.port=8080 server.servlet.context-path=/api/v1 </code></pre> <p>What am I missing?</p> <p>Couple of information:</p> <ul> <li>my pods spin up successfully, I can see Spring Boot logging when I run kubectl logs pod-name</li> <li>my coredns pods spin up correctly as well</li> <li>I use busybox to test my cluster's dns, and everything seems to be working too</li> </ul>
<p>I solved my issue by following this <a href="https://repost.aws/knowledge-center/eks-alb-ingress-controller-fargate" rel="nofollow noreferrer">guide</a>.</p> <p>I then exported the resulting stack into my CloudFormation script.</p> <p>Then, to deploy my application, I updated my Kubernetes manifest to:</p> <pre><code>--- apiVersion: v1 kind: Namespace metadata: name: example --- apiVersion: apps/v1 kind: Deployment metadata: namespace: example name: deployment-example-be-app spec: selector: matchLabels: app.kubernetes.io/name: example-be-app replicas: 2 template: metadata: labels: app.kubernetes.io/name: example-be-app spec: containers: - name: example-be-app image: public.ecr.aws/fake_url/example:latest imagePullPolicy: Always ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: namespace: example name: service-example-be-app annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing spec: type: LoadBalancer ports: - port: 80 targetPort: 8080 protocol: TCP selector: app.kubernetes.io/name: example-be-app </code></pre> <p>Now I can access my example application from the browser.</p>
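<p>To confirm the setup is working, a few checks can help (this assumes the AWS Load Balancer Controller from the linked guide was installed into kube-system with its default labels):</p> <pre class="lang-bash prettyprint-override"><code># the controller pods should be Running
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
# the service should eventually get an NLB hostname under EXTERNAL-IP
kubectl -n example get svc service-example-be-app
# then hit the Spring Boot endpoint through the load balancer (hostname is a placeholder)
curl http://&lt;external-hostname&gt;/api/v1/info
</code></pre>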
<p>I am trying out the minio-operator.</p> <p>This is a two-part question regarding namespaces mentioned in the docs.</p> <p>I would like to have a namespace in which I deploy different kinds of operators. Even though the docs require a dedicated namespace for the operator, I seem to experience no problems so far putting all my operators in the same namespace.</p> <blockquote> <p>The MinIO Operator exists in its own namespace.</p> </blockquote> <p>When it comes to tenants, the docs also require a dedicated namespace for each tenant.</p> <blockquote> <p>When you use the Operator to create a tenant, the tenant must have its own namespace.</p> </blockquote> <p>Does this mean that nothing but the tenant can be in this namespace, or just that no more than one tenant can exist per namespace? What I would like to do is to put a tenant into the same namespace as the solution that is consuming the MinIO API.</p> <p>Is my attempt wrong, and if so, why? What is the reason for those constraints?</p> <p>I could not find any explanation in the <a href="https://min.io/docs/minio/kubernetes/upstream/operations/installation.html" rel="nofollow noreferrer">article</a> I cited above.</p>
<p>Currently, it is not possible to have multiple tenants in the same namespace because the MinIO operator always creates services with the names <code>minio</code>, <code>minio-console</code>, and <code>minio-hl</code> in the installation namespace.</p> <p>However, a workaround is to use a different namespace for each tenant and use the DNS in Kubernetes to access them: <code>minio.&lt;other-namespace&gt;.svc</code></p> <p>For more information, refer to <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a>.</p>
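<p>To check that the consuming application can resolve and reach a tenant living in another namespace, a quick DNS test can be run from the application's namespace (the namespace names here are only examples):</p> <pre class="lang-bash prettyprint-override"><code># run a throwaway pod in the app namespace and resolve the tenant's service
kubectl run -n my-app-namespace dns-test --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup minio.minio-tenant-1.svc
# the application would then use minio.minio-tenant-1.svc (plus the service port) as its endpoint
</code></pre>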
<p>I am trying to enable the local account on an AKS cluster (version 1.24.10) by running commands from an AzureDevOps yaml pipeline.</p> <p>The “old” az aks command : <code>az aks update -g &lt;myresourcegroup&gt; -n &lt;myclustername&gt; --enable-local</code> used to serve me well to enable a local account on an AKS cluster. In the yaml pipeline, however, this does not seem to work and I resorted to running the <a href="https://learn.microsoft.com/en-us/powershell/module/az.aks/set-azakscluster?view=azps-9.7.1" rel="nofollow noreferrer">Set-AzAksCluster</a> command from within a AzurePowerShell@5 task</p> <pre><code> - task: AzurePowerShell@5 displayName: 'disable-local-account' name: disablelocalaccount inputs: azureSubscription: 'myazsubscription' ScriptType: InlineScript Inline: | Set-AzAksCluster -ResourceGroupName myresourcegrp -Name mycluster -DisableLocalAccount azurePowerShellVersion: LatestVersion </code></pre> <p>By passing the -DisableLocalAccount switch to the command we can disable the local account on the cluster. The enabling of the local account on the cluster just seems to elude me somehow…sigh.</p> <p>Does anybody know if it is possible to enable the local account using the <a href="https://learn.microsoft.com/en-us/powershell/module/az.aks/set-azakscluster?view=azps-9.7.1" rel="nofollow noreferrer">Set-AzAksCluster</a> command? And if so, what is the correct way to go about this?</p> <p>Many thanks!</p> <p>Kind regards,<br /> Morné</p>
<p>Wow, great spot... The enable flag isn't there. You might want to raise this on the <a href="https://github.com/Azure/azure-powershell/discussions/categories/feature-requests" rel="nofollow noreferrer">Azure PowerShell GitHub</a>.</p> <p>As a workaround, the Azure CLI does allow enabling of local accounts.</p> <pre class="lang-bash prettyprint-override"><code>az aks update --name cluster --resource-group rg --enable-local-accounts </code></pre> <p><a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-update</a></p>
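<p>For completeness, the CLI call can run from an Azure CLI task in the pipeline instead of the PowerShell task, and you can verify the result afterwards (the resource names below are placeholders):</p> <pre class="lang-bash prettyprint-override"><code>az aks update --name mycluster --resource-group myresourcegrp --enable-local-accounts
# should now report false
az aks show --name mycluster --resource-group myresourcegrp --query disableLocalAccounts
</code></pre>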
<p>I have been trying to deploy Spark and a Jupyter notebook on minikube. I used Helm charts for deploying both:</p> <p>Jupyter notebook - <a href="https://artifacthub.io/packages/helm/pyspark-notebook-helm/pyspark-notebook" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/pyspark-notebook-helm/pyspark-notebook</a></p> <p>Spark - <a href="https://bitnami.com/stack/spark/helm" rel="nofollow noreferrer">https://bitnami.com/stack/spark/helm</a></p> <p>While I am able to connect to the master using</p> <pre><code>spark = SparkSession.builder.master(&quot;spark://my-release-spark-master-0.my-release-spark-headless.default.svc.cluster.local:7077&quot;).getOrCreate() </code></pre> <p>when running the following snippet</p> <pre><code>nums= sc.parallelize([1,2,3,4]) squared = nums.map(lambda x: x*x).collect() for num in squared: print('%i ' % (num)) </code></pre> <p>the execution takes a long time and never completes when it runs the collect() method.</p>
<p>You can check the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#prerequisites" rel="nofollow noreferrer">prerequisites</a> for running Spark on Kubernetes to determine whether meeting them will improve its performance:</p> <ul> <li><p>A running Kubernetes cluster at version &gt;= 1.22 with access configured to it using <a href="https://kubernetes.io/docs/user-guide/prereqs/" rel="nofollow noreferrer">kubectl</a>. If you do not already have a working Kubernetes cluster, you may set up a test cluster on your local machine using <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">minikube</a>.</p> </li> <li><p>We recommend using the latest release of minikube with the DNS addon enabled.</p> </li> <li><p>Be aware that the default minikube configuration is not enough for running Spark applications. We recommend 3 CPUs and 4g of memory to be able to start a simple Spark application with a single executor.</p> </li> <li><p>Check the <a href="https://github.com/fabric8io/kubernetes-client" rel="nofollow noreferrer">kubernetes-client library</a>&rsquo;s version in your Spark environment, and its compatibility with your Kubernetes cluster&rsquo;s version.</p> </li> <li><p>You must have appropriate permissions to list, create, edit and delete <a href="https://kubernetes.io/docs/user-guide/pods/" rel="nofollow noreferrer">pods</a> in your cluster. You can verify that you can list these resources by running <code>kubectl auth can-i &lt;list|create|edit|delete&gt; pods</code>.</p> </li> <li><p>The service account credentials used by the driver pods must be allowed to create pods, services and configmaps.</p> </li> <li><p>You must have <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes DNS</a> configured in your cluster.</p> </li> </ul>
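<p>For the minikube case specifically, those recommendations translate to something like this (a sketch; adjust the sizes to your machine):</p> <pre class="lang-bash prettyprint-override"><code># give minikube enough resources for a driver plus one executor
minikube start --cpus 3 --memory 4g
# DNS (CoreDNS) is enabled by default, but you can confirm the addon state
minikube addons list
# confirm you are allowed to manage pods
kubectl auth can-i create pods
</code></pre>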
<p>This is a very weird thing.</p> <p>I created a <strong>private</strong> GKE cluster with a node pool of 3 nodes. Then I have a replica set with 3 pods; some of these pods will be scheduled to one node.</p> <p>So one of these pods always gets <code>ImagePullBackOff</code>. I checked the error:</p> <pre><code>Failed to pull image "bitnami/mongodb:3.6": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>And the pods scheduled to the remaining two nodes work well.</p> <p>I SSH to that node, run <code>docker pull</code>, and everything is fine. I cannot find another way to troubleshoot this error.</p> <p>I tried to <code>drain</code> or <code>delete</code> that node and let the cluster recreate the node, but it is still not working.</p> <p>Help me, please.</p> <p>Update: from the GCP <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#docker_hub" rel="nofollow noreferrer">documentation</a>, it will fail to pull images from Docker Hub.</p> <p>BUT the weirdest thing is ONLY ONE node is unable to pull the images.</p>
<p>I recall seeing this before and finding an answer before.</p> <p><a href="https://cloud.google.com/container-registry/docs/pulling-cached-images" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/pulling-cached-images</a><br /> Talks about it a little, but I'll explain it so it's easy to follow.</p> <p>If I spin up a private GKE cluster and I create 3 deployments:</p> <ul> <li>1st uses image: nginx:latest</li> <li>2nd uses image: nginx:stable</li> <li>3rd uses image: docker.io/busybox:1.36.0-glibc</li> </ul> <p>nginx:latest (common tag) will almost always work<br /> nginx:stable (popular tag) will work sometimes<br /> The super specific tag (rarely used tag) will almost always fail with ImagePullBackOff</p> <p><strong>So why is this the case?</strong><br /> <strong>1. The ImagePullBackOff happens when the pods/nodes have no NAT Gateway/no Internet Access</strong><br /> <code>kubectl exec -it working-nginx-latest-pod -- curl yahoo.com</code><br /> ^-- You can prove no internet with this, note <code>curl google.com</code> is a bad test on GKE, because it's still reachable via googles internal network / you'll get a response, because google's network can reach google.com without having to go through the internet, that's why I recommend testing with a non google URL like yahoo.com<br /> (Google's networking also occasionally does some counterintuitive / non-standard things, like route public IP Addresses over their internal network, so sometimes you can reach public IP addresses w/o internet access, it's usually google services with public IPs that are sometimes reachable w/o internet access.)</p> <p><strong>2. So the next question is, but wait... how are <code>nginx:latest</code> and <code>nginx:stable</code> able to pull image that exists on the internet/on docker hub, when there's no internet access? Basically why is it working for some images and not others?</strong><br /> Answer boils down to popularity of the image:tag pair. Is it popular enough to get cached in mirror.gcr.io?</p> <p>The initial link I shared at the top mentions &quot;Container Registry caches frequently-accessed public Docker Hub images on mirror.gcr.io&quot;, so basically if you reference a common tag of a popular image, you can sometimes get lucky enough to pull it even without internet, because the cache is reachable via private IP space / without internet access.</p> <p>When a pod running on GKE private cluster gives you ImagePullBackOff, and you're like, what's going on? I know this image exists! <code>docker pull docker.io/busybox:1.36.0-glibc</code> pulls fine from my local machine, what's happening is that rarely used tag doesn't exist in their cache, that mirrors common tags of popular images.</p> <p>Best way to fix it is to either pull all images from pkg.dev (GCP's Artifact Registry, which GKE should be able to access w/o internet access) or set up NAT gateway/ensure the private cluster has internet access. And you can use <code>kubectl exec -it working-nginx-latest-pod -- curl yahoo.com</code> as a feedback loop to check if the cluster has internet access as you tinker with VPC settings to add NAT GW.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/best-practices/networking#use-cloudnat" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/best-practices/networking#use-cloudnat</a><br /> mentions By default, (GKE) &quot;private clusters don't have internet access. In order to allow Pods to reach the internet, enable Cloud NAT for each region. 
At a minimum, enable Cloud NAT for the primary and secondary ranges in the GKE subnet.&quot;</p>
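<p>If you go the Cloud NAT route, a minimal sketch looks like this (the router name, network and region below are assumptions; adjust them to your VPC):</p> <pre class="lang-bash prettyprint-override"><code># create a Cloud Router in the cluster's region and network
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
# attach a NAT config so private nodes/pods can reach the internet
gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
# re-run the internet check from a pod afterwards
kubectl exec -it working-nginx-latest-pod -- curl -sI https://yahoo.com
</code></pre>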
<p>Since 2 days I try to create a simple OPA Gatekeeper Policy which blocks the creation of pods with &quot;privileged:true&quot; for some namespaces.</p> <p>Some more details:</p> <p>Im using opa-gatekeeper in version 3.13 installed by following <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/install" rel="nofollow noreferrer">these instructions</a>.</p> <p>To enable the policy, first I created a ConstraintTemplate:</p> <pre><code>apiVersion: templates.gatekeeper.sh/v1beta1 kind: ConstraintTemplate metadata: name: disallowprivilegedpods annotations: description: &quot;Disallow creation of privileged pods in alpha and beta namespaces&quot; spec: crd: spec: names: kind: DisallowPrivilegedPods targets: - target: admission.k8s.gatekeeper.sh rego: | package disallow_privileged_pods violation[{&quot;msg&quot;: msg}] { input.request.kind.kind == &quot;Pod&quot; input.request.operation == &quot;CREATE&quot; input.request.namespace == [&quot;alpha&quot;, &quot;beta&quot;] input.request.object.spec.securityContext.privileged == true msg := &quot;Privileged pods are not allowed in the Alpha and Beta namespaces.&quot; } </code></pre> <p>Next I created the constraint:</p> <pre><code>apiVersion: constraints.gatekeeper.sh/v1beta1 kind: DisallowPrivilegedPods metadata: name: disallow-privileged-pods-alpha-beta spec: match: kinds: - apiGroups: [&quot;&quot;] kinds: [&quot;Pod&quot;] namespaces: - alpha - beta </code></pre> <p>To test if the policy is working correctly, I tried to deploy this pod in one of these namespaces:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: privileged-pod namespace: alpha spec: containers: - name: my-container image: nginx securityContext: privileged: true restartPolicy: Never </code></pre> <p>Unfortunately the policy doesnt seem to work and the pod can be created.</p> <p>Can anyone give me some hints, whats wrong with the policy?</p> <p>Cheers,</p> <p>Christian</p>
<p>This check - <code>input.request.namespace == [&quot;alpha&quot;, &quot;beta&quot;]</code> will evaluate to true only if the <code>input.request.namespace</code> field is exactly identical to the array it is being compared to. i.e. - Only if <code>input.request.namespace</code> is an array with exactly two values, the first one being &quot;alpha&quot; and the second one &quot;beta&quot;</p> <p>To check if an array input field holds one of two values, use <a href="https://www.eknert.com/tech/2020/02/04/incremental-rules-in-opa.html" rel="nofollow noreferrer">incremental rules</a> and <a href="https://www.openpolicyagent.org/docs/latest/policy-reference/#arrays" rel="nofollow noreferrer">array lookups</a>:</p> <pre><code>namespace_alpha_or_beta { &quot;alpha&quot; = input.request.namespace[_] } namespace_alpha_or_beta { &quot;beta&quot; = input.request.namespace[_] } </code></pre>
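<p>After updating the template, it is worth checking that the new Rego compiled and then re-testing the admission behaviour (a quick verification sketch using the names from the question):</p> <pre class="lang-bash prettyprint-override"><code># the template and constraint status should show no compilation errors
kubectl get constrainttemplate disallowprivilegedpods -o yaml
kubectl describe disallowprivilegedpods disallow-privileged-pods-alpha-beta
# re-apply the privileged pod; if the rule now matches the admission input,
# the request is rejected by the gatekeeper webhook
kubectl apply -f privileged-pod.yaml
</code></pre>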
<p>I want to filter namespaces based on an annotation. I do not want namespaces that have the annotation <code>components\.gke\.io/layer==&quot;addon&quot;</code>.</p> <p>I tried the <code>kubectl get ns -o=jsonpath='{.items[?(@.metadata.annotations.components\.gke\.io/layer==&quot;addon&quot;)].metadata.name}'</code> command and at least it gives me a result, but it does not come in the normal table format, the way it does when you execute <code>kubectl get ns</code>.</p> <p><a href="https://i.stack.imgur.com/Oe5YW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oe5YW.png" alt="enter image description here" /></a></p> <p>However, this is not what I want; I want the inverse of this, which means filtering out these GKE rows. I tried <code>kubectl get ns -o=jsonpath='{.items[?(@.metadata.annotations.components\.gke\.io/layer==&quot;&quot;)].metadata.name}'</code> but it doesn't work, it gives an empty result.</p> <p>I tried to format it but was unable to create the desired command. Has anyone tried this before: (1) filtering out namespaces that have a certain annotation, and (2) formatting that output as a table?</p>
<p>I think (!?) you can't use negation with <code>kubectl</code>'s implementation of <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">JSONPath</a>.</p> <p>Once you use the <code>--output</code> flag with <code>kubectl</code>, you lose the ability to render the command's output using the default table format. A tool such as <a href="https://man7.org/linux/man-pages/man1/column.1.html" rel="nofollow noreferrer"><code>column</code></a> (e.g. from Bash) can provide similar output.</p> <p>Consider using a generic JSON processing tool such as <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a>.</p> <p>An advantage of using a standalone tool is that, once you're familiar with the tool, you can use it with any command that emits JSON (not just with <code>kubectl</code>).</p> <pre class="lang-bash prettyprint-override"><code>FILTER=' [&quot;NAME&quot;,&quot;STATUS&quot;], (.items[]| [ select(.metadata.annotations[&quot;components.gke.io/layer&quot;]!=&quot;addon&quot;) |.metadata.name,.status.phase ] ) |@tsv' kubectl get namespaces --output=json \ | jq -r &quot;${FILTER}&quot; \ | column --table --separator $'\t' </code></pre> <p>Explanation of <code>FILTER</code>:</p> <ol> <li>Returns a JSON array that is piped into <code>@tsv</code> (tab-formatted output)</li> <li>The first array entry is an array (!) of column titles</li> <li><code>.items</code> is piped through a <code>select</code> that filters out Namespaces whose <code>annotation</code> value is <code>addon</code></li> <li>Emits each remaining Namespace's name and status as JSON array items</li> </ol> <p>The tab-formatted output (no longer JSON) emitted by <code>jq</code> is then piped into <code>column</code>, which formats the results as a table.</p>
<p>Having an app running perfectly on my local with docker.</p> <p>When I deploy it, I can't get why a new session is started on each request. It seems it can't come from the code since it's exactly the same.</p> <p>The session is stored on a redis DB (I have the same bug if I use the filesystem session). It's there I can see all the new session created. (see last code block).</p> <p>From the log I can clearly identify that the <code>$request-&gt;getSession()-&gt;getId()</code> changes on each request but not the PHPSESSID cookie.</p> <p>For example:</p> <p><em>First request</em></p> <pre><code>[2020-11-02 15:03:59] request.INFO: Matched route &quot;app_login&quot;. {&quot;route&quot;:&quot;app_login&quot;,&quot;route_parameters&quot;:{&quot;_route&quot;:&quot;app_login&quot;,&quot;_controller&quot;:&quot;App\\Controller\\SecurityController::login&quot;},&quot;request_uri&quot;:&quot;https://foo.bar.dev/login&quot;,&quot;method&quot;:&quot;POST&quot;} [] [2020-11-02 15:03:59] security.DEBUG: Checking for guard authentication credentials. {&quot;firewall_key&quot;:&quot;main&quot;,&quot;authenticators&quot;:1} [] [2020-11-02 15:03:59] security.DEBUG: Checking support on guard authenticator. {&quot;firewall_key&quot;:&quot;main&quot;,&quot;authenticator&quot;:&quot;App\\Security\\LoginFormAuthenticator&quot;} [] [2020-11-02 15:03:59] app.DEBUG: [LoginFormAuthenticator::supports] $request session id =&gt; 6491ddf4e8f3e2eaa22b44b3a98c094a [] [] [2020-11-02 15:03:59] app.DEBUG: [LoginFormAuthenticator::supports] $_COOKIE =&gt; {&quot;PHPSESSID&quot;:&quot;87cf6185b652f8d713c45031ebe6d8a4&quot;} [] </code></pre> <p><em>Second one</em></p> <pre><code>[2020-11-02 15:04:33] request.INFO: Matched route &quot;app_login&quot;. {&quot;route&quot;:&quot;app_login&quot;,&quot;route_parameters&quot;:{&quot;_route&quot;:&quot;app_login&quot;,&quot;_controller&quot;:&quot;App\\Controller\\SecurityController::login&quot;},&quot;request_uri&quot;:&quot;https://foo.bar.dev/login&quot;,&quot;method&quot;:&quot;POST&quot;} [] [2020-11-02 15:04:33] security.DEBUG: Checking for guard authentication credentials. {&quot;firewall_key&quot;:&quot;main&quot;,&quot;authenticators&quot;:1} [] [2020-11-02 15:04:33] security.DEBUG: Checking support on guard authenticator. 
{&quot;firewall_key&quot;:&quot;main&quot;,&quot;authenticator&quot;:&quot;App\\Security\\LoginFormAuthenticator&quot;} [] [2020-11-02 15:04:33] app.DEBUG: [LoginFormAuthenticator::supports] $request session id =&gt; 41b08dac8a803337a48dca7d5b33b840 [] [] [2020-11-02 15:04:33] app.DEBUG: [LoginFormAuthenticator::supports] $_COOKIE =&gt; {&quot;PHPSESSID&quot;:&quot;87cf6185b652f8d713c45031ebe6d8a4&quot;} [] </code></pre> <p><em><strong>KUBERNETES</strong></em></p> <p><strong>ingress-nginx.yaml</strong></p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: secured-front namespace: foo-apis-dev annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot; nginx.ingress.kubernetes.io/affinity-mode: &quot;persistent&quot; nginx.ingress.kubernetes.io/session-cookie-name: &quot;PHPSESSID&quot; nginx.ingress.kubernetes.io/session-cookie-path: &quot;/&quot; nginx.ingress.kubernetes.io/session-cookie-samesite: &quot;Lax&quot; nginx.ingress.kubernetes.io/session-cookie-expires: &quot;172800000&quot; nginx.ingress.kubernetes.io/session-cookie-max-age: &quot;172800000&quot; spec: tls: - hosts: - bar.foo.dev secretName: tls-secret rules: - host: bar.foo.dev http: paths: - backend: serviceName: bar-nginx servicePort: 80 path: /(.*) </code></pre> <p><em><strong>Symfony APP</strong></em></p> <p><strong>security.yaml</strong></p> <pre><code>[...] firewalls: dev: pattern: ^/(_(profiler|wdt)|css|images|js)/ security: false main: anonymous: lazy pattern: ^/.* logout: path: app_logout target: login guard: authenticators: - App\Security\LoginFormAuthenticator [...] </code></pre> <p><strong>services.yaml</strong></p> <pre><code>[...] Redis: class: Redis calls: - connect: - '%env(REDIS_HOST)%' - '%env(int:REDIS_PORT)%' Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler: arguments: - '@Redis' - { prefix: 'admin_phpsess_' } [...] </code></pre> <p><strong>packages.framework.yaml</strong></p> <pre><code>[...] session: handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler cookie_secure: auto cookie_samesite: lax [...] </code></pre> <p><strong>REDIS</strong></p> <pre><code>127.0.0.1:6379&gt; KEYS *admin* 1) &quot;admin_phpsess_245e4a79fe35e2320943770061884c24&quot; 2) &quot;admin_phpsess_0ff29464322b3c2cfc5d8f5fd323ef75&quot; 3) &quot;admin_phpsess_26812c17f93a5d28a71853b77ac85386&quot; 4) &quot;admin_phpsess_7fbae6f0b1fdbe9576e41c9eee2cd60f&quot; </code></pre> <p>VERSIONS:</p> <ul> <li>PHP 7.4.12</li> <li>Symfony 4.4</li> <li>Kubernetes 1.17.9</li> <li>redis (pecl) 5.3.2</li> </ul> <p><strong>IMPORTANT NOTE</strong></p> <p>The issue was about the redis configuration.</p> <p>I took the step of using <code>PdoSessionHandler</code> and it works. The problem comes from Redis and / or Kubernetes, I have been trying for 2 hours to point out the configuration which generates this bug but impossible for now.</p>
<p>Try using <a href="https://github.com/predis/predis" rel="nofollow noreferrer">Predis</a> session handler instead.</p> <ol> <li>Setup your services</li> </ol> <pre class="lang-yaml prettyprint-override"><code># services.yaml Redis: class: \Predis\Client arguments: - 'redis://%env(REDIS_HOST)%:%env(REDIS_PORT)%' predis_session_handler: class: 'Predis\Session\Handler' arguments: - '@Redis' </code></pre> <ol start="2"> <li>Configure Symfony</li> </ol> <pre class="lang-yaml prettyprint-override"><code># framework.yaml framework: session: handler_id: predis_session_handler </code></pre>
<p>I am trying to modify a kubernetes keycloak deployment to respond on <code>/auth</code> in place of <code>/</code>. I see in the documentation that this should be possible by setting <code>--http-relative-path</code> to <code>/auth</code>. I have attempted this and it causes all the services to receive <code>Connection refused</code> from the keycloak service. Is this possibly an issue with keycloak or could there be another config required?</p>
<p>The issue was that the networking solution was not routing to the service because the health checks had not been updated with the correct routes.</p> <pre><code> readinessProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 startupProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 livenessProbe: httpGet: path: /health/ready port: 8080 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 1 failureThreshold: 20 </code></pre> <p>Changing these to <code>/auth/health/ready</code> fixed the ready checks.</p>
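<p>A quick way to confirm which path the probes should use after changing <code>--http-relative-path</code> (the service name is a placeholder, and this assumes the health endpoints are enabled and served on port 8080 as in the probe snippet above):</p> <pre class="lang-bash prettyprint-override"><code># run a throwaway curl pod and hit the readiness endpoint under the new relative path
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s http://&lt;keycloak-service&gt;:8080/auth/health/ready
# a ready server answers with a small JSON document reporting status UP
</code></pre>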
<p>I'm using this <code>Dockerfile</code> configuration</p> <pre><code>FROM openjdk:17-alpine ARG APP_HOME=/app WORKDIR $APP_HOME COPY ./target/ws-exec.jar ws.jar ENV JAVA_OPTS=&quot;-Dspring.profiles.active=prod -Dspring.application.name=words&quot; ENTRYPOINT java $JAVA_OPTS -jar ./ws.jar $JAVA_ARGS </code></pre> <p>After deploying it to minikube, I see the only log: <code>Error: Unable to access jarfile /ws.jar</code>.</p> <p>I've tried to run <code>docker run -it &lt;image&gt;</code> with my image's name, and it successfuly started with docker. Running <code>docker exec -it &lt;container&gt;</code> shew me that the <code>jar</code> is present in the right folder. I tried to make the jar executable adding a <code>CMD</code> or <code>RUN</code> layer into my <code>Dockerfile</code>, but nothing helped. Where is my mistake, or what I don't understand?</p> <p>UPD here is my deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: revise-words-ws name: revise-words-ws namespace: default spec: replicas: 1 minReadySeconds: 45 selector: matchLabels: app: revise-words-ws strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: app: revise-words-ws spec: containers: - image: maxrybalkin91/revise-words-ws:1.0 imagePullPolicy: IfNotPresent name: revise-words-ws env: - name: VAULT_TOKEN valueFrom: secretKeyRef: name: words key: vault_token - name: VAULT_HOST valueFrom: secretKeyRef: name: words key: vault_host - name: VAULT_PORT valueFrom: secretKeyRef: name: words key: vault_port ports: - name: liveness-port containerPort: 8089 resources: requests: cpu: 100m memory: 256Mi limits: cpu: 300m memory: 512Mi readinessProbe: httpGet: path: / port: liveness-port failureThreshold: 5 periodSeconds: 10 initialDelaySeconds: 60 livenessProbe: httpGet: path: / port: liveness-port failureThreshold: 5 periodSeconds: 10 initialDelaySeconds: 60 terminationGracePeriodSeconds: 30 restartPolicy: Always </code></pre>
<p>In the ENTRYPOINT you tell Java to run <code>./ws.jar</code>, assuming it resolves from within the work directory <code>/app</code>. At runtime you get the error message that <code>/ws.jar</code> is not accessible, which looks like an absolute path.</p> <p>Run a shell inside the container and check where your jar file actually exists and which permissions it has. Then decide which one is wrong: your Dockerfile or the command that produced the error message. Fix the broken one.</p>
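<p>One way to do that inspection against the exact image the cluster runs (note the base image is Alpine, so use <code>sh</code> rather than <code>bash</code>):</p> <pre class="lang-bash prettyprint-override"><code># open a shell in the image, bypassing the ENTRYPOINT
docker run --rm -it --entrypoint /bin/sh maxrybalkin91/revise-words-ws:1.0
# inside the container:
#   pwd           # should print /app (the WORKDIR)
#   ls -l ws.jar  # the jar copied by the Dockerfile, with its permissions
</code></pre>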
<ol> <li>I have been trying to make use of Azure Spot instances on Azure Kubernetes Service (AKS) - version 1.19.11 and to enable scheduling of pods onto those nodes, I am trying to use the <code>PodTolerationRestriction</code> admission controller.</li> <li>I can confirm that the PodTolerationRestriction controller is enabled as I am having no issues deploying a replicaset to the default namespace. This is another namespace but we are not specifically adding any tolerations while creating it.</li> <li>I gathered from elsewhere that, along with whitelisting against a specific taint (in my case spot), it is also necessary to whitelist certain default tolerations. As a result, I have added certain annotations to my namespace.</li> <li>I do not have any additional tolerations pre-defined for this statefulset.</li> <li>The node has taints - the first two are taken care of through the helm chart values <ul> <li>RabbitMQ=true:NoSchedule</li> <li>Allow=true:NoExecute</li> <li>kubernetes.azure.com/scalesetpriority=spot:NoSchedule</li> </ul> </li> </ol> <p>I am wondering what additional tolerations need to be whitelisted.</p> <p>The annotations I added -</p> <pre><code>scheduler.alpha.kubernetes.io/defaultTolerations: '[{&quot;operator&quot;: &quot;Equal&quot;, &quot;value&quot;: &quot;spot&quot;, &quot;key&quot;: &quot;kubernetes.azure.com/scalesetpriority&quot;}]' scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{&quot;operator&quot;: &quot;Equal&quot;, &quot;value&quot;: &quot;spot&quot;, &quot;key&quot;: &quot;kubernetes.azure.com/scalesetpriority&quot;}, {&quot;operator&quot;: &quot;Exists&quot;, &quot;effect&quot;: &quot;NoSchedule&quot;, &quot;key&quot;: &quot;node.kubernetes.io/memory-pressure&quot;}, {&quot;operator&quot;: &quot;Exists&quot;, &quot;tolerationSeconds&quot;: 300, &quot;effect&quot;: &quot;NoExecute&quot;, &quot;key&quot;: &quot;node.kubernetes.io/unreachable&quot;}, {&quot;operator&quot;: &quot;Exists&quot;, &quot;tolerationSeconds&quot;: 300, &quot;effect&quot;: &quot;NoExecute&quot;, &quot;key&quot;: &quot;node.kubernetes.io/not-ready&quot;}]' </code></pre> <p>Statefulset describe -</p> <pre><code>Name: &lt;release name&gt; Namespace: &lt;namespace&gt; CreationTimestamp: Tue, 18 Jan 2022 19:37:38 +0530 Selector: app.kubernetes.io/instance=&lt;name&gt;,app.kubernetes.io/name=rabbitmq Labels: app.kubernetes.io/instance=rabbit app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=rabbitmq helm.sh/chart=rabbitmq-8.6.1 Annotations: meta.helm.sh/release-name: &lt;release name&gt; meta.helm.sh/release-namespace: &lt;namespace&gt; Replicas: 3 desired | 0 total Update Strategy: RollingUpdate Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/instance=rabbit app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=rabbitmq helm.sh/chart=rabbitmq-8.6.1 Annotations: checksum/config: 1a138ded5a3ade049cbee9f4f8e2d0fd7253c126d49b790495a492601fd9f280 checksum/secret: 05af38634eb4b46c2f8db5770013e1368e78b0d5af057aed5fa4fe7eec4c92de prometheus.io/port: 9419 prometheus.io/scrape: true Service Account: sa-rabbitmq Containers: rabbitmq: Image: docker.io/bitnami/rabbitmq:3.8.9-debian-10-r64 Ports: 5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP, 9419/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP Liveness: exec [/bin/bash -ec rabbitmq-diagnostics -q ping] delay=120s timeout=200s period=30s #success=1 #failure=6 Readiness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running &amp;&amp; rabbitmq-diagnostics -q check_local_alarms] delay=10s 
timeout=200s period=30s #success=1 #failure=3 Environment: &lt;multiple environment variables&gt; Mounts: /bitnami/rabbitmq/conf from configuration (rw) /bitnami/rabbitmq/mnesia from data (rw) Volumes: configuration: Type: ConfigMap (a volume populated by a ConfigMap) Name: rabbit-rabbitmq-config Optional: false data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; Volume Claims: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreate 31s (x14 over 72s) statefulset-controller create Pod &lt;pod-name&gt; in StatefulSet &lt;release name&gt; failed error: pod tolerations (possibly merged with namespace default tolerations) conflict with its namespace whitelist </code></pre>
<p>I had the same issue. Fixed it with:</p> <ol> <li><p>Removed the whitelist annotation from the namespace.</p> </li> <li><p>Deployed the pod.</p> </li> <li><p><code>kubectl get pod &lt;pod name&gt; -o yaml</code></p> <p>In my case, I had a couple of extra tolerations injected without me realizing it; these also have to be covered by the whitelist (see the sketch after this list).</p> </li> <li><p>Reverted step 1 (the annotations on the namespace), whitelisting everything found in step 3.</p> </li> </ol>
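<p>A compact way to see exactly which tolerations ended up on the pod (including the ones the control plane injects), which is the set the namespace whitelist has to cover:</p> <pre class="lang-bash prettyprint-override"><code># pod name and namespace are placeholders
kubectl get pod &lt;pod-name&gt; -n &lt;namespace&gt; -o jsonpath='{.spec.tolerations}'
# every key/operator/effect printed here must be allowed by the
# scheduler.alpha.kubernetes.io/tolerationsWhitelist annotation on the namespace
</code></pre>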
<p>I deployed kubernetes dashboard in Azure Kubernetes cluster with the help of following command</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml </code></pre> <p>Applied clusterrole and clusterrolebinding to same cluster on my azure AD object.</p> <p><strong>ClusterRole</strong>.<strong>yml</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: admin-user-testing rules: - apiGroups: [&quot;*&quot;] resources: [&quot;*&quot;] verbs: [&quot;*&quot;] </code></pre> <p><strong>ClusterRoleBinding.yaml</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user-testing subjects: - kind: User name: &lt;azure user object id&gt; apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: admin-user-testing apiGroup: rbac.authorization.k8s.io </code></pre> <p>For logging into dashboard I am generating the token with the following command</p> <pre><code>kubelogin get-token --login azurecli --server-id xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx </code></pre> <p>In output it gives token property</p> <pre><code>{&quot;kind&quot;:&quot;ExecCredential&quot;,&quot;apiVersion&quot;:&quot;client.authentication.k8s.io/v1beta1&quot;,&quot;spec&quot;:{&quot;interactive&quot;:false},&quot;status&quot;:{&quot;expirationTimestamp&quot;:&quot;2023-05-25T15:00:16Z&quot;,&quot;token&quot;:&quot;eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ii1LSTNROW5OUjdiUm9meG1lWmxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcWJIWkdldyJ9.eyJhdWQiOiI2ZGFlNDJmOC00MzY4LTQ2NzgtOTRmZi0zOTYwZTI4ZTM2MzAiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9jODk1Y2E0ZS02NTg0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxLCJuYmYiOjE2ODUwMjI2ODYsImV4cCI6MTY4NTAyNjgxNiwiYWNyIjoiMSIsImFpbyI6IkFaUUFhLzhUQUFBQW1OL09tTjU5dzhWOU9RczMwOU94dTJBQzNsaGw3aGR0OVE0MjRrYXh4TVJxaGhKajFKcWwvNWpScVI3VUw1TmJFYVUyOCswZjc5N1hUazRwNG5EbGVtWStuZGZsTmNzSkhSMW5jNGJuUmlQZGJiYWlIVWtSRkE3QWxwT0ZJS0w1NTltd1l4b3J1QlFCZ0l2R0czOEI4dnlrQWZic2ZXZS9yNGZBdzIyRTBnRCtIc20vVmtTYXh4am0yK1JIc0V5YiIsImFsdHNlY2lkIjoiNTo6MTAwMzIwMDFFQzhBMjBDNiIsImFtciI6WyJwd2QiXSwiYXBwaWQiOiIwNGIwNzc5NS04ZGRiLTQ2MWEtYmJlZS0wMmY5ZTFiZjdiNDYiLCJhcHBpZGFjciI6IjAiLCJlbWFpbCI6ImJhdGNodS5zYWlha2hpbEBjaXRyaXguY29tIiwiZmFtaWx5X25hbWUiOiJCYXRjaHUiLCJnaXZlbl9uYW1lIjoiU2FpIEFraGlsIiwiZ3JvdXBzIjpbIjZjMmRkMTA4LTdlYzUtNGU3NC05M2M1LWM1MTJmMmYzOWFhYiIsImUyMTBiZTM0LTI2ZGUtNDg5ZS04OThmLTFlYTc4Y2M0YTgwMiIsIjBjY2NmMjU2LTRjYWEtNDdjYS04NDBmLWVmNTliMjc0ZDU3NiIsImY0NzU2MDc3LWI1NWMtNDhhMy05YWFkLWQyZGI0YWJjNGMwZiIsIjZkMzVmMjk3LTRmMzUtNDJlNi04NWE4LTg1NDgzNTdmZWYxNiIsIjJhYzk1Y2EyLTEyZWYtNDg3Ny1hNzYxLTliNzhhYmQ2YWQ5OSIsIjk0OTBjNmE4LWRjYTItNGEwNy1hOTA0LTBiYjQ0MGQyMDEyNSIsImI2ZDdjYmI1LTVkZWEtNDdlOC1hYjJiLWE0Zjg0NTAzMDA3OSIsIjAzMTY2MGMxLWZjNTQtNDk3Yi05MzU3LThkMmJhMThjNmQ5YSIsIjI3ODFlM2NmLTMzMTUtNDlmOC1iYjdjLWEyNWU5NGZmZGYzOSIsImI0MzJhNWVlLTUzNzQtNGZlMy05NjYwLWFmOWZjOWE3NDk3NyIsIjE4NTUwOGYzLTIzYTMtNDAzOC05MzNmLThhNzMyYWYxYWQyYiJdLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC8zMzU4MzZkZS00MmVmLTQzYTItYjE0NS0zNDhjMmVlOWNhNWIvIiwiaXBhZGRyIjoiMTE2Ljc1Ljk3LjE5MyIsIm5hbWUiOiJCYXRjaHUgU2FpIEFraGlsIiwib2lkIjoiM2QxNTJlN2YtNDQ1Mi00MTNiLWI2YTMtMzE5MjRjNzc3YWNmIiwicHVpZCI6IjEwMDMyMDAxRUZCQkU4QzUiLCJyaCI6IjAuQVxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxM2hHbFA4NVlPS09OakFZQURnLiIsInNjcCI6InVzZXJfaW1wZXJzb25hdGlvbiIsInN1YiI6Ilo1YVh0dm1ndnB6ZTIwdDI0Y1Z4bVprTi1CaDlWc3ZkY04wS2MxdmdqVzgiLCJ0aWQiOiJjODk1Y2E0ZS02NTg0LTQ5ZGUtOTVkNS1lMGVjNGRkNDVlZTEiLCJ1bmlxdWVfbmFtZSI6ImJhdGNodS5zYWlha2hpbEBjaXRyaXg
uY29tIiwidXRpIjoiYnY2ZEVqVk5ERWlJdUg2ckxiRndBQSIsInZlciI6IjEuMCIsIndpZHMiOlsiZmU5MzBiZTctNWU2Mi00N2RiLTkxYWYtOThjM2E0OWEzOGIxIiwiMTNiZDFjNzItNmY0YS00ZGNmLTk4NWYtMThkM2I4MGYyMDhhIl19.YHv6LgS0CEBUIVEXMSYUMTgPp4Y9BnYWO0oazmxeY4shakf_VCvq9HA-ZNir1JcpgZfKaZPLpaM1bkAvsmgWqS9Fa-EXxRgEhgOavC6_BowdWlaot25xTYOpOD4sUe9mZ2GBvUOC8jNW3Juqple927C9TIUD03bbjkEkuHsi0Wxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&quot;}} </code></pre> <p>After executing kubectl proxy and login to token Sign In button is not responding</p> <p>In networking tab it shows the following error</p> <pre><code>{ &quot;status&quot;: 401, &quot;plugins&quot;: [], &quot;errors&quot;: [ { &quot;ErrStatus&quot;: { &quot;metadata&quot;: {}, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;MSG_LOGIN_UNAUTHORIZED_ERROR&quot;, &quot;reason&quot;: &quot;Unauthorized&quot;, &quot;code&quot;: 401 } } ] } </code></pre> <p>Screenshot <a href="https://i.stack.imgur.com/oWBL8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oWBL8.jpg" alt="enter image description here" /></a></p> <p>I have tried multiple solutions but none of them have worked. Please help me resolve this error? Thank you in advance.</p>
<p>The token is probably simply too big to be handled as an auth header. Request headers are typically limited to a few kilobytes (often around 4-8 KB), depending on the browser and/or proxy implementation - <a href="https://github.com/kubernetes/dashboard/issues/6904#issuecomment-1070971435" rel="nofollow noreferrer">link</a></p> <p>There is an open feature request to allow logging in when a very long token is provided (i.e. Azure with groups, etc.) - <a href="https://github.com/kubernetes/dashboard/issues/2353#issue-256684365" rel="nofollow noreferrer">link to the GitHub repo of the Kubernetes dashboard where you can see this feature request</a></p>
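<p>A workaround that is often used until that is resolved is to sign in with the token of a dedicated service account instead of the Azure AD token, since it is much shorter. Below is a minimal sketch; the account name <code>dashboard-admin</code> and the <code>cluster-admin</code> binding are assumptions, so scope the permissions down as needed:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
</code></pre> <p>With kubectl 1.24+ you can then generate a short-lived bearer token with <code>kubectl -n kubernetes-dashboard create token dashboard-admin</code> and paste it on the dashboard login page; it is far smaller than the Azure AD token and fits in the request header.</p>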
<p>Below is my Service object:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: srv1
spec:
  selector:
    name: srv1
  ports:
    protocol: TCP
    port: 80
    targetPort: 9736
</code></pre> <p>When I create this object I get the exception below; does anyone know what is wrong with it?</p> <p><code>Error from server (BadRequest): error when creating &quot;listing62.yaml&quot;: Service in version &quot;v1&quot; cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort</code></p> <p>I have tried making changes to the Service object, but it is not working.</p>
<p>You are missing a <strong>-</strong> in the <code>ports</code> list. Try:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: srv1
spec:
  selector:
    name: srv1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9736
</code></pre>
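<p>If you want to catch this kind of schema mistake before anything is created, a server-side dry run validates the manifest against the API server without persisting it (the file name <code>listing62.yaml</code> is the one from the question):</p> <pre><code>kubectl apply --dry-run=server -f listing62.yaml
</code></pre> <p>While the hyphen is missing it returns the same unmarshal error; once the list entry is fixed it reports that the Service would be created.</p>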
<p>I have a Spark cluster running on Kubernetes using <a href="https://github.com/bitnami/charts/tree/main/bitnami/spark" rel="nofollow noreferrer">this</a> Bitnami Helm chart.</p> <p>Following the Spark documentation, I have created a log4j2 config file at the location below:</p> <pre><code>/opt/bitnami/spark/conf/log4j2.properties
</code></pre> <p>The configuration in this file works as expected.</p> <p>Now I have a Spring Boot based application which I am executing on this cluster using the <code>spark-submit</code> command.</p> <p>This application has a log4j2 config file at the location below:</p> <pre><code>src/main/resources/log4j2.xml
</code></pre> <p>When I execute this jar using <code>spark-submit</code>, the configuration from this file does not seem to be picked up. I have also supplied the following arguments from other Stack Overflow threads:</p> <pre><code>--conf &quot;spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j2.xml&quot;
--conf &quot;spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j2.xml&quot;
</code></pre> <p>I still have the same issue.</p> <p>How can I override the cluster-level logging configuration with an application-level logging file?</p>
<p>You need to point to the path of the <code>log4j2.xml</code> on the system where <code>spark-submit</code> runs, using a <code>file:///</code> URI, something like this:</p> <pre><code>spark.driver.extraJavaOptions=-Dlog4j.configurationFile=file:///opt/spark/log/log4j2.xml
</code></pre>
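<p>For the executors the file also has to be shipped to their pods, which is usually done with <code>--files</code>. A rough sketch of the full <code>spark-submit</code> invocation (the path <code>/opt/spark/log/log4j2.xml</code> and the jar name <code>my-spring-boot-app.jar</code> are placeholders for your own values):</p> <pre><code>spark-submit \
  --files /opt/spark/log/log4j2.xml \
  --conf &quot;spark.driver.extraJavaOptions=-Dlog4j.configurationFile=file:///opt/spark/log/log4j2.xml&quot; \
  --conf &quot;spark.executor.extraJavaOptions=-Dlog4j.configurationFile=log4j2.xml&quot; \
  my-spring-boot-app.jar
</code></pre> <p>Shipping the file with <code>--files</code> puts a copy in each executor's working directory, which is why the executor option can reference it by the bare file name.</p>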
<p>When nodes reboot based on the job &quot;kured_reboot&quot;, Alertmanager still fires other alerts for those nodes. Is there a way to stop Alertmanager alerts for nodes that are going through a reboot triggered by the job &quot;kured_reboot&quot;? Something like this:</p> <pre><code>route:
  receiver: default
  group_by:
  - cluster
  - namespace
  - severity
  - alertname
  - job
  continue: false
  routes:
  - receiver: receiver1
    matchers:
    - job =~ &quot;kured_reboot&quot;
    active_time_intervals:
    - business-hours

inhibit_rules:
- source_matchers:
  - job=&quot;kured_reboot&quot;
  target_matchers:
  - severity=~&quot;critical|warning|info&quot;
  equal:
  - namespace
  - alertname
</code></pre> <p>So far, I am still researching this and have not tried anything yet. I wanted to ask if anyone has done something like this before.</p>
<p>Generally there are two ways to do it:</p> <ol> <li><p>Incorporate an additional check for this job into your alert rules. For this you'll need to add something like <code>unless on() my_job_status{name=&quot;kured_reboot&quot;} == 1</code> (you might need something similar or even completely different, depending on your situation and on the kind of job and the metrics related to it). A sketch of the second option follows the list below.</p> </li> <li><p>Create an alert based on the fact that this job is running and add an <a href="https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule" rel="nofollow noreferrer">inhibition rule</a> that prevents other alerts from firing while the job is running.</p> </li> </ol>
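<p>As an illustration of the second option, here is a sketch of an alert that fires while kured is rebooting a node, plus a matching inhibition rule. The metric name <code>kured_reboot_required</code> and the <code>node</code> label used in <code>equal</code> are assumptions and need to be adapted to the metrics and labels your kured version actually exposes:</p> <pre><code># prometheus rules file: fires while a kured reboot is pending or in progress
groups:
- name: kured
  rules:
  - alert: NodeRebootInProgress
    expr: kured_reboot_required &gt; 0
    labels:
      severity: none
    annotations:
      summary: &quot;Node {{ $labels.node }} is being rebooted by kured&quot;

# alertmanager config: mute other alerts for that node while the reboot alert fires
inhibit_rules:
- source_matchers:
  - alertname=&quot;NodeRebootInProgress&quot;
  target_matchers:
  - severity=~&quot;critical|warning|info&quot;
  equal:
  - node
</code></pre> <p>The source alert uses a severity outside the target matcher so it cannot inhibit itself, and <code>equal: [node]</code> limits the muting to alerts from the node that is actually rebooting.</p>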
<p>Below is my Service object:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: srv1
spec:
  selector:
    name: srv1
  ports:
    protocol: TCP
    port: 80
    targetPort: 9736
</code></pre> <p>When I create this object I get the exception below; does anyone know what is wrong with it?</p> <p><code>Error from server (BadRequest): error when creating &quot;listing62.yaml&quot;: Service in version &quot;v1&quot; cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort</code></p> <p>I have tried making changes to the Service object, but it is not working.</p>
<p>This line is the problem:</p> <p><code> protocol: TCP</code></p> <p>You are missing a hyphen. You need to put:</p> <p><code> - protocol: TCP</code></p> <p>Basically, if a field takes a list (like <code>containers</code> or <code>ports</code>), each list entry starts with a hyphen.</p>