###Instruction: I am attempting to migrate my on-premises cluster to GKE. In order to facilitate this transition I need to be able to resolve the names of legacy services. Assume that the networking/VPN is a solved problem. Is there a way to do this with GKE currently? Effectively I am attempting to add a nameserver to every /etc/resolv.conf. ### Response: I want to add to what Eric said, and mutate it a bit. One of the realizations we had during the Kubernetes 1.1 "settling period" is that there are not really specs for things like resolv.conf and resolver behavior. Different resolver libraries do different things, and this was causing pain for our users. Specifically, some common resolvers assume that all nameservers are fungible and would break if you had nameservers that handled different parts of the DNS namespace.

We made a decision that for kube 1.2 we will NOT pass multiple nameserver lines into containers. Instead, we pass only the kube-dns server, which handles cluster.local queries and forwards any other queries to an "upstream" nameserver. How do we know what "upstream" is? We use the nameservers of the node. There is a per-pod dnsPolicy field that governs this choice. The net result is that containers see a single nameserver in resolv.conf, which we own, and that nameserver handles the whole DNS namespace.

What this practically means is that there's not a great hook for you to inject your own nameserver. You could change the --cluster-dns flag on the kubelets to point to your own DNS server, which would then forward to kube-dns, which would then forward to "upstream". The problem is that GKE doesn't really support changing flags that way. If/when the node is updated, the flag will disappear in favor of the default.

Possible solutions:

- Have kubelets read their flags from an in-cluster config. This is already plan of record, but is not in v1.2.
- Have kube-dns take a flag indicating what "upstream" is. Kube-dns is a "cluster addon" and as such isn't really mutable by end users (we will update it with your cluster and lose your changes).
- Have kube-dns read its flags from an in-cluster config, and take a flag indicating what "upstream" is. This is a doable idea, but probably not for v1.2 (too late). It might be possible to patch this into a v1.2.x, but it's not really a bugfix, it's a feature.
- Get your own DNS server into the resolv.conf on each node so that kube-dns would use you as upstream. I don't think GKE has a way to configure this that won't also get lost on node upgrades. You could write a controller that periodically SSH'ed to VMs and wrote that out, and subsequently checked your kube-dns container for correctness. Blech.

I think the right answer is to use in-cluster configmaps to inform either kubelets or DNS (or both). If you think these might be workable answers (despite the timeframe issues), it would be great if you opened a GitHub issue to discuss. It will get more visibility there.
###Instruction: I don't want to set up another etcd cluster. How can I access the built-in etcd from a Kubernetes pod? I suppose first I need to create a service account and use this account to launch the pod. Then how can the container in this pod discover the URI of the built-in etcd? Thank you ### Response: The etcd instance used by the Kubernetes apiserver is generally treated as an implementation detail of the apiserver and is not designed to be reused by user applications. By default it is installed to listen for connections only on localhost and to run on a machine where no user applications are scheduled. It isn't difficult to run a second etcd instance for your own use. For example, the DNS cluster add-on includes a private instance of etcd that is separate from the etcd used by the apiserver.
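A minimal sketch of the suggested approach, building (in Python, purely for illustration) a pod manifest that runs a private etcd container bound to localhost next to the application container. The image tag, names, and port are assumptions, not a vetted deployment:

```python
# Containers in a pod share a network namespace, so the app container can
# reach its private etcd at http://127.0.0.1:2379 without any Service or
# discovery step. Illustrative sketch only.
import json

def private_etcd_pod(app_image):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "app-with-etcd"},
        "spec": {
            "containers": [
                {
                    "name": "etcd",
                    "image": "gcr.io/google_containers/etcd-amd64:2.2.1",
                    "command": [
                        "/usr/local/bin/etcd",
                        "-listen-client-urls", "http://127.0.0.1:2379",
                        "-advertise-client-urls", "http://127.0.0.1:2379",
                    ],
                },
                {"name": "app", "image": app_image},
            ]
        },
    }

manifest = private_etcd_pod("example.com/myapp:1.0")
print(json.dumps(manifest, indent=2))
```

Binding the client URLs to 127.0.0.1 mirrors the apiserver's own etcd posture: the store is reachable only from inside the pod, not from the cluster network.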
###Instruction: I have tried all the basics of Kubernetes, and if you want to update your application you can use kubectl rolling-update to update the pods one by one without downtime. Now, I have read the Kubernetes documentation again and I have found a new feature called Deployment in version v1beta1. I am confused since there is a line in the Deployment docs: "Next time we want to update pods, we can just update the deployment again." Isn't this the role of rolling-update? Any inputs would be very useful. ### Response: Deployment is an object that lets you define a declarative deploy. It encapsulates a DeploymentStatus object, which is in charge of managing the number of replicas and their state, and a DeploymentSpec object, which holds the number of replicas, the template spec, selectors, and some other data that deal with deployment behaviour. You can get a glimpse of the actual code here: https://github.com/kubernetes/kubernetes/blob/5516b8684f69bbe9f4688b892194864c6b6d7c08/pkg/apis/extensions/v1beta1/types.go#L223-L253 You will mostly use Deployments to deploy services/applications in a declarative manner. If you want to modify your deployment, update the yaml/json you used without changing the metadata. In contrast, kubectl rolling-update isn't declarative (no yaml/json involved) and needs an existing replication controller.
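The declarative workflow described above can be sketched as follows; the app and image names are made up. The point is that an update is a new desired state (an edited spec that you re-submit), not an imperative command:

```python
import copy

# A minimal Deployment spec as a Python dict (hypothetical names).
deployment = {
    "apiVersion": "extensions/v1beta1",
    "kind": "Deployment",
    "metadata": {"name": "myapp"},
    "spec": {
        "replicas": 2,
        "template": {
            "metadata": {"labels": {"app": "myapp"}},
            "spec": {"containers": [
                {"name": "myapp", "image": "example.com/myapp:v1"},
            ]},
        },
    },
}

def update_image(dep, image):
    """Declarative update: produce a new desired state with a new image.
    In practice you edit the yaml/json and re-submit it; the Deployment
    controller then rolls the pods to match the new spec."""
    new = copy.deepcopy(dep)
    new["spec"]["template"]["spec"]["containers"][0]["image"] = image
    return new

updated = update_image(deployment, "example.com/myapp:v2")
```

With kubectl rolling-update you would instead tell the cluster *how* to get there, step by step, against an existing replication controller; with a Deployment you only state *where* you want to end up.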
###Instruction: I have specified a certain image pull secret for my replication controller, but it doesn't appear to be applied when downloading the Docker image:

$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get events
FIRSTSEEN   LASTSEEN   COUNT   NAME        KIND   SUBOBJECT              REASON    SOURCE                                                      MESSAGE
8h          8s         3074    web-73na4   Pod    spec.containers{web}   Pulling   {kubelet ip-172-31-29-110.eu-central-1.compute.internal}   Pulling image "quay.io/aknuds1/realtime-music"
8h          5s         3074    web-73na4   Pod    spec.containers{web}   Failed    {kubelet ip-172-31-29-110.eu-central-1.compute.internal}   Failed to pull image "quay.io/aknuds1/realtime-music": image pull failed for quay.io/aknuds1/realtime-music, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository aknuds1/realtime-music: "{\"error\": \"Permission Denied\"}")

How do I make the replication controller use the image pull secret "quay.io" when downloading the image? The replication controller spec looks as follows:

{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {"name": "web", "labels": {"app": "web"}},
  "spec": {
    "replicas": 1,
    "selector": {"app": "web"},
    "template": {
      "metadata": {"labels": {"app": "web"}},
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "quay.io/aknuds1/realtime-music",
            "ports": [{"name": "http-server", "containerPort": 80}]
          }
        ],
        "imagePullSecrets": [{"name": "quay.io"}]
      }
    }
  }
}

Edit: I created the quay.io secret like this: ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf create -f /tmp/image-pull-secret.yaml. The content of /tmp/image-pull-secret.yaml is basically like this:

apiVersion: v1
kind: Secret
metadata:
  name: quay.io
data:
  .dockercfg: <base64 encoded dockercfg>
type: kubernetes.io/dockercfg

Output from kubectl get pods web-73na4 -o yaml, in response to @PaulMorie:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web","uid":"e1b7a3f0-e349-11e5-a136-02420a022e02","apiVersion":"v1","resourceVersion":"31503"}}
  creationTimestamp: 2016-03-06T03:16:56Z
  generateName: web-
  labels:
    app: web
  name: web-73na4
  namespace: default
  resourceVersion: "31516"
  selfLink: /api/v1/namespaces/default/pods/web-73na4
  uid: e1b89066-e349-11e5-a136-02420a022e02
spec:
  containers:
  - image: quay.io/aknuds1/realtime-music
    imagePullPolicy: IfNotPresent
    name: web
    ports:
    - containerPort: 80
      name: http-server
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5s7kd
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-172-31-29-110.eu-central-1.compute.internal
  restartPolicy: Always
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-5s7kd
    secret:
      secretName: default-token-5s7kd
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "False"
    type: Ready
  containerStatuses:
  - image: quay.io/aknuds1/realtime-music
    imageID: ""
    lastState: {}
    name: web
    ready: false
    restartCount: 0
    state:
      waiting:
        message: 'image pull failed for quay.io/aknuds1/realtime-music, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository aknuds1/realtime-music: "{\"error\": \"Permission Denied\"}")'
        reason: PullImageError
  hostIP: 172.31.29.110
  phase: Pending
  podIP: 10.2.2.3
  startTime: 2016-03-06T03:16:56Z

### Response: Update (this is what I did to use a private image): First, log in to quay.io:

$ docker login quay.io
Username (username):
Password:
WARNING: login credentials saved in /Users/user/.docker/config.json
Login Succeeded

Then I created a new file (my-credentials.json) that had only the quay.io credentials that docker added to the config.json file (I realized that config.json had more credentials apart from the quay.io ones):

$ cat my-credentials.json
{
  "quay.io": {
    "auth": "xxxxxxxxxxxxxxxxxxxxx",
    "email": "user@example.com"
  }
}

After that, I generated the base64:

$ cat ./my-credentials.json | base64
<base64-value>

And I created the secret resource:

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
data:
  .dockercfg: <base64-value>
type: kubernetes.io/dockercfg

$ kubectl create -f image-pull-secret.yaml

Finally, I created the pod:

{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {"name": "web", "labels": {"app": "web"}},
  "spec": {
    "replicas": 1,
    "selector": {"app": "web"},
    "template": {
      "metadata": {"labels": {"app": "web"}},
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "quay.io/username/myimage",
            "ports": [{"name": "http-server", "containerPort": 80}]
          }
        ],
        "imagePullSecrets": [{"name": "myregistrykey"}]
      }
    }
  }
}

$ kubectl create -f pod.yaml

I have used myregistrykey as the name of the imagePullSecrets instead of quay.io, but I don't think that is the issue. The issue seems to be that you didn't create a secret holding valid credentials. Note that the value of the name key in the imagePullSecrets section (in your case "quay.io") should be the same as what you specified in your secret resource.
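The credential-encoding steps above can be sketched in Python. This illustrates the same base64/.dockercfg flow; the auth string and email are placeholders, not real credentials:

```python
import base64
import json

def make_pull_secret(name, registry, auth, email):
    """Build a kubernetes.io/dockercfg Secret: the .dockercfg key holds
    the base64-encoded registry credentials, equivalent to running
    `cat my-credentials.json | base64` and pasting the result."""
    dockercfg = {registry: {"auth": auth, "email": email}}
    encoded = base64.b64encode(json.dumps(dockercfg).encode()).decode()
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "data": {".dockercfg": encoded},
        "type": "kubernetes.io/dockercfg",
    }

secret = make_pull_secret("myregistrykey", "quay.io",
                          "xxxxxxxxxxxxxxxxxxxxx", "user@example.com")

# The kubelet decodes .dockercfg to recover the registry credentials:
decoded = json.loads(base64.b64decode(secret["data"][".dockercfg"]))
```

The name in metadata ("myregistrykey") is what the pod spec's imagePullSecrets entry must reference; the registry hostname lives inside the encoded payload, not in the secret's name.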
###Instruction: I have the following replication controller in Kubernetes on GKE:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    app: myapp
    deployment: initial
  template:
    metadata:
      labels:
        app: myapp
        deployment: initial
    spec:
      containers:
      - name: myapp
        image: myregistry.com/myapp:5c3dda6b
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: myregistry.com-registry-key

Now, if I say kubectl rolling-update myapp --image=us.gcr.io/project-107012/myapp:5c3dda6b the rolling update is performed, but no re-pull. Why? ### Response: Kubernetes will pull upon Pod creation if either (see the updating-images doc):

- the image is tagged :latest, or
- imagePullPolicy: Always is specified.

This is great if you want to always pull. But what if you want to do it on demand? For example, if you want to use some-public-image:latest but only want to pull a newer version manually when you ask for it, you can currently:

- Set imagePullPolicy to IfNotPresent or Never and pre-pull: manually pull the images on each cluster node so the latest is cached, then do a kubectl rolling-update or similar to restart the pods (an ugly, easily broken hack!).
- Temporarily change imagePullPolicy, do a kubectl apply, restart the pod (e.g. kubectl rolling-update), revert imagePullPolicy, redo a kubectl apply (ugly!).
- Pull and push some-public-image:latest to your private repository and do a kubectl rolling-update (heavy!).

There is no good solution for on-demand pull. If that changes, please comment; I'll update this answer.
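The pull-on-creation rule quoted above can be restated as a small predicate. This is an illustration of the documented rule, not kubelet code, and it assumes an explicit tag (untagged images default to :latest and behave the same):

```python
def will_pull_on_creation(image, image_pull_policy="IfNotPresent"):
    """True if the kubelet will pull the image when a pod is created:
    either imagePullPolicy is Always, or the image is tagged :latest."""
    if image_pull_policy == "Always":
        return True
    return image.endswith(":latest")

print(will_pull_on_creation("some-public-image:latest"))          # True
print(will_pull_on_creation("myapp:5c3dda6b", "Always"))          # True
print(will_pull_on_creation("myapp:5c3dda6b", "IfNotPresent"))    # False
```

The last case is the questioner's situation once the new pods are created from a cached image: a unique tag plus a non-Always policy means no re-pull.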
###Instruction: I'm working through the sample tutorial on OpenShift 3. I created the example application nodejs-mongodb-example. But in the "edit parameters" step, I put the wrong URL to my forked repository, and I get a failed build. I thought maybe I'd be able to extract the buildConfig file (is that a template?) on the command line, but I haven't found a way to do that. Is there a way to edit and replace this bad buildConfig without deleting all of the application objects and starting over? ### Response: You can use the oc edit command to edit an existing object. For example, oc edit buildconfig/myapp to edit the BuildConfig named myapp.
###Instruction: I have implemented basic authentication on my Kubernetes api-server, and now I am trying to configure my ~/.kube/config file in a way that lets me simply run kubectl get pods:

kubectl config set-cluster digitalocean \
  --server=https://SERVER:6443 \
  --insecure-skip-tls-verify=true \
  --api-version="v1"
kubectl config set-context digitalocean --cluster=digitalocean --user=admin
kubectl config set-credentials admin --password="PASSWORD"
kubectl config use-context digitalocean

But now it asks for credentials twice, like:

dev@desktop: ~/code/go/src/bitbucket.org/cescoferraro
$ kubectl get pods
Please enter Username: admin
Please enter Password: PASSWORD
Please enter Username: admin
Please enter Password: PASSWORD
NAME      READY     STATUS    RESTARTS   AGE

or I need to pass the flags, like kubectl get pods --username=admin --password=PASSWORD. Is this the default behavior? I want my config to know me. What can I do? ### Response: Can you provide the output of kubectl config view? I think the problem might be that you need to do something like kubectl config set-credentials cluster-admin --username=admin --password=PASSWORD instead of kubectl config set-credentials admin --password="PASSWORD".
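The difference the answer points at (recording a username alongside the password in the user entry) can be sketched as follows. This is a hypothetical helper mimicking the effect of the command, not kubectl's implementation:

```python
def set_credentials(config, name, username=None, password=None):
    """Mimic `kubectl config set-credentials`: record a user entry in a
    kubeconfig-like dict. When only the password is stored, kubectl
    still has to prompt for the missing username at request time, which
    matches the interactive prompts shown above. Illustrative only."""
    user = {}
    if username is not None:
        user["username"] = username
    if password is not None:
        user["password"] = password
    config.setdefault("users", []).append({"name": name, "user": user})
    return config

# Suggested form: both username and password end up in the user entry.
cfg = set_credentials({}, "cluster-admin", username="admin", password="PASSWORD")
complete = {"username", "password"} <= set(cfg["users"][0]["user"])
```

With both fields present in the kubeconfig user entry, kubectl can attach basic-auth credentials to requests without prompting.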
There were two things I needed to do:

- Configure kube2sky via kubeconfig, so that it's properly configured for TLS.
- Configure kube-proxy via kubeconfig, so that it's properly configured for TLS and finds the master node.

/etc/kubernetes/kube.conf on master node

apiVersion: v1
kind: Config
clusters:
- name: kube
  cluster:
    server: https://127.0.0.1:443
    certificate-authority: /etc/ssl/etcd/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/ssl/etcd/master-client.pem
    client-key: /etc/ssl/etcd/master-client-key.pem
contexts:
- context:
    cluster: kube
    user: kubelet

/etc/kubernetes/kube.conf on worker node

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/ssl/etcd/ca.pem
    server: https://<master IP>:443
users:
- name: kubelet
  user:
    client-certificate: /etc/ssl/etcd/worker.pem
    client-key: /etc/ssl/etcd/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

dns-addon.yaml (install this on master)

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
        volumeMounts:
        - name: kubernetes-etc
          mountPath: /etc/kubernetes
          readOnly: true
        - name: etcd-ssl
          mountPath: /etc/ssl/etcd
          readOnly: true
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local.
        - --kubecfg-file=/etc/kubernetes/kube.conf
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      - name: kubernetes-etc
        hostPath:
          path: /etc/kubernetes
      - name: etcd-ssl
        hostPath:
          path: /etc/ssl/etcd
      dnsPolicy: Default  # Don't use cluster DNS.

/etc/kubernetes/manifests/kube-proxy.yaml on master node

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.1.2
    command:
    - /hyperkube
    - proxy
    - --master=https://127.0.0.1:443
    - --proxy-mode=iptables
    - --kubeconfig=/etc/kubernetes/kube.conf
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /etc/kubernetes
      name: kubernetes
      readOnly: true
    - mountPath: /etc/ssl/etcd
      name: kubernetes-certs
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /etc/kubernetes
    name: kubernetes
  - hostPath:
      path: /etc/ssl/etcd
    name: kubernetes-certs

/etc/kubernetes/manifests/kube-proxy.yaml on worker node

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.1.2
    command:
    - /hyperkube
    - proxy
    - --kubeconfig=/etc/kubernetes/kube.conf
    - --proxy-mode=iptables
    - --v=2
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: "ssl-certs"
    - mountPath: /etc/kubernetes/kube.conf
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/ssl/etcd
      name: "etc-kube-ssl"
      readOnly: true
  volumes:
  - name: "ssl-certs"
    hostPath:
      path: "/usr/share/ca-certificates"
  - name: "kubeconfig"
    hostPath:
      path: "/etc/kubernetes/kube.conf"
  - name: "etc-kube-ssl"
    hostPath:
      path: "/etc/ssl/etcd"
###Instruction: I have set up DNS in my Kubernetes (v1.1.2+1abf20d) system, on CoreOS/AWS, but I cannot look up services via DNS. I have tried debugging, but cannot for the life of me find out why. This is what happens when I try to look up the kubernetes service, which should always be available: $ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf exec busybox-sleep -- nslookup kubernetes.default Server: 10.3.0.10 Address 1: 10.3.0.10 ip-10-3-0-10.eu-central-1.compute.internal nslookup: can't resolve 'kubernetes.default' error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1 I have installed the DNS addon according to this spec: apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v10 namespace: kube-system labels: k8s-app: kube-dns version: v10 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v10 template: metadata: labels: k8s-app: kube-dns version: v10 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.12 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi args: # command = "/kube2sky" - --domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi 
requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local. ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 1 timeoutSeconds: 5 - name: healthz image: gcr.io/google_containers/exechealthz:1.0 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.3.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP Why isn't DNS lookup for services working in my Kubernetes setup? Please let me know what other info I need to provide. ### Response: There were two things I needed to do: Configure kube2sky via kubeconfig, so that it's properly configured for TLS. Configure kube-proxy via kubeconfig, so that it's properly configured for TLS and finds the master node. 
/etc/kubernetes/kube.conf on master node apiVersion: v1 kind: Config clusters: - name: kube cluster: server: https://127.0.0.1:443 certificate-authority: /etc/ssl/etcd/ca.pem users: - name: kubelet user: client-certificate: /etc/ssl/etcd/master-client.pem client-key: /etc/ssl/etcd/master-client-key.pem contexts: - context: cluster: kube user: kubelet /etc/kubernetes/kube.conf on worker node apiVersion: v1 kind: Config clusters: - name: local cluster: certificate-authority: /etc/ssl/etcd/ca.pem server: https://<master IP>:443 users: - name: kubelet user: client-certificate: /etc/ssl/etcd/worker.pem client-key: /etc/ssl/etcd/worker-key.pem contexts: - context: cluster: local user: kubelet name: kubelet-context current-context: kubelet-context dns-addon.yaml (install this on master) apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v11 namespace: kube-system labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v11 template: metadata: labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting # it. 
limits: cpu: 100m memory: 500Mi requests: cpu: 100m memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.14 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting # it. limits: cpu: 100m # Kube2sky watches all pods. memory: 200Mi requests: cpu: 100m memory: 50Mi livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 volumeMounts: - name: kubernetes-etc mountPath: /etc/kubernetes readOnly: true - name: etcd-ssl mountPath: /etc/ssl/etcd readOnly: true readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 30 timeoutSeconds: 5 args: # command = "/kube2sky" - --domain=cluster.local. - --kubecfg-file=/etc/kubernetes/kube.conf - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting # it. 
limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - name: healthz image: gcr.io/google_containers/exechealthz:1.0 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local \ 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} - name: kubernetes-etc hostPath: path: /etc/kubernetes - name: etcd-ssl hostPath: path: /etc/ssl/etcd dnsPolicy: Default # Don't use cluster DNS. /etc/kubernetes/manifests/kube-proxy.yaml on master node apiVersion: v1 kind: Pod metadata: name: kube-proxy namespace: kube-system spec: hostNetwork: true containers: - name: kube-proxy image: gcr.io/google_containers/hyperkube:v1.1.2 command: - /hyperkube - proxy - --master=https://127.0.0.1:443 - --proxy-mode=iptables - --kubeconfig=/etc/kubernetes/kube.conf securityContext: privileged: true volumeMounts: - mountPath: /etc/ssl/certs name: ssl-certs-host readOnly: true - mountPath: /etc/kubernetes name: kubernetes readOnly: true - mountPath: /etc/ssl/etcd name: kubernetes-certs readOnly: true volumes: - hostPath: path: /usr/share/ca-certificates name: ssl-certs-host - hostPath: path: /etc/kubernetes name: kubernetes - hostPath: path: /etc/ssl/etcd name: kubernetes-certs /etc/kubernetes/manifests/kube-proxy.yaml on worker node apiVersion: v1 kind: Pod metadata: name: kube-proxy namespace: kube-system spec: hostNetwork: true containers: - name: kube-proxy image: gcr.io/google_containers/hyperkube:v1.1.2 command: - /hyperkube - proxy - --kubeconfig=/etc/kubernetes/kube.conf - --proxy-mode=iptables - --v=2 securityContext: privileged: true volumeMounts: - 
mountPath: /etc/ssl/certs name: "ssl-certs" - mountPath: /etc/kubernetes/kube.conf name: "kubeconfig" readOnly: true - mountPath: /etc/ssl/etcd name: "etc-kube-ssl" readOnly: true volumes: - name: "ssl-certs" hostPath: path: "/usr/share/ca-certificates" - name: "kubeconfig" hostPath: path: "/etc/kubernetes/kube.conf" - name: "etc-kube-ssl" hostPath: path: "/etc/ssl/etcd"
I have set up DNS in my Kubernetes (v1.1.2+1abf20d) system, on CoreOS/AWS, but I cannot look up services via DNS. I have tried debugging, but cannot for the life of me find out why. This is what happens when I try to look up the kubernetes service, which should always be available: $ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf exec busybox-sleep -- nslookup kubernetes.default Server: 10.3.0.10 Address 1: 10.3.0.10 ip-10-3-0-10.eu-central-1.compute.internal nslookup: can't resolve 'kubernetes.default' error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1 I have installed the DNS addon according to this spec: apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v10 namespace: kube-system labels: k8s-app: kube-dns version: v10 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v10 template: metadata: labels: k8s-app: kube-dns version: v10 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.12 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi args: # command = "/kube2sky" - --domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 50Mi requests: cpu: 
100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local. ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 1 timeoutSeconds: 5 - name: healthz image: gcr.io/google_containers/exechealthz:1.0 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.3.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP Why isn't DNS lookup for services working in my Kubernetes setup? Please let me know what other info I need to provide.
As you mentioned, as of Kubernetes 1.1 the load-balancing algorithm is plain round robin, so the location of the pods is not taken into account.
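For intuition, plain round-robin selection can be sketched in a few lines. This is only an illustrative model (not kube-proxy's actual implementation), and the endpoint addresses are made up:

```python
import itertools

class RoundRobinBalancer:
    """Illustrative round-robin endpoint selection.

    Every endpoint is picked in turn, regardless of which node it runs on,
    which is why locality plays no role in the result.
    """

    def __init__(self, endpoints):
        # itertools.cycle repeats the endpoint list forever, in order
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.2.1.5:6379", "10.2.2.7:6379", "10.2.3.9:6379"])
picks = [balancer.next_endpoint() for _ in range(6)]
# picks walks the three endpoints twice, in order, with no notion of
# "the pod on my own node".
```

Even if one of those endpoints lived on the caller's own node, it would be chosen no more often than the remote ones.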
###Instruction: Imagine the common scenario of a memcache (e.g. Redis) running with multiple pods across several nodes. Another service, such as a PHP application, uses Redis, and is configured to use the cluster IP of the Redis service. From my understanding this is routed through to kube-proxy (or in newer Kubernetes versions, handled in iptables) which then pushes the request to a running pod. My question is, if a running pod is available on the local node, that should be preferred to one running on a remote node, as it reduces the network bandwidth usage. Does kube-proxy do this, or does it blindly RR loadbalance across all available pods? ### Response: As you mentioned, as of Kubernetes 1.1 the load-balancing algorithm is plain round robin, so the location of the pods is not taken into account.
Imagine the common scenario of a memcache (e.g. Redis) running with multiple pods across several nodes. Another service, such as a PHP application, uses Redis, and is configured to use the cluster IP of the Redis service. From my understanding this is routed through to kube-proxy (or in newer Kubernetes versions, handled in iptables) which then pushes the request to a running pod. My question is, if a running pod is available on the local node, that should be preferred to one running on a remote node, as it reduces the network bandwidth usage. Does kube-proxy do this, or does it blindly RR loadbalance across all available pods?
First of all you should change your service type in the guestbook service definition: diff --git a/guestbook-service.json b/guestbook-service.json index cc7640e..fadef78 100644 --- a/guestbook-service.json +++ b/guestbook-service.json @@ -17,6 +17,6 @@ "selector":{ "app":"guestbook" }, - "type": "LoadBalancer" + "type": "NodePort" } } Then stop and restart the service with: kubectl delete -f guestbook-service.json kubectl create -f guestbook-service.json Look at your node IP address with: kubectl get nodes For example, for me this was the result: $ kubectl get nodes NAME LABELS STATUS AGE 172.17.4.99 kubernetes.io/hostname=172.17.4.99 Ready 3h Finally, you can find out your service nodeport with: kubectl describe services guestbook For example, for me this was the result: $ kubectl describe services guestbook Name: guestbook Namespace: default Labels: app=guestbook Selector: app=guestbook Type: NodePort IP: 10.3.0.47 Port: <unnamed> 3000/TCP NodePort: <unnamed> 32757/TCP Endpoints: 10.2.76.12:3000,10.2.76.8:3000,10.2.76.9:3000 Session Affinity: None No events. At this point, using the node IP you got earlier, and the NodePort you just found, you should be able to connect: $ curl 172.17.4.99:32757 <!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="Content-Type"> <meta charset="utf-8"> <meta content="width=device-width" name="viewport"> <link href="/style.css" rel="stylesheet"> <title>Guestbook</title> [...] Note: the NodePort is usually allocated from a flag-configured range, by default it is 30000-32767.
###Instruction: Could someone explain how to setup the external Ip on the 'frontend' service. I know that Vagrant don't support "type: LoadBalancer" and I don't know how to expose an Ip to my host. Thanks ### Response: First of all you should change your service type in the guestbook service definition: diff --git a/guestbook-service.json b/guestbook-service.json index cc7640e..fadef78 100644 --- a/guestbook-service.json +++ b/guestbook-service.json @@ -17,6 +17,6 @@ "selector":{ "app":"guestbook" }, - "type": "LoadBalancer" + "type": "NodePort" } } Then stop and restart the service with: kubectl delete -f guestbook-service.json kubectl create -f guestbook-service.json Look at your node IP address with: kubectl get nodes For example, for me this was the result: $ kubectl get nodes NAME LABELS STATUS AGE 172.17.4.99 kubernetes.io/hostname=172.17.4.99 Ready 3h Finally, you can find out your service nodeport with: kubectl describe services guestbook For example, for me this was the result: $ kubectl describe services guestbook Name: guestbook Namespace: default Labels: app=guestbook Selector: app=guestbook Type: NodePort IP: 10.3.0.47 Port: <unnamed> 3000/TCP NodePort: <unnamed> 32757/TCP Endpoints: 10.2.76.12:3000,10.2.76.8:3000,10.2.76.9:3000 Session Affinity: None No events. At this point, using the node IP you got earlier, and the NodePort you just found, you should be able to connect: $ curl 172.17.4.99:32757 <!DOCTYPE html> <html lang="en"> <head> <meta content="text/html; charset=utf-8" http-equiv="Content-Type"> <meta charset="utf-8"> <meta content="width=device-width" name="viewport"> <link href="/style.css" rel="stylesheet"> <title>Guestbook</title> [...] Note: the NodePort is usually allocated from a flag-configured range, by default it is 30000-32767.
Could someone explain how to setup the external Ip on the 'frontend' service. I know that Vagrant don't support "type: LoadBalancer" and I don't know how to expose an Ip to my host. Thanks
With a slow network it will take a while to set up everything behind the scenes; for me it took about 1h before I saw every pod in a running state: $ kubectl get pods NAME READY STATUS RESTARTS AGE guestbook-7ihd2 1/1 Running 0 2h guestbook-8fjs3 1/1 Running 0 2h guestbook-fifmm 1/1 Running 0 2h redis-master-ivebc 1/1 Running 0 3h redis-slave-6qxga 1/1 Running 0 2h redis-slave-r8bk4 1/1 Running 0 2h But in the end it worked!
###Instruction: I am using VMware Workstation running a Linux VM which runs the Vagrant and Kubernetes environment. I have a problem using Kubernetes with Vagrant. Every time I shut down the Kubernetes cluster with the kube-down.sh tool and restart it with kube-up.sh, I cannot connect to the minions anymore! I think it has something to do with the IP binding. Does somebody know what to do? The other problem is that if I try to install the guestbook example I cannot download the redis image. The pods always stay in the PENDING state. Is there a way to download the image manually and add it as a file? Thank you in advance. Regards :) ### Response: With a slow network it will take a while to set up everything behind the scenes; for me it took about 1h before I saw every pod in a running state: $ kubectl get pods NAME READY STATUS RESTARTS AGE guestbook-7ihd2 1/1 Running 0 2h guestbook-8fjs3 1/1 Running 0 2h guestbook-fifmm 1/1 Running 0 2h redis-master-ivebc 1/1 Running 0 3h redis-slave-6qxga 1/1 Running 0 2h redis-slave-r8bk4 1/1 Running 0 2h But in the end it worked!
I am using VMware Workstation running a Linux VM which runs the Vagrant and Kubernetes environment. I have a problem using Kubernetes with Vagrant. Every time I shut down the Kubernetes cluster with the kube-down.sh tool and restart it with kube-up.sh, I cannot connect to the minions anymore! I think it has something to do with the IP binding. Does somebody know what to do? The other problem is that if I try to install the guestbook example I cannot download the redis image. The pods always stay in the PENDING state. Is there a way to download the image manually and add it as a file? Thank you in advance. Regards :)
You can actually do this in the way that you initially were expecting to. Thanks to this answer for helping me get there - https://stackoverflow.com/a/33888424/370364 You can use the following approach to shove hosts into your container's /etc/hosts file command: ["/bin/sh","-c"] args: ["echo '192.168.200.200 node1.example.com' >> /etc/hosts && commandX"] If you want to dynamically set the ip at pod start time you can create a pod from stdin and pass it through sed to perform substitution before passing it to kubectl. So the pod yaml would look like the following command: ["/bin/sh","-c"] args: ["echo 'NODE_1_IP node1.example.com' >> /etc/hosts && commandX"] Then execute it with cat pod1.yaml | sed -- "s|NODE_1_IP|${node1ip}|" | kubectl create -f - I realise that this is not the way that kubernetes intended for this kind of thing to be achieved, but we are using this for starting up a test pod locally and we need to point it at the default network device on the local machine. Creating a service just to satisfy the test pod seems like overkill. So we do this instead.
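For reference, the snippets above could be assembled into a complete hypothetical pod1.yaml; the image name and the final command after the host entry are placeholders for your real ones:

```yaml
# Hypothetical pod1.yaml -- NODE_1_IP is left as a token for sed to replace,
# and example/app plus /usr/bin/app stand in for your actual image and command.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: app
    image: example/app
    command: ["/bin/sh", "-c"]
    args: ["echo 'NODE_1_IP node1.example.com' >> /etc/hosts && exec /usr/bin/app"]
```

It is then created exactly as in the answer: cat pod1.yaml | sed -- "s|NODE_1_IP|${node1ip}|" | kubectl create -f -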
###Instruction: I hope everyone here is doing good. I am trying to find a way to add entries to the containers /etc/hosts file while spinning up a pod. I was just wondering to know if there is any option/parameter that I could mention in my "pod1.json" which adds the entries to the containers /etc/hosts when its being created. Something like "--add-host node1.example.com:${node1ip}" that serves the same purpose for docker as shown below. docker run \ --name mongo \ -v /home/core/mongo-files/data:/data/db \ -v /home/core/mongo-files:/opt/keyfile \ --hostname="node1.example.com" \ --add-host node1.example.com:${node1ip} \ --add-host node2.example.com:${node2ip} \ --add-host node3.example.com:${node3ip} \ -p 27017:27017 -d mongo:2.6.5 \ --smallfiles \ --keyFile /opt/keyfile/mongodb-keyfile \ --replSet "rs0" Any pointers are highly appreciated. Thank you. Regards, Aj ### Response: You can actually do this in the way that you initially were expecting to. Thanks to this answer for helping me get there - https://stackoverflow.com/a/33888424/370364 You can use the following approach to shove hosts into your container's /etc/hosts file command: ["/bin/sh","-c"] args: ["echo '192.168.200.200 node1.example.com' >> /etc/hosts && commandX"] If you want to dynamically set the ip at pod start time you can create a pod from stdin and pass it through sed to perform substitution before passing it to kubectl. So the pod yaml would look like the following command: ["/bin/sh","-c"] args: ["echo 'NODE_1_IP node1.example.com' >> /etc/hosts && commandX"] Then execute it with cat pod1.yaml | sed -- "s|NODE_1_IP|${node1ip}|" | kubectl create -f - I realise that this is not way that kubernetes intended for this kind of thing to be achieved but we are using this for starting up a test pod locally and we need to point it at the default network device on the local machine. Creating a service just to satisfy the test pod seems like overkill. So we do this instead.
I hope everyone here is doing good. I am trying to find a way to add entries to the containers /etc/hosts file while spinning up a pod. I was just wondering to know if there is any option/parameter that I could mention in my "pod1.json" which adds the entries to the containers /etc/hosts when its being created. Something like "--add-host node1.example.com:${node1ip}" that serves the same purpose for docker as shown below. docker run \ --name mongo \ -v /home/core/mongo-files/data:/data/db \ -v /home/core/mongo-files:/opt/keyfile \ --hostname="node1.example.com" \ --add-host node1.example.com:${node1ip} \ --add-host node2.example.com:${node2ip} \ --add-host node3.example.com:${node3ip} \ -p 27017:27017 -d mongo:2.6.5 \ --smallfiles \ --keyFile /opt/keyfile/mongodb-keyfile \ --replSet "rs0" Any pointers are highly appreciated. Thank you. Regards, Aj
Deletion is graceful by default, meaning the pods are given an opportunity to terminate themselves. You can force a graceless delete with oc delete all --grace-period=0 ...
###Instruction: I have am using OpenShift 3, and have been trying to get Fabric8 setup. Things havent been going to well, so I decided to remove all services and pods. When I run oc delete all -l provider=fabric8 The cli output claims to have deleted a lot of pods, however, they are still showing in the web console, and I can run the same command in the CLI again and get the exact same list of pods that OpenShift cli claims it deleted. How do I actually delete these pods? Why is this not working as designed? Thanks ### Response: Deletion is graceful by default, meaning the pods are given an opportunity to terminate themselves. You can force a graceless delete with oc delete all --grace-period=0 ...
I have am using OpenShift 3, and have been trying to get Fabric8 setup. Things havent been going to well, so I decided to remove all services and pods. When I run oc delete all -l provider=fabric8 The cli output claims to have deleted a lot of pods, however, they are still showing in the web console, and I can run the same command in the CLI again and get the exact same list of pods that OpenShift cli claims it deleted. How do I actually delete these pods? Why is this not working as designed? Thanks
Jared pointed to some good examples in his comments above. As Brian Grant mentioned in the linked github issue, pushing log data and loading data are the most common uses inside of Google. For a concrete example in the Kubernetes repository, you can look at the definition for the DNS cluster add-on. It uses a pod to co-locate a DNS server (skyDNS), local storage using etcd, and a simple program to pull Kubernetes API objects down, convert them, and put them into local storage. Rather than building a new custom DNS server, this pod leverages an existing DNS server and adds some customization to it to make it aware of the cluster environment. Since all of the containers are in a pod, they can rely on localhost networking to communicate and don't need any form of sophisticated service discovery.
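A stripped-down sketch of that co-location pattern might look like the following. The image names and flags here are illustrative stand-ins, not the real add-on manifest:

```yaml
# Illustrative only: three cooperating containers in one pod, talking over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: dns
spec:
  containers:
  - name: etcd                 # local storage, reachable only inside the pod
    image: example/etcd
    command: ["/usr/local/bin/etcd", "-listen-client-urls", "http://127.0.0.1:4001"]
  - name: kube2sky             # pulls API objects down and writes them into etcd
    image: example/kube2sky
  - name: skydns               # serves DNS, reading its records from etcd
    image: example/skydns
    args: ["-machines=http://127.0.0.1:4001"]
```

Because all three containers share the pod's network namespace, 127.0.0.1 is all the "service discovery" they need; packing them into one container would work too, but the pod lets each piece keep its own image, lifecycle, and restart behavior.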
###Instruction: After playing with docker landscape for several months, I still found it is really counter intuitive to use Kubernetes Pod. I have not encountered any use case where pod is more natural a fit than container. When I am asked to use a Pod, I usually just use a single container Pod. I am trying to do a demo showcasing the strength of pod concept, but I just couldn't figure out a non-trival use case. In my demo, I started a server pod with two service container listening on different port, one for transcribe letters to upper case, and one for transcribe letters to lower case. Then I have a client pod with two client containers talking to each server container... This use case seems really forced, I don't see why I need to use the Pod concept. I have read through lots of tutorials and docs, and they all just touch on WHAT is a pod, without a convincing use case of WHY we must use pod... Am I missing something? What is a solid use case for using a Pod concept? Thanks. Edit: To be specific, suppose there are two services A and B that requires co-location and shared network stack, and this is a natural fit for Pod concept. What is the advantage of using the Pod (with two collocated containers running service A and service B) compared to having service A and B running in the same container, which guarantees the collocation and shared network stack? Is there a rule of thumb for the granularity? My original question is to find out such service A and service B that requires co-location and shared network stack. Thanks to Jared and Robert for the pointers, and I will dig through these use cases. ### Response: Jared pointed to some good examples in his comments above. As Brian Grant mentioned in the linked github issue, pushing log data and loading data are the most common uses inside of Google. For a concrete example in the Kubernetes repository, you can look at the definition for the DNS cluster add-on. 
It uses a pod to co-locate a DNS server (skyDNS), local storage using etcd, and a simple program to pull Kubernetes API objects down, convert them, and put them into local storage. Rather than building a new custom DNS server, this pod leverages an existing DNS server and adds some customization to it to make it aware of the cluster environment. Since all of the containers are in a pod, they can rely on localhost networking to communicate and don't need any form of sophisticated service discovery.
After playing with docker landscape for several months, I still found it is really counter intuitive to use Kubernetes Pod. I have not encountered any use case where pod is more natural a fit than container. When I am asked to use a Pod, I usually just use a single container Pod. I am trying to do a demo showcasing the strength of pod concept, but I just couldn't figure out a non-trival use case. In my demo, I started a server pod with two service container listening on different port, one for transcribe letters to upper case, and one for transcribe letters to lower case. Then I have a client pod with two client containers talking to each server container... This use case seems really forced, I don't see why I need to use the Pod concept. I have read through lots of tutorials and docs, and they all just touch on WHAT is a pod, without a convincing use case of WHY we must use pod... Am I missing something? What is a solid use case for using a Pod concept? Thanks. Edit: To be specific, suppose there are two services A and B that requires co-location and shared network stack, and this is a natural fit for Pod concept. What is the advantage of using the Pod (with two collocated containers running service A and service B) compared to having service A and B running in the same container, which guarantees the collocation and shared network stack? Is there a rule of thumb for the granularity? My original question is to find out such service A and service B that requires co-location and shared network stack. Thanks to Jared and Robert for the pointers, and I will dig through these use cases.
Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general purpose monitoring (for which as you mention, pushing metrics is sometimes critical). There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
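As a sketch, such a sidecar pod could be declared like this; the image names, port, and backend URL are placeholders for whatever monitoring stack your environment prefers:

```yaml
# Sidecar pattern sketch -- all names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
  - name: app                    # the workload, exposing metrics on a local port
    image: example/app
  - name: metrics-pusher         # sidecar that pushes the app's metrics upstream
    image: example/metrics-pusher
    args:
    - --scrape=http://127.0.0.1:9100/metrics
    - --push-to=http://metrics.example.com
```

Because the sidecar shares the pod's network namespace, it can scrape the app over localhost and push on its behalf, which also covers the short-lived case: the push happens from inside the pod rather than waiting for an external poller.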
###Instruction: According to Kubernetes Custom Metrics Proposal containers can expose its app-level metrics in Prometheus format to be collected by Heapster. Could anyone elaborate, if metrics are pulled by Heapster that means after the container terminates metrics for the last interval are lost? Can app push metrics to Heapster instead? Or, is there a recommended approach to collect metrics from moderately short-lived containers running in Kubernetes? ### Response: Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general purpose monitoring (for which as you mention, pushing metrics is sometimes critical). There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
According to Kubernetes Custom Metrics Proposal containers can expose its app-level metrics in Prometheus format to be collected by Heapster. Could anyone elaborate, if metrics are pulled by Heapster that means after the container terminates metrics for the last interval are lost? Can app push metrics to Heapster instead? Or, is there a recommended approach to collect metrics from moderately short-lived containers running in Kubernetes?
###Instruction: After switching gcloud projects using ... gcloud init ... and then trying to run some kubectl command, like this for instance: kubectl get rc ... I get this error: error: couldn't read version from server: Get https://130.211.59.254/api: x509: certificate signed by unknown authority Why is this, and how can I solve it? ### Response: This is because the keys to your old cluster are cached. I'm not sure why they are not updated by the gcloud init command (that's what one would intuitively expect, or at least a kinder error message from kubectl). You solve it by simply getting the credentials of the cluster in the new configuration: gcloud container clusters get-credentials YOURCLUSTERHERE --zone YOURCLUSTERZONEHERE
###Instruction: What are the benefits of a Job with a single Pod over just a single Pod with restart policy OnFailure to reliably execute once in Kubernetes? As discussed in Job being constantly recreated despite RestartPolicy: Never, in the case of a Job a new Pod will be created endlessly if the container returns a non-zero status. The same applies to a single OnFailure Pod, only this time no new pods are created, which is even cleaner. What are the cons and pros of either approach? Can Pod restart parameters, such as restart delay or number of retry attempts, be controlled in either case? ### Response: The difference is that if a Job doesn't complete because the node that its pod was on went offline for some reason, then a new pod will be created to run on a different node. If a single pod doesn't complete because its node became unavailable, it won't be rescheduled onto a different node.
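A minimal Job wrapping a single pod, of the kind the question compares against, might look like this sketch; the image name is hypothetical, and the API group for Jobs has changed across Kubernetes versions:

```yaml
apiVersion: batch/v1        # older clusters used extensions/v1beta1
kind: Job
metadata:
  name: one-shot
spec:
  template:
    spec:
      containers:
      - name: worker
        image: example.com/worker:latest   # hypothetical image
      # OnFailure restarts the failed container in place on the same
      # node; the Job controller additionally replaces the whole pod
      # if its node disappears, which a bare pod does not get.
      restartPolicy: OnFailure
```

That node-failure replacement is exactly the benefit the answer describes: the Job controller tracks completion at the cluster level, not just the node level.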
###Instruction: I am running a Kubernetes cluster hosted on GKE and would like to write an application (written in Go) that speaks to the Kubernetes API. My understanding is that I can either provide a client certificate, bearer token, or HTTP Basic Authentication in order to authenticate with the apiserver. I have already found the right spot to inject any of these into the Golang client library. Unfortunately, the examples I ran across tend to reference to existing credentials stored in my personal kubeconfig file. This seems non-advisable from a security perspective and makes me believe that I should create a new client certificate / token / username-password pair in order to support easy revocation/removal of compromised accounts. However, I could not find a spot in the documentation actually describing how to go about this when running on managed Kubernetes in GKE. (There's this guide on creating new certificates explaining that the apiserver needs to get restarted with updated parameters eventually, something that to my understanding cannot be done in GKE.) Are my security concerns for reusing my personal Kubernetes credentials in one (or potentially multiple) applications unjustified? If not, what's the right approach to generate a new set of credentials? Thanks. ### Response: If your application is running inside the cluster, you can use Kubernetes Service Accounts to authenticate to the API server. If this is outside of the cluster, things aren't as easy, and I suppose your concerns are justified. Right now, GKE does not allow additional custom identities beyond the one generated for your personal kubeconfig file. Instead of using your credentials, you could grab a service account's token (inside a pod, read from /var/run/secrets/kubernetes.io/serviceaccount/token), and use that instead. It's a gross hack, and not a great general solution, but it might be slightly preferable to using your own personal credentials.
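Inside the cluster, the token the answer mentions is mounted into pods automatically. A sketch of giving the Go application its own revocable identity via a dedicated service account (names are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app            # hypothetical account for the Go application
---
apiVersion: v1
kind: Pod
metadata:
  name: api-client
spec:
  serviceAccountName: my-app
  containers:
  - name: client
    image: example.com/api-client:latest   # hypothetical image
    # The account's token is auto-mounted at
    # /var/run/secrets/kubernetes.io/serviceaccount/token and can be
    # sent to the apiserver as "Authorization: Bearer <token>".
```

Deleting the service account invalidates its token, which gives you the easy revocation the question asks about, at least for in-cluster callers.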
###Instruction: I have a kubernetes single-node setup (see https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html ) I have a service and a replication controller creating pods. Those pods need to connect to the other pods in the same service (Note: this is ultimately so that I can get mongo running w/replica sets (non-localhost), but this simple example demonstrates the problem that mongo has). When I connect from any node to the service, it will be distributed (as expected) to one of the pods. This will work until it load balances to itself (the container that I am on). Then it fails to connect. Sorry to be verbose, but I am going to attach all my files so that you can see what I'm doing in this little example. Dockerfile: FROM ubuntu MAINTAINER Eric H RUN apt-get update; apt-get install -y netcat EXPOSE 8080 COPY ./entry.sh / ENTRYPOINT ["/entry.sh"] Here is the entry point #!/bin/bash # wait for a connection, then tell them who we are while : ; do echo "hello, the date=`date`; my host=`hostname`" | nc -l 8080 sleep .5 done build the dockerfile docker build -t echoserver . 
tag and upload to my k8s cluster's registry docker tag -f echoserver:latest 127.0.0.1:5000/echoserver:latest docker push 127.0.0.1:5000/echoserver:latest Here is my Replication Controller apiVersion: v1 kind: ReplicationController metadata: labels: role: echo-server app: echo name: echo-server-1 spec: replicas: 3 template: metadata: labels: entity: echo-server-1 role: echo-server app: echo spec: containers: - image: 127.0.0.1:5000/echoserver:latest name: echo-server-1 ports: - containerPort: 8080 And finally, here is my Service kind: Service metadata: labels: app: echo role: echo-server name: echo-server-1 name: echo-server-1 spec: selector: entity: echo-server-1 role: echo-server ports: - port: 8080 targetPort: 8080 Create my service kubectl create -f echo.service.yaml Create my rc kubectl create -f echo.controller.yaml Get my PODs kubectl get po NAME READY STATUS RESTARTS AGE echo-server-1-jp0aj 1/1 Running 0 39m echo-server-1-shoz0 1/1 Running 0 39m echo-server-1-y9bv2 1/1 Running 0 39m Get the service IP kubectl get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE echo-server-1 10.3.0.246 <none> 8080/TCP entity=echo-server-1,role=echo-server 39m Exec into one of the pods kubectl exec -t -i echo-server-1-jp0aj /bin/bash Now connect to the service multiple times... It will give me the app-message for all pods except for when it gets to itself, whereupon it hangs. 
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 hello, the date=Mon Jan 11 22:02:38 UTC 2016; my host=echo-server-1-y9bv2 root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 ^C root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 hello, the date=Mon Jan 11 22:02:43 UTC 2016; my host=echo-server-1-shoz0 root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 ^C root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 hello, the date=Mon Jan 11 22:31:19 UTC 2016; my host=echo-server-1-y9bv2 root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 hello, the date=Mon Jan 11 22:31:23 UTC 2016; my host=echo-server-1-shoz0 root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 hello, the date=Mon Jan 11 22:31:26 UTC 2016; my host=echo-server-1-y9bv2 root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 hello, the date=Mon Jan 11 22:31:27 UTC 2016; my host=echo-server-1-shoz0 root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080 How can I configure things so that all members of a service can connect to all other members, including itself? ### Response: Thanks to all those who helped on GitHub. The workaround turned out to be as follows: tanen01 commented on Feb 4 Seeing the same problem here on k8s v1.1.7 stable Issue occurs with: kube-proxy --proxy-mode=iptables Once I changed it to: --proxy-mode=userspace (also the default), then it works again. So, if you are experiencing this, please try turning off --proxy-mode when you start kube-proxy.
###Instruction: I am using Kubernetes on a CoreOS cluster hosted on DigitalOcean, and using this repo to set it up. I started the apiserver with the following line: /opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \ --insecure-bind-address=0.0.0.0 --insecure-port=8080 \ --secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \ --logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \ --service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0 The problem is that it accepts requests from anyone! I want to be able to provide a simple user/password authentication. I have been reading this and this, and it seems that I have to do something like the below, but I cannot afford to take the cluster down for a long period of time, so I need you guys to help with this one. Btw, my pods do not create other pods, so I only need a few users, like 1 or 2 for devs and 1 for CI. I am thinking of doing something like including the authorization-mode and authorization-policy-file flags, as they seem required, and making the insecure-bind-address localhost to make it only available locally. Am I missing something? /opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \ --authorization-mode=ABAC --authorization-policy-file=/access.json \ --insecure-bind-address=127.0.0.1 --insecure-port=8080 \ --secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \ --logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \ --service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0 ###/access.json {"user":"admin"} {"user":"wercker"} {"user":"dev1"} {"user":"dev2"} But where are the passwords? How do I actually make the request with kubectl and curl or httpie? ### Response: If you want your users to authenticate using HTTP Basic Auth (user:password), you can add: --basic-auth-file=/basic_auth.csv to your kube-apiserver command line, where each line of the file should be password,user-name,user-id. 
E.g.: @dm1nP@ss,admin,admin w3rck3rP@ss,wercker,wercker etc... If you'd rather use access tokens (HTTP Authentication: Bearer), you can specify: --token-auth-file=/known-tokens.csv where each line should be token,user-name,user-id[,optional groups]. E.g.: @dm1nT0k3n,admin,admin,adminGroup,devGroup w3rck3rT0k3n,wercker,wercker,devGroup etc... For more info, check out the Authentication docs. Also check out example_policy_file.jsonl for an example ABAC file.
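To answer the "how do I actually make the request with kubectl" part: once those flags are in place, a kubeconfig user entry can carry either credential. A sketch, reusing the example credentials from the answer (the cluster name and server address are hypothetical):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: do-cluster
  cluster:
    server: https://YOUR-MASTER-IP:6443   # hypothetical address
users:
- name: admin
  user:
    username: admin          # HTTP Basic Auth, from basic_auth.csv
    password: "@dm1nP@ss"
- name: wercker
  user:
    token: w3rck3rT0k3n      # Bearer token, from known-tokens.csv
contexts:
- name: default
  context:
    cluster: do-cluster
    user: admin
current-context: default
```

With curl, the equivalents would be `curl -u 'admin:@dm1nP@ss' https://YOUR-MASTER-IP:6443/api` for basic auth and `curl -H 'Authorization: Bearer w3rck3rT0k3n' https://YOUR-MASTER-IP:6443/api` for a token.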
###Instruction: I've been able to successfully install Kubernetes in a CentOS 7 testing environment using the "virt7-testing" repo as described in the CentOS "Getting Started Guide" in the Kubernetes github repo. My production environment will be running on Oracle Linux 7, and so far enabling "virt7-testing" on OL7 hasn't been working. Are there any other yum repositories out there that are compatible with OL7 and include Kubernetes? ### Response: It's not the best solution to pull outside of OEL, but I couldn't find an OEL repository with these packages, so I used this: []# cat /etc/yum.repos.d/virt7-common.repo [virt7-common] name=Extra Packages for Enterprise Linux 7 - $basearch baseurl=http://mirror.centos.org/centos/7/extras/$basearch/ enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
###Instruction: kube-proxy has an option called --proxy-mode, and according to the help message, this option can be userspace or iptables. (See below) # kube-proxy -h Usage of kube-proxy: ... --proxy-mode="": Which proxy mode to use: 'userspace' (older, stable) or 'iptables' (experimental). If blank, look at the Node object on the Kubernetes API and respect the 'net.experimental.kubernetes.io/proxy-mode' annotation if provided. Otherwise use the best-available proxy (currently userspace, but may change in future versions). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy. ... I can't figure out what userspace mode means here. Can anyone tell me what the working principle is when kube-proxy runs in userspace mode? ### Response: Userspace and iptables refer to what actually handles the connection forwarding. In both cases, local iptables rules are installed to intercept outbound TCP connections that have a destination IP address associated with a service. In the userspace mode, the iptables rule forwards to a local port where a go binary (kube-proxy) is listening for connections. The binary (running in userspace) terminates the connection, establishes a new connection to a backend for the service, and then forwards requests to the backend and responses back to the local process. An advantage of the userspace mode is that because the connections are created from an application, if the connection is refused, the application can retry to a different backend. In iptables mode, the iptables rules are installed to directly forward packets that are destined for a service to a backend for the service. This is more efficient than moving the packets from the kernel to kube-proxy and then back to the kernel, so it results in higher throughput and better tail latency. 
The main downside is that it is more difficult to debug: instead of a local binary that writes a log to /var/log/kube-proxy, you have to inspect logs from the kernel processing iptables rules. In both cases there will be a kube-proxy binary running on your machine. In userspace mode it inserts itself as the proxy; in iptables mode it will configure iptables rather than proxying connections itself. The same binary works in both modes, and the behavior is switched via a flag or by setting an annotation on the Node object in the apiserver.
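The per-node annotation mentioned in the quoted help text can be expressed on the Node object like this sketch (the node name is hypothetical; the feature was experimental in that era, so the annotation key may differ between versions):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1                  # hypothetical node name
  annotations:
    # Read by kube-proxy when its --proxy-mode flag is left blank,
    # per the help text above.
    net.experimental.kubernetes.io/proxy-mode: "iptables"
```

This lets you choose the mode per node without editing each kube-proxy invocation.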
###Instruction: Is it possible to schedule upcoming Pods/Containers based on their priorities? (If container1 is critical and needs resources, can the Google orchestrator kill other low-priority containers?) If yes, are there specific priority tags (like: critical, monitoring, production...)? ### Response: This use case is described in both the Borg paper and the Omega paper. However, it is not presently implemented within Kubernetes. Here are some related links to ongoing proposals: QoS Tiers Preemption Policy / Scheme Resource Quality of Service
###Instruction: docker run \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/var/lib/docker/:/var/lib/docker:rw \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --pid=host \ --privileged=true \ -d \ gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \ /hyperkube kubelet \ --containerized \ --hostname-override="127.0.0.1" \ --address="0.0.0.0" \ --api-servers=http://localhost:8080 \ --config=/etc/kubernetes/manifests \ --cluster-dns=10.0.0.10 \ --cluster-domain=cluster.local \ --allow-privileged=true --v=2 A curl localhost:8080 confirms that the API is running. But trying to access it with the host's IP, like curl dockerHostIp:8080, fails: Failed to connect to ipOfDockerHost port 8080: Connection refused How can I expose k8s to the outside? (docker-host is an ubuntu server) As far as I understand, using --net=host should solve this problem. But it does not work in this case. ### Response: When you start kubernetes with docker, you choose between two models: --config=/etc/kubernetes/manifests or --config=/etc/kubernetes/manifests-multi. If you look in these files, you will notice one difference: --insecure-bind-address is different. When you use --config=/etc/kubernetes/manifests, you ask for local access only. You should start with --config=/etc/kubernetes/manifests-multi. Note that you will need to start etcd manually when you use --config=/etc/kubernetes/manifests-multi; follow this post, as docker support is not working for now.
###Instruction: I have 3 kubernetes services, which are: service 1: name: abc service 2: name: def service 3: name: hgk In the application running on service 1, I successfully use environment variables to get the cluster IPs of the other services. System.getenv("DEF_SERVICE_HOST"); --> success System.getenv("HGK_SERVICE_HOST"); --> success However, when I read service 1's own environment variable, it returns null: System.getenv("ABC_SERVICE_HOST"); ---> null Looks like it cannot get its own cluster IP. Do you guys have any ideas? Thank you very much! ### Response: The only service environment variables that are populated in a pod are for the services that existed before the pod was created. Environment variables are not injected into running pods once they've already been started. I'm guessing that you created the abc replication controller / pods before you created the abc service. If you kill the existing pods and let them be recreated, they should have the ABC_SERVICE_HOST environment variable set.
To monitor each machine (memory, CPU, disk...) it's possible to install the agent on each node (i.e. on each Compute Instance of your GKE cluster). Note that it'll not work with auto-scaling, in the sense that re-created nodes won't have the agent installed. To monitor services (number of requests/s, client connections...) it's possible to install the agent plugin in another container so that, for example, an Nginx Pod runs two containers: Nginx Google Monitoring Agent together with the Nginx plugin Note: Not fully tested yet.
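A sketch of what such a two-container pod spec could look like (the agent image name here is a placeholder for illustration, not a real published image); it is written out locally and sanity-checked rather than applied to a cluster:

```shell
# Write a hypothetical two-container pod spec: nginx plus a monitoring-agent
# sidecar. The agent image name is an assumption for illustration only.
cat > nginx-with-agent.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-monitored
spec:
  containers:
  - name: nginx
    image: nginx
  - name: monitoring-agent
    image: example/google-monitoring-agent   # placeholder image name
EOF
grep -c 'image:' nginx-with-agent.yaml   # sanity check: two container images
```

Both containers share the pod's network namespace, which is what lets the agent plugin reach Nginx on localhost.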
###Instruction: It seems that the Google Monitoring Agent (powered by Stackdriver) should be installed on each Node (i.e. each compute instance, i.e. each machine) of a Kubernetes cluster. However the new plugins, like Nginx, Redis, ElasticSearch..., need those agents to know the IP of these services. This means having kube-proxy running and set up, which should mean running that Google Monitoring Agent on a Pod. These two conflict: on one side the agent monitors the entire machine, on the other it monitors services running on one or more machines. Can these Stackdriver plugins work on a Google Container Engine (GKE) / Kubernetes cluster? ### Response: To monitor each machine (memory, CPU, disk...) it's possible to install the agent on each node (i.e. on each Compute Instance of your GKE cluster). Note that it'll not work with auto-scaling, in the sense that re-created nodes won't have the agent installed. To monitor services (number of requests/s, client connections...) it's possible to install the agent plugin in another container so that, for example, an Nginx Pod runs two containers: Nginx Google Monitoring Agent together with the Nginx plugin Note: Not fully tested yet.
It seems that the Google Monitoring Agent (powered by Stackdriver) should be installed on each Node (i.e. each compute instance, i.e. each machine) of a Kubernetes cluster. However the new plugins, like Nginx, Redis, ElasticSearch..., need those agents to know the IP of these services. This means having kube-proxy running and set up, which should mean running that Google Monitoring Agent on a Pod. These two conflict: on one side the agent monitors the entire machine, on the other it monitors services running on one or more machines. Can these Stackdriver plugins work on a Google Container Engine (GKE) / Kubernetes cluster?
With v1.2 Kubernetes we use ReplicaSet (a newer form of ReplicationController). Given that you have a ReplicaSet, you must have used a v1.2 client to create it. But it doesn't stop there. What 1.2 actually creates for you is a Deployment which itself manages ReplicaSets. So what you need to know is kubectl scale deployment or kubectl delete deployment. Which tutorial are you following?
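The flow described above can be sketched as a command sequence; kubectl is stubbed out here so the sequence can be traced without a live cluster (swap in the real binary in practice):

```shell
# Stub kubectl so the sequence below just prints what it would run.
kubectl() { echo "kubectl $*"; }

kubectl run nginx --image=nginx --port=80   # on a 1.2 client this creates a Deployment
kubectl scale deployment nginx --replicas=0
kubectl delete deployment nginx             # removes the Deployment, its ReplicaSet, and pods
```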
###Instruction: I'm running Kubernetes via Docker. Following the tutorial, I launched an Nginx POD using kubectl run nginx --image=nginx --port=80. However, this seems to create orphaned PODs (without a replication controller). kubectl get rc doesn't return anything and kubectl describe pod nginx-198147104-kqudh shows Replication Controllers: none (kubectl version "v1.2.0+5cb86ee" shows Controllers: ReplicaSet/nginx-198147104, but scaling it to 0 just causes a new Nginx pod to be created, and it can't be deleted). I would like to be able to delete the Kubernetes-managed Nginx container from Docker. I haven't had much luck finding out how to delete an orphan pod (without it being recreated...). Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.4", GitCommit:"65d28d5fd12345592405714c81cd03b9c41d41d9", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"} ### Response: With v1.2 Kubernetes we use ReplicaSet (a newer form of ReplicationController). Given that you have a ReplicaSet, you must have used a v1.2 client to create it. But it doesn't stop there. What 1.2 actually creates for you is a Deployment, which itself manages ReplicaSets. So what you need to know is kubectl scale deployment or kubectl delete deployment. Which tutorial are you following?
I'm running Kubernetes via Docker. Following the tutorial, I launched an Nginx POD using kubectl run nginx --image=nginx --port=80. However, this seems to create orphaned PODs (without a replication controller). kubectl get rc doesn't return anything and kubectl describe pod nginx-198147104-kqudh shows Replication Controllers: none (kubectl version "v1.2.0+5cb86ee" shows Controllers: ReplicaSet/nginx-198147104, but scaling it to 0 just causes a new Nginx pod to be created, and it can't be deleted). I would like to be able to delete the Kubernetes-managed Nginx container from Docker. I haven't had much luck finding out how to delete an orphan pod (without it being recreated...). Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.4", GitCommit:"65d28d5fd12345592405714c81cd03b9c41d41d9", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Check out the 'kubectl drain' command: # Drain node "foo", even if there are pods not managed by a ReplicationController, Job, or DaemonSet on it. $ kubectl drain foo --force # As above, but abort if there are pods not managed by a ReplicationController, Job, or DaemonSet, and use a grace period of 15 minutes. $ kubectl drain foo --grace-period=900 See also Issue 3885 and related linked issues
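The four maintenance steps from the question roughly collapse into cordon/drain/reboot/uncordon. A sketch of that flow, with kubectl stubbed so it can be traced without a cluster (replace the stub with the real binary in practice; drain cordons the node itself, so the explicit cordon is only for clarity):

```shell
# Stub kubectl so the flow below only prints the commands it would run.
kubectl() { echo "kubectl $*"; }

node="foo"
kubectl cordon "$node"                            # mark the node unschedulable
kubectl drain "$node" --force --grace-period=900  # evict pods with a long grace period
echo "rebooting $node"                            # physical reboot happens here
kubectl uncordon "$node"                          # return the node to service
```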
###Instruction: We are using kubernetes 1.1.8 (with flannel), but with 1.2 about to drop, any input on this topic that is specific to 1.2 is fine. We run kubernetes in our own datacenters on bare metal, which means that we need to do maintenance on worker nodes which takes them in and out of production. We have a process for taking a node out of the cluster to do maintenance on it, and I'm wondering if our process can be improved to minimize the potential for user-facing downtime. We are using f5 load balancers. Each service that we deploy is given a static nodePort. For example appXYZ has nodePort 30173. In the F5 pool for service appXYZ all minions in the cluster are added as pool members with a tcp port open check on port 30173. During maintenance on a node we take the following steps 1. Set the node to unschedulable = true. 2. Get the list of pods running on the node and delete each pod. Sometimes this will be 40 pods per node. 3. Wait for up to two minutes for the pods in step #2 to shut down. 4. Reboot the physical node. I'm wondering if this is what other people are doing or if we are missing one or more steps that would further minimize the amount of traffic that could potentially get sent to a dead or dying pod on the node undergoing maintenance? When I read through http://kubernetes.io/docs/user-guide/pods/#termination-of-pods it makes me wonder if adding a longer (over 30 seconds) --grace-period= to our delete command and pausing for a longer amount of time before our reboot would ensure all of the kube-proxies have been updated to remove the node from the list of endpoints. So if anyone can confirm that what we are doing is a decent practice or has any suggestions on how to improve it. Especially any tips on what to do in kubernetes 1.2. TIA! ### Response: Check out the 'kubectl drain' command: # Drain node "foo", even if there are pods not managed by a ReplicationController, Job, or DaemonSet on it.
$ kubectl drain foo --force # As above, but abort if there are pods not managed by a ReplicationController, Job, or DaemonSet, and use a grace period of 15 minutes. $ kubectl drain foo --grace-period=900 See also Issue 3885 and related linked issues
We are using kubernetes 1.1.8 (with flannel), but with 1.2 about to drop, any input on this topic that is specific to 1.2 is fine. We run kubernetes in our own datacenters on bare metal, which means that we need to do maintenance on worker nodes which takes them in and out of production. We have a process for taking a node out of the cluster to do maintenance on it, and I'm wondering if our process can be improved to minimize the potential for user-facing downtime. We are using f5 load balancers. Each service that we deploy is given a static nodePort. For example appXYZ has nodePort 30173. In the F5 pool for service appXYZ all minions in the cluster are added as pool members with a tcp port open check on port 30173. During maintenance on a node we take the following steps 1. Set the node to unschedulable = true. 2. Get the list of pods running on the node and delete each pod. Sometimes this will be 40 pods per node. 3. Wait for up to two minutes for the pods in step #2 to shut down. 4. Reboot the physical node. I'm wondering if this is what other people are doing or if we are missing one or more steps that would further minimize the amount of traffic that could potentially get sent to a dead or dying pod on the node undergoing maintenance? When I read through http://kubernetes.io/docs/user-guide/pods/#termination-of-pods it makes me wonder if adding a longer (over 30 seconds) --grace-period= to our delete command and pausing for a longer amount of time before our reboot would ensure all of the kube-proxies have been updated to remove the node from the list of endpoints. So if anyone can confirm that what we are doing is a decent practice or has any suggestions on how to improve it. Especially any tips on what to do in kubernetes 1.2. TIA!
That error message was due to an error on our part while rolling out support for Kubernetes 1.2 in Google Container Engine. We've fixed the issues, and you can now create a container cluster using the api explorer. Sorry for the trouble.
###Instruction: I tried creating a new kube cluster via googleapis with OAuth authentication. But I am getting an error that "HTTP Load Balancing requires the 'https://www.googleapis.com/auth/compute' scope.". I came to know that Google updated the kube version to 1.2 the previous night in their console (until which I was able to create a cluster using the same method in v1.0). I tried creating one via the API explorer using Google's OAuth, but it failed with the same error. I think the auth scope has been updated, but I could not find the new auth scope in either the 'google cloud platform container engine doc' or the 'kubernetes latest release doc'. Can someone please help me in identifying the new auth scope? ### Response: That error message was due to an error on our part while rolling out support for Kubernetes 1.2 in Google Container Engine. We've fixed the issues, and you can now create a container cluster using the api explorer. Sorry for the trouble.
I tried creating a new kube cluster via googleapis with OAuth authentication. But I am getting an error that "HTTP Load Balancing requires the 'https://www.googleapis.com/auth/compute' scope.". I came to know that Google updated the kube version to 1.2 the previous night in their console (until which I was able to create a cluster using the same method in v1.0). I tried creating one via the API explorer using Google's OAuth, but it failed with the same error. I think the auth scope has been updated, but I could not find the new auth scope in either the 'google cloud platform container engine doc' or the 'kubernetes latest release doc'. Can someone please help me in identifying the new auth scope?
You are seeing this because whatever process your image is starting isn't a long-running process and finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as openshift is concerned. Your Dockerfile mentions the following: ENTRYPOINT ["container-entrypoint"] What is this "container-entrypoint" actually doing? You need to check. Did you use the -p or --previous flag with oc logs to see if the logs from the previous attempt to start the pod show anything?
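To make the mechanism concrete, here is a minimal illustration of the restart behaviour: a supervisor loop keeps relaunching a command that returns immediately, which is analogous to what the platform does when a container's entrypoint is not a long-running process:

```shell
# A process that exits right away gets relaunched over and over; Kubernetes/
# OpenShift does the analogous thing and eventually reports CrashLoopBackOff.
restarts=0
for attempt in 1 2 3; do
  sh -c 'exit 0'             # stands in for an entrypoint that returns immediately
  restarts=$((restarts + 1))
done
echo "container restarted $restarts times"
```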
###Instruction: I am creating an app in Origin 3.1 using my Docker image. Whenever I create the image, a new pod gets created, but it restarts again and again and finally gives the status "CrashLoopBackOff". I analysed the logs for the pod, but they show no errors; all log data is as expected for a successfully running app. Hence, I am not able to determine the cause. I came across the link below today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID." What is CrashLoopBackOff status for openshift pods? Here my image is using the root user only; what should I do to make this work? The logs show no error, but the pod keeps restarting. Could anyone please help me with this. ### Response: You are seeing this because whatever process your image is starting isn't a long-running process and finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as openshift is concerned. Your Dockerfile mentions the following: ENTRYPOINT ["container-entrypoint"] What is this "container-entrypoint" actually doing? You need to check. Did you use the -p or --previous flag with oc logs to see if the logs from the previous attempt to start the pod show anything?
I am creating an app in Origin 3.1 using my Docker image. Whenever I create the image, a new pod gets created, but it restarts again and again and finally gives the status "CrashLoopBackOff". I analysed the logs for the pod, but they show no errors; all log data is as expected for a successfully running app. Hence, I am not able to determine the cause. I came across the link below today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID." What is CrashLoopBackOff status for openshift pods? Here my image is using the root user only; what should I do to make this work? The logs show no error, but the pod keeps restarting. Could anyone please help me with this.
... they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens. This is only true if you are using one of the salt-based installation mechanisms. So, how should I really install addons, f.ex. cluster-loadbalancing, with Kubernetes 1.1? Most of the add-ons can be installed by just running kubectl create -f against the replication controller and service files for the add-on. You need to create the kube-system namespace first if you haven't already, and some of the add-ons (like dns) require you to fill in a few values in a jinja template that would otherwise be handled by salt.
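A sketch of that manual install sequence (the manifest filenames are placeholders for whichever addon is being installed); kubectl is stubbed here so the sequence can be traced offline:

```shell
# Stub kubectl; in practice run the real binary against your cluster.
kubectl() { echo "kubectl $*"; }

kubectl create namespace kube-system   # only needed if it doesn't exist yet
kubectl create -f addon-rc.yaml        # the addon's replication controller manifest
kubectl create -f addon-svc.yaml       # the addon's service manifest
```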
###Instruction: What is the correct way to install addons with Kubernetes 1.1? The docs aren't as clear as I'd like on this subject; they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens. Additionally, for added confusion, the docs imply that addons are bundled with Kubernetes: So the only persistent way to make changes in add-ons is to update the manifests on the master server. But still, users are discouraged to do it on their own - they should rather wait for a new release of Kubernetes that will also contain new versions of add-ons. So, how should I really install addons, f.ex. cluster-loadbalancing, with Kubernetes 1.1? ### Response: ... they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens. This is only true if you are using one of the salt-based installation mechanisms. So, how should I really install addons, f.ex. cluster-loadbalancing, with Kubernetes 1.1? Most of the add-ons can be installed by just running kubectl create -f against the replication controller and service files for the add-on. You need to create the kube-system namespace first if you haven't already, and some of the add-ons (like dns) require you to fill in a few values in a jinja template that would otherwise be handled by salt.
What is the correct way to install addons with Kubernetes 1.1? The docs aren't as clear as I'd like on this subject; they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens. Additionally, for added confusion, the docs imply that addons are bundled with Kubernetes: So the only persistent way to make changes in add-ons is to update the manifests on the master server. But still, users are discouraged to do it on their own - they should rather wait for a new release of Kubernetes that will also contain new versions of add-ons. So, how should I really install addons, f.ex. cluster-loadbalancing, with Kubernetes 1.1?
Not presently. This is being tracked in issue 9215. As is pointed out in the issue, use of iptables makes this non-trivial.
###Instruction: We'd like to know if there is a way to get service-level monitoring parameters (e.g. requests/sec, latency per request, etc.) from a Kubernetes Service. I understand that if a Kubernetes service is created with type LoadBalancer, then we can leverage the cloud provider interfaces for those metrics; however, I'd like to know if there is any provision to get the above metrics at the service or container level without any latency. ### Response: Not presently. This is being tracked in issue 9215. As is pointed out in the issue, use of iptables makes this non-trivial.
We'd like to know if there is a way to get service-level monitoring parameters (e.g. requests/sec, latency per request, etc.) from a Kubernetes Service. I understand that if a Kubernetes service is created with type LoadBalancer, then we can leverage the cloud provider interfaces for those metrics; however, I'd like to know if there is any provision to get the above metrics at the service or container level without any latency.
Set content-type to application/json-patch+json and specify the patch in http://jsonpatch.org format. $ cat > patch.json <<EOF [ { "op": "add", "path": "/metadata/labels/hello", "value": "world" } ] EOF $ curl --request PATCH --data "$(cat patch.json)" -H "Content-Type:application/json-patch+json" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME
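The patch document itself can be built and sanity-checked locally before sending it with curl as in the answer; a small sketch:

```shell
# Write the JSON Patch (RFC 6902) document and check it contains the
# operation we expect before POSTing it to the API server.
cat > patch.json <<'EOF'
[
  { "op": "add", "path": "/metadata/labels/hello", "value": "world" }
]
EOF
grep -q '"op": "add"' patch.json && echo "patch looks well-formed"
```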
###Instruction: With the command line, I can add a label as below kubectl label pod POD_NAME KEY1=VALUE1 How could I do that from the kubernetes API? I guess it can be done by PATCH /api/v1/namespaces/{namespace}/pods/{name} Here is pod.json { "apiVersion": "v1", "kind": "Pod", "metadata": { "labels": { "key1": "value1" } } } I tried with the following command KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token) curl --request PATCH --insecure \ --header "Authorization: Bearer $KUBE_TOKEN" \ --data "$(cat pod.json)" \ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME And it returns { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "the server responded with the status code 415 but did not return more information", "details": {}, "code": 415 } ### Response: Set content-type to application/json-patch+json and specify the patch in http://jsonpatch.org format. $ cat > patch.json <<EOF [ { "op": "add", "path": "/metadata/labels/hello", "value": "world" } ] EOF $ curl --request PATCH --data "$(cat patch.json)" -H "Content-Type:application/json-patch+json" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME
With the command line, I can add a label as below kubectl label pod POD_NAME KEY1=VALUE1 How could I do that from the kubernetes API? I guess it can be done by PATCH /api/v1/namespaces/{namespace}/pods/{name} Here is pod.json { "apiVersion": "v1", "kind": "Pod", "metadata": { "labels": { "key1": "value1" } } } I tried with the following command KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token) curl --request PATCH --insecure \ --header "Authorization: Bearer $KUBE_TOKEN" \ --data "$(cat pod.json)" \ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME And it returns { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "the server responded with the status code 415 but did not return more information", "details": {}, "code": 415 }
According to this code, it seems that only registries on the 10.0.0.0/8 network can be insecure; is your registry in this range? What about setting EXTRA_DOCKER_OPTS="--insecure-registry YOUR_REGISTRY_IP" manually in the docker environment file? Is that possible for you?
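A sketch of the second suggestion: appending the flag to a Docker environment file. The file path and registry address here are placeholders; adjust for your distro (e.g. /etc/default/docker on Ubuntu) and restart the Docker daemon afterwards for the flag to take effect:

```shell
# Append the insecure-registry flag to a Docker environment file.
# Both the path and the registry address below are assumptions.
envfile="./docker.env"   # e.g. /etc/default/docker in real use
echo 'EXTRA_DOCKER_OPTS="--insecure-registry docker-registry.example.com:5000"' >> "$envfile"
grep insecure-registry "$envfile"
```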
###Instruction: I am trialling Kubernetes on AWS, and I have a cluster set up, but I am having trouble creating an application by pulling a docker image from an insecure repo. When I created the cluster, I ensured that the environment variable KUBE_ENABLE_INSECURE_REGISTRY was set to true. But I still don't seem to be able to pull from this repo. The logs show (edited application name and registry URL): Error syncing pod, skipping: failed to "StartContainer" for "" with ErrImagePull: "API error (500): unable to ping registry endpoint https://docker-registry..com:5000/v0/\nv2 ping attempt failed with error: Get https://docker-registry..com:5000/v2/: EOF\n v1 ping attempt failed with error: Get https://docker-registry.*.com:5000/v1/_ping: EOF\n" Can anyone please advise on this? Thanks ### Response: According to this code, it seems that only registries on the 10.0.0.0/8 network can be insecure; is your registry in this range? What about setting EXTRA_DOCKER_OPTS="--insecure-registry YOUR_REGISTRY_IP" manually in the docker environment file? Is that possible for you?
I am trialling Kubernetes on AWS, and I have a cluster set up, but I am having trouble creating an application by pulling a docker image from an insecure repo. When I created the cluster, I ensured that the environment variable KUBE_ENABLE_INSECURE_REGISTRY was set to true. But I still don't seem to be able to pull from this repo. The logs show (edited application name and registry URL): Error syncing pod, skipping: failed to "StartContainer" for "" with ErrImagePull: "API error (500): unable to ping registry endpoint https://docker-registry..com:5000/v0/\nv2 ping attempt failed with error: Get https://docker-registry..com:5000/v2/: EOF\n v1 ping attempt failed with error: Get https://docker-registry.*.com:5000/v1/_ping: EOF\n" Can anyone please advise on this? Thanks
Kubernetes documentation for docker is outdated for now. Some elements need to be added. This kubernetes issue is the one to follow. You can find the answer for the setup at this comment. It's working for me with the DNS/Dashboard addons. I have not done a complete test of all the features, but complex applications are running on it (i.e. not just hello-world applications). If you are interested, you can find some projects that try to solve this setup, which is not so trivial, if you don't have time: hive kmachine Note I don't put any setup here as it will certainly be outdated soon ... Kubernetes documentation is the right place (and for now the issue I pointed you to ^^)
###Instruction: I am attempting to set up Kubernetes locally using a docker instance. I am following this documentation but get stuck at the point of creating a new service and exposing the ports. I have the docker container pulled and I have a kubectl available. When I run the command kubectl get nodes --show-labels I get the following |NAME | STATUS | AGE | LABELS | |-----------|---------|--------|--------------------------------------| |127.0.0.1 | Ready | 1h | kubernetes.io/hostname=127.0.0.1 | I now create a new service with kubectl run nginx --image=nginx --port=80 as per the docs. When I run docker ps I see a container that's been created using my local nginx:latest image. CONTAINER_ID: 4192d1b423ec IMAGE: nginx COMMAND: "nginx -g 'daemon off'" CREATED: 37 minutes ago STATUS: Up 37 minutes NAMES: k8s_nginx.aab52821_nginx-198147104-qop91_default_f1cf5d8a-ef2d-11e5-b527-f0def1de109f_940ee216 The next step is where I'm having problems: kubectl expose rc nginx --port=80 is supposed to expose the nginx image as a kubernetes service on port 80. I get this in the terminal. Error from server: replicationcontrollers "nginx" not found So I started reading about replicationcontrollers; I understand the concepts but I do not know how they are configured or set up. This got me thinking, since I'm following what should be an idiot-proof setup guide. Can anyone help me with this? I have added my docker and kubernetes versions below. 
Version info Docker version (Local Ubuntu 15.10) Client: Version: 1.10.3 API version: 1.22 Go version: go1.5.3 Git commit: 20f81dd Built: Thu Mar 10 15:59:07 2016 OS/Arch: linux/amd64 Server: Version: 1.10.3 API version: 1.22 Go version: go1.5.3 Git commit: 20f81dd Built: Thu Mar 10 15:59:07 2016 OS/Arch: linux/amd64 Kubernetes version Client Version: version.Info{ Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean" } Server Version: version.Info{ Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean" } ### Response: Kubernetes documentation for docker is outdated for now. Some elements need to be added. This kubernetes issue is the one to follow. You can find the answer for the setup at this comment. It's working for me with the DNS/Dashboard addons. I have not done a complete test of all the features, but complex applications are running on it (i.e. not just hello-world applications). If you are interested, you can find some projects that try to solve this setup, which is not so trivial, if you don't have time: hive kmachine Note I don't put any setup here as it will certainly be outdated soon ... Kubernetes documentation is the good place (and for now the issue I pointed you to ^^)
I am attempting to set up Kubernetes locally using a docker instance. I am following this documentation but get stuck at the point of creating a new service and exposing the ports. I have the docker container pulled and I have a kubectl available. When I run the command kubectl get nodes --show-labels I get the following |NAME | STATUS | AGE | LABELS | |-----------|---------|--------|--------------------------------------| |127.0.0.1 | Ready | 1h | kubernetes.io/hostname=127.0.0.1 | I now create a new service with kubectl run nginx --image=nginx --port=80 as per the docs. When I run docker ps I see a container that's been created using my local nginx:latest image. CONTAINER_ID: 4192d1b423ec IMAGE: nginx COMMAND: "nginx -g 'daemon off'" CREATED: 37 minutes ago STATUS: Up 37 minutes NAMES: k8s_nginx.aab52821_nginx-198147104-qop91_default_f1cf5d8a-ef2d-11e5-b527-f0def1de109f_940ee216 The next step is where I'm having problems: kubectl expose rc nginx --port=80 is supposed to expose the nginx image as a kubernetes service on port 80. I get this in the terminal. Error from server: replicationcontrollers "nginx" not found So I started reading about replicationcontrollers; I understand the concepts but I do not know how they are configured or set up. This got me thinking, since I'm following what should be an idiot-proof setup guide. Can anyone help me with this? I have added my docker and kubernetes versions below. 
Version info Docker version (Local Ubuntu 15.10) Client: Version: 1.10.3 API version: 1.22 Go version: go1.5.3 Git commit: 20f81dd Built: Thu Mar 10 15:59:07 2016 OS/Arch: linux/amd64 Server: Version: 1.10.3 API version: 1.22 Go version: go1.5.3 Git commit: 20f81dd Built: Thu Mar 10 15:59:07 2016 OS/Arch: linux/amd64 Kubernetes version Client Version: version.Info{ Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean" } Server Version: version.Info{ Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean" }
You can use the downward API to figure out your own pod name from within the sidecar, and then retrieve your own pod from the apiserver to look up the exit status. Let me know how this goes.
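A sketch of what that sidecar loop could look like: the pod name comes in via the downward API, and the sidecar polls its own pod until the main container has terminated. kubectl is stubbed here (always answering "Completed") so the loop can be traced without a cluster; the container name and jsonpath follow the containerStatuses shown in the question:

```shell
# Stub kubectl: pretend the main container has already terminated.
# A real sidecar would query the API server for its own pod's status.
kubectl() { echo "Completed"; }

POD_NAME="${POD_NAME:-job-pod}"   # injected via the downward API in a real pod
while true; do
  state="$(kubectl get pod "$POD_NAME" \
    -o jsonpath='{.status.containerStatuses[?(@.name=="pod-template")].state.terminated.reason}')"
  [ "$state" = "Completed" ] && break
  sleep 5
done
echo "main container finished; sidecar exiting"
```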
###Instruction: We use Kubernetes Jobs for a lot of batch computing here and I'd like to instrument each Job with a monitoring sidecar to update a centralized tracking system with the progress of a job. The only problem is, I can't figure out what the semantics are (or are supposed to be) of multiple containers in a job. I gave it a shot anyways (with an alpine sidecar that printed "hello" every 1 sec) and after my main task completed, the Jobs are considered Successful and the kubectl get pods in Kubernetes 1.2.0 shows: NAME READY STATUS RESTARTS AGE job-69541b2b2c0189ba82529830fe6064bd-ddt2b 1/2 Completed 0 4m job-c53e78aee371403fe5d479ef69485a3d-4qtli 1/2 Completed 0 4m job-df9a48b2fc89c75d50b298a43ca2c8d3-9r0te 1/2 Completed 0 4m job-e98fb7df5e78fc3ccd5add85f8825471-eghtw 1/2 Completed 0 4m And if I describe one of those pods State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 24 Mar 2016 11:59:19 -0700 Finished: Thu, 24 Mar 2016 11:59:21 -0700 Then GETing the yaml of the job shows information per container: status: conditions: - lastProbeTime: null lastTransitionTime: 2016-03-24T18:59:29Z message: 'containers with unready status: [pod-template]' reason: ContainersNotReady status: "False" type: Ready containerStatuses: - containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937 image: luigi-reduce:0.2 imageID: docker://sha256:5a5e15390ef8e89a450dac7f85a9821fb86a33b1b7daeab9f116be252424db70 lastState: {} name: pod-template ready: false restartCount: 0 state: terminated: containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937 exitCode: 0 finishedAt: 2016-03-24T18:59:30Z reason: Completed startedAt: 2016-03-24T18:59:29Z - containerID: docker://3d2b51436e435e0b887af92c420d175fafbeb8441753e378eb77d009a38b7e1e image: alpine imageID: docker://sha256:70c557e50ed630deed07cbb0dc4d28aa0f2a485cf7af124cc48f06bce83f784b lastState: {} name: sidecar ready: true restartCount: 0 state: running: startedAt: 
2016-03-24T18:59:31Z hostIP: 10.2.113.74 phase: Running So it looks like my sidecar would need to watch the main process (how?) and exit gracefully once it detects it is alone in the pod? If this is correct, then are there best practices/patterns for this (should the sidecar exit with the return code of the main container? but how does it get that?)? ** Update ** After further experimentation, I've also discovered the following: If there are two containers in a pod, then it is not considered successful until all containers in the pod return with exit code 0. Additionally, if restartPolicy: OnFailure is set on the pod spec, then any container in the pod that terminates with a non-zero exit code will be restarted in the same pod (this could be useful for a monitoring sidecar to count the number of retries and delete the job after a certain number, to work around the lack of max-retries currently available in Kubernetes jobs). ### Response: You can use the downward API to figure out your own pod name from within the sidecar, and then retrieve your own pod from the apiserver to look up the exit status. Let me know how this goes.
We use Kubernetes Jobs for a lot of batch computing here and I'd like to instrument each Job with a monitoring sidecar to update a centralized tracking system with the progress of a job. The only problem is, I can't figure out what the semantics are (or are supposed to be) of multiple containers in a job. I gave it a shot anyways (with an alpine sidecar that printed "hello" every 1 sec) and after my main task completed, the Jobs are considered Successful and the kubectl get pods in Kubernetes 1.2.0 shows: NAME READY STATUS RESTARTS AGE job-69541b2b2c0189ba82529830fe6064bd-ddt2b 1/2 Completed 0 4m job-c53e78aee371403fe5d479ef69485a3d-4qtli 1/2 Completed 0 4m job-df9a48b2fc89c75d50b298a43ca2c8d3-9r0te 1/2 Completed 0 4m job-e98fb7df5e78fc3ccd5add85f8825471-eghtw 1/2 Completed 0 4m And if I describe one of those pods State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 24 Mar 2016 11:59:19 -0700 Finished: Thu, 24 Mar 2016 11:59:21 -0700 Then GETing the yaml of the job shows information per container: status: conditions: - lastProbeTime: null lastTransitionTime: 2016-03-24T18:59:29Z message: 'containers with unready status: [pod-template]' reason: ContainersNotReady status: "False" type: Ready containerStatuses: - containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937 image: luigi-reduce:0.2 imageID: docker://sha256:5a5e15390ef8e89a450dac7f85a9821fb86a33b1b7daeab9f116be252424db70 lastState: {} name: pod-template ready: false restartCount: 0 state: terminated: containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937 exitCode: 0 finishedAt: 2016-03-24T18:59:30Z reason: Completed startedAt: 2016-03-24T18:59:29Z - containerID: docker://3d2b51436e435e0b887af92c420d175fafbeb8441753e378eb77d009a38b7e1e image: alpine imageID: docker://sha256:70c557e50ed630deed07cbb0dc4d28aa0f2a485cf7af124cc48f06bce83f784b lastState: {} name: sidecar ready: true restartCount: 0 state: running: startedAt: 
2016-03-24T18:59:31Z hostIP: 10.2.113.74 phase: Running So it looks like my sidecar would need to watch the main process (how?) and exit gracefully once it detects it is alone in the pod? If this is correct, then are there best practices/patterns for this (should the sidecar exit with the return code of the main container? but how does it get that?)? ** Update ** After further experimentation, I've also discovered the following: If there are two containers in a pod, then it is not considered successful until all containers in the pod return with exit code 0. Additionally, if restartPolicy: OnFailure is set on the pod spec, then any container in the pod that terminates with a non-zero exit code will be restarted in the same pod (this could be useful for a monitoring sidecar to count the number of retries and delete the job after a certain number, to work around there being no max-retries currently available in Kubernetes jobs).
I actually played around with the resources and limits of each container, deactivated heapster in the kube-system namespace, and everything now runs fine!
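For reference, per-container resources and limits look roughly like this (a sketch; the values are illustrative, not the ones actually used):

```yaml
# Fragment of a pod/RC spec. On tiny f1-micro nodes, explicit requests keep the
# scheduler from overcommitting, and limits define when a container is
# OOM-killed and restarted.
containers:
- name: grafana
  image: grafana/grafana       # placeholder image
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      cpu: 250m
      memory: 128Mi            # exceeding this gets the container killed/restarted
```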
###Instruction: I've started running Kubernetes on GCE. I set up a 3x f1-micro cluster and I'm running: * influxdb x1 * Grafana x1 * nginx x1 * phpfpm7 x1 * golang x2 * redis x1 I keep having all my containers restarting regularly; within 1h, grafana restarted 4x, redis 3x, my golangs 2x, nginx 4x. On my local machine, they never restart and work perfectly fine... The logs don't tell me anything about why they rebooted. For people using Kubernetes, how often do your containers restart? Could it be a perf issue? I have also mounted a persistent disk for grafana and influxdb, but it seems that after each restart, the data is wiped. Any idea? Thanks for your help! ### Response: I actually played around with the resources and limits of each container, deactivated heapster in the kube-system namespace, and everything now runs fine!
I've started running Kubernetes on GCE. I set up a 3x f1-micro cluster and I'm running: * influxdb x1 * Grafana x1 * nginx x1 * phpfpm7 x1 * golang x2 * redis x1 I keep having all my containers restarting regularly; within 1h, grafana restarted 4x, redis 3x, my golangs 2x, nginx 4x. On my local machine, they never restart and work perfectly fine... The logs don't tell me anything about why they rebooted. For people using Kubernetes, how often do your containers restart? Could it be a perf issue? I have also mounted a persistent disk for grafana and influxdb, but it seems that after each restart, the data is wiped. Any idea? Thanks for your help!
Now that kubectl run creates a deployment, you specify the type being exposed as a deployment rather than a replication controller: kubectl expose deployment nginx --target-port=80 --type=NodePort
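The expose command above is roughly equivalent to creating this Service by hand (assuming the deployment's pods carry the `run=nginx` label that kubectl run applies by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort               # matches --type=NodePort
  selector:
    run: nginx                 # label kubectl run puts on the deployment's pods
  ports:
  - port: 80
    targetPort: 80             # matches --target-port=80
```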
###Instruction: I'm a newbie to Kubernetes, and I'm using Google Cloud Container. I just followed the tutorials below: https://cloud.google.com/container-engine/docs/tutorials/http-balancer http://kubernetes.io/docs/hellonode/#create-your-pod In these tutorials, I should get a replication controller after I run "kubectl run", but there are no replication controllers, so I cannot run the "kubectl expose rc" command to open a port. Here is the result of my commands: ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl run nginx --image=nginx --port=80 deployment "nginx" created ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl expose rc nginx --target-port=80 --type=NodePort Error from server: replicationcontrollers "nginx" not found Here is my result when I run "kubectl get rc,svc,ingress,deployments,pods": ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl get rc,svc,ingress,deployments,pods NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.3.240.1 <none> 443/TCP 12m NAME RULE BACKEND ADDRESS AGE basic-ingress - nginx:80 107.178.247.247 12m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx 1 1 1 1 11m NAME READY STATUS RESTARTS AGE nginx-198147104-zgo7m 1/1 Running 0 11m One of my solutions is to create a yaml file which defines the replication controller. But is there any way to create a replication controller via the kubectl run command as in the tutorials above? Thanks, ### Response: Now that kubectl run creates a deployment, you specify the type being exposed as a deployment rather than a replication controller: kubectl expose deployment nginx --target-port=80 --type=NodePort
I'm a newbie to Kubernetes, and I'm using Google Cloud Container. I just followed the tutorials below: https://cloud.google.com/container-engine/docs/tutorials/http-balancer http://kubernetes.io/docs/hellonode/#create-your-pod In these tutorials, I should get a replication controller after I run "kubectl run", but there are no replication controllers, so I cannot run the "kubectl expose rc" command to open a port. Here is the result of my commands: ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl run nginx --image=nginx --port=80 deployment "nginx" created ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl expose rc nginx --target-port=80 --type=NodePort Error from server: replicationcontrollers "nginx" not found Here is my result when I run "kubectl get rc,svc,ingress,deployments,pods": ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl get rc,svc,ingress,deployments,pods NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.3.240.1 <none> 443/TCP 12m NAME RULE BACKEND ADDRESS AGE basic-ingress - nginx:80 107.178.247.247 12m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx 1 1 1 1 11m NAME READY STATUS RESTARTS AGE nginx-198147104-zgo7m 1/1 Running 0 11m One of my solutions is to create a yaml file which defines the replication controller. But is there any way to create a replication controller via the kubectl run command as in the tutorials above? Thanks,
We can use gcsfuse or s3fuse to mount a Google Storage bucket in a Kubernetes pod/container. Before installing fuse in the container, run the container with SYS_ADMIN privileges, like below: $ docker run -it --cap-add SYS_ADMIN --name dev --device /dev/fuse ContainerID/Name /bin/bash Install gcsfuse or s3fuse in the pod/container image. Create a shell script and add the mount command to it. Add the privileged parameter to the YAML file to grant admin capabilities to the pod/container, for example: securityContext: capabilities: {} privileged: true Add a postStart lifecycle hook to the YAML file to mount the bucket after the pod/container starts, for example: lifecycle: postStart: exec: command: - "sh" - "/usr/local/gcsfusemount.sh"
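Putting the pieces above together, a pod spec might look roughly like this (the pod and image names are assumptions; the script path is the one from the postStart example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-pod
spec:
  containers:
  - name: app
    image: my-gcsfuse-image          # placeholder: an image with gcsfuse installed
    securityContext:
      capabilities: {}
      privileged: true               # needed for the fuse mount to work
    lifecycle:
      postStart:
        exec:
          command:                   # runs the gcsfuse mount script after start
          - "sh"
          - "/usr/local/gcsfusemount.sh"
```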
###Instruction: I would like to mount a Google Storage bucket in Google Container Engine using gcsfuse or any other tool/provision. The container runs under Google Container Engine, so we need to use a YAML file to define a few parameters in it. Is there anything that can be used in the .yaml file to build a new replication controller/service with privileged and sys_admin or any other required parameters? ### Response: We can use gcsfuse or s3fuse to mount a Google Storage bucket in a Kubernetes pod/container. Before installing fuse in the container, run the container with SYS_ADMIN privileges, like below: $ docker run -it --cap-add SYS_ADMIN --name dev --device /dev/fuse ContainerID/Name /bin/bash Install gcsfuse or s3fuse in the pod/container image. Create a shell script and add the mount command to it. Add the privileged parameter to the YAML file to grant admin capabilities to the pod/container, for example: securityContext: capabilities: {} privileged: true Add a postStart lifecycle hook to the YAML file to mount the bucket after the pod/container starts, for example: lifecycle: postStart: exec: command: - "sh" - "/usr/local/gcsfusemount.sh"
I would like to mount a Google Storage bucket in Google Container Engine using gcsfuse or any other tool/provision. The container runs under Google Container Engine, so we need to use a YAML file to define a few parameters in it. Is there anything that can be used in the .yaml file to build a new replication controller/service with privileged and sys_admin or any other required parameters?
Both functionally do the same thing: orchestrate Docker containers. But obviously they do it in different ways, and what you can easily achieve with one might prove difficult in the other, and vice versa. Mesos has a higher complexity and learning curve, in my opinion. Kubernetes is relatively simpler and easier to grasp. You can literally spawn your own Kube master and minions by running one command and specifying the provider: Vagrant, AWS, etc. Kubernetes can also be integrated into Mesos, so there is also the possibility of trying both. For the Hadoop-specific use case you mention, Mesos might have an edge; it might integrate better with the Apache ecosystem, as Mesos and Spark were created by the same minds. Final thoughts: start with Kube, progressively exploring how to make it work for your use case. Then, after you have a good grasp on it, do the same with Mesos. You might end up liking pieces of each and have them coexist, or find that Kube is enough for what you need.
###Instruction: We have a product which is described in some Dockerfiles, which can create the necessary Docker containers. Some containers will just run some basic apps, while other containers will run clusters (Hadoop). Now the question is which cluster manager to use: Kubernetes, Apache Mesos, or both? I read that Kubernetes is good for 100% containerized environments, while Apache Mesos is better for environments which are partly containerized and partly not. But is Apache Mesos better for running Hadoop in Docker? Our environment is composed only of Docker containers, but some with a Hadoop cluster and some with apps. What would be best? ### Response: Both functionally do the same thing: orchestrate Docker containers. But obviously they do it in different ways, and what you can easily achieve with one might prove difficult in the other, and vice versa. Mesos has a higher complexity and learning curve, in my opinion. Kubernetes is relatively simpler and easier to grasp. You can literally spawn your own Kube master and minions by running one command and specifying the provider: Vagrant, AWS, etc. Kubernetes can also be integrated into Mesos, so there is also the possibility of trying both. For the Hadoop-specific use case you mention, Mesos might have an edge; it might integrate better with the Apache ecosystem, as Mesos and Spark were created by the same minds. Final thoughts: start with Kube, progressively exploring how to make it work for your use case. Then, after you have a good grasp on it, do the same with Mesos. You might end up liking pieces of each and have them coexist, or find that Kube is enough for what you need.
We have a product which is described in some Dockerfiles, which can create the necessary Docker containers. Some containers will just run some basic apps, while other containers will run clusters (Hadoop). Now the question is which cluster manager to use: Kubernetes, Apache Mesos, or both? I read that Kubernetes is good for 100% containerized environments, while Apache Mesos is better for environments which are partly containerized and partly not. But is Apache Mesos better for running Hadoop in Docker? Our environment is composed only of Docker containers, but some with a Hadoop cluster and some with apps. What would be best?
You need to register the master as a node if you want the master to actually host any pods, by passing the --register-node=true flag to the kubelet running on the master. The CoreOS tutorial does not register the master as a node because that's the ideal scenario.
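Concretely, in a kubelet options file like the /etc/kubelet.env shown in the question, that means flipping the flag (a fragment, with the other flags left unchanged):

```
# /etc/kubelet.env on the master (fragment)
# --register-node=true (instead of false) lets the master register itself
# with the apiserver, so the scheduler can place pods on it.
KUBE_KUBELET_OPTS="\
  --api_servers=http://127.0.0.1:8080 \
  --register-node=true \
  --config=/etc/kubernetes/manifests"
```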
###Instruction: I'm trying to set up a highly available Kubernetes cluster with Packer and Terraform instead of the kube-up.sh scripts. Reason: I want bigger machines, a different setup, etc. Most of my configuration comes from the CoreOS Kubernetes deployment tutorial. Something about my setup: CoreOS. Everything runs on GCE. I've got 3 etcd instances and one SkyDNS instance. They are working and able to reach each other. I have one instance as the Kubernetes master that is running the kubelet with manifests. My actual problem right now is that the kube-api server is not able to connect to itself. I can run a curl command from my host system with a valid response, for /version and others. It is also a little bit strange that 443 and 8080 are not forwarded from docker. Or is this normal behavior? I thought I misconfigured some master endpoints, so I tried localhost and the external IP for all manifests. => Not working. Errors in the kube-api container: I0925 14:51:47.505859 1 plugins.go:69] No cloud provider specified. I0925 14:51:47.973450 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767. 
E0925 14:51:48.009367 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.010730 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.010996 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.011083 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.012697 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.012753 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused [restful] 2015/09/25 14:51:48 log.go:30: [restful/swagger] listing is available at https://104.155.60.74:443/swaggerapi/ [restful] 2015/09/25 14:51:48 log.go:30: [restful/swagger] https://104.155.60.74:443/swaggerui/ is mapped to folder /swagger-ui/ I0925 14:51:48.136166 1 server.go:441] Serving securely on 0.0.0.0:443 I0925 14:51:48.136248 1 server.go:483] Serving insecurely on 127.0.0.1:8080 The controller container has nearly the same erros. Every other container is fine. 
My config: /etc/kubelet.env KUBE_KUBELET_OPTS="\ --api_servers=http://127.0.0.1:8080 \ --register-node=false \ --allow-privileged=true \ --config=/etc/kubernetes/manifests \ --tls_cert_file=/etc/kubernetes/ssl/apiserver.pem \ --tls_private_key_file=/etc/kubernetes/ssl/apiserver-key.pem \ --cloud-provider=gce \ --cluster_dns=10.10.38.10 \ --cluster_domain=cluster.local \ --cadvisor-port=0" /etc/kubernetes/manifests/ apiVersion: v1 kind: Pod metadata: name: kube-apiserver namespace: kube-system spec: hostNetwork: true containers: - name: kube-apiserver image: gcr.io/google_containers/hyperkube:v1.0.6 command: - /hyperkube - apiserver - --bind-address=0.0.0.0 - --etcd_servers=http://10.10.125.10:2379,http://10.10.82.201:2379,http://10.10.63.185:2379 - --allow-privileged=true - --service-cluster-ip-range=10.40.0.0/16 - --secure_port=443 - --advertise-address=104.155.60.74 - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem - --client-ca-file=/etc/kubernetes/ssl/ca.pem - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem ports: - containerPort: 443 hostPort: 443 name: https - containerPort: 8080 hostPort: 8080 name: local volumeMounts: - mountPath: /etc/kubernetes/ssl name: ssl-certs-kubernetes readOnly: true - mountPath: /etc/ssl/certs name: ssl-certs-host readOnly: true volumes: - hostPath: path: /etc/kubernetes/ssl name: ssl-certs-kubernetes - hostPath: path: /usr/share/ca-certificates name: ssl-certs-host /etc/kubernetes/manifests/kube-controller-manager.yml apiVersion: v1 kind: Pod metadata: name: kube-controller-manager namespace: kube-system spec: containers: - name: kube-controller-manager image: gcr.io/google_containers/hyperkube:v1.0.6 command: - /hyperkube - controller-manager - --master=https://104.155.60.74:443 - 
--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem - --root-ca-file=/etc/kubernetes/ssl/ca.pem - --cloud_provider=gce livenessProbe: httpGet: host: 127.0.0.1 path: /healthz port: 10252 initialDelaySeconds: 15 timeoutSeconds: 1 volumeMounts: - mountPath: /etc/kubernetes/ssl name: ssl-certs-kubernetes readOnly: true - mountPath: /etc/ssl/certs name: ssl-certs-host readOnly: true hostNetwork: true volumes: - hostPath: path: /etc/kubernetes/ssl name: ssl-certs-kubernetes - hostPath: path: /usr/share/ca-certificates name: ssl-certs-host docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3e37b2ea2277 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube controll 31 minutes ago Up 31 minutes k8s_kube-controller-manager.afecd3c9_kube-controller-manager-kubernetes-km0.c.stylelounge-1042.inte rnal_kube-system_621db46bf7b0764eaa46d17dfba8e90f_519cd0da 43917185d91b gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube proxy -- 31 minutes ago Up 31 minutes k8s_kube-proxy.a2db3197_kube-proxy-kubernetes-km0.c.stylelounge-1042.internal_kube-system_67c22e99a eb1ef9c2997c942cfbe48b9_c82a8a60 f548279e90f9 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube apiserve 31 minutes ago Up 31 minutes k8s_kube-apiserver.2bcb2c35_kube-apiserver-kubernetes-km0.c.stylelounge-1042.internal_kube-system_8 67c500deb54965609810fd0771fa92d_a306feae 94b1942a09f0 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube schedule 31 minutes ago Up 31 minutes k8s_kube-scheduler.603b59f4_kube-scheduler-kubernetes-km0.c.stylelounge-1042.internal_kube-system_3 9e2c582fd067b44ebe8cefaee036c0e_e0ddf6a2 9de4a4264ef6 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 31 minutes ago Up 31 minutes k8s_controller-manager-elector.89f472b4_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_k ube-system_e23fc0902c7e6da7b315ad34130b9807_7c8d2901 af2df45f4081 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 31 minutes ago Up 31 minutes 
k8s_scheduler-elector.608b6780_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_kube-syste m_e23fc0902c7e6da7b315ad34130b9807_b11e601d ac0e068456c7 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-controller-manager-kubernetes-km0.c.stylelounge-1042.internal_kube-system_621d b46bf7b0764eaa46d17dfba8e90f_e9760e28 2773ba48d011 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_kube-system_e23fc0902c7e6 da7b315ad34130b9807_4fba9edb 987531f1951d gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-apiserver-kubernetes-km0.c.stylelounge-1042.internal_kube-system_867c500deb549 65609810fd0771fa92d_d15d2d66 f4453b948186 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-proxy-kubernetes-km0.c.stylelounge-1042.internal_kube-system_67c22e99aeb1ef9c2 997c942cfbe48b9_07e540c8 ce01cfda007e gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-scheduler-kubernetes-km0.c.stylelounge-1042.internal_kube-system_39e2c582fd067 b44ebe8cefaee036c0e_e6cb6500 Here the curl command: kubernetes-km0 ~ # docker logs a404a310b55e I0928 09:14:05.019135 1 plugins.go:69] No cloud provider specified. I0928 09:14:05.192451 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767. I0928 09:14:05.192900 1 master.go:295] Will report 10.10.247.127 as public IP address. 
E0928 09:14:05.226222 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226428 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226479 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226593 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226908 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused [restful] 2015/09/28 09:14:05 log.go:30: [restful/swagger] listing is available at https://10.10.247.127:443/swaggerapi/ [restful] 2015/09/28 09:14:05 log.go:30: [restful/swagger] https://10.10.247.127:443/swaggerui/ is mapped to folder /swagger-ui/ E0928 09:14:05.232632 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused I0928 09:14:05.368697 1 server.go:441] Serving securely on 0.0.0.0:443 I0928 09:14:05.368788 1 server.go:483] Serving insecurely on 127.0.0.1:8080 kubernetes-km0 ~ # curl http://127.0.0.1:8080/api/v1/limitranges { "kind": "LimitRangeList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/limitranges", "resourceVersion": "100" }, "items": [] } ### Response: You need to register the master as a node if you want the master to actually host any pods with the --register-node=true flag to the kubelet runnning on master. The CoreOs tutorial does not register the master as a node because thats the ideal scenario.
I'm trying to set up a highly available Kubernetes cluster with Packer and Terraform instead of the kube-up.sh scripts. Reason: I want bigger machines, a different setup, etc. Most of my configuration comes from the CoreOS Kubernetes deployment tutorial. Something about my setup: CoreOS. Everything runs on GCE. I've got 3 etcd instances and one SkyDNS instance. They are working and able to reach each other. I have one instance as the Kubernetes master that is running the kubelet with manifests. My actual problem right now is that the kube-api server is not able to connect to itself. I can run a curl command from my host system with a valid response, for /version and others. It is also a little bit strange that 443 and 8080 are not forwarded from docker. Or is this normal behavior? I thought I misconfigured some master endpoints, so I tried localhost and the external IP for all manifests. => Not working. Errors in the kube-api container: I0925 14:51:47.505859 1 plugins.go:69] No cloud provider specified. I0925 14:51:47.973450 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767. 
E0925 14:51:48.009367 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.010730 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.010996 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.011083 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.012697 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused E0925 14:51:48.012753 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused [restful] 2015/09/25 14:51:48 log.go:30: [restful/swagger] listing is available at https://104.155.60.74:443/swaggerapi/ [restful] 2015/09/25 14:51:48 log.go:30: [restful/swagger] https://104.155.60.74:443/swaggerui/ is mapped to folder /swagger-ui/ I0925 14:51:48.136166 1 server.go:441] Serving securely on 0.0.0.0:443 I0925 14:51:48.136248 1 server.go:483] Serving insecurely on 127.0.0.1:8080 The controller container has nearly the same erros. Every other container is fine. 
My config: /etc/kubelet.env KUBE_KUBELET_OPTS="\ --api_servers=http://127.0.0.1:8080 \ --register-node=false \ --allow-privileged=true \ --config=/etc/kubernetes/manifests \ --tls_cert_file=/etc/kubernetes/ssl/apiserver.pem \ --tls_private_key_file=/etc/kubernetes/ssl/apiserver-key.pem \ --cloud-provider=gce \ --cluster_dns=10.10.38.10 \ --cluster_domain=cluster.local \ --cadvisor-port=0" /etc/kubernetes/manifests/ apiVersion: v1 kind: Pod metadata: name: kube-apiserver namespace: kube-system spec: hostNetwork: true containers: - name: kube-apiserver image: gcr.io/google_containers/hyperkube:v1.0.6 command: - /hyperkube - apiserver - --bind-address=0.0.0.0 - --etcd_servers=http://10.10.125.10:2379,http://10.10.82.201:2379,http://10.10.63.185:2379 - --allow-privileged=true - --service-cluster-ip-range=10.40.0.0/16 - --secure_port=443 - --advertise-address=104.155.60.74 - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem - --client-ca-file=/etc/kubernetes/ssl/ca.pem - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem ports: - containerPort: 443 hostPort: 443 name: https - containerPort: 8080 hostPort: 8080 name: local volumeMounts: - mountPath: /etc/kubernetes/ssl name: ssl-certs-kubernetes readOnly: true - mountPath: /etc/ssl/certs name: ssl-certs-host readOnly: true volumes: - hostPath: path: /etc/kubernetes/ssl name: ssl-certs-kubernetes - hostPath: path: /usr/share/ca-certificates name: ssl-certs-host /etc/kubernetes/manifests/kube-controller-manager.yml apiVersion: v1 kind: Pod metadata: name: kube-controller-manager namespace: kube-system spec: containers: - name: kube-controller-manager image: gcr.io/google_containers/hyperkube:v1.0.6 command: - /hyperkube - controller-manager - --master=https://104.155.60.74:443 - 
--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem - --root-ca-file=/etc/kubernetes/ssl/ca.pem - --cloud_provider=gce livenessProbe: httpGet: host: 127.0.0.1 path: /healthz port: 10252 initialDelaySeconds: 15 timeoutSeconds: 1 volumeMounts: - mountPath: /etc/kubernetes/ssl name: ssl-certs-kubernetes readOnly: true - mountPath: /etc/ssl/certs name: ssl-certs-host readOnly: true hostNetwork: true volumes: - hostPath: path: /etc/kubernetes/ssl name: ssl-certs-kubernetes - hostPath: path: /usr/share/ca-certificates name: ssl-certs-host docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3e37b2ea2277 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube controll 31 minutes ago Up 31 minutes k8s_kube-controller-manager.afecd3c9_kube-controller-manager-kubernetes-km0.c.stylelounge-1042.inte rnal_kube-system_621db46bf7b0764eaa46d17dfba8e90f_519cd0da 43917185d91b gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube proxy -- 31 minutes ago Up 31 minutes k8s_kube-proxy.a2db3197_kube-proxy-kubernetes-km0.c.stylelounge-1042.internal_kube-system_67c22e99a eb1ef9c2997c942cfbe48b9_c82a8a60 f548279e90f9 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube apiserve 31 minutes ago Up 31 minutes k8s_kube-apiserver.2bcb2c35_kube-apiserver-kubernetes-km0.c.stylelounge-1042.internal_kube-system_8 67c500deb54965609810fd0771fa92d_a306feae 94b1942a09f0 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube schedule 31 minutes ago Up 31 minutes k8s_kube-scheduler.603b59f4_kube-scheduler-kubernetes-km0.c.stylelounge-1042.internal_kube-system_3 9e2c582fd067b44ebe8cefaee036c0e_e0ddf6a2 9de4a4264ef6 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 31 minutes ago Up 31 minutes k8s_controller-manager-elector.89f472b4_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_k ube-system_e23fc0902c7e6da7b315ad34130b9807_7c8d2901 af2df45f4081 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 31 minutes ago Up 31 minutes 
k8s_scheduler-elector.608b6780_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_kube-syste m_e23fc0902c7e6da7b315ad34130b9807_b11e601d ac0e068456c7 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-controller-manager-kubernetes-km0.c.stylelounge-1042.internal_kube-system_621d b46bf7b0764eaa46d17dfba8e90f_e9760e28 2773ba48d011 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_kube-system_e23fc0902c7e6 da7b315ad34130b9807_4fba9edb 987531f1951d gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-apiserver-kubernetes-km0.c.stylelounge-1042.internal_kube-system_867c500deb549 65609810fd0771fa92d_d15d2d66 f4453b948186 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-proxy-kubernetes-km0.c.stylelounge-1042.internal_kube-system_67c22e99aeb1ef9c2 997c942cfbe48b9_07e540c8 ce01cfda007e gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-scheduler-kubernetes-km0.c.stylelounge-1042.internal_kube-system_39e2c582fd067 b44ebe8cefaee036c0e_e6cb6500 Here the curl command: kubernetes-km0 ~ # docker logs a404a310b55e I0928 09:14:05.019135 1 plugins.go:69] No cloud provider specified. I0928 09:14:05.192451 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767. I0928 09:14:05.192900 1 master.go:295] Will report 10.10.247.127 as public IP address. 
E0928 09:14:05.226222 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226428 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226479 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226593 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused E0928 09:14:05.226908 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused [restful] 2015/09/28 09:14:05 log.go:30: [restful/swagger] listing is available at https://10.10.247.127:443/swaggerapi/ [restful] 2015/09/28 09:14:05 log.go:30: [restful/swagger] https://10.10.247.127:443/swaggerui/ is mapped to folder /swagger-ui/ E0928 09:14:05.232632 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused I0928 09:14:05.368697 1 server.go:441] Serving securely on 0.0.0.0:443 I0928 09:14:05.368788 1 server.go:483] Serving insecurely on 127.0.0.1:8080 kubernetes-km0 ~ # curl http://127.0.0.1:8080/api/v1/limitranges { "kind": "LimitRangeList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/limitranges", "resourceVersion": "100" }, "items": [] }
No, cascading delete is client-side only. There are plans to move that logic server-side in future versions.
###Instruction: In the kubectl tool there is the option: --cascade[=true]: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true. But when I do the 'delete' using the API (1.2), by default the deletion doesn't cascade. Is there a simple way to do it with the API? ### Response: No, cascading delete is client-side only. There are plans to move that logic server-side in future versions.
In the kubectl tool there is the option: --cascade[=true]: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true. But when I do the delete using the API (1.2), the deletion doesn't cascade to the managed resources by default. Is there a simple way to do it with the API?
Add :z or :Z as a workaround to your mount path so it works properly with SELinux: volumeMounts: - name: etcd-storage mountPath: /var/etcd/data:z
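For context, a minimal pod sketch showing where that suffix goes (the image and names are hypothetical; the :z/:Z suffix is passed through to Docker, which relabels the volume for SELinux):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd
spec:
  containers:
  - name: etcd
    image: quay.io/coreos/etcd        # hypothetical image
    volumeMounts:
    - name: etcd-storage
      mountPath: /var/etcd/data:z     # :z = shared relabel, :Z = private
  volumes:
  - name: etcd-storage
    emptyDir:
      medium: Memory
```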
###Instruction: I have mounted an emptyDir volume with memory medium to a pod. Then I logged into the pod and tried to create a file in that volume path, but I got a permission denied error. touch a.txt touch: cannot touch `a.txt': Permission denied The user I used was root. What could be the reason for that? ### Response: Add :z or :Z as a workaround to your mount path so it works properly with SELinux: volumeMounts: - name: etcd-storage mountPath: /var/etcd/data:z
I have mounted an emptyDir volume with memory medium to a pod. Then I logged into the pod and tried to create a file in that volume path, but I got a permission denied error. touch a.txt touch: cannot touch `a.txt': Permission denied The user I used was root. What could be the reason for that?
Yes, you just need to have the proper client credentials and you can run kubectl from anywhere that has network access to the apiserver. See Sharing Cluster Access with kubeconfig for the instructions to get a kubeconfig file onto your worker node.
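For example, on the worker node you could build a kubeconfig with commands like these (all values are placeholders; take the real server address and credentials from your master's kubeconfig or your cluster setup output):

```shell
kubectl config set-cluster mycluster \
  --server=https://10.245.1.2 \
  --certificate-authority=/path/to/ca.crt
kubectl config set-credentials admin --username=admin --password=secret
kubectl config set-context mycluster --cluster=mycluster --user=admin
kubectl config use-context mycluster

# The cluster can now be queried from the worker itself:
kubectl get nodes
```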
###Instruction: I have my Kubernetes cluster set up and I want to check the nodes from a worker/minion node. Can we run kubectl from a worker/minion node? ### Response: Yes, you just need to have the proper client credentials and you can run kubectl from anywhere that has network access to the apiserver. See Sharing Cluster Access with kubeconfig for the instructions to get a kubeconfig file onto your worker node.
I have my Kubernetes cluster set up and I want to check the nodes from a worker/minion node. Can we run kubectl from a worker/minion node?
It is definitely possible to launch containers on physical machines. In fact, according to the Borg paper (the design of which heavily influenced Container Engine/Kubernetes), this is the norm within Google's own infrastructure: Each task maps to a set of Linux processes running in a container on a machine [62]. The vast majority of the Borg workload does not run inside virtual machines (VMs), because we don’t want to pay the cost of virtualization. Also, the system was designed at a time when we had a considerable investment in processors with no virtualization support in hardware. Since Container Engine is hosted within GCP, VMs are used to facilitate dynamic provisioning. However, these VMs are long lived compared to the lifetime of containers scheduled onto them. Pods of containers may be scheduled on and off of these VMs and jobs run to completion. However, VMs are torn down when clusters are upgraded or re-sized.
###Instruction: When a Google Container Engine cluster is created, Container Engine creates a Compute Engine managed instance group to manage the created instances. These instances are from Google Compute Engine, which means they are virtual machines. But we read in the doc page: "VMs are heavyweight and non-portable. The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization". Isn't that a contradiction? Correct me if I'm wrong. We use containers because they are extremely fast (both in boot time and task execution) compared to VMs, and they save a lot of storage space. So if we have one node (VM) that can support 4 containers max, our clients can rapidly launch 4 containers, but beyond this number the gcloud autoscaler will need to launch a new node (VM) to support the upcoming containers, which incurs some task delay. Is it impossible to launch containers on physical machines? And what do you recommend for running time-critical tasks? ### Response: It is definitely possible to launch containers on physical machines. In fact, according to the Borg paper (the design of which heavily influenced Container Engine/Kubernetes), this is the norm within Google's own infrastructure: Each task maps to a set of Linux processes running in a container on a machine [62]. The vast majority of the Borg workload does not run inside virtual machines (VMs), because we don’t want to pay the cost of virtualization. Also, the system was designed at a time when we had a considerable investment in processors with no virtualization support in hardware. Since Container Engine is hosted within GCP, VMs are used to facilitate dynamic provisioning. However, these VMs are long lived compared to the lifetime of containers scheduled onto them. Pods of containers may be scheduled on and off of these VMs and jobs run to completion. However, VMs are torn down when clusters are upgraded or re-sized.
When a Google Container Engine cluster is created, Container Engine creates a Compute Engine managed instance group to manage the created instances. These instances are from Google Compute Engine, which means they are virtual machines. But we read in the doc page: "VMs are heavyweight and non-portable. The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization". Isn't that a contradiction? Correct me if I'm wrong. We use containers because they are extremely fast (both in boot time and task execution) compared to VMs, and they save a lot of storage space. So if we have one node (VM) that can support 4 containers max, our clients can rapidly launch 4 containers, but beyond this number the gcloud autoscaler will need to launch a new node (VM) to support the upcoming containers, which incurs some task delay. Is it impossible to launch containers on physical machines? And what do you recommend for running time-critical tasks?
Likely, you will want to use VirtualBox's port forwarding capabilities. An example from the documentation: VBoxManage modifyvm "MyVM" --natpf1 "k8srule,tcp,,6443,,6443" This forwards port 6443 on all host interfaces to port 6443 of the guest. Port forwarding can also be configured through the VirtualBox UI.
###Instruction: I've created a Kubernetes cluster on my Mac with docker-machine, following the documentation here: http://kubernetes.io/docs/getting-started-guides/docker/ I can access the normal API from inside the instance on 127.0.0.1:8080, but I want to access it externally from my MacBook. I know there is a secure port :6443, but I'm unsure how to set up the credentials to access this port. There are lots of instructions on how to do it on custom installs of Kubernetes, but I don't know how to do it inside the docker containers I'm running. ### Response: Likely, you will want to use VirtualBox's port forwarding capabilities. An example from the documentation: VBoxManage modifyvm "MyVM" --natpf1 "k8srule,tcp,,6443,,6443" This forwards port 6443 on all host interfaces to port 6443 of the guest. Port forwarding can also be configured through the VirtualBox UI.
I've created a Kubernetes cluster on my Mac with docker-machine, following the documentation here: http://kubernetes.io/docs/getting-started-guides/docker/ I can access the normal API from inside the instance on 127.0.0.1:8080, but I want to access it externally from my MacBook. I know there is a secure port :6443, but I'm unsure how to set up the credentials to access this port. There are lots of instructions on how to do it on custom installs of Kubernetes, but I don't know how to do it inside the docker containers I'm running.
Pods started by a replication controller are all treated identically; they don't have any sort of ordinality. If you want to start a group of identical pods and enable an extra feature in just one of them, you should consider using a master election scheme and having just the elected master run the monitoring profiler.
###Instruction: I have a Replication Controller whose size is more than one, and I'd like to embed the application monitoring profiler in only one pod in the replication controller. So I want an index or something that determines which single pod is chosen. Especially in the GKE environment, is there such information? ### Response: Pods started by a replication controller are all treated identically; they don't have any sort of ordinality. If you want to start a group of identical pods and enable an extra feature in just one of them, you should consider using a master election scheme and having just the elected master run the monitoring profiler.
I have a Replication Controller whose size is more than one, and I'd like to embed the application monitoring profiler in only one pod in the replication controller. So I want an index or something that determines which single pod is chosen. Especially in the GKE environment, is there such information?
You will be interested in the parametrized set/templating proposal that will allow you to define indices https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/proposals/templates.md. This will most likely be included in 1.3.
###Instruction: I have a Replication Controller whose size is more than one, and I'd like to embed the application monitoring profiler in only one pod in the replication controller. So I want an index or something that determines which single pod is chosen. Especially in the GKE environment, is there such information? ### Response: You will be interested in the parametrized set/templating proposal that will allow you to define indices https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/proposals/templates.md. This will most likely be included in 1.3.
I have a Replication Controller whose size is more than one, and I'd like to embed the application monitoring profiler in only one pod in the replication controller. So I want an index or something that determines which single pod is chosen. Especially in the GKE environment, is there such information?
After much battling, this one-liner does retrieve the container name: kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.name=="web")].metadata.name}' when this is the known search criteria: items[].metadata.labels.name == "web" and this is the desired field to retrieve: items[].metadata.name : "web-controller-5e6ij"
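If the jsonpath filter expression feels fragile, an alternative is to let the apiserver do the filtering with a label selector and only use jsonpath to pick the name field (this assumes the pods actually carry the label name=web):

```shell
# Server-side filtering by label, client-side extraction of the name:
kubectl get pods -l name=web -o jsonpath='{.items[0].metadata.name}'
```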
###Instruction: How to parse the JSON to retrieve a field from the output of kubectl get pods -o json From the command line I need to obtain the system-generated container name from a Google Cloud cluster ... Here are the salient bits of JSON output from the above command: click here to see entire json output So the topmost JSON key is an array: items[] followed by metadata.labels.name, where the search criteria value of that compound key is "web" (see above image green marks). On a match, I then need to retrieve the field .items[].metadata.name, which happens to have the value: web-controller-5e6ij // I need to retrieve this value Here are docs on jsonpath I want to avoid text parsing the output of kubectl get pods, which is NAME READY STATUS RESTARTS AGE mongo-controller-h714w 1/1 Running 0 12m web-controller-5e6ij 1/1 Running 0 9m The following will correctly parse this get pods command, yet I feel it's too fragile: kubectl get pods | tail -1 | cut -d' ' -f1 ### Response: After much battling, this one-liner does retrieve the container name: kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.name=="web")].metadata.name}' when this is the known search criteria: items[].metadata.labels.name == "web" and this is the desired field to retrieve: items[].metadata.name : "web-controller-5e6ij"
How to parse the JSON to retrieve a field from the output of kubectl get pods -o json From the command line I need to obtain the system-generated container name from a Google Cloud cluster ... Here are the salient bits of JSON output from the above command: click here to see entire json output So the topmost JSON key is an array: items[] followed by metadata.labels.name, where the search criteria value of that compound key is "web" (see above image green marks). On a match, I then need to retrieve the field .items[].metadata.name, which happens to have the value: web-controller-5e6ij // I need to retrieve this value Here are docs on jsonpath I want to avoid text parsing the output of kubectl get pods, which is NAME READY STATUS RESTARTS AGE mongo-controller-h714w 1/1 Running 0 12m web-controller-5e6ij 1/1 Running 0 9m The following will correctly parse this get pods command, yet I feel it's too fragile: kubectl get pods | tail -1 | cut -d' ' -f1
Spinnaker supports multiple Kubernetes clusters, each is added as an 'account' in Spinnaker configuration. The configured accounts are presented as options at deployment time, and the Server Groups for each application are rolled up under the account they belong to. It is possible to change that configuration and refresh it at runtime, but it would involve editing the on-disk yaml file that backs the Clouddriver component of Spinnaker and triggering the /config-refresh endpoint.
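The on-disk Clouddriver configuration is roughly shaped like this (the account names, kubeconfig paths, and exact file layout here are assumptions; check your Clouddriver deployment for the real values):

```yaml
# clouddriver.yml fragment -- one 'account' per Kubernetes cluster/VPC.
kubernetes:
  enabled: true
  accounts:
  - name: staging
    kubeconfigFile: /home/spinnaker/.kube/staging-config
  - name: production
    kubeconfigFile: /home/spinnaker/.kube/prod-config
```

Adding an environment ad-hoc would then mean appending an account entry like these and hitting the /config-refresh endpoint mentioned above.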
###Instruction: We use a separate VPC per environment. Does or will spinnaker support targeting different Kubernetes clusters? Will adding environments ad-hoc be viable? ### Response: Spinnaker supports multiple Kubernetes clusters, each is added as an 'account' in Spinnaker configuration. The configured accounts are presented as options at deployment time, and the Server Groups for each application are rolled up under the account they belong to. It is possible to change that configuration and refresh it at runtime, but it would involve editing the on-disk yaml file that backs the Clouddriver component of Spinnaker and triggering the /config-refresh endpoint.
We use a separate VPC per environment. Does or will spinnaker support targeting different Kubernetes clusters? Will adding environments ad-hoc be viable?
I met exactly the same issue, on Kubernetes 1.1.7 and 1.2.0. Starting flannel without --ip-masq and adding the parameter --masquerade-all=true to kube-proxy fixed it.
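Concretely, the two daemons end up started roughly like this (the etcd and master endpoints are placeholders for your environment):

```shell
# Flannel WITHOUT --ip-masq, so it does not masquerade pod traffic itself:
flanneld --etcd-endpoints=http://127.0.0.1:2379

# kube-proxy performs the SNAT instead, for all service traffic:
kube-proxy --master=http://127.0.0.1:8080 \
  --proxy-mode=iptables \
  --masquerade-all=true
```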
###Instruction: What I have is Kubernetes: v.1.1.2 iptables v1.4.21 kernel: 3.10.0-327.3.1.el7.x86_64 Centos Networking is done via flannel udp no cloud provider what I do I have enabled it with --proxy_mode=iptables argument. And I checked the iptables Chain PREROUTING (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */ DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */ DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- SIDR26KUBEAPMORANGE-005/26 anywhere MASQUERADE all -- 172.17.0.0/16 anywhere MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351 Chain DOCKER (2 references) target prot opt source destination Chain KUBE-NODEPORTS (1 references) target prot opt source destination Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references) target prot opt source destination MARK all -- 172.20.10.130 anywhere /* default/nc-service: */ MARK set 0x4d415351 DNAT tcp -- anywhere anywhere /* default/nc-service: */ tcp to:172.20.10.130:9000 Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references) target prot opt source destination MARK all -- SIDR26KUBEAPMORANGE-001.serviceengage.com anywhere /* default/kubernetes: */ MARK set 0x4d415351 DNAT tcp -- anywhere anywhere /* default/kubernetes: */ tcp to:10.62.66.254:9443 Chain KUBE-SERVICES (2 references) target prot opt source destination KUBE-SVC-6N4SJQIF3IX3FORG tcp -- anywhere 172.21.0.1 /* default/kubernetes: cluster IP */ tcp dpt:https KUBE-SVC-362XK5X6TGXLXGID tcp -- anywhere 172.21.145.28 /* default/nc-service: cluster IP */ tcp dpt:commplex-main KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service 
nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL Chain KUBE-SVC-362XK5X6TGXLXGID (1 references) target prot opt source destination KUBE-SEP-3SX6E5663KCZDTLC all -- anywhere anywhere /* default/nc-service: */ Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references) target prot opt source destination KUBE-SEP-Q4LJF4YJE6VUB3Y2 all -- anywhere anywhere /* default/kubernetes: */ When I do nc request to the service ip from another machine, in my case it's 10.116.0.2 I got an error like below nc -v 172.21.145.28 5000 Ncat: Version 6.40 ( http://nmap.org/ncat ) hello Ncat: Connection timed out. while when I do request to the 172.20.10.130:9000 server it's working fine. nc -v 172.20.10.130 9000 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 172.20.10.130:9000. hello yes From the dmesg log, I can see [10153.318195] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0 [10153.318282] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0 [10153.318374] DBG@POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0 And I found if I'm on the machine which the Pod is running. I can successfully to connect through service ip. nc -v 172.21.145.28 5000 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 172.21.145.28:5000. hello yes I am wondering why and how to fix it. ### Response: I meet the same issue exactly, on Kubernetes 1.1.7 and 1.2.0. I start flannel without --ip-masq, and add parameter --masquerade-all=true for kube-proxy, it helps.
What I have is Kubernetes: v.1.1.2 iptables v1.4.21 kernel: 3.10.0-327.3.1.el7.x86_64 Centos Networking is done via flannel udp no cloud provider what I do I have enabled it with --proxy_mode=iptables argument. And I checked the iptables Chain PREROUTING (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */ DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */ DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- SIDR26KUBEAPMORANGE-005/26 anywhere MASQUERADE all -- 172.17.0.0/16 anywhere MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351 Chain DOCKER (2 references) target prot opt source destination Chain KUBE-NODEPORTS (1 references) target prot opt source destination Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references) target prot opt source destination MARK all -- 172.20.10.130 anywhere /* default/nc-service: */ MARK set 0x4d415351 DNAT tcp -- anywhere anywhere /* default/nc-service: */ tcp to:172.20.10.130:9000 Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references) target prot opt source destination MARK all -- SIDR26KUBEAPMORANGE-001.serviceengage.com anywhere /* default/kubernetes: */ MARK set 0x4d415351 DNAT tcp -- anywhere anywhere /* default/kubernetes: */ tcp to:10.62.66.254:9443 Chain KUBE-SERVICES (2 references) target prot opt source destination KUBE-SVC-6N4SJQIF3IX3FORG tcp -- anywhere 172.21.0.1 /* default/kubernetes: cluster IP */ tcp dpt:https KUBE-SVC-362XK5X6TGXLXGID tcp -- anywhere 172.21.145.28 /* default/nc-service: cluster IP */ tcp dpt:commplex-main KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this 
must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL Chain KUBE-SVC-362XK5X6TGXLXGID (1 references) target prot opt source destination KUBE-SEP-3SX6E5663KCZDTLC all -- anywhere anywhere /* default/nc-service: */ Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references) target prot opt source destination KUBE-SEP-Q4LJF4YJE6VUB3Y2 all -- anywhere anywhere /* default/kubernetes: */ When I do nc request to the service ip from another machine, in my case it's 10.116.0.2 I got an error like below nc -v 172.21.145.28 5000 Ncat: Version 6.40 ( http://nmap.org/ncat ) hello Ncat: Connection timed out. while when I do request to the 172.20.10.130:9000 server it's working fine. nc -v 172.20.10.130 9000 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 172.20.10.130:9000. hello yes From the dmesg log, I can see [10153.318195] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0 [10153.318282] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0 [10153.318374] DBG@POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0 And I found if I'm on the machine which the Pod is running. I can successfully to connect through service ip. nc -v 172.21.145.28 5000 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 172.21.145.28:5000. hello yes I am wondering why and how to fix it.
To expand on zhb's answer: while different restart policies make sense for single pods, or even for run-to-completion jobs, a replication controller's entire purpose is to keep N instances of a pod running, so saying that you don't want the pods restarted doesn't mesh well with the concept. The part of the docs that explains this is: http://kubernetes.io/docs/user-guide/pod-states/#restartpolicy
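If the pods genuinely should not be restarted, the run-to-completion Job object is where restartPolicy: Never belongs. A hedged sketch against the 1.2-era extensions API (names are illustrative; depending on your exact version you may also need an explicit spec.selector):

```yaml
apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: nginx-job
spec:
  template:
    metadata:
      name: nginx-pod
      labels:
        app: webserver1
    spec:
      containers:
      - name: nginx
        image: nginx
      restartPolicy: Never   # allowed here, unlike in a replication controller
```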
###Instruction: I tried creating a Replication Controller via an JSON file and I have mentioned restartPolicy as "Never" for pod restartPolicy. but I am getting an error that, Error: The ReplicationController "ngnix-rc" is invalid. *spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: Always Is there any change in v1.2 that it supports only "Always" as an option for restartPolicy? I'm confused. I tried another scenario where I faced a strange behavior. I specified the restart policy as "never" and I got the error as, Error: The ReplicationController "ngnix-rc" is invalid. *spec.template.spec.restartPolicy: Unsupported value: "never": supported values: Always, OnFailure, Never *spec.template.spec.restartPolicy: Unsupported value: "never": supported values: Always As found, there are two errors in this scenario. I don't know what is the exact problem. The JSON file that i used to create RC is given below { "kind":"ReplicationController", "apiVersion":"v1", "metadata":{ "name":"ngnix-rc", "labels":{ "app":"webserver" } }, "spec":{ "replicas":1, "selector":{ "app":"webserver1" }, "template":{ "metadata":{ "name":"ngnix-pod", "labels":{ "app":"webserver1" } }, "spec":{ "containers":[ { "image":"ngnix", "name":"nginx" } ], "restartPolicy":"Never" } } } } ### Response: To expand on zhb's answer: while different restart policies make sense for single pods, or even for run-to-completion jobs, a replication controller's entire purpose is to keep N instances of a pod running, so saying that you don't want the pods restarted doesn't mesh great with the concept. The part of the docs that explains this is: http://kubernetes.io/docs/user-guide/pod-states/#restartpolicy
I tried creating a Replication Controller via an JSON file and I have mentioned restartPolicy as "Never" for pod restartPolicy. but I am getting an error that, Error: The ReplicationController "ngnix-rc" is invalid. *spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: Always Is there any change in v1.2 that it supports only "Always" as an option for restartPolicy? I'm confused. I tried another scenario where I faced a strange behavior. I specified the restart policy as "never" and I got the error as, Error: The ReplicationController "ngnix-rc" is invalid. *spec.template.spec.restartPolicy: Unsupported value: "never": supported values: Always, OnFailure, Never *spec.template.spec.restartPolicy: Unsupported value: "never": supported values: Always As found, there are two errors in this scenario. I don't know what is the exact problem. The JSON file that i used to create RC is given below { "kind":"ReplicationController", "apiVersion":"v1", "metadata":{ "name":"ngnix-rc", "labels":{ "app":"webserver" } }, "spec":{ "replicas":1, "selector":{ "app":"webserver1" }, "template":{ "metadata":{ "name":"ngnix-pod", "labels":{ "app":"webserver1" } }, "spec":{ "containers":[ { "image":"ngnix", "name":"nginx" } ], "restartPolicy":"Never" } } } }
I checked the code in pkg/kubectl/cmd/apply.go and I think the following lines of code show what's behind the scenes when you run kubectl apply -f: // Compute a three way strategic merge patch to send to server. patch, err := strategicpatch.CreateThreeWayMergePatch(original, modified, current, versionedObject, true) helper := resource.NewHelper(info.Client, info.Mapping) _, err = helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) And here is the code of helper.Patch: func (m *Helper) Patch(namespace, name string, pt api.PatchType, data []byte) (runtime.Object, error) { return m.RESTClient.Patch(pt). NamespaceIfScoped(namespace, m.NamespaceScoped). Resource(m.Resource). Name(name). Body(data). Do(). Get() }
###Instruction: I try to use the master api to update resources. In 1.2 to update a deployment resource I'm doing kubectl apply -f new updateddeployment.yaml How to do the same action with the api? ### Response: I checked the code in pkg/kubectl/cmd/apply.go and I think the following lines of code shows what's behind the scene when you run kubectl apply -f: // Compute a three way strategic merge patch to send to server. patch, err := strategicpatch.CreateThreeWayMergePatch(original, modified, current, versionedObject, true) helper := resource.NewHelper(info.Client, info.Mapping) _, err = helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) And here is the code helper.Patch: func (m *Helper) Patch(namespace, name string, pt api.PatchType, data []byte) (runtime.Object, error) { return m.RESTClient.Patch(pt). NamespaceIfScoped(namespace, m.NamespaceScoped). Resource(m.Resource). Name(name). Body(data). Do(). Get() }
I'm trying to use the master API to update resources. In 1.2, to update a deployment resource I'm doing kubectl apply -f new updateddeployment.yaml How do I do the same action with the API?
The username and password in the kubeconfig file are used to authenticate to the Kubernetes apiserver running in your cluster. The authentication for Cockpit is entirely separate. According to the vagrant setup scripts, you should log into Cockpit as the user vagrant with the password vagrant.
###Instruction: I am working with the Kubernetes tutorial and deploying the cluster locally with Vagrant. After the vagrant machine finishes loading, I get the following output: Kubernetes cluster is running. The master is running at: https://10.245.1.2 Administer and visualize its resources using Cockpit: https://10.245.1.2:9090 For more information on Cockpit, visit http://cockpit-project.org The user name and password to use is located in /Users/me/.kube/config When I go to https://10.245.1.2:9090 I see the Fedora login screen. I do the following: ./cluster/kubectl.sh config view apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://10.245.1.2 name: vagrant contexts: - context: cluster: vagrant user: vagrant name: vagrant current-context: vagrant kind: Config preferences: {} users: - name: vagrant user: client-certificate-data: REDACTED client-key-data: REDACTED password: 9r5V2B2wn6oeaciX username: admin but the username and password are incorrect. How am I supposed to connect to Cockpit? Thanks ### Response: The username and password in the kubeconfig file are used to authenticate to the Kubernetes apiserver running in your cluster. The authentication for Cockpit is entirely separate. According to the vagrant setup scripts, you should log into Cockpit as the user vagrant with the password vagrant.
I am working with the Kubernetes tutorial and deploying the cluster locally with Vagrant. After the vagrant machine finishes loading, I get the following output: Kubernetes cluster is running. The master is running at: https://10.245.1.2 Administer and visualize its resources using Cockpit: https://10.245.1.2:9090 For more information on Cockpit, visit http://cockpit-project.org The user name and password to use is located in /Users/me/.kube/config When I go to https://10.245.1.2:9090 I see the Fedora login screen. I do the following: ./cluster/kubectl.sh config view apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://10.245.1.2 name: vagrant contexts: - context: cluster: vagrant user: vagrant name: vagrant current-context: vagrant kind: Config preferences: {} users: - name: vagrant user: client-certificate-data: REDACTED client-key-data: REDACTED password: 9r5V2B2wn6oeaciX username: admin but the username and password are incorrect. How am I supposed to connect to Cockpit? Thanks
Because the state of the DaemonSet is inconsistent, you can try to delete it using the --cascade=false flag.
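A sketch of the cleanup (the DaemonSet name and pod label here are hypothetical; substitute the ones from your manifest):

```shell
# Delete only the DaemonSet object itself, leaving its pods orphaned:
kubectl delete daemonset my-daemonset --cascade=false

# Then remove the orphaned pods by whatever label the DS template used:
kubectl delete pods -l name=my-daemonset-label
```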
###Instruction: I did an upgrade of GKE to v1.2 yesterday and started to try out the DaemonSet feature (beta). It didn't work as expected and I wanted to delete it from the cluster. What happened is that the delete operation failed, and now the DaemonSet is in an inconsistent state and restarts all my other pods every 5 min. What can be done without deleting and recreating the whole cluster? I did try to apply the DaemonSet with a busybox like this: And the DS looks like this: The delete operation fails: ### Response: Because the state of the DaemonSet is inconsistent, you can try to delete it using the --cascade=false flag.
I did an upgrade of GKE to v1.2 yesterday and started to try out the DaemonSet feature (beta). It didn't work as expected and I wanted to delete it from the cluster. What happened is that the delete operation failed, and now the DaemonSet is in an inconsistent state and restarts all my other pods every 5 min. What can be done without deleting and recreating the whole cluster? I did try to apply the DaemonSet with a busybox like this: And the DS looks like this: The delete operation fails:
According to the Table of Solutions for installing Kubernetes, the maintainer of the CentOS getting started guide is @coolsvap. You should reach out to him to ask about getting the pre-built binary updated to the official release.
###Instruction: I'm trying to run Kubernetes on a local Centos server and have had some issues (for example, with DNS). A version check shows that I'm running Kubernetes 1.2 Alpha 1. Since the full release is now available from the Releases Download page, I'd like to upgrade and see if that resolves my issue. The documentation for installing a prebuilt binary release states: Download the latest release and unpack this tar file on Linux or OS X, cd to the created kubernetes/ directory, and then follow the getting started guide for your cloud. However, the Getting Started Guide for Centos says nothing about using a prebuilt binary. Instead, it tells you to set up a yum repo and run a yum install command: yum -y install --enablerepo=virt7-docker-common-release kubernetes This command downloads and installs the Alpha1 release. In addition, it attempts to install Docker 1.8 (two releases down from the current 1.10), which fails if Docker is already installed. How can I install from a prebuilt binary and use an existing Docker? ### Response: According to the Table of Solutions for installing Kubernetes, the maintainer of the CentOS getting started guide is @coolsvap. You should reach out to him to ask about getting the pre-built binary updated to the official release.
I'm trying to run Kubernetes on a local CentOS server and have had some issues (for example, with DNS). A version check shows that I'm running Kubernetes 1.2 Alpha 1. Since the full release is now available from the Releases Download page, I'd like to upgrade and see if that resolves my issue. The documentation for installing a prebuilt binary release states: Download the latest release and unpack this tar file on Linux or OS X, cd to the created kubernetes/ directory, and then follow the getting started guide for your cloud. However, the Getting Started Guide for CentOS says nothing about using a prebuilt binary. Instead, it tells you to set up a yum repo and run a yum install command: yum -y install --enablerepo=virt7-docker-common-release kubernetes This command downloads and installs the Alpha1 release. In addition, it attempts to install Docker 1.8 (two releases down from the current 1.10), which fails if Docker is already installed. How can I install from a prebuilt binary and use an existing Docker?
You can set this environment variable before running the script (we are using version 1.1.8): export VPC_ID=vpc-YOURID Note that Kubernetes normally creates a VPC with the CIDR 172.20.0.0/16, and I think it expects that range in an existing VPC as well.
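A minimal sketch of how that looks in practice; the VPC ID below is hypothetical, and the exact variables the kube-up script honors can differ between releases, so treat this as illustrative rather than definitive:

```shell
# Hypothetical VPC id -- substitute your own. In a 1.1.x setup,
# kube-up reads VPC_ID from the environment and reuses that VPC
# instead of creating a fresh 172.20.0.0/16 one.
export KUBERNETES_PROVIDER=aws
export VPC_ID=vpc-0badcafe
echo "cluster will be brought up in ${VPC_ID}"
# ./cluster/kube-up.sh   # left commented out: requires AWS credentials to run
```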
###Instruction: To create a Kubernetes cluster in AWS, I use the setup script "https://get.k8s.io". That script creates a new VPC automatically, but I want to create the Kubernetes cluster inside an existing VPC in AWS. Is there a way to do it? I checked the /kubernetes/cluster/aws/config-default.sh file, but there don't seem to be any environment variables about the VPC. ### Response: You can set this environment variable before running the script (we are using version 1.1.8): export VPC_ID=vpc-YOURID Note that Kubernetes normally creates a VPC with the CIDR 172.20.0.0/16, and I think it expects that range in an existing VPC as well.
To create a Kubernetes cluster in AWS, I use the setup script "https://get.k8s.io". That script creates a new VPC automatically, but I want to create the Kubernetes cluster inside an existing VPC in AWS. Is there a way to do it? I checked the /kubernetes/cluster/aws/config-default.sh file, but there don't seem to be any environment variables about the VPC.
after I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created] Do you mean you ran kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created? If that is the case, then there is no surprise, since deployment is not an executable in your PATH.
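To make the mix-up concrete: the tutorial shows the command and its output on adjacent lines, and pasting both turns the output into container arguments. A small sketch (PROJECT_ID is a placeholder):

```shell
# Wrong: the trailing words came from the tutorial's *output* line, so
# kubectl passes them through as the container's command, and Docker then
# tries to exec "deployment", which is not in $PATH inside the image:
#
#   kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created
#
# Right: stop at the last flag; 'deployment "hello-node" created' is what
# kubectl prints back, not something you type.
correct='kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080'
echo "$correct"
```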
###Instruction: I followed the example exactly, [http://kubernetes.io/docs/hellonode/,] after I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created]. The pod does not run OK; I get CrashLoopBackOff status. I have no deployment executable. Any comment is appreciated. Nobert ========================================== norbert688@kubernete-codelab-1264:~/hellonode$ kubectl get pods NAME READY STATUS RESTARTS AGE hello-node-2129762707-hte0f 0/1 CrashLoopBackOff 5 6m norbert688@kubernete-codelab-1264:~/hellonode$ kubectl describe pod hello Name: hello-node-2129762707-hte0f Namespace: default Node: gke-hello-world-16359f5d-node-zkpf/10.140.0.3 Start Time: Mon, 28 Mar 2016 20:07:53 +0800 Labels: pod-template-hash=2129762707,run=hello-node Status: Running IP: 10.16.2.3 Controllers: ReplicaSet/hello-node-2129762707 Containers: hello-node: Container ID: docker://dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d Image: gcr.io/kubernete-codelab-1264/hello-node:v1 Image ID: docker://1fab5e6a9ef21db5518db9bcfbafa52799c38609738f5b3e1c4bb875225b5d61 Port: 8080/TCP Args: deployment hello-node created QoS Tier: cpu: Burstable memory: BestEffort Requests: cpu: 100m State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: [8] System error: exec: "deployment": executable file not found in $PATH Exit Code: -1 Started: Mon, 28 Mar 2016 20:14:16 +0800 Finished: Mon, 28 Mar 2016 20:14:16 +0800 Ready: False Restart Count: 6 Environment Variables: Conditions: Type Status Ready False Volumes: default-token-k3zl5: Type: Secret (a volume populated by a Secret) SecretName: default-token-k3zl5 Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulling pulling image "gcr.io/kubernete-codelab-1264/hello-node:v1" 6m 
6m 1 {default-scheduler } Normal Scheduled Successfully assigned hello-node-2129762707-hte0f to gke-hello-world-16359f5d-node-zkpf 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 41c8fde8f94b 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 41c8fde8f94b with error: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: "deployment": executable file not found in $PATH 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id a99c8dc5cc8a 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id a99c8dc5cc8a with error: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: "deployment": executable file not found in $PATH 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Successfully pulled image "gcr.io/kubernete-codelab-1264/hello-node:v1" 
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 977b07a9e5de 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 977b07a9e5de with error: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: "deployment": executable file not found in $PATH 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id f8ad177306bc 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id f8ad177306bc with error: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: "deployment": executable file not found in $PATH 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: \"deployment\": executable file not found in 
$PATH\n" 5m 4m 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id d9218f5385cb 4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id d9218f5385cb with error: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: "deployment": executable file not found in $PATH 4m 3m 7 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 7c3c680f18c4 with error: API 
error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: "deployment": executable file not found in $PATH 3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 7c3c680f18c4 2m 40s 12 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id dfae3b1e068a with error: API error (500): Cannot start container dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d: [8] System error: exec: "deployment": executable file not found in $PATH 26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id dfae3b1e068a 6m 26s 6 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Container image "gcr.io/kubernete-codelab-1264/hello-node:v1" already present on machine 3m 14s 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync (events with common reason combined) 5m 3s 26 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning BackOff Back-off restarting failed docker container 3s 3s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" ========================================== ### Response: after I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 
deployment "hello-node" created] Do you mean you ran kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created? If that is the case, then there is no surprise, since deployment is not an executable in your PATH.
I followed the example exactly, [http://kubernetes.io/docs/hellonode/,] after I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created]. The pod does not run OK; I get CrashLoopBackOff status. I have no deployment executable. Any comment is appreciated. Nobert ========================================== norbert688@kubernete-codelab-1264:~/hellonode$ kubectl get pods NAME READY STATUS RESTARTS AGE hello-node-2129762707-hte0f 0/1 CrashLoopBackOff 5 6m norbert688@kubernete-codelab-1264:~/hellonode$ kubectl describe pod hello Name: hello-node-2129762707-hte0f Namespace: default Node: gke-hello-world-16359f5d-node-zkpf/10.140.0.3 Start Time: Mon, 28 Mar 2016 20:07:53 +0800 Labels: pod-template-hash=2129762707,run=hello-node Status: Running IP: 10.16.2.3 Controllers: ReplicaSet/hello-node-2129762707 Containers: hello-node: Container ID: docker://dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d Image: gcr.io/kubernete-codelab-1264/hello-node:v1 Image ID: docker://1fab5e6a9ef21db5518db9bcfbafa52799c38609738f5b3e1c4bb875225b5d61 Port: 8080/TCP Args: deployment hello-node created QoS Tier: cpu: Burstable memory: BestEffort Requests: cpu: 100m State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: [8] System error: exec: "deployment": executable file not found in $PATH Exit Code: -1 Started: Mon, 28 Mar 2016 20:14:16 +0800 Finished: Mon, 28 Mar 2016 20:14:16 +0800 Ready: False Restart Count: 6 Environment Variables: Conditions: Type Status Ready False Volumes: default-token-k3zl5: Type: Secret (a volume populated by a Secret) SecretName: default-token-k3zl5 Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulling pulling image "gcr.io/kubernete-codelab-1264/hello-node:v1" 6m 6m 1 
{default-scheduler } Normal Scheduled Successfully assigned hello-node-2129762707-hte0f to gke-hello-world-16359f5d-node-zkpf 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 41c8fde8f94b 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 41c8fde8f94b with error: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: "deployment": executable file not found in $PATH 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id a99c8dc5cc8a 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id a99c8dc5cc8a with error: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: "deployment": executable file not found in $PATH 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Successfully pulled image "gcr.io/kubernete-codelab-1264/hello-node:v1" 6m 
6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 977b07a9e5de 6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 977b07a9e5de with error: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: "deployment": executable file not found in $PATH 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id f8ad177306bc 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id f8ad177306bc with error: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: "deployment": executable file not found in $PATH 5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 
5m 4m 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id d9218f5385cb 4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id d9218f5385cb with error: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: "deployment": executable file not found in $PATH 4m 3m 7 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: \"deployment\": executable file not found in $PATH\n" 3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 7c3c680f18c4 with error: API error (500): 
Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: "deployment": executable file not found in $PATH 3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 7c3c680f18c4 2m 40s 12 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" 26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id dfae3b1e068a with error: API error (500): Cannot start container dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d: [8] System error: exec: "deployment": executable file not found in $PATH 26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id dfae3b1e068a 6m 26s 6 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Container image "gcr.io/kubernete-codelab-1264/hello-node:v1" already present on machine 3m 14s 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync (events with common reason combined) 5m 3s 26 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning BackOff Back-off restarting failed docker container 3s 3s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)" ==========================================
You've probably created a deployment that's recreating the replica set for you. Try running kubectl get deployments and deleting the deployment from the output of that command.
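The ReplicaSet's name itself is the giveaway: a Deployment names the ReplicaSets it manages by appending a pod-template hash to its own name, which is why deleting the Deployment (not the ReplicaSet) is the fix. A small sketch with the names from the question; the kubectl lines are comments since they need a live cluster:

```shell
# kubectl get deployments                 # expect one named fp-frontend
# kubectl delete deployment fp-frontend   # cascades to the ReplicaSet and its pods
#
# Deleting only the ReplicaSet can't stick: the Deployment controller
# immediately recreates it to satisfy its desired state.
rs_name="fp-frontend-389969098"
deploy_name="${rs_name%-*}"   # strip the trailing -<hash> to recover the owner's name
echo "$deploy_name"
```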
###Instruction: I am having this issue except I have not created a ReplicaSet as suggested by Tim Hockin. Somehow a ReplicaSet was created with the same properties as my ReplicationController. The only difference is the name. The Controller is named 'fp-frontend' and the Set is named 'fp-frontend-389969098'. The appended number suggests that it was automatically created. Perhaps a race condition or something, who knows.... I would however like to delete it and the pods it spawns. So I try to delete it: $kubectl delete rs fp-frontend-389969098 replicaset "fp-frontend-389969098" deleted Command says it was deleted. But... $kubectl get rs NAME DESIRED CURRENT AGE fp-frontend-389969098 1 1 4s Any suggestions? I think I am going to delete and recreate the cluster. I am using Google Container Engine and Kubernetes is up to date. Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"} ### Response: You've probably created a deployment that's recreating the replica set for you. Try running kubectl get deployments and deleting the deployment from the output of that command.
I am having this issue except I have not created a ReplicaSet as suggested by Tim Hockin. Somehow a ReplicaSet was created with the same properties as my ReplicationController. The only difference is the name. The Controller is named 'fp-frontend' and the Set is named 'fp-frontend-389969098'. The appended number suggests that it was automatically created. Perhaps a race condition or something, who knows.... I would however like to delete it and the pods it spawns. So I try to delete it: $kubectl delete rs fp-frontend-389969098 replicaset "fp-frontend-389969098" deleted Command says it was deleted. But... $kubectl get rs NAME DESIRED CURRENT AGE fp-frontend-389969098 1 1 4s Any suggestions? I think I am going to delete and recreate the cluster. I am using Google Container Engine and Kubernetes is up to date. Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
This is really tricky, but after a lot of trial and error I think I've got it working. Go to the Google Developer Console > Api Manager > Credentials and click "Create credentials" and create a "service account key" Under "service account" select new and name the new key "gcr" (let the key type be json) Create the key and store the file on disk (from here on we assume that it was stored under ~/secret.json) Now login to GCR using Docker from the command line: $ docker login -e your@email.se -u _json_key -p "$(cat ~/secret.json)" https://eu.gcr.io This will generate an entry for "https://eu.gcr.io" in your ~/.docker/config.json file. Copy the JSON structure under "https://eu.gcr.io" into a new file called "~/docker-config.json", remove newlines! For example: Base64 encode this file: $ cat ~/docker-config.json | base64 This will print a long base64 encoded string, copy this string and paste it into an image pull secret definition (called ~/pullsecret.yaml): apiVersion: v1 kind: Secret metadata: name: mykey data: .dockercfg: <paste base64 encoded string here> type: kubernetes.io/dockercfg Now create the secret: $ kubectl create -f ~/pullsecret.yaml Now you can use this pull secret from a pod, for example: apiVersion: v1 kind: Pod metadata: name: foo namespace: awesomeapps spec: containers: - image: "janedoe/awesomeapp:v1" name: foo imagePullSecrets: - name: mykey or add it to a service account.
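The middle of the procedure — flatten the registry entry from ~/.docker/config.json, base64-encode it, and write the Secret manifest — can be scripted end to end. The sketch below uses a dummy auth value (the base64 of "_json_key:secret") rather than a real key, and assumes GNU coreutils base64 (on macOS the decode flag is -D):

```shell
# Dummy one-line entry standing in for what `docker login` writes under
# "https://eu.gcr.io" in ~/.docker/config.json (the auth value here is
# not a real credential).
cat > docker-config.json <<'EOF'
{"https://eu.gcr.io":{"auth":"X2pzb25fa2V5OnNlY3JldA==","email":"your@email.se"}}
EOF

# Encode it; `tr -d '\n'` removes the line wrapping GNU base64 inserts.
encoded=$(base64 < docker-config.json | tr -d '\n')

# Emit the image pull secret manifest the answer describes.
cat > pullsecret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mykey
data:
  .dockercfg: ${encoded}
type: kubernetes.io/dockercfg
EOF

echo "wrote pullsecret.yaml"
# kubectl create -f pullsecret.yaml   # then reference it via imagePullSecrets
```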
###Instruction: I'm trying to get Kubernetes to download images from a Google Container Registry from another project. According to the docs you should create an image pull secret using: $ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL But I wonder what DOCKER_USER and DOCKER_PASSWORD I should use for authenticating with Google Container Registry? Looking at the GCR docs it says that the password is the access token that you can get by running: $ gcloud auth print-access-token This actually works... for a while. The problem seems to be that this access token expires after (what I believe to be) one hour. I need a password (or something) that doesn't expire when creating my image pull secret. Otherwise the Kubernetes cluster can't download the new images after an hour or so. What's the correct way to do this? ### Response: This is really tricky, but after a lot of trial and error I think I've got it working. Go to the Google Developer Console > Api Manager > Credentials and click "Create credentials" and create a "service account key" Under "service account" select new and name the new key "gcr" (let the key type be json) Create the key and store the file on disk (from here on we assume that it was stored under ~/secret.json) Now login to GCR using Docker from the command line: $ docker login -e your@email.se -u _json_key -p "$(cat ~/secret.json)" https://eu.gcr.io This will generate an entry for "https://eu.gcr.io" in your ~/.docker/config.json file. Copy the JSON structure under "https://eu.gcr.io" into a new file called "~/docker-config.json", remove newlines! 
For example: Base64 encode this file: $ cat ~/docker-config.json | base64 This will print a long base64 encoded string, copy this string and paste it into an image pull secret definition (called ~/pullsecret.yaml): apiVersion: v1 kind: Secret metadata: name: mykey data: .dockercfg: <paste base64 encoded string here> type: kubernetes.io/dockercfg Now create the secret: $ kubectl create -f ~/pullsecret.yaml Now you can use this pull secret from a pod, for example: apiVersion: v1 kind: Pod metadata: name: foo namespace: awesomeapps spec: containers: - image: "janedoe/awesomeapp:v1" name: foo imagePullSecrets: - name: mykey or add it to a service account.
I'm trying to get Kubernetes to download images from a Google Container Registry from another project. According to the docs you should create an image pull secret using: $ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL But I wonder what DOCKER_USER and DOCKER_PASSWORD I should use for authenticating with Google Container Registry? Looking at the GCR docs it says that the password is the access token that you can get by running: $ gcloud auth print-access-token This actually works... for a while. The problem seems to be that this access token expires after (what I believe to be) one hour. I need a password (or something) that doesn't expire when creating my image pull secret. Otherwise the Kubernetes cluster can't download the new images after an hour or so. What's the correct way to do this?
Commands like attach rely on the cluster's master being able to talk to the nodes in the cluster. However, because the master isn't in the same Compute Engine network as your cluster's nodes, we rely on SSH tunnels to enable secure communication. Container Engine puts an SSH public key in your Compute Engine project metadata. All Compute Engine VMs using Google-provided images regularly check their project's common metadata and their instance's metadata for SSH keys to add to the VM's list of authorized users. Container Engine also adds a firewall rule to your Compute Engine network allowing SSH access from the master's IP address to each node in the cluster. If kubectl attach (or logs, exec, and port-forward) doesn't work, it's likely that it's because the master is unable to open SSH tunnels to the nodes. To determine what the underlying problem is, you should check for these potential causes: The cluster doesn't have any nodes. If you've scaled down the number of nodes in your cluster to zero, SSH tunnels won't work. To fix it, resize your cluster to have at least one node. Pods in the cluster have gotten stuck in a terminating state and prevented nodes that no longer exist from being removed from the cluster. This is an issue that should only affect Kubernetes version 1.1, but could be caused by repeated resizing of the cluster down and up. To fix it, delete the pods that have been in a terminating state for more than a few minutes. The old nodes will then be removed from the master's API and replaced by the new nodes. Your network's firewall rules don't allow for SSH access to the master. All Compute Engine networks are created with a firewall rule called "default-allow-ssh" that allows SSH access from all IP addresses (requiring a valid private key, of course). Container Engine also inserts an SSH rule for each cluster of the form "gke-<cluster-name>-<id>-ssh" that allows SSH access specifically from the cluster's master IP to the cluster's nodes. 
If neither of these rules exists, then the master will be unable to open SSH tunnels. To fix it, re-add a firewall rule allowing access to VMs with the tag that's on all the cluster's nodes from the master's IP address. Your project's common metadata entry for sshKeys is full. If the project's metadata entry named "sshKeys" is close to the 32KiB size limit, then Container Engine isn't able to add its own SSH key to let it open SSH tunnels. You can see your project's metadata by running gcloud compute project-info describe [--project=PROJECT], then check the length of the list of sshKeys. To fix it, delete some of the SSH keys that are no longer needed. You have set a metadata field with the key "sshKeys" on the VMs in the cluster. The node agent on VMs prefers per-instance sshKeys to project-wide SSH keys, so if you've set any SSH keys specifically on the cluster's nodes, then the master's SSH key in the project metadata won't be respected by the nodes. To check, run gcloud compute instances describe <VM-name> and look for an "sshKeys" field in the metadata. To fix it, delete the per-instance SSH keys from the instance metadata. It's worth noting that these features are not required for the correct functioning of the cluster. If you prefer to keep your cluster's network locked down from all outside access, that's perfectly fine. Just be aware that features like these won't work as a result.
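Most of these checks need gcloud and access to the project, so they are shown as comments; the one check that is purely local — whether a saved sshKeys value is near the 32KiB metadata limit — is runnable below. The file name and its empty placeholder content are assumptions for illustration:

```shell
# Cluster-side checks (require gcloud; shown for reference only):
#   gcloud compute firewall-rules list            # look for default-allow-ssh and the gke-...-ssh rule
#   gcloud compute project-info describe          # inspect commonInstanceMetadata for sshKeys
#   gcloud compute instances describe NODE_VM     # a per-instance "sshKeys" entry overrides project keys
#
# Local check: is a saved sshKeys metadata value close to the 32KiB limit?
keyfile=sshkeys.txt
: > "$keyfile"            # placeholder; in real use, paste the metadata value into this file
size=$(wc -c < "$keyfile")
limit=$((32 * 1024))
if [ "$size" -lt "$limit" ]; then
  echo "sshKeys value fits ($size bytes)"
else
  echo "sshKeys value is at/over the 32KiB limit ($size bytes)"
fi
```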
I'm running some containers on Google Container Engine. One day everything was fine, and the next day I can't attach to my containers anymore. Or exec, or run any other docker command. I deleted the pods and let new ones be instantiated; that didn't help. Then I deleted the node and waited for a new one to be created and the pods deployed; that didn't help either.

$ kubectl attach www-controller-dev-xxxxx
Error from server: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-xxxxxxxxxxxxxxxxxxxxxxxx"?

What else can I try? The problem might have started after I deleted the cluster and recreated it, but I can't be sure. I've done that before and it was never a problem.
The correct syntax is kubectl logs <pod> <container>. In your example, kubectl logs heapster-v1.1.0-beta1-3082378092-t6inb heapster --namespace=kube-system will show the logs of the "heapster" container within the named pod.
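For a multi-container pod the container name is simply an extra positional argument. As a sketch of how that command line is assembled (pure string handling, no kubectl required; the helper name is my own invention):

```python
# Build the argv for "kubectl logs" against a possibly multi-container pod.
# Purely illustrative; the helper name is an invention for this example.

def logs_argv(pod: str, container: str = None, namespace: str = None):
    argv = ["kubectl", "logs", pod]
    if container is not None:   # required when the pod has several containers
        argv.append(container)
    if namespace is not None:
        argv.append("--namespace=" + namespace)
    return argv

print(" ".join(logs_argv("heapster-v1.1.0-beta1-3082378092-t6inb",
                         "heapster", "kube-system")))
# → kubectl logs heapster-v1.1.0-beta1-3082378092-t6inb heapster --namespace=kube-system
```

Omitting the container argument is exactly what produced the "a container name must be specified" error in the question.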
Trying to deploy heapster-controller to get Heapster + Grafana + InfluxDB working for Kubernetes. Getting error messages while trying to deploy using the heapster-controller.yaml file:

heapster-controller.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-v1.1.0-beta1
  namespace: kube-system
  labels:
    k8s-app: heapster
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
  template:
    metadata:
      labels:
        k8s-app: heapster
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: gcr.io/google_containers/heapster:v1.1.0-beta1
        name: heapster
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 200m
          requests:
            cpu: 100m
            memory: 200m
        command:
        - /heapster
        - --source=kubernetes.summary_api:''
        - --sink=influxdb:http://monitoring-influxdb:8086
        - --metric_resolution=60s
      - image: gcr.io/google_containers/heapster:v1.1.0-beta1
        name: eventer
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 200m
          requests:
            cpu: 100m
            memory: 200m
        command:
        - /eventer
        - --source=kubernetes:''
        - --sink=influxdb:http://monitoring-influxdb:8086
      - image: gcr.io/google_containers/addon-resizer:1.0
        name: heapster-nanny
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 100Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        command:
        - /pod_nanny
        - --cpu=100m
        - --extra-cpu=0m
        - --memory=200
        - --extra-memory=200Mi
        - --threshold=5
        - --deployment=heapster-v1.1.0-beta1
        - --container=heapster
        - --poll-period=300000
      - image: gcr.io/google_containers/addon-resizer:1.0
        name: eventer-nanny
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 100Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        command:
        - /pod_nanny
        - --cpu=100m
        - --extra-cpu=0m
        - --memory=200
        - --extra-memory=200Ki
        - --threshold=5
        - --deployment=heapster-v1.1.0-beta1
        - --container=eventer
        - --poll-period=300000

Deployment goes through, but then I get an error:

[root@node236 influxdb]# kubectl get pods -o wide --namespace=kube-system
NAME                                     READY   STATUS              RESTARTS   AGE   NODE
heapster-v1.1.0-beta1-3082378092-t6inb   2/4     RunContainerError   0          1m    node262.local.net
[root@node236 influxdb]#

Display log for the failed container:

[root@node236 influxdb]# kubectl logs --namespace=kube-system heapster-v1.1.0-beta1-3082378092-t6inb
Error from server: a container name must be specified for pod heapster-v1.1.0-beta1-3082378092-t6inb, choose one of: [heapster eventer heapster-nanny eventer-nanny]
[root@node236 influxdb]#

Where am I possibly going wrong? Any feedback appreciated!!

Alex
Seccomp is disabled by default in Kubernetes v1.2 for Docker v1.10+, so you should not have problems running a container with the unconfined policy.
I'm running kubernetes with docker 1.10 and I want to run a container with --security-opt seccomp=unconfined . I understand from https://github.com/kubernetes/kubernetes/issues/20870 that seccomp in general is not supported by kubernetes yet, but are there any workarounds? Do I just need to downgrade docker to 1.9 and lose the security profiles altogether, or is there another way to give my container the access it needs?
If you use Kubernetes v1.2, there is a new, cleaner metrics summary API. From the release note: Kubelet exposes a new Alpha metrics API - /stats/summary in a user friendly format with reduced system overhead. You can access the endpoint through <node-ip>:10255/stats/summary, and the detailed API objects are described here.
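To turn the summary API's numbers into a percentage, take the delta of the cumulative usageCoreNanoSeconds counter between two samples and divide by the elapsed nanoseconds. A minimal sketch with made-up sample values (the field name follows the summary API's CPU stats; the function itself is my own illustration):

```python
# CPU usage as a percentage of one core between two /stats/summary samples,
# computed from the cumulative usageCoreNanoSeconds counter.
# The sample values below are invented for illustration.

def cpu_percent(prev_usage_ns: int, cur_usage_ns: int, interval_ns: int) -> float:
    """Percent of a single core used over the sampling interval."""
    return 100.0 * (cur_usage_ns - prev_usage_ns) / interval_ns

# Two samples taken 10 s apart: the counter advanced by 1.5 s of CPU time.
prev = 40_000_000_000   # usageCoreNanoSeconds at t0
cur = 41_500_000_000    # usageCoreNanoSeconds at t0 + 10 s
print(cpu_percent(prev, cur, 10_000_000_000))  # → 15.0
```

This explains the "big weird numbers" in the question: they are cumulative nanosecond counters, not instantaneous percentages, so a UI must diff two samples to display usage.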
I want to calculate and show node specific cpu usage in percent in my own web application using Kubernetes API. I need the same information as Kube UI and Cadvisor displays but I want to use the Kubernetes API. I have found some cpu metrics under node-ip:10255/stats which contains timestamp, cpu usage: total, user and system in big weird numbers which I do not understand. Also the CPU-Limit is reported as 1024. How does Kube UI calculate cpu usage and is it possible to do the same via the API?
I just commented out the lines that contain the successThreshold and failureThreshold values in skydns-rc.yaml, then re-ran the kubectl commands:

kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
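The same workaround can be applied mechanically: strip the probe fields the older validator doesn't know about before submitting the manifest. A sketch operating on the manifest as a plain dict (the helper name and field list are my own, not part of kubectl):

```python
# Drop probe fields rejected by older API validators
# (successThreshold/failureThreshold postdate the v1.1-era schema).
# Helper name and field list are illustrative inventions.
UNSUPPORTED_PROBE_FIELDS = ("successThreshold", "failureThreshold")

def strip_new_probe_fields(pod_spec: dict) -> dict:
    for container in pod_spec.get("containers", []):
        for probe_key in ("livenessProbe", "readinessProbe"):
            probe = container.get(probe_key)
            if probe:
                for field in UNSUPPORTED_PROBE_FIELDS:
                    probe.pop(field, None)
    return pod_spec

spec = {"containers": [{"name": "kube2sky",
                        "livenessProbe": {"httpGet": {"path": "/healthz", "port": 8080},
                                          "successThreshold": 1,
                                          "failureThreshold": 5}}]}
print(strip_new_probe_fields(spec)["containers"][0]["livenessProbe"])
# → {'httpGet': {'path': '/healthz', 'port': 8080}}
```

Dropping the fields only removes the stricter threshold tuning; the probes themselves still run with their defaults.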
I'm attempting to set up DNS support in Kubernetes 1.2 on Centos 7. According to the documentation, there's two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables: ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}" DNS_SERVER_IP="10.0.0.10" DNS_DOMAIN="cluster.local" DNS_REPLICAS=1 I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I don't have a supported kubernetes cluster setup (what's that?), or there's something else required to set its environment. The second approach requires more manual setup. It adds two flags to kubelets, which I set by updating /etc/kubernetes/kubelet to include: KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local" and restarting the kubelet with systemctl restart kubelet. Then it's necessary to start a replication controller and a service. The doc page cited above provides a couple of template files for this that require some editing, both for local changes (my Kubernetes API server listens to the actual IP address of the hostname rather than 127.0.0.1, making it necessary to add a --kube-master-url setting) and to remove some Salt dependencies. When I do this, the replication controller starts four containers successfully, but the kube2sky container gets terminated about a minute after completing initialization: [david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky I0325 20:58:18.516905 1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001 I0325 20:58:19.518337 1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master I0325 20:58:19.518364 1 kube2sky.go:530] Using kubernetes API v1 I0325 20:58:19.518468 1 kube2sky.go:598] Waiting for service: default/kubernetes I0325 20:58:19.533597 1 kube2sky.go:660] Successfully added DNS record for Kubernetes service. 
F0325 20:59:25.698507 1 kube2sky.go:625] Received signal terminated I've determined that the termination is done by the healthz container after reporting: 2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null 2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1 Aside from this, all other logs look normal. However, there is one anomaly: it was necessary to specify --validate=false when creating the replication controller, as the command otherwise gets the message: error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false Could this be related? These arguments come directly Kubernetes documentation. if not, what's needed to get this running? Here is the skydns-rc.yaml I used: apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v11 namespace: kube-system labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v11 template: metadata: labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. 
limits: cpu: 100m memory: 500Mi requests: cpu: 100m memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.14 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m # Kube2sky watches all pods. memory: 200Mi requests: cpu: 100m memory: 50Mi livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 30 timeoutSeconds: 5 args: # command = "/kube2sky" - --domain="cluster.local" - --kube-master-url=http://192.168.87.159:8080 - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. 
limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain="cluster.local" ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - name: healthz image: gcr.io/google_containers/exechealthz:1.0 resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. and skydns-svc.yaml: apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: "10.0.0.10" ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP
Found a way to access a remote Kubernetes cluster without SSH'ing to one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:

apiVersion: v1
clusters:
- cluster:
    server: http://<master-ip>:<port>
  name: test
contexts:
- context:
    cluster: test
    user: test
  name: test

Then set the context by executing:

kubectl config use-context test

After this you should be able to interact with the cluster.

Note: to add a certificate and key, use the following link: http://kubernetes.io/docs/user-guide/kubeconfig-file/

Alternately, you can also try the following commands:

kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config use-context test-cluster
I have a kubernetes cluster running on azure. What is the way to access the cluster from local kubectl command. I referred to here but on the kubernetes master node there is no kube config file. Also, kubectl config view results in apiVersion: v1 clusters: [] contexts: [] current-context: "" kind: Config preferences: {} users: []
(I would have posted this as a comment, if I had enough reputation)

I am confused by your output. kubectl api-versions should print out extensions/v1beta1 if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled. But kubectl get ds should fail if extensions/v1beta1 is not enabled. So I can not figure out if extensions/v1beta1 is enabled on your server or not.

Can you try GET masterIP/apis and see if extensions is listed there? You can also go to masterIP/apis/extensions/v1beta1 and see if daemonsets is listed there.

Also, I see kubectl version says 1.2, but then kubectl api-versions should not print out the string Available Server Api Versions (that string was removed in 1.1: https://github.com/kubernetes/kubernetes/pull/15796).
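The GET-/apis check suggested above can be scripted: fetch the discovery document and look for the extensions group. A standard-library sketch; it parses a canned response here, and the commented-out urlopen line shows where a real master URL (an assumption, yours will differ) would go:

```python
import json
# Against a real master you would fetch the discovery document instead:
# from urllib.request import urlopen
# groups = json.load(urlopen("http://<master-ip>:8080/apis"))

# Canned /apis-style discovery document, for illustration only.
groups = json.loads("""
{"kind": "APIGroupList",
 "groups": [{"name": "extensions",
             "versions": [{"groupVersion": "extensions/v1beta1",
                           "version": "v1beta1"}]}]}
""")

def has_group_version(discovery: dict, group_version: str) -> bool:
    """True if the discovery document lists the given group/version."""
    return any(v.get("groupVersion") == group_version
               for g in discovery.get("groups", [])
               for v in g.get("versions", []))

print(has_group_version(groups, "extensions/v1beta1"))  # → True
```

If the real /apis response yields False here, the API group is genuinely disabled on the server, regardless of what the client prints.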
###Instruction: I'm trying to get going with Kubernetes DaemonSets and not having any luck at all. I've searched for a solution to no avail. I'm hoping someone here can help out. First, I've seen this ticket. Restarting the controller manager doesn't appear to help. As you can see here, the other kube processes have all been started after the apiserver and the api server has '--runtime-config=extensions/v1beta1=true' set. kube 31398 1 0 08:54 ? 00:00:37 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://dock-admin:2379 --address=0.0.0.0 --allow-privileged=false --portal_net=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota --runtime-config=extensions/v1beta1=true kube 12976 1 0 09:49 ? 00:00:28 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --cloud-provider= kube 29489 1 0 11:34 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080 However api-versions only shows version 1: $ kubectl api-versions Available Server Api Versions: v1 Kubernetes version is 1.2: $ kubectl version Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"} The DaemonSet has been created, but appears to have no pods scheduled (status.desiredNumberScheduled). 
$ kubectl get ds -o json { "kind": "List", "apiVersion": "v1", "metadata": {}, "items": [ { "kind": "DaemonSet", "apiVersion": "extensions/v1beta1", "metadata": { "name": "ds-test", "namespace": "dvlp", "selfLink": "/apis/extensions/v1beta1/namespaces/dvlp/daemonsets/ds-test", "uid": "2d948b18-fa7b-11e5-8a55-00163e245587", "resourceVersion": "2657499", "generation": 1, "creationTimestamp": "2016-04-04T15:37:45Z", "labels": { "app": "ds-test" } }, "spec": { "selector": { "app": "ds-test" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "app": "ds-test" } }, "spec": { "containers": [ { "name": "ds-test", "image": "foo.vt.edu:1102/dbaa-app:v0.10-dvlp", "ports": [ { "containerPort": 8080, "protocol": "TCP" } ], "resources": {}, "terminationMessagePath": "/dev/termination-log", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "securityContext": {} } } }, "status": { "currentNumberScheduled": 0, "numberMisscheduled": 0, "desiredNumberScheduled": 0 } } ] } Here is my yaml file to create the DaemonSet apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: ds-test spec: selector: app: ds-test template: metadata: labels: app: ds-test spec: containers: - name: ds-test image: foo.vt.edu:1102/dbaa-app:v0.10-dvlp ports: - containerPort: 8080 Using that file to create the DaemonSet appears to work (I get 'daemonset "ds-test" created'), but no pods are created: $ kubectl get pods -o json { "kind": "List", "apiVersion": "v1", "metadata": {}, "items": [] } ### Response: (I would have posted this as a comment, if I had enough reputation) I am confused by your output. kubectl api-versions should print out extensions/v1beta1 if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled. But kubectl get ds should fail if extensions/v1beta1 is not enabled. So I can not figure out if extensions/v1beta1 is enabled on your server or not. 
Can you try GET masterIP/apis and see if extensions is listed there? You can also go to masterIP/apis/extensions/v1beta1 and see if daemonsets is listed there. Also, I see kubectl version says 1.2, but then kubectl api-versions should not print out the string Available Server Api Versions (that string was removed in 1.1: https://github.com/kubernetes/kubernetes/pull/15796).
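To make that check concrete, this is the kind of payload GET masterIP/apis returns and what to look for in it (the sample JSON below is a hypothetical, trimmed response, not output from the asker's cluster):

```python
import json

# A trimmed /apis discovery response as the apiserver would return it.
# This sample payload is hypothetical -- substitute the real output of:
#   curl http://<masterIP>:8080/apis
sample = json.loads("""
{
  "kind": "APIGroupList",
  "groups": [
    {"name": "extensions",
     "versions": [{"groupVersion": "extensions/v1beta1", "version": "v1beta1"}]}
  ]
}
""")

# Flatten every advertised group/version pair and look for the one DaemonSets need.
group_versions = [v["groupVersion"] for g in sample["groups"] for v in g["versions"]]
enabled = "extensions/v1beta1" in group_versions
print(enabled)  # True means the server should be able to serve DaemonSets
```

If "extensions" is absent from the real response, the runtime-config flag is not taking effect on the apiserver actually being queried.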
###Instruction: To simplify deployment and short term roll-back, it's useful to use a new Docker image tag for each new version to deploy on Kubernetes. Without clean-up this means that old image:tags are kept forever. How can I list every image:tag that is used by a Kubernetes container, so that I can find the old, unused ones and delete them automatically from the Docker Registry? My goal is ideally for Google Container Engine (GKE) to delete unused images in Google Container Registry. ### Response: As an alternative approach, you might consider just letting Kubernetes handle reclamation of old images for you. Presently, the ImageManager handles reclamation of candidate images. See: Garbage Collection Garbage collection is a helpful function of kubelet that will clean up unreferenced images and unused containers. kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes. Configuration is controlled via these two kubelet CLI parameters: --image-gc-high-threshold=90: The percent of disk usage after which image garbage collection is always run. Default: 90% --image-gc-low-threshold=80: The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80% The high/low thresholds could be tuned to force collection at an interval that works for you.
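To illustrate how the two thresholds interact, here is a small sketch of the decision (a paraphrase of the documented behavior, ignoring the five-minute schedule — not the kubelet's actual code):

```python
def image_gc_wanted(disk_usage_percent, high=90, low=80):
    """Decide whether image GC should run, per the documented thresholds.

    At or above `high`, garbage collection always runs and deletes images
    until disk usage drops to `low`; below `low`, it never runs.
    """
    if disk_usage_percent >= high:
        return True, low   # collect down to the low threshold
    return False, None     # nothing forces a collection

print(image_gc_wanted(95))  # (True, 80)
print(image_gc_wanted(70))  # (False, None)
```

Lowering both numbers makes the kubelet reclaim space earlier and more aggressively.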
###Instruction: Our current CI deployment phase works like this: Build the containers. Tag the images as "latest" and < commit hash >. Push images to repository. Invoke rolling update on appropriate RC(s). This has been working great for RC based deployments, but now that the Deployment object is becoming more stable and an underlying feature, we want to take advantage of this abstraction over our current deployment schemes and development phases. What I'm having trouble with is finding a sane way to automate the update of a Deployment with the CI workflow. What I've been experimenting with is splitting up the git repos and doing something like: [App Build] Build the containers. [App Build] Tag the images as "latest" and < commit hash >. [App Build] Push images to repository. [App Build] Invoke build of the app's Deployment repo, passing through the current commit hash. [Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash e.g. image: app-%%COMMIT_HASH%%) [Deployment Build] Apply the updated manifest to the appropriate Deployment resource(s). Surely though there's a better way to handle this. It would be great if the Deployment monitored for hash changes of the image's "latest" tag...maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle the deployment of Deployment would be appreciated :) ### Response: The Deployment only monitors for pod template (.spec.template) changes. If the image name didn't change, the Deployment won't do the update. You can trigger the rolling update (with Deployments) by changing the pod template, for example, by labeling it with the commit hash. Also, you'll need to set .spec.template.spec.containers.imagePullPolicy to Always (it's set to Always by default if the :latest tag is specified, and cannot be updated), otherwise the image will be reused.
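For example, a CI step could stamp the commit hash into the pod template so every build changes .spec.template and therefore triggers a rollout. A sketch of the relevant Deployment fragment (the app name, registry, and label key are placeholders; %%COMMIT_HASH%% is the CI-interpolated token from the question):

```yaml
spec:
  template:
    metadata:
      labels:
        app: my-app                   # placeholder
        commit: "%%COMMIT_HASH%%"     # interpolated by CI; any change here triggers a rollout
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:%%COMMIT_HASH%%   # pinning the tag also changes the template
        imagePullPolicy: Always
```

Pinning the image tag to the commit hash (rather than :latest) has the extra benefit that the Deployment history records exactly which build each revision ran.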
###Instruction: Environment Mesos: 0.26 Kubernetes: 1.3.0 Anyone out there using Kubernetes-Mesos framework? Kubernetes-Mesos Question: Does current Kubernetes-Mesos support Kubernetes in HA configuration? Having multiple Kubernetes masters talking to leader.mesos? I tried to use it but I've got the following error on my secondary kubernetes master (trying to start km scheduler) "mesos-master[25014]: I0405 09:54:07.523236 25020 master.cpp:2324] Framework a979cde6-aa86-4286-b07f-e83e9ae4076e-0005 (Kubernetes) at scheduler(1)@10.9.158.237:42819 failed over" ### Response: It is not supported; only one k8sm scheduler should talk to the Mesos master. One option is to use Marathon to manage the k8sm-xxx daemons; it will restart the k8sm master for failover.
###Instruction: Maybe my question does not make sense, but this is what I'm trying to do: I have a running Kubernetes cluster running on CoreOS on bare metal. I am trying to mount block storage from an OpenStack cloud provider with Cinder. From my readings, to be able to connect to the block storage provider, I need kubelet to be configured with cloud-provider=openstack, and use a cloud.conf file for the configuration of credentials. I did that and the auth part seems to work fine (i.e. I successfully connect to the cloud provider), however kubelet then complains that it cannot find my node on the openstack provider. I get: Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object This is similar to this question: Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object However, I know kubelet will not find my node at the OpenStack provider since it is not hosted there! The error makes sense, but how do I avoid it? In short, how do I tell kubelet not to look for my node there, as I only need it to look up the storage block to mount it? Is it even possible to mount block storage this way? Am I misunderstanding how this works? ### Response: There seem to be new ways to attach Cinder storage to bare metal, but it's apparently just PoC http://blog.e0ne.info/post/Attach-Cinder-Volume-to-the-Ironic-Instance-without-Nova.aspx
###Instruction: I've been trying to get help on a Kubernetes question and I don't get answers, and one suggestion was to ask on the Kubernetes slack channel, however it seems to be invite only or for google, intel, coreos and redhat email addresses. So, how am I supposed to get an invite to the channel? The 'get my invite' option claims they sent me the invite, yet I didn't receive it and there is no option to resend it. This question is a real question, and it is also meant to attract attention from the Kubernetes team to answer the related question. Since Kubernetes uses Stack Overflow as their support Q&A system and redirects Github questions to SO, I believe it is fair to try to get their attention here. ### Response: http://slack.kubernetes.io is indeed the way to get yourself invited. It sounds like there were some issues this morning (perhaps with Slack?), but the invites seem to be working now. See https://github.com/kubernetes/kubernetes/issues/23823. It's possible you're seeing the same issue (or that you are the same person :) ). Let me know if there is still a problem.
###Instruction: I have a few applications that run on regular Compute Engine nodes. In addition I have a Container Cluster that I am migrating applications to. Sooner or later all apps should be in Container Engine so service discovery is straight forward. But for now the apps on Compute Engine need to be able to talk to the Container Engine apps. The Container Engine apps are all registered as a service. For the sake of testing I used the "echoheaders" image: $ kubectl describe svc echoheaders Name: echoheaders Namespace: default Labels: app=echoheaders Selector: app=echoheaders Type: ClusterIP IP: 10.115.249.140 Port: http 80/TCP Endpoints: 10.112.1.3:8080 Session Affinity: None No events. The issue now is that I can only access the pod service from the Compute Engine node directly via 10.112.1.3:8080 but not via its clusterip 10.115.249.140:80. That only works from within the actual Compute Engine nodes. I already tried to create a bastion route pointing to one of the Container Engine nodes but it still doesn't work: $ gcloud compute routes describe gke-cluster-1-services creationTimestamp: '2016-04-05T05:39:55.275-07:00' description: Route to Cluster-1 service IP range destRange: 10.115.240.0/20 id: '926323215677918452' kind: compute#route name: gke-cluster-1-services network: https://www.googleapis.com/compute/v1/projects/infrastructure-1173/global/networks/infra nextHopInstance: https://www.googleapis.com/compute/v1/projects/infrastructure-1173/zones/europe-west1-d/instances/gke-cluster-1-5679a61a-node-f7iu priority: 500 selfLink: https://www.googleapis.com/compute/v1/projects/infrastructure-1173/global/routes/gke-cluster-1-services And on the firewall the Compute Node can connect to any. Anybody happen to have pointers what could be missing to allow the Compute Engine nodes access the Compute Node Services by their ClusterIPs? Thanks ### Response: Kubernetes expects anything within the cluster to be able to talk with everything else. 
GKE accomplishes this with advanced routing. By default, this lets GKE containers and GCE nodes on the same network communicate. This is why you could hit your containers directly. A ClusterIP is only reachable within the Kubernetes cluster. These IPs are managed by iptables on just Kubernetes nodes. This is why you can't hit your service from the GCE nodes, but you can hit it from your containers. Bastion routes send all traffic to the cluster's subnet to a cluster node. The node then routes the flow correctly. Create multiple bastion routes to multiple nodes at the same priority to avoid hotspotting a single node. Try using the cluster's full /14, which you can find under the cluster's description in the container engine UI.
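You can sanity-check the coverage with the addresses from your own output; 10.112.0.0/14 below is an assumed cluster CIDR (read the real one from the cluster description in the UI), while the other two addresses come from the question:

```python
import ipaddress

cluster_cidr = ipaddress.ip_network("10.112.0.0/14")      # assumption: the cluster's full /14
service_range = ipaddress.ip_network("10.115.240.0/20")   # destRange of the existing bastion route
pod_ip = ipaddress.ip_address("10.112.1.3")               # the endpoint reachable directly

print(service_range.subnet_of(cluster_cidr))  # True: one /14 route covers the service IPs
print(pod_ip in cluster_cidr)                 # True: and the pod IPs as well
```

A single set of bastion routes for the /14 therefore covers both pod and service traffic, instead of a separate route per range.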
###Instruction: I'm a bit confused at how to setup error reporting in kubernetes, so errors are visible in Google Cloud Console / Stackdriver "Error Reporting"? According to documentation https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine we need to enable fluentd' "forward input plugin" and then send exception data from our apps. I think this approach would have worked if we had setup fluentd ourselves, but it's already pre-installed on every node in a pod that just runs gcr.io/google_containers/fluentd-gcp docker image. How do we enable forward input on those pods and make sure that http port available to every pod on the nodes? We also need to make sure this config is used by default when we add more nodes to our cluster. Any help would be appreciated, may be I'm looking at all this from a wrong point? ### Response: The basic idea is to start a separate pod that receives structured logs over TCP and forwards it to Cloud Logging, similar to a locally-running fluentd agent. See below for the steps I used. (Unfortunately, the logging support that is built into Docker and Kubernetes cannot be used - it just forwards individual lines of text from stdout/stderr as separate log entries which prevents Error Reporting from seeing complete stack traces.) Create a docker image for a fluentd forwarder using a Dockerfile as follows: FROM gcr.io/google_containers/fluentd-gcp:1.18 COPY fluentd-forwarder.conf /etc/google-fluentd/google-fluentd.conf Where fluentd-forwarder.conf contains the following: <source> type forward port 24224 </source> <match **> type google_cloud buffer_chunk_limit 2M buffer_queue_limit 24 flush_interval 5s max_retry_wait 30 disable_retry_limit </match> Then build and push the image: $ docker build -t gcr.io/###your project id###/fluentd-forwarder:v1 . 
$ gcloud docker push gcr.io/###your project id###/fluentd-forwarder:v1 You need a replication controller (fluentd-forwarder-controller.yaml): apiVersion: v1 kind: ReplicationController metadata: name: fluentd-forwarder spec: replicas: 1 template: metadata: name: fluentd-forwarder labels: app: fluentd-forwarder spec: containers: - name: fluentd-forwarder image: gcr.io/###your project id###/fluentd-forwarder:v1 env: - name: FLUENTD_ARGS value: -qq ports: - containerPort: 24224 You also need a service (fluentd-forwarder-service.yaml): apiVersion: v1 kind: Service metadata: name: fluentd-forwarder spec: selector: app: fluentd-forwarder ports: - protocol: TCP port: 24224 Then create the replication controller and service: $ kubectl create -f fluentd-forwarder-controller.yaml $ kubectl create -f fluentd-forwarder-service.yaml Finally, in your application, instead of using 'localhost' and 24224 to connect to the fluentd agent as described on https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine, use the values of environment variables FLUENTD_FORWARDER_SERVICE_HOST and FLUENTD_FORWARDER_SERVICE_PORT.
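For the last step, the application can discover the forwarder from the environment variables Kubernetes injects for the service (a sketch; the fallback values are illustrative assumptions, not something Kubernetes provides):

```python
import os

def forwarder_address(environ=os.environ):
    """Resolve the fluentd forwarder host/port from service env vars."""
    host = environ.get("FLUENTD_FORWARDER_SERVICE_HOST", "localhost")
    port = int(environ.get("FLUENTD_FORWARDER_SERVICE_PORT", "24224"))
    return host, port

# Example with the variables set, as they would be inside a pod:
print(forwarder_address({"FLUENTD_FORWARDER_SERVICE_HOST": "10.115.1.2",
                         "FLUENTD_FORWARDER_SERVICE_PORT": "24224"}))
```

The variable names follow from the service being called fluentd-forwarder; rename the service and the names change accordingly.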
###Instruction: Situation: lots of heavy docker containers that get hit periodically for a while, then stay unused for a longer period. Wish: start the containers on demand (like systemd starts things through socket activation) and stop them after idling for a given period. No visible downtime to the end-user. Options: Kubernetes has resource controllers which can scale replicas. I suppose it would be possible to keep the number of replicas at 0 and set it to 1 when needed, but how can one achieve that? The user guide says there is something called an auto-scaling control agent but I don't see any further information on this. Is there a pluggable, programmable agent one can use to track requests and scale based on user-defined logic? I don't see any solution in Docker Swarm, correct me if I'm wrong though. Use a custom http server written in a chosen language that will have access to the docker daemon. Before routing to the correct place it would check for the existence of the container and ensure it is running. Downside - not a general solution, has to not be a container or must have access to the daemon. Use systemd as described here. Same downsides as above, i.e. not general and one has to handle networking tasks themselves (like finding the IP of the spawned container and feeding it into the server/proxy's configuration). Any ideas appreciated! ### Response: You could use Kubernetes' built-in Horizontal Pod Autoscaling (HPA) to scale up from 1 instance of each container to as many as are needed to handle the load, but there's no built-in functionality for 0-to-1 scaling on receiving a request, and I'm not aware of any widely used solution.
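For the 1-to-N half of the problem, an HPA object looks roughly like this (a sketch: the name is a placeholder, and the apiVersion depends on your cluster version — older clusters may expose HPA under extensions/v1beta1 instead):

```yaml
apiVersion: autoscaling/v1        # may be extensions/v1beta1 on older clusters
kind: HorizontalPodAutoscaler
metadata:
  name: heavy-app                 # placeholder name
spec:
  scaleTargetRef:
    kind: ReplicationController   # or Deployment
    name: heavy-app
  minReplicas: 1                  # HPA will not scale below 1, hence no 0-to-1 activation
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

The minReplicas: 1 floor is exactly why the socket-activation part of the wish still needs something custom in front.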
###Instruction: If I am running a container in privileged mode, does it have all the Kernel capabilities or do I need to add them separately? ### Response: Running in privileged mode indeed gives the container all capabilities. But it is good practice to always give a container the minimum requirements it needs. The Docker run command documentation refers to this flag: Full container capabilities (--privileged) The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker. You can give specific capabilities using --cap-add flag. See man 7 capabilities for more info on those capabilities. The literal names can be used, e.g. --cap-add CAP_FOWNER.
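For example, instead of granting everything, you can add back only what the workload needs; a command sketch (NET_ADMIN and the image name are just illustrations):

```shell
# Broadest: all capabilities plus lifted device-cgroup limits
docker run --privileged my-image

# Narrower: the default capability set plus one extra
docker run --cap-add NET_ADMIN my-image

# Narrowest: start from nothing and add back selectively
docker run --cap-drop ALL --cap-add NET_ADMIN my-image
```

The last form is the closest to a least-privilege setup, since even Docker's default capability set is dropped.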
If I am running a container in privileged mode, does it have all the kernel capabilities or do I need to add them separately?
Thanks for your reply, Robert. I got this resolved by doing the following: kubectl delete rc kubectl delete node 172.16.0.44 stop kubelet on 172.16.0.44 rm -rf /k8s/* restart kubelet Now the node is ready, and the out-of-disk condition is gone. Name: 172.16.0.44 Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1 CreationTimestamp: Fri, 08 Apr 2016 15:14:51 +0800 Phase: Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ──── ────── ───────────────── ────────────────── ────── ─────── Ready True Fri, 08 Apr 2016 15:25:33 +0800 Fri, 08 Apr 2016 15:14:50 +0800 KubeletReady kubelet is posting ready status Addresses: 172.16.0.44,172.16.0.44 Capacity: cpu: 2 memory: 7748948Ki pods: 40 System Info: Machine ID: 45461f76679f48ee96e95da6cc798cc8 System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461 Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965 Kernel Version: 3.10.0-123.9.3.el7.x86_64 OS Image: CentOS Linux 7 (Core) I found this https://github.com/kubernetes/kubernetes/issues/4135, but I still don't know why my disk space is free while the kubelet thinks it is out of disk...
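For reference, the recovery steps above written out as commands (the rc name and the init-system invocation are assumptions; the node-local steps run on 172.16.0.44 itself):

```shell
# On the master: remove the rc and deregister the stale node object
kubectl delete rc mongo-controller --namespace=kube-system
kubectl delete node 172.16.0.44

# On the node: stop the kubelet, wipe its data directory, restart
sudo systemctl stop kubelet      # or: sudo service kubelet stop
sudo rm -rf /k8s/*
sudo systemctl start kubelet
```

Deleting the node object is what clears the stuck OutOfDisk=Unknown condition; the restarted kubelet then re-registers with a fresh status.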
###Instruction: I've set up a Kubernetes cluster with three nodes. I see all my nodes' status as Ready, but the scheduler seems not to find one of them. How could this happen? [root@master1 app]# kubectl get nodes NAME LABELS STATUS AGE 172.16.0.44 kubernetes.io/hostname=172.16.0.44,pxc=node1 Ready 8d 172.16.0.45 kubernetes.io/hostname=172.16.0.45 Ready 8d 172.16.0.46 kubernetes.io/hostname=172.16.0.46 Ready 8d I use nodeSelector in my RC file like this: nodeSelector: pxc: node1 Describing the rc: Name: mongo-controller Namespace: kube-system Image(s): mongo Selector: k8s-app=mongo Labels: k8s-app=mongo Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Volumes: mongo-persistent-storage: Type: HostPath (bare host directory volume) Path: /k8s/mongodb Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 25m 25m 1 {replication-controller } SuccessfulCreate Created pod: mongo-controller-0wpwu The pod stays pending: [root@master1 app]# kubectl get pods mongo-controller-0wpwu --namespace=kube-system NAME READY STATUS RESTARTS AGE mongo-controller-0wpwu 0/1 Pending 0 27m Describing pod mongo-controller-0wpwu: [root@master1 app]# kubectl describe pod mongo-controller-0wpwu --namespace=kube-system Name: mongo-controller-0wpwu Namespace: kube-system Image(s): mongo Node: / Labels: k8s-app=mongo Status: Pending Reason: Message: IP: Replication Controllers: mongo-controller (1/1 replicas created) Containers: mongo: Container ID: Image: mongo Image ID: QoS Tier: cpu: BestEffort memory: BestEffort State: Waiting Ready: False Restart Count: 0 Environment Variables: Volumes: mongo-persistent-storage: Type: HostPath (bare host directory volume) Path: /k8s/mongodb default-token-7qjcu: Type: Secret (a secret that should populate this volume) SecretName: default-token-7qjcu Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ─────────────
────── ─────── 22m 37s 12 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.46): MatchNodeSelector fit failure on node (172.16.0.45): MatchNodeSelector 27m 9s 67 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.45): MatchNodeSelector fit failure on node (172.16.0.46): MatchNodeSelector Looking at the IP list in the events, 172.16.0.44 seems not to be seen by the scheduler. How could this happen? Describing the node 172.16.0.44: [root@master1 app]# kubectl describe nodes --namespace=kube-system Name: 172.16.0.44 Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1 CreationTimestamp: Wed, 30 Mar 2016 15:58:47 +0800 Phase: Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ──── ────── ───────────────── ────────────────── ────── ─────── Ready True Fri, 08 Apr 2016 12:18:01 +0800 Fri, 08 Apr 2016 11:18:52 +0800 KubeletReady kubelet is posting ready status OutOfDisk Unknown Wed, 30 Mar 2016 15:58:47 +0800 Thu, 07 Apr 2016 17:38:50 +0800 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: 172.16.0.44,172.16.0.44 Capacity: cpu: 2 memory: 7748948Ki pods: 40 System Info: Machine ID: 45461f76679f48ee96e95da6cc798cc8 System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461 Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965 Kernel Version: 3.10.0-123.9.3.el7.x86_64 OS Image: CentOS Linux 7 (Core) Container Runtime Version: docker://1.10.1 Kubelet Version: v1.2.0 Kube-Proxy Version: v1.2.0 ExternalID: 172.16.0.44 Non-terminated Pods: (1 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits ───────── ──── ──────────── ────────── ─────────────── ───────────── kube-system kube-registry-proxy-172.16.0.44 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) Allocated resources: (Total limits may be over 100%, i.e., overcommitted.
More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) CPU Requests CPU Limits Memory Requests Memory Limits ──────────── ────────── ─────────────── ───────────── 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 59m 59m 1 {kubelet 172.16.0.44} Starting Starting kubelet. SSH'ing into node 44, I see the disk space is free (I also removed some Docker images and containers): [root@iZ25dqhvvd0Z ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 40G 2.6G 35G 7% / devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.7G 0 3.7G 0% /dev/shm tmpfs 3.7G 143M 3.6G 4% /run tmpfs 3.7G 0 3.7G 0% /sys/fs/cgroup /dev/xvdb 40G 361M 37G 1% /k8s Still, the scheduler's docker logs (v1.3.0-alpha.1) show this E0408 05:28:42.679448 1 factory.go:387] Error scheduling kube-system mongo-controller-0wpwu: pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.45): MatchNodeSelector fit failure on node (172.16.0.46): MatchNodeSelector ; retrying I0408 05:28:42.679577 1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"mongo-controller-0wpwu", UID:"2d0f0844-fd3c-11e5-b531-00163e000727", APIVersion:"v1", ResourceVersion:"634139", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.45): MatchNodeSelector fit failure on node (172.16.0.46): MatchNodeSelector ### Response: Thanks for your reply, Robert. I got this resolved by doing the following: kubectl delete rc kubectl delete node 172.16.0.44 stop kubelet on 172.16.0.44 rm -rf /k8s/* restart kubelet Now the node is ready, and the out-of-disk condition is gone.
Name: 172.16.0.44 Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1 CreationTimestamp: Fri, 08 Apr 2016 15:14:51 +0800 Phase: Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ──── ────── ───────────────── ────────────────── ────── ─────── Ready True Fri, 08 Apr 2016 15:25:33 +0800 Fri, 08 Apr 2016 15:14:50 +0800 KubeletReady kubelet is posting ready status Addresses: 172.16.0.44,172.16.0.44 Capacity: cpu: 2 memory: 7748948Ki pods: 40 System Info: Machine ID: 45461f76679f48ee96e95da6cc798cc8 System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461 Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965 Kernel Version: 3.10.0-123.9.3.el7.x86_64 OS Image: CentOS Linux 7 (Core) I found this https://github.com/kubernetes/kubernetes/issues/4135, but I still don't know why my disk space is free while the kubelet thinks it is out of disk...
I've set up a Kubernetes cluster with three nodes. I see all my nodes' status as Ready, but the scheduler seems not to find one of them. How could this happen? [root@master1 app]# kubectl get nodes NAME LABELS STATUS AGE 172.16.0.44 kubernetes.io/hostname=172.16.0.44,pxc=node1 Ready 8d 172.16.0.45 kubernetes.io/hostname=172.16.0.45 Ready 8d 172.16.0.46 kubernetes.io/hostname=172.16.0.46 Ready 8d I use nodeSelector in my RC file like this: nodeSelector: pxc: node1 Describing the rc: Name: mongo-controller Namespace: kube-system Image(s): mongo Selector: k8s-app=mongo Labels: k8s-app=mongo Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Volumes: mongo-persistent-storage: Type: HostPath (bare host directory volume) Path: /k8s/mongodb Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 25m 25m 1 {replication-controller } SuccessfulCreate Created pod: mongo-controller-0wpwu The pod stays pending: [root@master1 app]# kubectl get pods mongo-controller-0wpwu --namespace=kube-system NAME READY STATUS RESTARTS AGE mongo-controller-0wpwu 0/1 Pending 0 27m Describing pod mongo-controller-0wpwu: [root@master1 app]# kubectl describe pod mongo-controller-0wpwu --namespace=kube-system Name: mongo-controller-0wpwu Namespace: kube-system Image(s): mongo Node: / Labels: k8s-app=mongo Status: Pending Reason: Message: IP: Replication Controllers: mongo-controller (1/1 replicas created) Containers: mongo: Container ID: Image: mongo Image ID: QoS Tier: cpu: BestEffort memory: BestEffort State: Waiting Ready: False Restart Count: 0 Environment Variables: Volumes: mongo-persistent-storage: Type: HostPath (bare host directory volume) Path: /k8s/mongodb default-token-7qjcu: Type: Secret (a secret that should populate this volume) SecretName: default-token-7qjcu Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ───────
22m 37s 12 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.46): MatchNodeSelector fit failure on node (172.16.0.45): MatchNodeSelector 27m 9s 67 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.45): MatchNodeSelector fit failure on node (172.16.0.46): MatchNodeSelector Looking at the IP list in the events, 172.16.0.44 seems not to be seen by the scheduler. How could this happen? Describing the node 172.16.0.44: [root@master1 app]# kubectl describe nodes --namespace=kube-system Name: 172.16.0.44 Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1 CreationTimestamp: Wed, 30 Mar 2016 15:58:47 +0800 Phase: Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ──── ────── ───────────────── ────────────────── ────── ─────── Ready True Fri, 08 Apr 2016 12:18:01 +0800 Fri, 08 Apr 2016 11:18:52 +0800 KubeletReady kubelet is posting ready status OutOfDisk Unknown Wed, 30 Mar 2016 15:58:47 +0800 Thu, 07 Apr 2016 17:38:50 +0800 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: 172.16.0.44,172.16.0.44 Capacity: cpu: 2 memory: 7748948Ki pods: 40 System Info: Machine ID: 45461f76679f48ee96e95da6cc798cc8 System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461 Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965 Kernel Version: 3.10.0-123.9.3.el7.x86_64 OS Image: CentOS Linux 7 (Core) Container Runtime Version: docker://1.10.1 Kubelet Version: v1.2.0 Kube-Proxy Version: v1.2.0 ExternalID: 172.16.0.44 Non-terminated Pods: (1 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits ───────── ──── ──────────── ────────── ─────────────── ───────────── kube-system kube-registry-proxy-172.16.0.44 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) Allocated resources: (Total limits may be over 100%, i.e., overcommitted.
More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) CPU Requests CPU Limits Memory Requests Memory Limits ──────────── ────────── ─────────────── ───────────── 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 59m 59m 1 {kubelet 172.16.0.44} Starting Starting kubelet. SSH'ing into node 44, I see the disk space is free (I also removed some Docker images and containers): [root@iZ25dqhvvd0Z ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 40G 2.6G 35G 7% / devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.7G 0 3.7G 0% /dev/shm tmpfs 3.7G 143M 3.6G 4% /run tmpfs 3.7G 0 3.7G 0% /sys/fs/cgroup /dev/xvdb 40G 361M 37G 1% /k8s Still, the scheduler's docker logs (v1.3.0-alpha.1) show this E0408 05:28:42.679448 1 factory.go:387] Error scheduling kube-system mongo-controller-0wpwu: pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.45): MatchNodeSelector fit failure on node (172.16.0.46): MatchNodeSelector ; retrying I0408 05:28:42.679577 1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"mongo-controller-0wpwu", UID:"2d0f0844-fd3c-11e5-b531-00163e000727", APIVersion:"v1", ResourceVersion:"634139", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' pod (mongo-controller-0wpwu) failed to fit in any node fit failure on node (172.16.0.45): MatchNodeSelector fit failure on node (172.16.0.46): MatchNodeSelector
The log you're showing is a harmless error. Basically it is just saying that the cache file is no longer in sync. INFO 17:00:52 Harmless error reading saved cache /cassandra_data/saved_caches/KeyCache-d.db
###Instruction: The Cassandra filesystem is on GlusterFS; after scaling the number of pods to zero and back up to 3, the data is not loading into Cassandra. Is there a way to recover it? INFO 17:00:52 reading saved cache /cassandra_data/saved_caches/KeyCache-d.db INFO 17:00:52 Harmless error reading saved cache /cassandra_data/saved_caches/KeyCache-d.db java.lang.RuntimeException: Cache schema version c2a2bb4f-7d31-3fb8-a216-00b41a643650 does not match current schema version 59adb24e-f3cd-3e02-97f0-5b395827453f at org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:198) ~[apache-cassandra-3.3.jar:3.3] at org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:157) [apache-cassandra-3.3.jar:3.3] at org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:153) [apache-cassandra-3.3.jar:3.3] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77] ### Response: The log you're showing is a harmless error. Basically it is just saying that the cache file is no longer in sync. INFO 17:00:52 Harmless error reading saved cache /cassandra_data/saved_caches/KeyCache-d.db
The Cassandra filesystem is on GlusterFS; after scaling the number of pods to zero and back up to 3, the data is not loading into Cassandra. Is there a way to recover it? INFO 17:00:52 reading saved cache /cassandra_data/saved_caches/KeyCache-d.db INFO 17:00:52 Harmless error reading saved cache /cassandra_data/saved_caches/KeyCache-d.db java.lang.RuntimeException: Cache schema version c2a2bb4f-7d31-3fb8-a216-00b41a643650 does not match current schema version 59adb24e-f3cd-3e02-97f0-5b395827453f at org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:198) ~[apache-cassandra-3.3.jar:3.3] at org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:157) [apache-cassandra-3.3.jar:3.3] at org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:153) [apache-cassandra-3.3.jar:3.3] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
Sometimes flannel will change its subnet configuration... you can tell this if the IP and MTU from cat /run/flannel/subnet.env don't match ps aux | grep docker (or cat /etc/default/docker)... in which case you will need to reconfigure Docker to use the new flannel config. First you have to delete the docker network interface sudo ip link set dev docker0 down sudo brctl delbr docker0 Next you have to reconfigure Docker to use the new flannel config. Note: sometimes this step has to be done manually (i.e. read the contents of /run/flannel/subnet.env and then alter /etc/default/docker) source /run/flannel/subnet.env echo DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\" > /etc/default/docker Finally, restart Docker sudo service docker restart
###Instruction: So I have a Kubernetes cluster, and I am using Flannel for an overlay network. It has been working fine (for almost a year actually) then I modified a service to have 2 ports and all of a sudden I get this about a completely different service, one that was working previously and I did not edit: <Timestamp> <host> flanneld[873]: I0407 18:36:51.705743 00873 vxlan.go:345] L3 miss: <Service's IP> <Timestamp> <host> flanneld[873]: I0407 18:36:51.705865 00873 vxlan.go:349] Route for <Service's IP> not found Is there a common cause to this? I am using Kubernetes 1.0.X and Flannel 0.5.5 and I should mention only one node is having this issue, the rest of the nodes are fine. The bad node's kube-proxy is also saying it can't find the service's endpoint. ### Response: Sometimes flannel will change its subnet configuration... you can tell this if the IP and MTU from cat /run/flannel/subnet.env don't match ps aux | grep docker (or cat /etc/default/docker)... in which case you will need to reconfigure Docker to use the new flannel config. First you have to delete the docker network interface sudo ip link set dev docker0 down sudo brctl delbr docker0 Next you have to reconfigure Docker to use the new flannel config. Note: sometimes this step has to be done manually (i.e. read the contents of /run/flannel/subnet.env and then alter /etc/default/docker) source /run/flannel/subnet.env echo DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\" > /etc/default/docker Finally, restart Docker sudo service docker restart
So I have a Kubernetes cluster, and I am using Flannel for an overlay network. It has been working fine (for almost a year actually) then I modified a service to have 2 ports and all of a sudden I get this about a completely different service, one that was working previously and I did not edit: <Timestamp> <host> flanneld[873]: I0407 18:36:51.705743 00873 vxlan.go:345] L3 miss: <Service's IP> <Timestamp> <host> flanneld[873]: I0407 18:36:51.705865 00873 vxlan.go:349] Route for <Service's IP> not found Is there a common cause to this? I am using Kubernetes 1.0.X and Flannel 0.5.5 and I should mention only one node is having this issue, the rest of the nodes are fine. The bad node's kube-proxy is also saying it can't find the service's endpoint.
Persistent Volumes have access semantics. On GCE I'm assuming you are using a Persistent Disk, which can either be mounted as writable by a single pod or by multiple pods as read-only. If you want multi-writer semantics, you need to set up NFS or some other storage that lets you write from multiple pods. In case you are interested in running NFS - https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/nfs/README.md FYI: we are still working on supporting auto-provisioning of PVs as you scale your deployment. As of now it is a manual process.
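A minimal sketch of the access-mode distinction described above (the volume name, size, and disk name are made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce   # a GCE PD can be writable by a single pod...
    - ReadOnlyMany    # ...or mounted read-only by many pods, but not ReadWriteMany
  gcePersistentDisk:
    pdName: es-disk
    fsType: ext4
```

An NFS-backed PV, by contrast, can declare ReadWriteMany, which is what lets a second Elasticsearch pod mount the same volume for writing.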
###Instruction: I've done quite a bit of research and have yet to find an answer to this. Here's what I'm trying to accomplish: I have an ELK stack container running in a pod on a k8s cluster in GCE - the cluster also contains a PersistentVolume (format: ext4) and a PersistentVolumeClaim. In order to scale the ELK stack to multiple pods/nodes and keep persistent data in ElasticSearch, I either need to have all pods write to the same PV (using the node/index structure of the ES file system), or have some volume logic to scale up/create these PVs/PVCs. Currently what happens is if I spin up a second pod on the replication controller, it can't mount the PV. So I'm wondering if I'm going about this the wrong way, and what is the best way to architect this solution to allow for persistent data in ES when my cluster/nodes autoscale. ### Response: Persistent Volumes have access semantics. On GCE I'm assuming you are using a Persistent Disk, which can either be mounted as writable by a single pod or by multiple pods as read-only. If you want multi-writer semantics, you need to set up NFS or some other storage that lets you write from multiple pods. In case you are interested in running NFS - https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/nfs/README.md FYI: we are still working on supporting auto-provisioning of PVs as you scale your deployment. As of now it is a manual process.
I've done quite a bit of research and have yet to find an answer to this. Here's what I'm trying to accomplish: I have an ELK stack container running in a pod on a k8s cluster in GCE - the cluster also contains a PersistentVolume (format: ext4) and a PersistentVolumeClaim. In order to scale the ELK stack to multiple pods/nodes and keep persistent data in ElasticSearch, I either need to have all pods write to the same PV (using the node/index structure of the ES file system), or have some volume logic to scale up/create these PVs/PVCs. Currently what happens is if I spin up a second pod on the replication controller, it can't mount the PV. So I'm wondering if I'm going about this the wrong way, and what is the best way to architect this solution to allow for persistent data in ES when my cluster/nodes autoscale.
As already mentioned in Robert's answer the Elasticsearch/Kibana stack needs to be added manually if the cluster is supposed to run on Google Container Engine (GKE). Using the information given in this post, I was able to get it to work performing the following steps: Start a GKE Cluster without cloud logging gcloud container --project <PROJECT_ID> clusters create <CLUSTER_ID> --no-enable-cloud-logging Add a configured fluentd container to each running node by using a kubernetes DaemonSet. kubectl create -f fluentd-es.yaml fluentd-es.yaml apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: fluentd-elasticsearch namespace: kube-system labels: app: fluentd-logging spec: template: metadata: labels: app: fluentd-es spec: containers: - name: fluentd-elasticsearch image: gcr.io/google_containers/fluentd-elasticsearch:1.15 resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi volumeMounts: - name: varlog mountPath: /var/log - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true volumes: - name: varlog hostPath: path: /var/log - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers Add elasticsearch and kibana pods and services. kubectl create -f es-controller.yaml kubectl create -f es-service.yaml kubectl create -f kibana-controller.yaml kubectl create -f kibana-service.yaml Note below that the kubernetes.io/cluster-service: "true" label (present in the original files) has been removed. Having this label in the definitions resulted in termination of the running pods. 
es-controller.yaml apiVersion: v1 kind: ReplicationController metadata: name: elasticsearch-logging-v1 namespace: kube-system labels: k8s-app: elasticsearch-logging version: v1 spec: replicas: 2 selector: k8s-app: elasticsearch-logging version: v1 template: metadata: labels: k8s-app: elasticsearch-logging version: v1 kubernetes.io/cluster-service: "true" spec: containers: - image: gcr.io/google_containers/elasticsearch:1.8 name: elasticsearch-logging resources: limits: cpu: 100m requests: cpu: 100m ports: - containerPort: 9200 name: db protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: es-persistent-storage mountPath: /data volumes: - name: es-persistent-storage emptyDir: {} es-service.yaml apiVersion: v1 kind: Service metadata: name: elasticsearch-logging namespace: kube-system labels: k8s-app: elasticsearch-logging kubernetes.io/name: "Elasticsearch" spec: ports: - port: 9200 protocol: TCP targetPort: db selector: k8s-app: elasticsearch-logging kibana-controller.yaml apiVersion: v1 kind: ReplicationController metadata: name: kibana-logging-v1 namespace: kube-system labels: k8s-app: kibana-logging version: v1 spec: replicas: 1 selector: k8s-app: kibana-logging version: v1 template: metadata: labels: k8s-app: kibana-logging version: v1 kubernetes.io/cluster-service: "true" spec: containers: - name: kibana-logging image: gcr.io/google_containers/kibana:1.3 resources: limits: cpu: 100m requests: cpu: 100m env: - name: "ELASTICSEARCH_URL" value: "http://elasticsearch-logging:9200" ports: - containerPort: 5601 name: ui protocol: TCP kibana-service.yaml apiVersion: v1 kind: Service metadata: name: kibana-logging namespace: kube-system labels: k8s-app: kibana-logging kubernetes.io/name: "Kibana" spec: ports: - port: 5601 protocol: TCP targetPort: ui selector: k8s-app: kibana-logging Create a kubectl proxy kubectl proxy Watch your logs with kibana at http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/
###Instruction: Sorry for the noob question but from https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/getting-started-guides/logging-elasticsearch.md it says: To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below: KUBE_LOGGING_DESTINATION=elasticsearch Where and how do I set this Env Var ? I was thinking that I should use gcloud container clusters create and pass the options there but there is no options... ### Response: As already mentioned in Robert's answer the Elasticsearch/Kibana stack needs to be added manually if the cluster is supposed to run on Google Container Engine (GKE). Using the information given in this post, I was able to get it to work performing the following steps: Start a GKE Cluster without cloud logging gcloud container --project <PROJECT_ID> clusters create <CLUSTER_ID> --no-enable-cloud-logging Add a configured fluentd container to each running node by using a kubernetes DaemonSet. kubectl create -f fluentd-es.yaml fluentd-es.yaml apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: fluentd-elasticsearch namespace: kube-system labels: app: fluentd-logging spec: template: metadata: labels: app: fluentd-es spec: containers: - name: fluentd-elasticsearch image: gcr.io/google_containers/fluentd-elasticsearch:1.15 resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi volumeMounts: - name: varlog mountPath: /var/log - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true volumes: - name: varlog hostPath: path: /var/log - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers Add elasticsearch and kibana pods and services. kubectl create -f es-controller.yaml kubectl create -f es-service.yaml kubectl create -f kibana-controller.yaml kubectl create -f kibana-service.yaml Note below that the kubernetes.io/cluster-service: "true" label (present in the original files) has been removed. 
Having this label in the definitions resulted in termination of the running pods. es-controller.yaml apiVersion: v1 kind: ReplicationController metadata: name: elasticsearch-logging-v1 namespace: kube-system labels: k8s-app: elasticsearch-logging version: v1 spec: replicas: 2 selector: k8s-app: elasticsearch-logging version: v1 template: metadata: labels: k8s-app: elasticsearch-logging version: v1 kubernetes.io/cluster-service: "true" spec: containers: - image: gcr.io/google_containers/elasticsearch:1.8 name: elasticsearch-logging resources: limits: cpu: 100m requests: cpu: 100m ports: - containerPort: 9200 name: db protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: es-persistent-storage mountPath: /data volumes: - name: es-persistent-storage emptyDir: {} es-service.yaml apiVersion: v1 kind: Service metadata: name: elasticsearch-logging namespace: kube-system labels: k8s-app: elasticsearch-logging kubernetes.io/name: "Elasticsearch" spec: ports: - port: 9200 protocol: TCP targetPort: db selector: k8s-app: elasticsearch-logging kibana-controller.yaml apiVersion: v1 kind: ReplicationController metadata: name: kibana-logging-v1 namespace: kube-system labels: k8s-app: kibana-logging version: v1 spec: replicas: 1 selector: k8s-app: kibana-logging version: v1 template: metadata: labels: k8s-app: kibana-logging version: v1 kubernetes.io/cluster-service: "true" spec: containers: - name: kibana-logging image: gcr.io/google_containers/kibana:1.3 resources: limits: cpu: 100m requests: cpu: 100m env: - name: "ELASTICSEARCH_URL" value: "http://elasticsearch-logging:9200" ports: - containerPort: 5601 name: ui protocol: TCP kibana-service.yaml apiVersion: v1 kind: Service metadata: name: kibana-logging namespace: kube-system labels: k8s-app: kibana-logging kubernetes.io/name: "Kibana" spec: ports: - port: 5601 protocol: TCP targetPort: ui selector: k8s-app: kibana-logging Create a kubectl proxy kubectl proxy Watch your logs with kibana at 
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/
Sorry for the noob question but from https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/getting-started-guides/logging-elasticsearch.md it says: To use Elasticsearch and Kibana for cluster logging you should set the following environment variable as shown below: KUBE_LOGGING_DESTINATION=elasticsearch Where and how do I set this env var? I was thinking that I should use gcloud container clusters create and pass the options there, but there are no such options...
I have been testing rolling updates of a service using both replication controllers and declarative Deployment objects. I found that with an rc there appears to be no downtime from a client perspective. But when the Deployment is doing a rolling update, the client gets some errors for a while until the update stabilizes. This is with Kubernetes 1.2.1.
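For what it's worth, the client-visible errors during a Deployment rollout described above can often be reduced with a readiness probe and a stricter rolling-update strategy; a sketch (the container name, port, and health path are assumptions):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a serving pod away before its replacement is ready
      maxSurge: 1
  template:
    spec:
      containers:
      - name: web
        readinessProbe:    # keeps the pod out of the service endpoints until it can serve
          httpGet:
            path: /healthz
            port: 8080
```

Without a readiness probe, new pods are added to the service endpoints as soon as they start, which is one plausible source of the transient errors observed.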
###Instruction: I have tried all the basics of Kubernetes, and if you want to update your application you can use kubectl rolling-update to update the pods one by one without downtime. Now, I have read the Kubernetes documentation again and I have found a new feature called Deployment in version v1beta1. I am confused since there is a line in the Deployment docs: Next time we want to update pods, we can just update the deployment again. Isn't this the role of rolling-update? Any inputs would be very useful. ### Response: I have been testing rolling updates of a service using both replication controllers and declarative Deployment objects. I found that with an rc there appears to be no downtime from a client perspective. But when the Deployment is doing a rolling update, the client gets some errors for a while until the update stabilizes. This is with Kubernetes 1.2.1.
I have tried all the basics of Kubernetes, and if you want to update your application you can use kubectl rolling-update to update the pods one by one without downtime. Now, I have read the Kubernetes documentation again and I have found a new feature called Deployment in version v1beta1. I am confused since there is a line in the Deployment docs: Next time we want to update pods, we can just update the deployment again. Isn't this the role of rolling-update? Any inputs would be very useful.
You need to set up multiple deployments and services. Take a look at this ZooKeeper example - https://gist.github.com/bprashanth/8160d0cf1469b4b125af95f697433934 You do not rely on node/machine IPs. Instead you rely on the stable DNS names of multiple services.
###Instruction: I want to create a MongoDB replica set across three machines; do I need to specify the IP addresses of these machines? But they run in pods and have dynamic IPs. If I try to specify the DNS name of the MongoDB service, it says No host described in new configuration XXXXX for replica set app_replica maps to this node How do I configure a MongoDB replica set on k8s? I use the DNS add-on for k8s. And I try to initialize the cluster as follows: var config = { "_id" : "app_replica", "members" : [ { "_id" : 0, "host" : "mongodb-node-01" }, { "_id" : 1, "host" : "mongodb-node-02" }, { "_id" : 2, "host" : "mongodb-node-03", "arbiterOnly" : true } ] } rs.initiate(config) Config Service: apiVersion: v1 kind: Service metadata: name: "mongodb-node-01" labels: app: "mongodb-node-01" spec: ports: - port: 27017 targetPort: 27001 selector: app: "mongodb-node-01" Config Replication Controller: apiVersion: v1 kind: ReplicationController metadata: name: "mongodb-node-01" labels: app: "mongodb-node-01" spec: replicas: 1 selector: app: "mongodb-node-01" template: metadata: labels: app: "mongodb-node-01" spec: containers: - name: "mongodb-node-01" image: 192.168.0.139:5000/db/mongo command: - mongod - "--replSet" - "app_replica" - "--smallfiles" - "--noprealloc" env: - name: ENV value: "prod" ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage mountPath: /data/db readOnly: false volumes: - name: mongo-persistent-storage hostPath: path: /data/mongo/mongodb-node-01 nodeSelector: database: "true" mongodb01: "true" ### Response: You need to set up multiple deployments and services. Take a look at this ZooKeeper example - https://gist.github.com/bprashanth/8160d0cf1469b4b125af95f697433934 You do not rely on node/machine IPs. Instead you rely on the stable DNS names of multiple services.
###Instruction: (I have looked at some other threads, but apparently privileged mode is now supported in the latest code, so I am wondering if I have hit a bug.)

I have two physical servers, both running Linux (Ubuntu) with the latest Kubernetes code from GitHub as of yesterday. I am running docs/getting-started-guides/docker-multinode/master.sh (and worker.sh).

On the master node:

$ kubectl create -f examples/nfs/nfs-server-rc.yaml
The ReplicationController "nfs-server" is invalid.
spec.template.spec.containers[0].securityContext.privileged: forbidden '<*>(0xc208389770)true'

Question: Is this supported? Or am I doing something wrong? Or is this a bug?

The master.sh code already has the --allow-privileged=true option. The following options were also set, without great conviction, just because I saw them discussed elsewhere:

/etc/default/kubelet:        KUBELET_OPTS="--allow_privileged=true"
/etc/default/kube-apiserver: KUBE_APISERVER_OPTS="--allow_privileged=true"

Master configuration:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.5.833+2e5da8b881e2f5", GitCommit:"2e5da8b881e2f5b6dfb66653acf4aaa1ca1f398e", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}

$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64
Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

### Response: Since Kubernetes v1.1, any container in a pod can enable privileged mode using the privileged flag on the SecurityContext of the container spec. To enable privileged mode, nest privileged: true inside the securityContext declaration of the container spec:

"securityContext": {
    "privileged": true
}

And, as Janet said, set --allow-privileged=true for both kubelet and kube-apiserver and restart them:

sudo /etc/init.d/kubelet restart
sudo /etc/init.d/kube-apiserver restart

Then validate that the flags have changed using the ps -ef | grep kube command.
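Put together as a manifest, the container-level flag looks like the sketch below (the pod name, image, and command are illustrative, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo          # illustrative name
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        privileged: true         # rejected with "privileged: forbidden" unless both
                                 # kubelet and kube-apiserver run --allow-privileged=true
```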
###Instruction: I am evaluating Kubernetes as a platform for our new application. For now, it all looks very exciting! However, I'm running into a problem: I'm hosting my cluster on GCE and I need some mechanism to share storage between two pods - the continuous integration server and my application server. What's the best way of doing this with Kubernetes? None of the volume types seems to fit my needs, since GCE disks can't be shared if one pod needs to write to the disk. NFS would be perfect, but seems to require special build options for the Kubernetes cluster?

EDIT: Sharing storage seems to be a problem that I have encountered multiple times now using Kubernetes. There are multiple use cases where I'd just like to have one volume and hook it up to multiple pods (with write access). I can only assume that this would be a common use case, no?

EDIT2: For example, this page describes how to set up an Elasticsearch cluster, but wiring it up with persistent storage is impossible (as described here), which kind of renders it pointless :(

### Response: Firstly, do you really need multiple readers/writers?

In my experience of Kubernetes / micro-service architecture (MSA), the issue is often more related to your design pattern. One of the fundamental design patterns with MSA is the proper encapsulation of services, and this includes the data owned by each service. In much the same way as OOP, your service should look after the data that is related to its area of concern, and should allow access to this data to other services via an interface. This interface could be an API, messages handled directly or via a broker service, or protocol buffers and gRPC. Generally, multi-service access to data is an anti-pattern, akin to global variables in OOP and most programming languages.

As an example, if you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you changed your log directory structure, or decided to add extra functionality like sending emails on certain types of errors.

In the vast majority of cases, you should be using some form of minimal interface before resorting to a file system, avoiding the unintended side effects of Hyrum's law that you are exposed to when using a file system. Without proper interfaces/contracts between your services, you heavily reduce your ability to build maintainable and resilient services.

OK, so your situation is best solved using a file system. There are a number of options...

There are obviously times when a file system that can handle multiple concurrent writers provides a superior solution over more 'traditional' MSA forms of communication. Kubernetes supports a large number of volume types, which can be found here. While this list is quite long, many of these volume types don't support multiple writers (also known as ReadWriteMany in Kubernetes). Those volume types that do support ReadWriteMany can be found in this table; at the time of writing these are AzureFile, CephFS, Glusterfs, Quobyte, NFS and PortworxVolume. There are also operators, such as the popular rook.io, which are powerful and provide some great features, but the learning curve for such systems can be a difficult climb when you just want a simple solution and to keep moving forward.

The simplest approach

In my experience, the best initial option is NFS. It's a great way to learn the basic ideas around ReadWriteMany Kubernetes storage, will serve most use cases, and is the easiest to implement. After you've built a working knowledge of multi-service persistence, you can make more informed decisions about more feature-rich offerings, which will often require more work to implement.

The specifics of setting up NFS differ based on how and where your cluster is running and the specifics of your NFS service. I've previously written two articles on how to set up NFS for on-prem clusters and on using AWS's NFS equivalent, EFS, on EKS clusters. These two articles give a good contrast for just how different implementations can be given your particular situation.

For a bare-minimum example, you will first need an NFS service. If you're looking to do a quick test, or you have low SLO requirements, this DO article is a great quick primer for setting up NFS on Ubuntu. If you have an existing NAS which provides NFS and is accessible from your cluster, this will work as well.

Once you have an NFS service, you can create a persistent volume similar to the following:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    server: 255.0.255.0 # IP address of your NFS service
    path: "/desired/path/in/nfs"

A caveat here is that your nodes will need binaries installed to use NFS; I've discussed this more in my on-prem cluster article. This is also the reason you need to use EFS when running on EKS, as your nodes don't have the ability to connect to NFS.

Once you have the persistent volume set up, it is a simple case of using it like you would any other volume:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: p-name
          volumeMounts:
            - mountPath: /data
              name: v-name
      volumes:
        - name: v-name
          persistentVolumeClaim:
            claimName: pvc-name
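Because the claim's access mode is ReadWriteMany, several deployments can mount the same claimName read-write at once — which is exactly the multi-pod sharing the question asks for. A sketch of a second, independent writer (the deployment name, image, and command are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second-writer            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: second-writer
  template:
    metadata:
      labels:
        app: second-writer
    spec:
      containers:
        - name: writer
          image: busybox
          # Append to a file on the shared volume every few seconds.
          command: ["sh", "-c", "while true; do date >> /shared/log.txt; sleep 5; done"]
          volumeMounts:
            - mountPath: /shared
              name: shared-data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: pvc-name  # the same claim used by the first deployment
```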
###Instruction: I currently have a cluster running on GCloud which I created with 3 nodes. This is what I get when I run kubectl describe nodes:

Name: node1
Capacity:
 cpu:     1
 memory:  3800808Ki
 pods:    40
Non-terminated Pods: (3 in total)
  Namespace    Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ─────────    ────                                                        ────────────  ──────────  ───────────────  ─────────────
  default      my-pod1                                                     100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default      my-pod2                                                     100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-75fn  100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ────────────  ──────────  ───────────────  ─────────────
  300m (30%)    100m (10%)  200Mi (5%)       200Mi (5%)

Name: node2
Capacity:
 cpu:     1
 memory:  3800808Ki
 pods:    40
Non-terminated Pods: (4 in total)
  Namespace    Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ─────────    ────                                                        ────────────  ──────────  ───────────────  ─────────────
  default      my-pod3                                                     100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-wcle  100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
  kube-system  heapster-v11-yi2nw                                          100m (10%)    100m (10%)  236Mi (6%)       236Mi (6%)
  kube-system  kube-ui-v4-5nh36                                            100m (10%)    100m (10%)  50Mi (1%)        50Mi (1%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ────────────  ──────────  ───────────────  ─────────────
  400m (40%)    300m (30%)  486Mi (13%)      486Mi (13%)

Name: node3
Capacity:
 cpu:     1
 memory:  3800808Ki
 pods:    40
Non-terminated Pods: (3 in total)
  Namespace    Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ─────────    ────                                                        ────────────  ──────────  ───────────────  ─────────────
  kube-system  fluentd-cloud-logging-gke-little-people-e39a45a8-node-xhdy  100m (10%)    100m (10%)  200Mi (5%)       200Mi (5%)
  kube-system  kube-dns-v9-bo86j                                           310m (31%)    310m (31%)  170Mi (4%)       170Mi (4%)
  kube-system  l7-lb-controller-v0.5.2-ae0t2                               110m (11%)    110m (11%)  70Mi (1%)        120Mi (3%)
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ────────────  ──────────  ───────────────  ─────────────
  520m (52%)    520m (52%)  440Mi (11%)      490Mi (13%)

Now, as you can see, I have 3 pods of my own: 2 on node1 and 1 on node2. What I would like to do is move all pods to node1 and delete the other two nodes. However, there are also pods belonging to the kube-system namespace, and I don't know what effect deleting them might have. I can tell that the pods named fluentd-cloud-logging... and heapster... are used for logging and compute-resource monitoring, but I don't know whether I can move the pods kube-dns-v9-bo86j and l7-lb-controller-v0.5.2-ae0t2 to another node without repercussions. Can anyone offer some insight on how I should proceed? Thank you very much.

### Response: Killing them so that they'll be rescheduled on another node is perfectly fine. They can all be rescheduled other than the fluentd pods, which are bound one to each node.
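Concretely, that can look like the following command sequence — a sketch that assumes a live cluster, that the pods are managed by replication controllers (so deleted pods get recreated), and that uses the node and pod names from the output above:

```
# Stop the scheduler placing anything new on the nodes being removed.
kubectl patch node node2 -p '{"spec":{"unschedulable":true}}'
kubectl patch node node3 -p '{"spec":{"unschedulable":true}}'

# Delete the movable pods; their replication controllers recreate them,
# and the scheduler can now only place them on node1.
kubectl delete pod my-pod3
kubectl delete pod heapster-v11-yi2nw --namespace=kube-system
kubectl delete pod kube-ui-v4-5nh36 --namespace=kube-system
kubectl delete pod kube-dns-v9-bo86j --namespace=kube-system
kubectl delete pod l7-lb-controller-v0.5.2-ae0t2 --namespace=kube-system

# The fluentd pods are node-bound and simply disappear with their nodes.
kubectl get pods --all-namespaces -o wide   # verify everything landed on node1
```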
###Instruction: We have a requirement that no requests receive 404s during a rolling deploy. Currently we achieve this by deploying the new assets container to all servers before continuing with a rolling deploy. With nginx's try_files, this ensures that as the new code is being rolled out we can serve both the old and new versions of assets. Does Kubernetes have any features to support this type of workflow?

### Response: You can use either the Deployment API (for Kubernetes >= v1.2) or kubectl rolling-update (for < v1.2) to manage the rolling deploy of your Kubernetes Pods (each Pod is a co-located group of containers and volumes). You'll also need to create Services for accessing those Pods (Services redirect traffic to Pods). During the rolling deploy, a user will be redirected to a Pod running either the old or the new version of the assets container.
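For the >= v1.2 path, a minimal Deployment sketch with a rolling-update strategy that never drops below full capacity (so the Service always has a ready backend and nothing 404s) might look like this — the names and image are illustrative:

```yaml
apiVersion: extensions/v1beta1   # the Deployment API group as of v1.2
kind: Deployment
metadata:
  name: assets
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # keep all old Pods serving until replacements are ready
      maxSurge: 1                # bring up one extra Pod at a time
  template:
    metadata:
      labels:
        app: assets
    spec:
      containers:
        - name: assets
          image: example/assets:v2   # illustrative image
          ports:
            - containerPort: 80
```

Updating the image in this spec then triggers the rolling deploy, while a Service selecting app: assets keeps routing to whichever Pods are ready.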
###Instruction: I am trying to add swap space on a Kubernetes node to protect it from out-of-memory issues. Is it possible to add swap space on a node (previously known as a minion)? If possible, what procedure should I follow, and how does it affect pod acceptance?

### Response: Kubernetes doesn't support container memory swap. Even if you add swap space, the kubelet will create containers with --memory-swappiness=0 (when using Docker). There have been discussions about adding support, but the proposal was not approved: https://github.com/kubernetes/kubernetes/issues/7294
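Since swap is off the table, the usual way to keep a node from running out of memory is to cap container memory explicitly so the scheduler can account for it and the kernel kills only the offending container. A hedged sketch (pod name, image, and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-memory-demo      # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          memory: "256Mi"        # what the scheduler reserves on the node
        limits:
          memory: "512Mi"        # above this the container is OOM-killed,
                                 # rather than destabilizing the whole node
```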
###Instruction: The closest tutorial I can find for getting an SSL-terminating Ingress and an nginx-based controller running on bare metal (Digital Ocean, for example) is this: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx but it leaves so many assumptions unexplained.

My ingress requirements are simply:

- default backend at port 80 for all hosts that:
  - allows file access to location ^~ /.well-known/acme-challenge/, so my LetsEncrypt cert renewals work
  - 404 on location /.well-known/acme-challenge/
  - 301 on location /
- subdomain-based routing to different backend services on port 443
- each subdomain points to a different SSL key/cert (generated by my LetsEncrypt, and stored in K8S as a secret I suppose??)

What I think I need is this:

- full documentation on writing Ingress rules
- can I configure SSL certs (on port 443) for each backend individually?
- is / the "path" that's a catch-all for a host?
- updating Ingress rules in place
- what nginx controller do I use? nginx? nginx-alpha? the nginx-ingress docker container -- and where is the documentation for each of these controllers?
- is there a base controller image where I can override the nginx.conf template that gets populated by Ingress changes from the API server?
- how do you store SSL keys and certs as secrets?

### Response: boo, my answers apply to https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx

"404 on location /.well-known/acme-challenge/" - this is not possible using Ingress rules.

"301 on location /" - this is already supported: if the server has an SSL certificate, it will redirect to https automatically.

"subdomain-based routing... each subdomain points to a different SSL key/cert" - you need to create multiple Ingress rules, one per subdomain. Each rule can use a different secret name (this will create multiple servers, one per subdomain).

"full documentation on writing Ingress rules" - http://kubernetes.io/docs/user-guide/ingress/ (I don't know if there's additional information besides the Go code).

"can I configure SSL certs for each backend individually? is / the 'path' that's a catch-all for a host?" - yes.

"what nginx controller do I use?" - this depends on what you need. If you want to build your own custom Ingress controller, you can use nginx-alpha as a reference. If nginx-ingress is not clear in the examples, please open an issue and mention what could be improved or what's missing.

"is there a base controller image that I can override the nginx.conf template?" - no. The reason for this is that the template is tied to the Go code that populates it. That said, you can build a custom image changing the template, but this requires deploying the image to test the changes.

"how do you store SSL keys and certs as secrets?" - yes, as secrets, like this: http://kubernetes.io/docs/user-guide/ingress/#tls

For the LetsEncrypt support, please check this comment: https://github.com/kubernetes/kubernetes/issues/19899#issuecomment-184059009 Here is a complete example: https://gist.github.com/aledbf/d88c7f7d0b8d4d032035b14ab0965e26 (added to the examples in #766).
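To make the "one Ingress rule per subdomain, each with its own secret" advice concrete, here is a hedged sketch — hostnames, secret name, and service name are all illustrative, and the secret is assumed to contain tls.crt and tls.key as described in the #tls doc linked above:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-subdomain            # create one Ingress object like this per subdomain
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls   # a distinct TLS secret per subdomain
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /              # catch-all path for this host
            backend:
              serviceName: app-service
              servicePort: 443
```

The controller then generates one nginx server block per subdomain, each terminating TLS with its own certificate.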
The closest tutorial I can find in getting an SSL terminating Ingress and an nginx based controller running on bare metal (Digital Ocean, for example) is this: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx but it leaves so many assumptions unexplained. My ingress requirements are simply: default backend at port 80 for all hosts that: file access to location ^~ /.well-known/acme-challenge/ which allows my LetsEncrypt cert renewals to work 404 on location /.well-known/acme-challenge/ 301 on location / subdomain based routing to different backend services on port 443 each subdomain points to a different SSL key/cert (generated by my LetsEncrypt, and stored in K8S as a secret I suppose??) What I think I need is this: full documentation on writing Ingress rules can I configure SSL certs (on port 443) for each backend individually? is / the "path" that's a catchall for a host? updating Ingress rules in place what nginx controller do I use? nginx? nginx-alpha? nginx-ingress docker container -- and where is the documentation for each of these controllers? is there a base controller image that I can override the nginx.conf template that gets populated by Ingress changes from the API server? how do you store SSL keys and certs as secrets?
Just to clarify a bit on what Robert wrote about Kubernetes. If you want to have up to 5 machines for running your applications, even a 1-core virtual machine (n1-standard-1 on GCE) should be enough. You can handle a 10-node cluster with a 2-core virtual machine, as Robert said. For official recommendations please take a look at: https://kubernetes.io/docs/setup/best-practices/cluster-large/ However, note that resource usage of our master components is more related to the number of pods (containers) you want to run on your cluster. If you want to have, say, a single-digit number of them, even n1-standard-1 on GCE should be enough for a 10-node cluster. But it's definitely safer to use n1-standard-2 in case of <=10 node clusters. As for HA, I agree with Robert that having 3 master VMs is better than 2. Etcd (which is our backing storage) requires more than half of all registered replicas to be up to work correctly, so in the case of 2 instances, both of them need to be up (which is generally not your goal). If you have 3 instances, one of them can be down. Let me know if you have more questions about Kubernetes.
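The etcd majority rule can be sketched with a few lines of shell arithmetic (purely illustrative):

```shell
# etcd needs a strict majority of n members up to accept writes.
# quorum = n/2 + 1 (integer division); tolerated failures = n - quorum.
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

For n=2 this prints tolerated_failures=0 — both instances must stay up — while n=3 tolerates one failure, which is exactly the argument for 3 masters over 2.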
###Instruction: We are planning to build a small docker cluster for our application services. We considered using 2 master VMs for HA, 1 consul (if we choose Swarm) and 5-10 hosts for containers. We have not yet decided what to use - Docker Swarm or Kubernetes. So the question is what "hardware" requirements (CPU cores, RAM) the managers, both Swarm and Kubernetes, need to orchestrate this small cluster. ### Response: Just to clarify a bit on what Robert wrote about Kubernetes. If you want to have up to 5 machines for running your applications, even a 1-core virtual machine (n1-standard-1 on GCE) should be enough. You can handle a 10-node cluster with a 2-core virtual machine, as Robert said. For official recommendations please take a look at: https://kubernetes.io/docs/setup/best-practices/cluster-large/ However, note that resource usage of our master components is more related to the number of pods (containers) you want to run on your cluster. If you want to have, say, a single-digit number of them, even n1-standard-1 on GCE should be enough for a 10-node cluster. But it's definitely safer to use n1-standard-2 in case of <=10 node clusters. As for HA, I agree with Robert that having 3 master VMs is better than 2. Etcd (which is our backing storage) requires more than half of all registered replicas to be up to work correctly, so in the case of 2 instances, both of them need to be up (which is generally not your goal). If you have 3 instances, one of them can be down. Let me know if you have more questions about Kubernetes.
We are planning to build a small docker cluster for our application services. We considered using 2 master VMs for HA, 1 consul (if we choose Swarm) and 5-10 hosts for containers. We have not yet decided what to use - Docker Swarm or Kubernetes. So the question is what "hardware" requirements (CPU cores, RAM) the managers, both Swarm and Kubernetes, need to orchestrate this small cluster.
Logs showed Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002 So it failed mounting on localhost. To create my persistent volume I executed this YAML: { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "registry-volume" }, "spec": { "capacity": { "storage": "20Gi" }, "accessModes": [ "ReadWriteMany" ], "nfs": { "path": "/home/data/pv0002", "server": "localhost" } } } So I was mounting to /home/data/pv0002 but this path was not on the localhost but on my master server (which is ose3-master.example.com). So I had created my PV the wrong way. { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "registry-volume" }, "spec": { "capacity": { "storage": "20Gi" }, "accessModes": [ "ReadWriteMany" ], "nfs": { "path": "/home/data/pv0002", "server": "ose3-master.example.com" } } } This was also in a training environment. It's recommended to have an NFS server outside of your cluster to mount to.
###Instruction: EDITED: I've an OpenShift cluster with one master and two nodes. I've installed NFS on the master and NFS client on the nodes. I've followed the wordpress example with NFS: https://github.com/openshift/origin/tree/master/examples/wordpress I did the following on my master as oc login -u system:admin: mkdir /home/data/pv0001 mkdir /home/data/pv0002 chown -R nfsnobody:nfsnobody /home/data chmod -R 777 /home/data/ # Add to /etc/exports /home/data/pv0001 *(rw,sync,no_root_squash) /home/data/pv0002 *(rw,sync,no_root_squash) # Enable the new exports without bouncing the NFS service exportfs -a So exportfs shows: /home/data/pv0001 <world> /home/data/pv0002 <world> $ setsebool -P virt_use_nfs 1 # Create the persistent volumes for NFS. # I did not change anything in the yaml-files $ oc create -f examples/wordpress/nfs/pv-1.yaml $ oc create -f examples/wordpress/nfs/pv-2.yaml $ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON pv0001 <none> 1073741824 RWO,RWX Available pv0002 <none> 5368709120 RWO Available This is also what I get. Then I'm going to my node: oc login test-admin And I create a wordpress project: oc new-project wordpress # Create claims for storage in my project (same namespace). # The claims in this example carefully match the volumes created above. $ oc create -f examples/wordpress/pvc-wp.yaml $ oc create -f examples/wordpress/pvc-mysql.yaml $ oc get pvc NAME LABELS STATUS VOLUME claim-mysql map[] Bound pv0002 claim-wp map[] Bound pv0001 This looks exactly the same for me. Launch the MySQL pod. oc create -f examples/wordpress/pod-mysql.yaml oc create -f examples/wordpress/service-mysql.yaml oc create -f examples/wordpress/pod-wordpress.yaml oc create -f examples/wordpress/service-wp.yaml oc get svc NAME LABELS SELECTOR IP(S) PORT(S) mysql name=mysql name=mysql 172.30.115.137 3306/TCP wpfrontend name=wpfrontend name=wordpress 172.30.170.55 5055/TCP So actually everything seemed to work! 
But when I'm asking for my pod status I get the following: [root@ip-10-0-0-104 pv0002]# oc get pod NAME READY STATUS RESTARTS AGE mysql 0/1 Image: openshift/mysql-55-centos7 is ready, container is creating 0 6h wordpress 0/1 Image: wordpress is not ready on the node 0 6h The pods are in pending state and in the webconsole they're giving the following error: 12:12:51 PM mysql Pod failedMount Unable to mount volumes for pod "mysql_wordpress": exit status 32 (607 times in the last hour, 41 minutes) 12:12:51 PM mysql Pod failedSync Error syncing pod, skipping: exit status 32 (607 times in the last hour, 41 minutes) 12:12:48 PM wordpress Pod failedMount Unable to mount volumes for pod "wordpress_wordpress": exit status 32 (604 times in the last hour, 40 minutes) 12:12:48 PM wordpress Pod failedSync Error syncing pod, skipping: exit status 32 (604 times in the last hour, 40 minutes) Unable to mount +timeout. But when I'm going to my node and I'm doing the following (test is a created directory on my node): mount -t nfs -v masterhostname:/home/data/pv0002 /test And when I place some file in my /test on my node, then it appears in my /home/data/pv0002 on my master, so that seems to work. What's the reason that it's unable to mount in OpenShift? I've been stuck on this for a while. 
LOGS: Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.795267904Z" level=info msg="GET /containers/json" Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832179 1148 mount_linux.go:103] Mount failed: exit status 32 Oct 21 10:44:52 ip-10-0-0-129 origin-node: Mounting arguments: localhost:/home/data/pv0002 /var/lib/origin/openshift.local.volumes/pods/2bf19fe9-77ce-11e5-9122-02463424c049/volumes/kubernetes.io~nfs/pv0002 nfs [] Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002 Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832279 1148 kubelet.go:1206] Unable to mount volumes for pod "mysql_wordpress": exit status 32; skipping pod Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.832794476Z" level=info msg="GET /containers/json?all=1" Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.835916304Z" level=info msg="GET /images/openshift/mysql-55-centos7/json" Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.837085 1148 pod_workers.go:111] Error syncing pod 2bf19fe9-77ce-11e5-9122-02463424c049, skipping: exit status 32 ### Response: Logs showed Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002 So it failed mounting on localhost. To create my persistent volume I executed this YAML: { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "registry-volume" }, "spec": { "capacity": { "storage": "20Gi" }, "accessModes": [ "ReadWriteMany" ], "nfs": { "path": "/home/data/pv0002", "server": "localhost" } } } So I was mounting to /home/data/pv0002 but this path was not on the localhost but on my master server (which is ose3-master.example.com). So I had created my PV the wrong way. 
{ "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "registry-volume" }, "spec": { "capacity": { "storage": "20Gi" }, "accessModes": [ "ReadWriteMany" ], "nfs": { "path": "/home/data/pv0002", "server": "ose3-master.example.com" } } } This was also in a training environment. It's recommended to have a NFS server outside of your cluster to mount to.
EDITED: I've an OpenShift cluster with one master and two nodes. I've installed NFS on the master and NFS client on the nodes. I've followed the wordpress example with NFS: https://github.com/openshift/origin/tree/master/examples/wordpress I did the following on my master as oc login -u system:admin: mkdir /home/data/pv0001 mkdir /home/data/pv0002 chown -R nfsnobody:nfsnobody /home/data chmod -R 777 /home/data/ # Add to /etc/exports /home/data/pv0001 *(rw,sync,no_root_squash) /home/data/pv0002 *(rw,sync,no_root_squash) # Enable the new exports without bouncing the NFS service exportfs -a So exportfs shows: /home/data/pv0001 <world> /home/data/pv0002 <world> $ setsebool -P virt_use_nfs 1 # Create the persistent volumes for NFS. # I did not change anything in the yaml-files $ oc create -f examples/wordpress/nfs/pv-1.yaml $ oc create -f examples/wordpress/nfs/pv-2.yaml $ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON pv0001 <none> 1073741824 RWO,RWX Available pv0002 <none> 5368709120 RWO Available This is also what I get. Then I'm going to my node: oc login test-admin And I create a wordpress project: oc new-project wordpress # Create claims for storage in my project (same namespace). # The claims in this example carefully match the volumes created above. $ oc create -f examples/wordpress/pvc-wp.yaml $ oc create -f examples/wordpress/pvc-mysql.yaml $ oc get pvc NAME LABELS STATUS VOLUME claim-mysql map[] Bound pv0002 claim-wp map[] Bound pv0001 This looks exactly the same for me. Launch the MySQL pod. oc create -f examples/wordpress/pod-mysql.yaml oc create -f examples/wordpress/service-mysql.yaml oc create -f examples/wordpress/pod-wordpress.yaml oc create -f examples/wordpress/service-wp.yaml oc get svc NAME LABELS SELECTOR IP(S) PORT(S) mysql name=mysql name=mysql 172.30.115.137 3306/TCP wpfrontend name=wpfrontend name=wordpress 172.30.170.55 5055/TCP So actually everything seemed to work! 
But when I'm asking for my pod status I get the following: [root@ip-10-0-0-104 pv0002]# oc get pod NAME READY STATUS RESTARTS AGE mysql 0/1 Image: openshift/mysql-55-centos7 is ready, container is creating 0 6h wordpress 0/1 Image: wordpress is not ready on the node 0 6h The pods are in pending state and in the webconsole they're giving the following error: 12:12:51 PM mysql Pod failedMount Unable to mount volumes for pod "mysql_wordpress": exit status 32 (607 times in the last hour, 41 minutes) 12:12:51 PM mysql Pod failedSync Error syncing pod, skipping: exit status 32 (607 times in the last hour, 41 minutes) 12:12:48 PM wordpress Pod failedMount Unable to mount volumes for pod "wordpress_wordpress": exit status 32 (604 times in the last hour, 40 minutes) 12:12:48 PM wordpress Pod failedSync Error syncing pod, skipping: exit status 32 (604 times in the last hour, 40 minutes) Unable to mount +timeout. But when I'm going to my node and I'm doing the following (test is a created directory on my node): mount -t nfs -v masterhostname:/home/data/pv0002 /test And when I place some file in my /test on my node, then it appears in my /home/data/pv0002 on my master, so that seems to work. What's the reason that it's unable to mount in OpenShift? I've been stuck on this for a while. 
LOGS: Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.795267904Z" level=info msg="GET /containers/json" Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832179 1148 mount_linux.go:103] Mount failed: exit status 32 Oct 21 10:44:52 ip-10-0-0-129 origin-node: Mounting arguments: localhost:/home/data/pv0002 /var/lib/origin/openshift.local.volumes/pods/2bf19fe9-77ce-11e5-9122-02463424c049/volumes/kubernetes.io~nfs/pv0002 nfs [] Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002 Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832279 1148 kubelet.go:1206] Unable to mount volumes for pod "mysql_wordpress": exit status 32; skipping pod Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.832794476Z" level=info msg="GET /containers/json?all=1" Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.835916304Z" level=info msg="GET /images/openshift/mysql-55-centos7/json" Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.837085 1148 pod_workers.go:111] Error syncing pod 2bf19fe9-77ce-11e5-9122-02463424c049, skipping: exit status 32
The services you show running do not include the apiserver. For a quick breakdown I can explain what each service does that you show running. Weave: This is a software overlay network and assigns IP addresses to your pods. kube-proxy: This runs on your worker nodes, allowing pods to reach exposed services and routing traffic between them. kubectl create: Kubectl is actually the management CLI tool, but in this case using -f /etc/kubernetes/addons/; sleep 2 is watching the /etc/kubernetes/addons/ folder and automatically creating any objects (pods, replication controllers, services, etc.) that are put in that folder. kube-scheduler: Responsible for scheduling pods onto nodes. Uses policies and rules. kube-controller-manager: Manages the state of the cluster by always making sure the current state and desired state are the same. This includes starting/stopping pods and creating objects (services, replication controllers, etc.) that do not yet exist, or killing them if they shouldn't exist. All of these services interact with the kube-apiserver, which should be a separate service that coordinates all of the information these other services use. You'll need the apiserver running in order for all of the other components to do their jobs. I won't go into the details of getting it running in your environment, but from the comments on your original thread it looks like you found some missing documentation to get it running.
###Instruction: Tried installing kubernetes v1.2.0 on azure environment but after installation cannot access kube apis at port 8080. The following services are running: root 1473 0.2 0.5 536192 42812 ? Ssl 09:22 0:00 /home/weave/weaver --port 6783 --name 22:95:7a:6e:30:ed --nickname kube-00 --datapath datapath --ipalloc-range 10.32.0.0/12 --dns-effective-listen-address 172.17.42.1 --dns-listen-address 172.17.42.1:53 --http-addr 127.0.0.1:6784 root 1904 0.1 0.2 30320 20112 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-proxy --master=http://kube-00:8080 --logtostderr=true root 1907 0.0 0.0 14016 2968 ? Ss 09:22 0:00 /bin/bash -c until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done root 1914 0.2 0.3 35888 22212 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080 root 3129 2.2 0.3 42488 25192 ? Ssl 09:27 0:00 /opt/kubernetes/server/bin/kube-controller-manager --master=127.0.0.1:8080 --logtostderr=true curl -v http://localhost:8080 returns error Rebuilt URL to: http://localhost:8080/ Trying 127.0.0.1... connect to 127.0.0.1 port 8080 failed: Connection refused Failed to connect to localhost port 8080: Connection refused Closing connection 0 curl: (7) Failed to connect to localhost port 8080: Connection refused The same works fine with v1.1.2. I'm using the following guidelines https://github.com/kubernetes/kubernetes/tree/master/docs/getting-started-guides/coreos/azure and updated line https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml#L187 to use version v1.2.0. ### Response: The services you show running do not include the apiserver. For a quick breakdown I can explain what each service does that you show running. Weave: This is a software overlay network and assigns IP addresses to your pods. 
kube-proxy: This runs on your worker nodes, allowing pods to reach exposed services and routing traffic between them. kubectl create: Kubectl is actually the management CLI tool, but in this case using -f /etc/kubernetes/addons/; sleep 2 is watching the /etc/kubernetes/addons/ folder and automatically creating any objects (pods, replication controllers, services, etc.) that are put in that folder. kube-scheduler: Responsible for scheduling pods onto nodes. Uses policies and rules. kube-controller-manager: Manages the state of the cluster by always making sure the current state and desired state are the same. This includes starting/stopping pods and creating objects (services, replication controllers, etc.) that do not yet exist, or killing them if they shouldn't exist. All of these services interact with the kube-apiserver, which should be a separate service that coordinates all of the information these other services use. You'll need the apiserver running in order for all of the other components to do their jobs. I won't go into the details of getting it running in your environment, but from the comments on your original thread it looks like you found some missing documentation to get it running.
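For comparison, the apiserver in a v1.2-era cluster was often run as a static pod alongside the components listed above. The manifest below is only a hedged sketch — the image tag, etcd address, and service CIDR are assumptions, not values from the asker's Azure setup:

```yaml
# Hypothetical static-pod manifest for kube-apiserver; all flag values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.2.0
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:4001
    - --insecure-bind-address=127.0.0.1
    - --insecure-port=8080
    - --service-cluster-ip-range=10.0.0.0/16
    - --logtostderr=true
```

Once an apiserver is actually listening, the curl -v http://localhost:8080 check from the question should connect instead of being refused.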
Tried installing kubernetes v1.2.0 on azure environment but after installation cannot access kube apis at port 8080. The following services are running: root 1473 0.2 0.5 536192 42812 ? Ssl 09:22 0:00 /home/weave/weaver --port 6783 --name 22:95:7a:6e:30:ed --nickname kube-00 --datapath datapath --ipalloc-range 10.32.0.0/12 --dns-effective-listen-address 172.17.42.1 --dns-listen-address 172.17.42.1:53 --http-addr 127.0.0.1:6784 root 1904 0.1 0.2 30320 20112 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-proxy --master=http://kube-00:8080 --logtostderr=true root 1907 0.0 0.0 14016 2968 ? Ss 09:22 0:00 /bin/bash -c until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done root 1914 0.2 0.3 35888 22212 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080 root 3129 2.2 0.3 42488 25192 ? Ssl 09:27 0:00 /opt/kubernetes/server/bin/kube-controller-manager --master=127.0.0.1:8080 --logtostderr=true curl -v http://localhost:8080 returns error Rebuilt URL to: http://localhost:8080/ Trying 127.0.0.1... connect to 127.0.0.1 port 8080 failed: Connection refused Failed to connect to localhost port 8080: Connection refused Closing connection 0 curl: (7) Failed to connect to localhost port 8080: Connection refused The same works fine with v1.1.2. I'm using the following guidelines https://github.com/kubernetes/kubernetes/tree/master/docs/getting-started-guides/coreos/azure and updated line https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml#L187 to use version v1.2.0.
Yes, through the workingDir field of the container spec. Here's an example replication controller with an nginx container that has workingDir set to /workdir: apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 1 template: metadata: labels: name: nginx spec: containers: - name: nginx image: mynginximage workingDir: /workdir
###Instruction: Is it possible to set the working directory when launching a container with Kubernetes ? ### Response: Yes, through the workingDir field of the container spec. Here's an example replication controller with an nginx container that has workingDir set to /workdir: apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 1 template: metadata: labels: name: nginx spec: containers: - name: nginx image: mynginximage workingDir: /workdir
Is it possible to set the working directory when launching a container with Kubernetes ?
Thanks to @Paul Morie for helping me diagnose and fix this (from the GitHub issue): bingo, the mount propagation mode of /var/lib/kubelet is private. try changing the mount flag for the kubelet dir to -v /var/lib/kubelet:/var/lib/kubelet:rw,shared I also had to change MountFlags=slave to MountFlags=shared in my Docker systemd unit file.
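For anyone hitting the same thing, the systemd side of the fix amounts to roughly the following; the drop-in path is an assumption and may differ per distro:

```ini
; Hypothetical systemd drop-in, e.g. /etc/systemd/system/docker.service.d/mount-flags.conf
; (the original unit had MountFlags=slave)
[Service]
MountFlags=shared
```

After a daemon-reload and Docker restart, the kubelet container is then started with the shared mount flag from the quote: -v /var/lib/kubelet:/var/lib/kubelet:rw,shared.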
###Instruction: k8s 1.2 deployed locally, single-node docker Am I doing something wrong? Is this working for everyone else or is something broken in my k8s deployment? Following the example in the ConfigMaps guide, /etc/config/special.how should be created below but is not: [root@totoro brs-kubernetes]# kubectl create -f example.yaml configmap "special-config" created pod "dapi-test-pod" created [root@totoro brs-kubernetes]# kubectl exec -it dapi-test-pod -- sh / # cd /etc/config/ /etc/config # ls /etc/config # ls -alh total 4 drwxrwxrwt 2 root root 40 Mar 23 18:47 . drwxr-xr-x 7 root root 4.0K Mar 23 18:47 .. /etc/config # example.yaml apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm --- apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: ["sleep", "100"] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: how.file restartPolicy: Never Summary of conformance test failures follows (asked to run by jayunit100). Full run in this gist. 
Summarizing 7 Failures: [Fail] ConfigMap [It] updates should be reflected in volume [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262 [Fail] Downward API volume [It] should provide podname only [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637 [Fail] Downward API volume [It] should update labels on modification [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82 [Fail] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637 [Fail] Networking [It] should function for intra-pod communication [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121 [Fail] Downward API volume [It] should update annotations on modification [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119 [Fail] ConfigMap [It] should be consumable from pods in volume [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637 Ran 93 of 265 Specs in 2875.468 seconds FAIL! 
-- 86 Passed | 7 Failed | 0 Pending | 172 Skipped --- FAIL: TestE2E (2875.48s) FAIL Output of findmnt: [schou@totoro single-node]$ findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/fedora-root β”‚ ext4 rw,relatime,data=ordere β”œβ”€/sys sysfs sysfs rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/kernel/security securityfs securit rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ └─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/firmware/efi/efivars efivarfs efivarf rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/kernel/debug debugfs debugfs rw,relatime β”‚ β”œβ”€/sys/kernel/config configfs configf rw,relatime β”‚ └─/sys/fs/fuse/connections fusectl fusectl rw,relatime β”œβ”€/proc proc proc rw,nosuid,nodev,noexec, β”‚ β”œβ”€/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp= β”‚ └─/proc/fs/nfsd nfsd nfsd rw,relatime β”œβ”€/dev devtmpfs devtmpf rw,nosuid,size=8175208k β”‚ β”œβ”€/dev/shm tmpfs tmpfs rw,nosuid,nodev β”‚ β”œβ”€/dev/pts devpts devpts rw,nosuid,noexec,relati β”‚ β”œβ”€/dev/mqueue mqueue mqueue rw,relatime β”‚ └─/dev/hugepages hugetlbfs hugetlb rw,relatime β”œβ”€/run tmpfs tmpfs 
rw,nosuid,nodev,mode=75 β”‚ β”œβ”€/run/user/42 tmpfs tmpfs rw,nosuid,nodev,relatim β”‚ β”‚ └─/run/user/42/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim β”‚ └─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatim β”‚ └─/run/user/1000/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim β”œβ”€/tmp tmpfs tmpfs rw β”œβ”€/boot /dev/sda2 ext4 rw,relatime,data=ordere β”‚ └─/boot/efi /dev/sda1 vfat rw,relatime,fmask=0077, β”œβ”€/var/lib/nfs/rpc_pipefs sunrpc rpc_pip rw,relatime β”œβ”€/var/lib/kubelet/pods/fd20f710-fb82-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-qggyv β”‚ tmpfs tmpfs rw,relatime β”œβ”€/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~configmap/config-volume β”‚ tmpfs tmpfs rw,relatime └─/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-6bzfe tmpfs tmpfs rw,relatime [schou@totoro single-node]$ ### Response: Thanks to @Paul Morie for helping me diagnose and fix this (from github issue): bingo, the mount propagation mode of /var/lib/kubelet is private. try changing the mount flag for the kubelet dir to -v /var/lib/kubelet:/var/lib/kubelet:rw,shared I also had to change MountFlags=slave to MountFlags=shared in my docker systemd file.
k8s 1.2 deployed locally, single-node docker Am I doing something wrong? Is this working for everyone else or is something broken in my k8s deployment? Following the example in the ConfigMaps guide, /etc/config/special.how should be created below but is not: [root@totoro brs-kubernetes]# kubectl create -f example.yaml configmap "special-config" created pod "dapi-test-pod" created [root@totoro brs-kubernetes]# kubectl exec -it dapi-test-pod -- sh / # cd /etc/config/ /etc/config # ls /etc/config # ls -alh total 4 drwxrwxrwt 2 root root 40 Mar 23 18:47 . drwxr-xr-x 7 root root 4.0K Mar 23 18:47 .. /etc/config # example.yaml apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm --- apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: ["sleep", "100"] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: how.file restartPolicy: Never Summary of conformance test failures follows (asked to run by jayunit100). Full run in this gist. 
Summarizing 7 Failures: [Fail] ConfigMap [It] updates should be reflected in volume [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262 [Fail] Downward API volume [It] should provide podname only [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637 [Fail] Downward API volume [It] should update labels on modification [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82 [Fail] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637 [Fail] Networking [It] should function for intra-pod communication [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121 [Fail] Downward API volume [It] should update annotations on modification [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119 [Fail] ConfigMap [It] should be consumable from pods in volume [Conformance] /home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637 Ran 93 of 265 Specs in 2875.468 seconds FAIL! 
-- 86 Passed | 7 Failed | 0 Pending | 172 Skipped --- FAIL: TestE2E (2875.48s) FAIL Output of findmnt: [schou@totoro single-node]$ findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/fedora-root β”‚ ext4 rw,relatime,data=ordere β”œβ”€/sys sysfs sysfs rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/kernel/security securityfs securit rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ β”œβ”€/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”‚ └─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/firmware/efi/efivars efivarfs efivarf rw,nosuid,nodev,noexec, β”‚ β”œβ”€/sys/kernel/debug debugfs debugfs rw,relatime β”‚ β”œβ”€/sys/kernel/config configfs configf rw,relatime β”‚ └─/sys/fs/fuse/connections fusectl fusectl rw,relatime β”œβ”€/proc proc proc rw,nosuid,nodev,noexec, β”‚ β”œβ”€/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp= β”‚ └─/proc/fs/nfsd nfsd nfsd rw,relatime β”œβ”€/dev devtmpfs devtmpf rw,nosuid,size=8175208k β”‚ β”œβ”€/dev/shm tmpfs tmpfs rw,nosuid,nodev β”‚ β”œβ”€/dev/pts devpts devpts rw,nosuid,noexec,relati β”‚ β”œβ”€/dev/mqueue mqueue mqueue rw,relatime β”‚ └─/dev/hugepages hugetlbfs hugetlb rw,relatime β”œβ”€/run tmpfs tmpfs 
rw,nosuid,nodev,mode=75 β”‚ β”œβ”€/run/user/42 tmpfs tmpfs rw,nosuid,nodev,relatim β”‚ β”‚ └─/run/user/42/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim β”‚ └─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatim β”‚ └─/run/user/1000/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim β”œβ”€/tmp tmpfs tmpfs rw β”œβ”€/boot /dev/sda2 ext4 rw,relatime,data=ordere β”‚ └─/boot/efi /dev/sda1 vfat rw,relatime,fmask=0077, β”œβ”€/var/lib/nfs/rpc_pipefs sunrpc rpc_pip rw,relatime β”œβ”€/var/lib/kubelet/pods/fd20f710-fb82-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-qggyv β”‚ tmpfs tmpfs rw,relatime β”œβ”€/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~configmap/config-volume β”‚ tmpfs tmpfs rw,relatime └─/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-6bzfe tmpfs tmpfs rw,relatime [schou@totoro single-node]$
kubectl saves clusters/contexts in its configuration. If you use the default scripts to bring up the cluster, these entries should already have been set for your cluster. A brief overview of kubectl config: kubectl config view lets you view the clusters/contexts in your configuration. kubectl config set-cluster and kubectl config set-context modify or add new entries. You can use kubectl config use-context to change the default context, or kubectl --context=CONTEXT get pods to switch to a different context for just the current command.
###Instruction: Is there something like: kubectl get pods --project=PROJECT_ID I would like to avoid modifying my default gcloud configuration when switching between my staging and production environments. ### Response: kubectl saves clusters/contexts in its configuration. If you use the default scripts to bring up the cluster, these entries should already have been set for your cluster. A brief overview of kubectl config: kubectl config view lets you view the clusters/contexts in your configuration. kubectl config set-cluster and kubectl config set-context modify or add new entries. You can use kubectl config use-context to change the default context, or kubectl --context=CONTEXT get pods to switch to a different context for just the current command.
Is there something like: kubectl get pods --project=PROJECT_ID I would like to avoid modifying my default gcloud configuration when switching between my staging and production environments.
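A small helper along these lines can keep the per-command --context flag from being mistyped; this is a hypothetical Python sketch (the kubectl flag it builds is real, but the wrapper function itself is made up for illustration):

```python
def kubectl(context, *args):
    """Build a kubectl invocation pinned to a specific context.

    Equivalent to `kubectl --context=CONTEXT ...`, so the default
    context stored in the kubeconfig is left untouched.
    """
    return ["kubectl", f"--context={context}", *args]

# List pods in the staging cluster without switching the default context;
# the resulting list could be passed to subprocess.run().
cmd = kubectl("staging", "get", "pods")
print(cmd)
```

The same wrapper works for any subcommand, so staging and production invocations differ only in the first argument.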
You probably want to use a DaemonSet, which is the Kubernetes way to run a daemon on every node in the cluster.
###Instruction: In CoreOS we can define a service as [X-Fleet] Global=true This will make sure that this particular service runs on all nodes. How do I achieve the same thing for a pod in Kubernetes? ### Response: You probably want to use a DaemonSet, which is the Kubernetes way to run a daemon on every node in the cluster.
In CoreOS we can define a service as [X-Fleet] Global=true This will make sure that this particular service runs on all nodes. How do I achieve the same thing for a pod in Kubernetes?
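As a sketch of what the DaemonSet equivalent of Global=true looks like, here is a minimal manifest built programmatically; the name, image, and labels are placeholder assumptions (Kubernetes accepts JSON manifests as well as YAML, so the serialized dict could be piped to kubectl create -f -):

```python
import json

def daemonset_manifest(name, image):
    """Build a minimal DaemonSet spec: one pod per node in the cluster.

    The apiVersion reflects the extensions/v1beta1 API group DaemonSets
    lived in at the time; adjust for your cluster version.
    """
    return {
        "apiVersion": "extensions/v1beta1",
        "kind": "DaemonSet",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            }
        },
    }

manifest = daemonset_manifest("node-agent", "example/agent:latest")
print(json.dumps(manifest, indent=2))
```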
After some attempts it turned out that the auto-scaling group works when using Debian images. Note that it takes approximately 2 minutes from the instance being ready until everything needed is installed and the instance shows up in kubectl get nodes.
###Instruction: I followed the tutorial on how to create a k8s cluster on AWS using Ubuntu. This works great, so I have one master and three nodes (minions). However, I haven't found any working recipe for how to add a new node to the cluster. First of all, I have a new autoscaling group in AWS; it was created along with the cluster. I tried to increase the number of nodes in the scaling group. The scaling procedure really added an instance, but the node did not join the k8s cluster. There is no kubelet, kube-proxy, or anything of the sort on the instance, so it's understandable that it did not join. As I have not found any tutorial, maybe I need to do some manual procedure, but that would be weird autoscaling. What am I supposed to do? The kube scripts install the cluster and add the scaling group, but it does not work? I do not insist on automatic scaling; I just need to add a new node in case of any failure, or when we add new running pods/RCs. I found a reference in this article, in the section called Add more nodes in a second zone; note that this is about multizone deployment. So I tried things like: KUBE_USE_EXISTING_MASTER=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=eu-central-1b NUM_NODES=1 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh but it always failed with: A client error (AlreadyExists) occurred when calling the CreateLaunchConfiguration operation: Launch Configuration by this name already exists - A launch configuration already exists with the name kubernetes-minion-group-eu-central-1b There must definitely be some way to do this; can someone please help me? Thank you! ### Response: After some attempts it turned out that the auto-scaling group works when using Debian images. Note that it takes approximately 2 minutes from the instance being ready until everything needed is installed and the instance shows up in kubectl get nodes.
I followed the tutorial on how to create a k8s cluster on AWS using Ubuntu. This works great, so I have one master and three nodes (minions). However, I haven't found any working recipe for how to add a new node to the cluster. First of all, I have a new autoscaling group in AWS; it was created along with the cluster. I tried to increase the number of nodes in the scaling group. The scaling procedure really added an instance, but the node did not join the k8s cluster. There is no kubelet, kube-proxy, or anything of the sort on the instance, so it's understandable that it did not join. As I have not found any tutorial, maybe I need to do some manual procedure, but that would be weird autoscaling. What am I supposed to do? The kube scripts install the cluster and add the scaling group, but it does not work? I do not insist on automatic scaling; I just need to add a new node in case of any failure, or when we add new running pods/RCs. I found a reference in this article, in the section called Add more nodes in a second zone; note that this is about multizone deployment. So I tried things like: KUBE_USE_EXISTING_MASTER=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=eu-central-1b NUM_NODES=1 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh but it always failed with: A client error (AlreadyExists) occurred when calling the CreateLaunchConfiguration operation: Launch Configuration by this name already exists - A launch configuration already exists with the name kubernetes-minion-group-eu-central-1b There must definitely be some way to do this; can someone please help me? Thank you!
I thought about was having the container query the kubernetes/AWS API to fetch this information however that would mean adding quite a lot of complexity to my containers. This is currently the recommended approach for getting information not available in the downward API. To avoid the additional complexity in your containers, you could use a "sidecar" with a variation on Tobias's solution. The sidecar would be an additional container in the pod, which connects to the kubernetes API, queries the information you're looking for (node labels), and writes the output to a shared volume. This could be implemented as an init container, or a sidecar that continuously syncs with the API.
###Instruction: I am looking to deploy services to a Kubernetes cluster running over multiple zones and would like to be able to inject the region/zone labels into my pods using environment variables. I have looked into the downward API; however, this only seems to allow you to inject labels/metadata from the pod/service and not from the node you are running the pod on. If there is no way to inject the node labels, another solution I thought about was having the container query the Kubernetes/AWS API to fetch this information; however, that would mean adding quite a lot of complexity to my containers. ### Response: I thought about was having the container query the kubernetes/AWS API to fetch this information however that would mean adding quite a lot of complexity to my containers. This is currently the recommended approach for getting information not available in the downward API. To avoid the additional complexity in your containers, you could use a "sidecar" with a variation on Tobias's solution. The sidecar would be an additional container in the pod, which connects to the kubernetes API, queries the information you're looking for (node labels), and writes the output to a shared volume. This could be implemented as an init container, or a sidecar that continuously syncs with the API.
I am looking to deploy services to a Kubernetes cluster running over multiple zones and would like to be able to inject the region/zone labels into my pods using environment variables. I have looked into the downward API; however, this only seems to allow you to inject labels/metadata from the pod/service and not from the node you are running the pod on. If there is no way to inject the node labels, another solution I thought about was having the container query the Kubernetes/AWS API to fetch this information; however, that would mean adding quite a lot of complexity to my containers.
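To make the sidecar idea concrete, here is a sketch of the label-extraction step such a sidecar would perform after fetching its Node object from the API and before writing the result to a shared volume; the failure-domain label keys are the well-known ones from that era of Kubernetes, but the sample node data is invented:

```python
import json

def zone_labels(node_object):
    """Pull region/zone labels out of a Node API object.

    Filters to the well-known failure-domain label prefix; adjust for
    whatever labels your cloud provider actually sets on nodes.
    """
    labels = node_object.get("metadata", {}).get("labels", {})
    return {
        key: value
        for key, value in labels.items()
        if key.startswith("failure-domain.beta.kubernetes.io/")
    }

# A sidecar would GET /api/v1/nodes/<node-name> from the API server and
# write the filtered labels to a shared emptyDir for the main container.
node = {"metadata": {"labels": {
    "failure-domain.beta.kubernetes.io/region": "eu-central-1",
    "failure-domain.beta.kubernetes.io/zone": "eu-central-1b",
    "kubernetes.io/hostname": "ip-172-20-0-9",
}}}
print(json.dumps(zone_labels(node), indent=2))
```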
Navigating to https://<master-ip>:8080/ui will redirect you to the kubernetes-dashboard service in the kube-system namespace. When you first set up your cluster this service was working properly, but after rebooting, the set of endpoints became stale. You can diagnose this issue by running kubectl describe endpoints --namespace kube-system kubernetes-dashboard to see the current set of endpoints; if they are incorrect (or missing), restarting the dashboard pod and/or the DNS pod should resolve the issue.
###Instruction: I configured k8s (Kubernetes) with 1 master and 2 slaves. I was able to access the web UI provided by k8s, but after rebooting the machines I am no longer able to access the UI with the same URL. Maybe I am missing some environment variables or something else; I wasn't able to figure it out. Does anyone know what I am missing? docker ps shows that I am running the desired containers. Images are gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1 and gcr.io/google_containers/pause:2.0 I followed this link. ### Response: Navigating to https://<master-ip>:8080/ui will redirect you to the kubernetes-dashboard service in the kube-system namespace. When you first set up your cluster this service was working properly, but after rebooting, the set of endpoints became stale. You can diagnose this issue by running kubectl describe endpoints --namespace kube-system kubernetes-dashboard to see the current set of endpoints; if they are incorrect (or missing), restarting the dashboard pod and/or the DNS pod should resolve the issue.
I configured k8s (Kubernetes) with 1 master and 2 slaves. I was able to access the web UI provided by k8s, but after rebooting the machines I am no longer able to access the UI with the same URL. Maybe I am missing some environment variables or something else; I wasn't able to figure it out. Does anyone know what I am missing? docker ps shows that I am running the desired containers. Images are gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1 and gcr.io/google_containers/pause:2.0 I followed this link.
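The staleness check described above can be sketched as a comparison between the Endpoints object and the pod IPs actually running; the object shape mirrors the Kubernetes Endpoints API, while the sample IPs are invented for illustration:

```python
def stale_endpoint_ips(endpoints_object, live_pod_ips):
    """Return endpoint addresses that are not backed by a running pod.

    After a reboot, pods come back with new IPs; any address left in
    the Endpoints object without a matching live pod is stale and
    explains a service that stopped answering on its old URL.
    """
    recorded = {
        address["ip"]
        for subset in endpoints_object.get("subsets", [])
        for address in subset.get("addresses", [])
    }
    return sorted(recorded - set(live_pod_ips))

# One endpoint survives the reboot, one is stale.
endpoints = {"subsets": [{"addresses": [{"ip": "10.244.1.5"}, {"ip": "10.244.2.7"}]}]}
print(stale_endpoint_ips(endpoints, ["10.244.2.7"]))
```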
I have found the answer, in case anyone else is watching. There is a much better way to watch resources and handle events with custom tasks: the pkg/controller/framework package. The steps are: 1. instantiate a framework.NewInformer, 2. run the controller, 3. NewInformer is loaded with your custom event handlers, which will be called when the events occur.
###Instruction: We are interested in running certain commands as pods and services, as they start or stop. Using the life-cycle hooks in the yml files does not work for us, since these commands are not optional. We have considered running a watcher pod that uses the watch api to run these commands. But we can't figure out how to use the watch api so that it does not keep sending the same events again and again. Is there a way to tell the watch api to only send new events since the connection was opened? If expecting a stateful watch api is unreasonable, would it be possible to pass it a timestamp or a monotonically increasing id to avoid getting already-seen events? Basically, what we are doing now is running a pod with a daemon process that communicates with the API. We can see the events as a stream, but we are interested in running some task when a pod is created or deleted. ### Response: I have found the answer, in case anyone else is watching. There is a much better way to watch resources and handle events with custom tasks: the pkg/controller/framework package. The steps are: 1. instantiate a framework.NewInformer, 2. run the controller, 3. NewInformer is loaded with your custom event handlers, which will be called when the events occur.
We are interested in running certain commands as pods and services, as they start or stop. Using the life-cycle hooks in the yml files does not work for us, since these commands are not optional. We have considered running a watcher pod that uses the watch api to run these commands. But we can't figure out how to use the watch api so that it does not keep sending the same events again and again. Is there a way to tell the watch api to only send new events since the connection was opened? If expecting a stateful watch api is unreasonable, would it be possible to pass it a timestamp or a monotonically increasing id to avoid getting already-seen events? Basically, what we are doing now is running a pod with a daemon process that communicates with the API. We can see the events as a stream, but we are interested in running some task when a pod is created or deleted.
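The "only new events since some point" behaviour the question asks about maps to the watch API's resourceVersion parameter: on reconnect, you resume from the last version you have seen, and the informer machinery does this bookkeeping for you. This sketch illustrates only the client-side bookkeeping with a fake event stream; resourceVersion is formally an opaque string, so comparing it as an integer here is a simplification:

```python
def resume_point(events, last_seen):
    """Filter a watch stream down to events newer than last_seen.

    In practice the server does this filtering when you reconnect with
    ?resourceVersion=N; this only shows what state the client tracks.
    Returns the fresh events plus the new high-water mark to resume from.
    """
    fresh = [
        event for event in events
        if int(event["object"]["resourceVersion"]) > last_seen
    ]
    new_last = max(
        (int(event["object"]["resourceVersion"]) for event in events),
        default=last_seen,
    )
    return fresh, new_last

# A fake stream: the ADDED event was already processed before a reconnect.
stream = [
    {"type": "ADDED", "object": {"resourceVersion": "10"}},
    {"type": "DELETED", "object": {"resourceVersion": "12"}},
]
fresh, last = resume_point(stream, 10)
```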