###Instruction: How does the communication between two different pods happen in Kubernetes? In my case I have two pods: frontend and backend, each with different containers. I want my frontend pod to communicate with the backend pod, but I don't want to use the backend pod's IP (i.e. hard-coded). Is it possible through services? ### Response: Yes, services are the recommended way to handle this. Once you have your services set up for each pod (or replication controller, as is recommended), you can find the service IP via the service environment variables, e.g. BACKEND_SERVICE_HOST and BACKEND_SERVICE_PORT for a "backend" service.
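A minimal sketch of using those variables from inside the frontend container (assumptions: a service named "backend" already exists, the frontend pod was created after the service, since these variables are only injected at container start, and the frontend image has curl):

# Kubernetes injects these for each service visible at pod creation time
echo $BACKEND_SERVICE_HOST
echo $BACKEND_SERVICE_PORT
curl http://$BACKEND_SERVICE_HOST:$BACKEND_SERVICE_PORT/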
###Instruction: Instead of this YAML file, I want to pass a JSON file. What is the equivalent JSON for it? I want to use it in the kubectl create -f ... command:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

### Response: There are a lot of online YAML-to-JSON (and vice versa) converters covering the 1.1 and 1.2 specs. I haven't used Kubernetes before, but I can see that you can pass multiple documents. Basically, the YAML structure you use is a short form of two documents; JSON has no equivalent, so you have to break it into two separate documents (files). The three dashes in YAML are a way of defining multiple documents. So the above is not one JSON object/file, but two. The first:

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "my-nginx-svc",
    "labels": {
      "app": "nginx"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "ports": [
      {
        "port": 80
      }
    ],
    "selector": {
      "app": "nginx"
    }
  }
}

And the second:

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "my-nginx"
  },
  "spec": {
    "replicas": 2,
    "template": {
      "metadata": {
        "labels": {
          "app": "nginx"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx",
            "ports": [
              {
                "containerPort": 80
              }
            ]
          }
        ]
      }
    }
  }
}

As a side note (since this is not useful for your purpose): in order to represent them as one JSON object, you need an array, but this would mean the YAML would have to change too. So in order to have this:

[
  {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
      "name": "my-nginx-svc",
      "labels": {
        "app": "nginx"
      }
    },
    "spec": {
      "type": "LoadBalancer",
      "ports": [
        {
          "port": 80
        }
      ],
      "selector": {
        "app": "nginx"
      }
    }
  },
  {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {
      "name": "my-nginx"
    },
    "spec": {
      "replicas": 2,
      "template": {
        "metadata": {
          "labels": {
            "app": "nginx"
          }
        },
        "spec": {
          "containers": [
            {
              "name": "nginx",
              "image": "nginx",
              "ports": [
                {
                  "containerPort": 80
                }
              ]
            }
          ]
        }
      }
    }
  }
]

the YAML equivalent would be this:

---
- apiVersion: v1
  kind: Service
  metadata:
    name: my-nginx-svc
    labels:
      app: nginx
  spec:
    type: LoadBalancer
    ports:
    - port: 80
    selector:
      app: nginx
- apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-nginx
  spec:
    replicas: 2
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx
          ports:
          - containerPort: 80
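Once split into two files, each document can be passed to kubectl in the usual way; a brief usage sketch (the file names are assumptions):

kubectl create -f my-nginx-svc.json
kubectl create -f my-nginx-rc.json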
###Instruction: I run the CoreOS k8s cluster on Mac OS X, which means it's running inside VirtualBox + Vagrant. I have in my service.yaml file:

spec:
  type: NodePort

When I type kubectl get services I see:

NAME          CLUSTER_IP      EXTERNAL_IP   PORT(S)    SELECTOR
kubernetes    10.100.0.1      <none>        443/TCP    <none>
my-frontend   10.100.250.90   nodes         8000/TCP   name=my-app

What is the "nodes" external IP? How do I access my-frontend externally? ### Response: In addition to "NodePort"-type services, there are some additional ways to interact with Kubernetes services from outside the cluster:

- Use service type "LoadBalancer". It works only for some cloud providers and will not work for VirtualBox, but I think it is good to know about that feature. Link to the documentation.
- Use one of the latest features, called "Ingress". Here is the description from the manual: "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation.
- If Kubernetes is not a strict requirement and you can switch to the latest OpenShift Origin (which is "Kubernetes on steroids"), you can use the Origin feature called "router". Information about OpenShift Origin. Information about OpenShift Origin routes.
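For the NodePort service itself, a minimal sketch of how you could find the allocated port and reach it (discovering the node IP via VirtualBox/Vagrant is an assumption about this particular setup):

kubectl describe svc my-frontend   # look for the "NodePort:" line
# then browse to http://<node-ip>:<nodePort>, where <node-ip> is the
# VirtualBox IP of the CoreOS node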
###Instruction: Can I run multiple services on port 80 in Kubernetes on Google Container Engine? Each service has a dedicated external IP address, so in theory the router should be able to route to each service based on its IP address. So far I have created pods for "frontend-1" and "frontend-2" on Container Engine. I tried to create separate services for them, both running on port 80 with unique external IPs, but it didn't work. Is there another way to accomplish this in Kubernetes without using a custom routing service?

service-1.yaml:

id: service-1
port: 80
containerPort: 8080
selector:
  name: frontend-1
createExternalLoadBalancer: true

service-2.yaml:

id: service-2
port: 80
containerPort: 8081
selector:
  name: frontend-2
createExternalLoadBalancer: true

### Response: Kubernetes 1.1 has an Ingress type, which allows you to route different DNS names/IPs to different services. From GitHub:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
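A hedged usage note: once the Ingress object above is saved to a file, it can be created and inspected like any other resource (the file name is an assumption):

kubectl create -f ingress.yaml
kubectl get ingress test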
###Instruction: What is detached mode in the Docker world? I read this article (Link), but it does not explain exactly what detached mode means. ### Response: You can start a Docker container in detached mode with the -d option, so the container starts up and runs in the background. That means you start the container and can use the console for other commands after startup. The opposite of detached mode is foreground mode. That is the default mode, used when the -d option is omitted. In this mode, the console you use to execute docker run is attached to the container's standard input, output, and error; that is, your console is attached to the container's process. In detached mode, you can follow the standard output of your Docker container with docker logs -f <container_ID>. Just try both options. I always use detached mode to run my containers. I hope I could explain it a little more clearly.
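A minimal sketch of both modes, using the stock nginx image purely as an example:

# detached: the command returns immediately and prints the container ID
docker run -d nginx
docker logs -f <container_ID>

# foreground (the default): your console stays attached to the container
docker run nginx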
###Instruction: Kubernetes is an orchestration tool for the management of containers. Kubernetes creates pods, which contain containers, instead of managing containers directly. I read this about pods. I'm working with OpenShift V3, which uses pods. But in my apps, and in all the demos and examples I see, one pod contains one container (it's possible to contain more, and that could be an advantage of using pods). In an OpenShift environment I don't see the advantage of these pods. Can someone explain why OpenShift V3 uses Kubernetes with pods and containers instead of an orchestration tool that works with containers directly (without pods)? ### Response: There are many cases where our users want to run pods with multiple containers within OpenShift. A common use case for running multiple containers is a pod with a 'primary' container that does some job and a 'side-car' container that does something like writing logs to a logging agent. The motivation for pods is twofold: to make it easier to share resources between containers, and to enable deploying and replicating groups of containers that share resources. You can read more about them in the user guide. The reason we still use a pod when there is only a single container is that containers do not have all the notions that are attached to pods. For example, pods have IP addresses; containers do not -- they share the IP address associated with the pod's network namespace. Hope that helps. Let me know if you'd like more clarification, or we can discuss on Slack.
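A hedged sketch of the primary/side-car pattern as a pod definition (the image names are hypothetical placeholders, and the shared emptyDir volume is one common way the side-car would see the primary's logs):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: primary
    image: my-app            # hypothetical application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: my-log-agent      # hypothetical logging-agent image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app

Both containers share the pod's IP address and the logs volume, which is exactly the resource sharing the answer describes.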
###Instruction: When I build a Kubernetes service in two steps (1. replication controller; 2. expose the replication controller), my exposed service gets an external IP address. Initially:

NAME    CLUSTER_IP     EXTERNAL_IP   PORT(S)   SELECTOR    AGE
app-1   10.67.241.95                 80/TCP    app=app-1   7s

and after about 30s:

NAME    CLUSTER_IP     EXTERNAL_IP     PORT(S)   SELECTOR    AGE
app-1   10.67.241.95   104.155.93.79   80/TCP    app=app-1   35s

But when I do it in one step, providing the Service and the ReplicationController to kubectl create -f dir_with_2_files, the service gets created but it does not get an external IP:

NAME    CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR    AGE
app-1   10.67.251.171   <none>        80/TCP    app=app-1   2m

The <none> under EXTERNAL_IP worries me. For the Service I use the JSON file:

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}

and for the ReplicationController:

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "replicas": 1,
    "template": {
      "metadata": {
        "labels": {
          "app": "app-1"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "service",
            "image": "gcr.io/sigma-cairn-99810/service:latest",
            "ports": [
              {
                "containerPort": 8000
              }
            ]
          }
        ]
      }
    }
  }
}

To expose the service manually I use the command:

kubectl expose rc app-1 --port 80 --target-port=8000 --type="LoadBalancer"

### Response: If you don't specify the type of a Service, it defaults to ClusterIP. If you want the equivalent of expose, you must:

- Make sure your Service selects pods from the RC via matching label selectors.
- Make the Service type=LoadBalancer.
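Applying that to the Service JSON from the question is a one-field change; a sketch of the same file with the type added (everything else unchanged):

{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "app-1"
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "app": "app-1"
    },
    "ports": [
      {
        "port": 80,
        "targetPort": 8000
      }
    ]
  }
}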
###Instruction: My OS is Ubuntu 14.04.3 Server, and I want to build Kubernetes. First, I used the apt-get command to install Golang, but the version was 1.2.1, so I used the apt-get --purge autoremove command to remove it and installed the newest 1.5.1 from the Golang website. But when executing the make command, Kubernetes always seems to "think" the current Golang is 1.2.1:

$ make
hack/build-go.sh
+++ [1203 06:20:30] Building go targets for linux/amd64:
    cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/kubelet cmd/kubemark cmd/hyperkube cmd/linkcheck plugin/cmd/kube-scheduler cmd/kubectl cmd/integration cmd/gendocs cmd/genkubedocs cmd/genman cmd/mungedocs cmd/genbashcomp cmd/genconversion cmd/gendeepcopy cmd/genswaggertypedocs examples/k8petstore/web-server/src github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test
+++ [1203 06:20:30] +++ Warning: stdlib pkg with cgo flag not found.
+++ [1203 06:20:30] +++ Warning: stdlib pkg cannot be rebuilt since /usr/local/go/pkg is not writable by nan
+++ [1203 06:20:30] +++ Warning: Make /usr/local/go/pkg writable for nan for a one-time stdlib install, Or
+++ [1203 06:20:30] +++ Warning: Rebuild stdlib using the command 'CGO_ENABLED=0 go install -a -installsuffix cgo std'
+++ [1203 06:20:30] +++ Falling back to go build, which is slower
# k8s.io/kubernetes/pkg/util/yaml
_output/local/go/src/k8s.io/kubernetes/pkg/util/yaml/decoder.go:26: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/ghodss/yaml.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
# k8s.io/kubernetes/pkg/util/validation
_output/local/go/src/k8s.io/kubernetes/pkg/util/validation/errors.go:23: import /home/nan/kubernetes/_output/local/go/pkg/linux_amd64/k8s.io/kubernetes/pkg/util/errors.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
# k8s.io/kubernetes/pkg/api/resource
_output/local/go/src/k8s.io/kubernetes/pkg/api/resource/quantity.go:27: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/speter.net/go/exp/math/dec/inf.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
# github.com/spf13/cobra
Godeps/_workspace/src/github.com/spf13/cobra/command.go:27: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/inconshreveable/mousetrap.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
# k8s.io/kubernetes/pkg/util/iptables
_output/local/go/src/k8s.io/kubernetes/pkg/util/iptables/iptables.go:27: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/coreos/go-semver/semver.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
# github.com/prometheus/common/expfmt
Godeps/_workspace/src/github.com/prometheus/common/expfmt/decode.go:23: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/prometheus/client_model/go.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
# github.com/emicklei/go-restful
Godeps/_workspace/src/github.com/emicklei/go-restful/container.go:16: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/emicklei/go-restful/log.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none]
!!! Error in /home/nan/kubernetes/hack/lib/golang.sh:376
  'CGO_ENABLED=0 go build -o "${outfile}" "${goflags[@]:+${goflags[@]}}" -ldflags "${goldflags}" "${binary}"' exited with status 2
Call stack:
  1: /home/nan/kubernetes/hack/lib/golang.sh:376 kube::golang::build_binaries_for_platform(...)
  2: /home/nan/kubernetes/hack/lib/golang.sh:535 kube::golang::build_binaries(...)
  3: hack/build-go.sh:26 main(...)
Exiting with status 1
!!! Error in /home/nan/kubernetes/hack/lib/golang.sh:456
  '( kube::golang::setup_env; local host_platform; host_platform=$(kube::golang::host_platform); local goflags goldflags; eval "goflags=(${KUBE_GOFLAGS:-})"; goldflags="${KUBE_GOLDFLAGS:-} $(kube::version::ldflags)"; local use_go_build; local -a targets=(); local arg; for arg in "$@"; do if [[ "${arg}" == "--use_go_build" ]]; then use_go_build=true; else if [[ "${arg}" == -* ]]; then goflags+=("${arg}"); else targets+=("${arg}"); fi; fi; done; if [[ ${#targets[@]} -eq 0 ]]; then targets=("${KUBE_ALL_TARGETS[@]}"); fi; local -a platforms=("${KUBE_BUILD_PLATFORMS[@]:+${KUBE_BUILD_PLATFORMS[@]}}"); if [[ ${#platforms[@]} -eq 0 ]]; then platforms=("${host_platform}"); fi; local binaries; binaries=($(kube::golang::binaries_from_targets "${targets[@]}")); local parallel=false; if [[ ${#platforms[@]} -gt 1 ]]; then local gigs; gigs=$(kube::golang::get_physmem); if [[ ${gigs} -ge ${KUBE_PARALLEL_BUILD_MEMORY} ]]; then kube::log::status "Multiple platforms requested and available ${gigs}G >= threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in parallel"; parallel=true; else kube::log::status "Multiple platforms requested, but available ${gigs}G < threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in serial"; parallel=false; fi; fi; if [[ "${parallel}" == "true" ]]; then kube::log::status "Building go targets for ${platforms[@]} in parallel (output will appear in a burst when complete):" "${targets[@]}"; local platform; for platform in "${platforms[@]}"; do ( kube::golang::set_platform_envs "${platform}"; kube::log::status "${platform}: go build started"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-}; kube::log::status "${platform}: go build finished" ) &> "/tmp//${platform//\//_}.build" & done; local fails=0; for job in $(jobs -p); do wait ${job} || let "fails+=1"; done; for platform in "${platforms[@]}"; do cat "/tmp//${platform//\//_}.build"; done; exit ${fails}; else for platform in "${platforms[@]}"; do kube::log::status "Building go targets for ${platform}:" "${targets[@]}"; kube::golang::set_platform_envs "${platform}"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-}; done; fi )' exited with status 1
Call stack:
  1: /home/nan/kubernetes/hack/lib/golang.sh:456 kube::golang::build_binaries(...)
  2: hack/build-go.sh:26 main(...)
Exiting with status 1
make: *** [all] Error 1

But the go in PATH is 1.5.1:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin
$ which go
/usr/local/go/bin/go
$ go version
go version go1.5.1 linux/amd64

So how can I fix this issue? ### Response: As explained in "How can you completely remove a package?", a sudo apt-get --purge autoremove might have removed Go completely. But that doesn't mean it has cleaned what was compiled before:

- Make sure that /home/nan/kubernetes/Godeps/_workspace and /home/nan/kubernetes/_output are deleted after a make clean (see issue 16771).
- Make sure /usr/local/go/pkg is writable for the user nan (see hack/lib/golang.sh).

Finally, issue 16229 mentions: "Would be nice to at least update the docs to indicate that you can't use go>1.4." So try and install Go 1.4.x only. Update: the OP Nan Xiao reports in the comments having managed to build it with Go 1.5.1 or 1.5.2 without any further issue.
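A hedged sketch of the cleanup the answer suggests, run from the repository root (removing only the compiled package cache under Godeps/_workspace/pkg rather than the whole workspace is an assumption on my part, to avoid deleting vendored sources):

make clean
rm -rf Godeps/_workspace/pkg _output
go version   # confirm which Go toolchain is actually on PATH
make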
###Instruction: I have a replication controller that keeps starting a pod, but it's never up. How do I get to the replication controller logs so I can debug this?

$ kubectl describe rc:

Name:       jenkins-leader-restored
Namespace:  default
Image(s):   gcr.io/cloud-solutions-images/jenkins-gcp-leader:master-5ca73a6
Selector:   name=jenkins,role=leader
Labels:     name=jenkins,role=leader
Replicas:   0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen LastSeen    Count   From    SubobjectPath   Reason  Message
  ───────── ────────    ─────   ────    ─────────────   ──────  ───────
  15m   15m 1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-xxr93
  12m   12m 1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-1e44w
  11m   11m 1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-y3llu
  8m    8m  1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-wfd70
  8m    8m  1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-8ji09
  5m    5m  1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-p4wbc
  4m    4m  1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-tvreo
  1m    1m  1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-l6rpy
  56s   56s 1   {replication-controller }   SuccessfulCreate    Created pod: jenkins-leader-restored-4asg5

Using the Automated Image Builds with Jenkins, Packer, and Kubernetes repo, the 'Practice Restoring a Backup' section. ### Response: Prashanth B. identified the root cause of my issue, which was that two replication controllers using the same selectors, with different replica values, were running at the same time. The log location for kubelets (which run the pod) on the Google Compute instance is /var/log/kubelet.log. Looking there would have helped point out that the pod was immediately being removed. My troubleshooting could have gone like this:

1. Identify that the pod isn't running as intended: kubectl get pods
2. Check the replication controller: kubectl describe rc
3. Search the logs for the pod that was created, as seen in the previous command: grep xxr93 /var/log/kubelet.log

user@gke-stuff-d9adf8e28-node-13cl:~$ grep xxr93 /var/log/kubelet.log
I1203 16:59:09.337110    3366 kubelet.go:2005] SyncLoop (ADD): "jenkins-leader-restored-xxr93_default"
I1203 16:59:09.345356    3366 kubelet.go:2008] SyncLoop (UPDATE): "jenkins-leader-restored-xxr93_default"
I1203 16:59:09.345423    3366 kubelet.go:2011] SyncLoop (REMOVE): "jenkins-leader-restored-xxr93_default"
I1203 16:59:09.345503    3366 kubelet.go:2101] Failed to delete pod "jenkins-leader-restored-xxr93_default", err: pod not found
I1203 16:59:09.483104    3366 manager.go:1707] Need to restart pod infra container for "jenkins-leader-restored-xxr93_default" because it is not found
I1203 16:59:13.695134    3366 kubelet.go:1823] Killing unwanted pod "jenkins-leader-restored-xxr93"
E1203 17:00:47.026865    3366 manager.go:1920] Error running pod "jenkins-leader-restored-xxr93_default" container "jenkins": impossible: cannot find the mounted volumes for pod "jenkins-leader-restored-xxr93_default"
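The same flow collected as a hedged command sketch (the pod suffix xxr93 and the node name come from the session above; reaching the node via gcloud compute ssh is an assumption about GKE node access):

kubectl get pods
kubectl describe rc jenkins-leader-restored
gcloud compute ssh gke-stuff-d9adf8e28-node-13cl   # assumption: SSH to the GKE node
grep xxr93 /var/log/kubelet.log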
###Instruction: I am testing OpenShift Origin v3. I installed it as a Docker container following the instructions. I also deployed all the streams in roles/openshift_examples/files/examples/image-streams/image-streams-centos7.json. I am now testing the installation by deploying a dummy PHP application from GitHub. I am able to create the project and application. However, the builds are stuck in status "pending". In the events tab, I see plenty of messages like this one:

"Unable to mount volumes for pod "hello-world-1-build_php1": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/9377d3b4-9887-11e5-81fe-00215abe5482/volumes/kubernetes.io~secret/builder-dockercfg-x2ijq-push"): file does not exist (5 times in the last 40 seconds)"

I also tried with a Java application and the Tomcat Docker image, but got the same error messages. Looks like a Kubernetes configuration issue. Any ideas? Thanks for your help.

UPDATE1: logs from the origin container show a bit more information about the error:

Unable to mount volumes for pod "deployment-example-2-deploy_test1": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/deployer-token-8cfv8"): file does not exist; skipping pod
E1202 09:12:24.269145 4396 pod_workers.go:113] Error syncing pod 70f69f8c-98d3-11e5-8d98-00215abe5482, skipping: IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/deployer-token-8cfv8"): file does not exist
W1202 09:12:34.229374 4396 kubelet.go:1690] Orphaned volume "ac11a2b5-9880-11e5-81fe-00215abe5482/builder-dockercfg-va0cl-push" found, tearing down volume
E1202 09:12:34.287847 4396 kubelet.go:1696] Could not tear down volume "ac11a2b5-9880-11e5-81fe-00215abe5482/builder-dockercfg-va0cl-push": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/ac11a2b5-9880-11e5-81fe-00215abe5482/volumes/kubernetes.io~secret/builder-dockercfg-va0cl-push"): file does not exist

The log entries of the start of the origin container:

W1202 09:12:13.992293 4396 start_master.go:278] assetConfig.loggingPublicURL: invalid value '', Details: required to view aggregated container logs in the console
W1202 09:12:13.992442 4396 start_master.go:278] assetConfig.metricsPublicURL: invalid value '', Details: required to view cluster metrics in the console
I1202 09:12:14.784026 4396 plugins.go:71] No cloud provider specified.
I1202 09:12:14.981775 4396 start_master.go:388] Starting master on 0.0.0.0:8443 (v1.1-270-ge592c18)
I1202 09:12:14.981825 4396 start_master.go:389] Public master address is https://192.168.178.55:8443
I1202 09:12:14.981855 4396 start_master.go:393] Using images from "openshift/origin-<component>:v1.1"
2015-12-02 09:12:15.574421 I | etcdserver: name = openshift.local
2015-12-02 09:12:15.574455 I | etcdserver: data dir = openshift.local.etcd
2015-12-02 09:12:15.574465 I | etcdserver: member dir = openshift.local.etcd/member
2015-12-02 09:12:15.574472 I | etcdserver: heartbeat = 100ms
2015-12-02 09:12:15.574480 I | etcdserver: election = 1000ms
2015-12-02 09:12:15.574489 I | etcdserver: snapshot count = 0
2015-12-02 09:12:15.574505 I | etcdserver: advertise client URLs = https://192.168.178.55:4001
2015-12-02 09:12:15.606296 I | etcdserver: restarting member 2041635cb479cd3a in cluster 6a5d0422e654089a at commit index 3846
2015-12-02 09:12:15.609623 I | raft: 2041635cb479cd3a became follower at term 2
2015-12-02 09:12:15.609663 I | raft: newRaft 2041635cb479cd3a [peers: [], term: 2, commit: 3846, applied: 0, lastindex: 3846, lastterm: 2]
2015-12-02 09:12:15.609815 I | etcdserver: set snapshot count to default 10000
2015-12-02 09:12:15.609829 I | etcdserver: starting server... [version: 2.1.2, cluster version: to_be_decided]
I1202 09:12:15.611196 4396 etcd.go:68] Started etcd at 192.168.178.55:4001
2015-12-02 09:12:15.624029 N | etcdserver: added local member 2041635cb479cd3a [https://192.168.178.55:7001] to cluster 6a5d0422e654089a
2015-12-02 09:12:15.624349 N | etcdserver: set the initial cluster version to 2.1.0
I1202 09:12:15.645761 4396 run_components.go:181] Using default project node label selector:
2015-12-02 09:12:17.009875 I | raft: 2041635cb479cd3a is starting a new election at term 2
2015-12-02 09:12:17.009915 I | raft: 2041635cb479cd3a became candidate at term 3
2015-12-02 09:12:17.009970 I | raft: 2041635cb479cd3a received vote from 2041635cb479cd3a at term 3
2015-12-02 09:12:17.009995 I | raft: 2041635cb479cd3a became leader at term 3
2015-12-02 09:12:17.010011 I | raft: raft.node: 2041635cb479cd3a elected leader 2041635cb479cd3a at term 3
2015-12-02 09:12:17.059445 I | etcdserver: published {Name:openshift.local ClientURLs:[https://192.168.178.55:4001]} to cluster 6a5d0422e654089a
W1202 09:12:17.111262 4396 controller.go:290] Resetting endpoints for master service "kubernetes" to &{{ } {kubernetes default c10e12cf-98d0-11e5-8d98-00215abe5482 8 0 2015-12-02 08:43:26 +0000 UTC <nil> <nil> map[] map[]} [{[{192.168.178.55 <nil>}] [] [{https 8443 TCP} {dns 53 UDP} {dns-tcp 53 TCP}]}]}
I1202 09:12:17.524735 4396 master.go:232] Started Kubernetes API at 0.0.0.0:8443/api/v1
I1202 09:12:17.524914 4396 master.go:232] Started Kubernetes API Extensions at 0.0.0.0:8443/apis/extensions/v1beta1
I1202 09:12:17.525038 4396 master.go:232] Started Origin API at 0.0.0.0:8443/oapi/v1
I1202 09:12:17.525049 4396 master.go:232] Started OAuth2 API at 0.0.0.0:8443/oauth
I1202 09:12:17.525055 4396 master.go:232] Started Login endpoint at 0.0.0.0:8443/login
I1202 09:12:17.525061 4396 master.go:232] Started Web Console 0.0.0.0:8443/console/
I1202 09:12:17.525067 4396 master.go:232] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/
2015-12-02 09:12:18.523290 I | http: TLS handshake error from 192.168.178.21:50932: EOF
2015-12-02 09:12:18.537124 I | http: TLS handshake error from 192.168.178.21:50933: EOF
2015-12-02 09:12:18.549780 I | http: TLS handshake error from 192.168.178.21:50934: EOF
2015-12-02 09:12:18.556966 I | http: TLS handshake error from 192.168.178.21:50935: EOF
2015-12-02 09:12:20.117727 I | skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:53 [rcache 0]
2015-12-02 09:12:20.117804 I | skydns: ready for queries on cluster.local. for udp4://0.0.0.0:53 [rcache 0]
I1202 09:12:20.217891 4396 run_components.go:176] DNS listening at 0.0.0.0:53
I1202 09:12:20.225439 4396 start_master.go:519] Controllers starting (*)
E1202 09:12:20.702335 4396 serviceaccounts_controller.go:218] serviceaccounts "default" already exists
I1202 09:12:21.505391 4396 nodecontroller.go:133] Sending events to api server.
I1202 09:12:21.507690 4396 start_master.go:563] Started Kubernetes Controllers
W1202 09:12:21.944254 4396 nodecontroller.go:572] Missing timestamp for Node intweb3. Assuming now as a timestamp.
I1202 09:12:21.944570 4396 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"intweb3", UID:"intweb3", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'RegisteredNode' Node intweb3 event: Registered Node intweb3 in NodeController
I1202 09:12:22.662116 4396 start_node.go:179] Starting a node connected to https://192.168.178.55:8443
I1202 09:12:22.670163 4396 plugins.go:71] No cloud provider specified.
I1202 09:12:22.670239 4396 start_node.go:284] Starting node intweb3 (v1.1-270-ge592c18)
W1202 09:12:22.681308 4396 node.go:121] Error running 'chcon' to set the kubelet volume root directory SELinux context: exit status 1
I1202 09:12:22.698136 4396 node.go:56] Connecting to Docker at unix:///var/run/docker.sock
I1202 09:12:22.717904 4396 manager.go:128] cAdvisor running in container: "/docker/f80b92397b6eb9052cf318d7225d21eb66941fcb333f16fe2b0330af629f73dd"
I1202 09:12:22.932096 4396 fs.go:108] Filesystem partitions: map[/dev/sda1:{mountpoint:/rootfs/boot major:8 minor:1 fsType: blockSize:0} /dev/mapper/intweb3--vg-root:{mountpoint:/rootfs major:252 minor:0 fsType: blockSize:0}]
I1202 09:12:22.949204 4396 node.go:251] Started Kubernetes Proxy on 0.0.0.0
I1202 09:12:22.974678 4396 start_master.go:582] Started Origin Controllers
I1202 09:12:22.999204 4396 machine.go:48] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1202 09:12:22.999311 4396 manager.go:163] Machine: {NumCores:2 CpuFrequency:2667000 MemoryCapacity:1010421760 MachineID: SystemUUID:26A5835E-1781-DD11-BBDA-5ABE54820021 BootID:6cbd9dcc-5d4d-414d-96e7-c8a41de013f7 Filesystems:[{Device:/dev/mapper/intweb3--vg-root Capacity:156112113664} {Device:/dev/sda1 Capacity:246755328}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:158737629184 Scheduler:none} 252:1:{Name:dm-1 Major:252 Minor:1 Size:1044381696 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:160041885696 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:00:21:5a:be:54:82 Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:1010421760 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown}
I1202 09:12:23.010686 4396 manager.go:169] Version: {KernelVersion:3.19.0-25-generic ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:}
I1202 09:12:23.011734 4396 server.go:820] Watching apiserver
I1202 09:12:23.253556 4396 manager.go:191] Setting dockerRoot to /var/lib/docker
I1202 09:12:23.270558 4396 plugins.go:56] Registering credential provider: .dockercfg
I1202 09:12:23.363525 4396 server.go:779] Started kubelet
E1202 09:12:23.363724 4396 kubelet.go:812] Image garbage collection failed: unable to find data for container /
I1202 09:12:23.370771 4396 kubelet.go:833] Running in container "/kubelet"
I1202 09:12:23.370860 4396 server.go:104] Starting to listen on 0.0.0.0:10250
I1202 09:12:23.734095 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.154869743 +0000 UTC): [579.19167ms] [579.19167ms] Decoded 1 nodes [579.193136ms] [1.466µs] END
I1202 09:12:23.734149 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.154865413 +0000 UTC): [3.352µs] [3.352µs] Decoding dir /openshift.io/imagestreams/test1 START [579.252571ms] [579.249219ms] Decoding dir /openshift.io/imagestreams/test1 END [579.255504ms] [2.933µs] Decoded 1 nodes [579.257181ms] [1.677µs] END
I1202 09:12:23.734204 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.001854335 +0000 UTC): [1.676µs] [1.676µs] About to list directory [732.327694ms] [732.326018ms] List extracted [732.330138ms] [2.444µs] END
I1202 09:12:23.773150 4396 factory.go:236] Registering Docker factory
I1202 09:12:23.779446 4396 factory.go:93] Registering Raw factory
I1202 09:12:24.069082 4396 manager.go:1006] Started watching for new ooms in manager
I1202 09:12:24.074624 4396 oomparser.go:183] oomparser using systemd
I1202 09:12:24.111389 4396 kubelet.go:944] Node intweb3 was previously registered
I1202 09:12:24.112362 4396 manager.go:250] Starting recovery of all containers
I1202 09:12:24.166309 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.155013407 +0000 UTC): [1.011259672s] [1.011259672s] Decoded 1 nodes [1.011261767s] [2.095µs] END
I1202 09:12:24.166422 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.155011032 +0000 UTC): [1.327µs] [1.327µs] Decoding dir /openshift.io/imagestreams/test1 START [1.01138385s] [1.011382523s] Decoding dir /openshift.io/imagestreams/test1 END [1.011386853s] [3.003µs] Decoded 1 nodes [1.01138839s] [1.537µs] END
I1202 09:12:24.166561 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.002949866 +0000 UTC): [3.142µs] [3.142µs] About to list etcd node [152.060049ms] [152.056907ms] Etcd node listed [1.163577016s] [1.011516967s] Node list decoded [1.163587911s] [10.895µs] END
I1202 09:12:24.166656 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.002947281 +0000 UTC): [1.188µs] [1.188µs] About to list directory [1.16368555s] [1.163684362s] List extracted [1.163687576s] [2.026µs] END
I1202 09:12:24.196265 4396 manager.go:255] Recovery completed
I1202 09:12:24.215711 4396 manager.go:118] Starting to sync pod status with apiserver
I1202 09:12:24.215792 4396 kubelet.go:2056] Starting kubelet main sync loop.

UPDATE2:

$ sudo docker -v
Docker version 1.9.1, build a34a1d5
$ cat /etc/*release*
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS"
NAME="Ubuntu"
VERSION="14.04.3 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.3 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

@Clayton: Can you share any link about the mount bug that is supposed to be fixed in 1.10? Thanks! ### Response: What OS and Docker version are you using? There are known issues with volume mounting on older versions of Docker: OpenShift/Kube mounts volumes into containers, but when Docker is running OpenShift, the stock Docker version hides access to the root mounts. Red Hat OSes carry a patch that makes this work; supposedly Docker 1.10 will carry the right fix. The alternative is to download the binary directly and simply start it with sudo openshift start.
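The suggested workaround as a hedged sketch (assuming the openshift all-in-one binary has been downloaded from the Origin releases page and is on PATH):

# run the all-in-one server directly on the host rather than inside Docker
sudo openshift start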
Assuming that your pod still exists: kubectl logs <podname> --previous $ kubectl logs -h -p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod if it exists.
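For example (my-pod is a placeholder for your own pod name), a quick sketch of the workflow:

# find the crashing pod
kubectl get pods
# fetch the logs of the previous (crashed) container instance
kubectl logs my-pod --previous
# the events listed here usually explain the CrashLoopBackOff itself
kubectl describe pod my-pod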
###Instruction: Any idea how to view the log files of a crashed pod in kubernetes? My pod is listing its state as "CrashLoopBackOff" after starting the replicationController. I searched the available docs and couldn't find any. ### Response: Assuming that your pod still exists: kubectl logs <podname> --previous $ kubectl logs -h -p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod if it exists.
Any idea how to view the log files of a crashed pod in kubernetes? My pod is listing its state as "CrashLoopBackOff" after starting the replicationController. I searched the available docs and couldn't find any.
Fixed by adding the --cors-allowed-origins=["http://*"] argument to the /etc/default/kube-apiserver file, then restarting kube-apiserver.
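As a rough sketch (the exact variable name depends on your distribution's init scripts, so KUBE_API_ARGS here is an assumption, and the restart command may be systemctl restart kube-apiserver on systemd hosts):

# /etc/default/kube-apiserver
KUBE_API_ARGS="--cors-allowed-origins=[\"http://*\"]"

# then restart the apiserver, e.g.
sudo service kube-apiserver restart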
###Instruction: Is there a way I can enable CORS on the Kubernetes API so that I can send ajax requests to the Kubernetes API from a different domain? ### Response: Fixed by adding the --cors-allowed-origins=["http://*"] argument to the /etc/default/kube-apiserver file, then restarting kube-apiserver.
Is there a way I can enable CORS on the Kubernetes API so that I can send ajax requests to the Kubernetes API from a different domain?
The project has evolved quite a bit. You can find the Docker containers for CoreOS at https://github.com/ceph/ceph-docker.
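Once the Ceph daemons are up (via those containers or otherwise), the pod side is plain Kubernetes. A minimal sketch of an RBD mount (monitor address, pool, image, and secret name are all placeholders, and the CoreOS node still needs rbd kernel support):

apiVersion: v1
kind: Pod
metadata:
  name: rbd-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: rbdpd
      mountPath: /mnt/rbd
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - "10.16.154.78:6789"   # placeholder monitor address
      pool: rbd
      image: foo              # placeholder RBD image name
      user: admin
      secretRef:
        name: ceph-secret     # placeholder secret holding the client key
      fsType: ext4
      readOnly: false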
###Instruction: Does anyone have success or a pointer on using kubernetes to create a pod that mounts a ceph rbd within the docker container? The following example from kubernetes uses Fedora 21 with an installation of ceph binaries, which won't work in CoreOS. http://kubernetes.io/v1.0/examples/rbd/ or http://www.sebastien-han.fr/blog/2015/06/29/bring-persistent-storage-for-your-containers-with-krbd-on-kubernetes/ ### Response: The project has evolved quite a bit. You can find the Docker containers for CoreOS at https://github.com/ceph/ceph-docker.
Does anyone have success or a pointer on using kubernetes to create a pod that mounts a ceph rbd within the docker container? The following example from kubernetes uses Fedora 21 with an installation of ceph binaries, which won't work in CoreOS. http://kubernetes.io/v1.0/examples/rbd/ or http://www.sebastien-han.fr/blog/2015/06/29/bring-persistent-storage-for-your-containers-with-krbd-on-kubernetes/
I think the design is for whatever manages the endpoint addresses to add/remove them based on liveness. For services backed by pods, the pod IPs are added to endpoints based on the pod's readiness check. If a pod's liveness check fails, it is deleted and its IP removed from the endpoint. If you are manually managing endpoint addresses, the burden is currently on you (or your external health checker) to maintain the addresses/notReadyAddresses in the endpoint.
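For reference, a minimal sketch of a selector-less service with manually managed endpoints (names and the address are placeholders); removing a dead IP from subsets, and re-adding it later, is the part an external health checker would automate:

apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
  - port: 5432

apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db   # must match the service name
subsets:
- addresses:
  - ip: 10.1.2.3    # placeholder shard address; drop it here when the shard is down
  ports:
  - port: 5432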
###Instruction: Is this a thing? I have some legacy services which will never run in Kubernetes that I currently make available to my cluster by defining a service and manually uploading an endpoints object. However, the service is horizontally sharded and we often need to restart one of the endpoints. My google-fu might be weak, but I can't figure out if Kubernetes is clever enough to prevent the Service from repeatedly trying the dead endpoint? The ideal behavior is that the proxy should detect the outage, mark the endpoint as failed, and at some point when the endpoint comes back re-admit it into the full list of working endpoints. BTW, I understand that at present, liveness probes are HTTP only. This would need to be a TCP probe because it's a replicated database service that doesn't grok HTTP. ### Response: I think the design is for whatever manages the endpoint addresses to add/remove them based on liveness. For services backed by pods, the pod IPs are added to endpoints based on the pod's readiness check. If a pod's liveness check fails, it is deleted and its IP removed from the endpoint. If you are manually managing endpoint addresses, the burden is currently on you (or your external health checker) to maintain the addresses/notReadyAddresses in the endpoint.
Is this a thing? I have some legacy services which will never run in Kubernetes that I currently make available to my cluster by defining a service and manually uploading an endpoints object. However, the service is horizontally sharded and we often need to restart one of the endpoints. My google-fu might be weak, but I can't figure out if Kubernetes is clever enough to prevent the Service from repeatedly trying the dead endpoint? The ideal behavior is that the proxy should detect the outage, mark the endpoint as failed, and at some point when the endpoint comes back re-admit it into the full list of working endpoints. BTW, I understand that at present, liveness probes are HTTP only. This would need to be a TCP probe because it's a replicated database service that doesn't grok HTTP.
When you create a service externalized on Google's cloud with the "LoadBalancer" directive, it creates an L3 load balancer. You can also use the new ingress directive to create an L7 (e.g. HTTP) balancer, but that doesn't yet support SSL. To enable SSL, you should follow the HTTP Load Balancing instructions but create an HTTPS service (with an SSL certificate) when configuring the cloud load balancer.
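A rough sketch of the gcloud side (resource and file names are placeholders, and flags may differ slightly by gcloud version; this assumes you already have a certificate/key pair and a URL map from the HTTP load balancing setup):

# upload the certificate
gcloud compute ssl-certificates create my-cert \
    --certificate my-cert.crt --private-key my-cert.key
# put an HTTPS proxy in front of the existing URL map
gcloud compute target-https-proxies create my-https-proxy \
    --url-map my-url-map --ssl-certificate my-cert
# forward port 443 to the proxy
gcloud compute forwarding-rules create my-https-rule \
    --global --target-https-proxy my-https-proxy --port-range 443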
###Instruction: How do I have a web app on HTTPS on google cloud container engine using HTTPS load balancing? I created SslCertificate resource. And, I created a kubernetes service that has port 443 externally open: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"app", "labels":{ "app":"app" } }, "spec":{ "ports": [ { "port":443, "name":"app-server" } ], "selector":{ "app":"app" }, "type": "LoadBalancer" } } , but that's not enough, or right? ### Response: When you create a service externalized on Google's cloud with the "LoadBalancer" directive, it creates an L3 load balancer. You can also use the new ingress directive to create an L7 (e.g. HTTP) balancer, but that doesn't yet support SSL. To enable SSL, you should follow the HTTP Load Balancing instructions but create an HTTPS service (with an SSL certificate) when configuring the cloud load balancer.
How do I have a web app on HTTPS on google cloud container engine using HTTPS load balancing? I created SslCertificate resource. And, I created a kubernetes service that has port 443 externally open: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"app", "labels":{ "app":"app" } }, "spec":{ "ports": [ { "port":443, "name":"app-server" } ], "selector":{ "app":"app" }, "type": "LoadBalancer" } } , but that's not enough, or right?
I think the Kubernetes Deployment object is what you are looking for. It is an object in the Kubernetes REST API (as opposed to the client-side magic in kubectl rolling-update). You can specify .spec.strategy.type==RollingUpdate as your Deployment strategy to get behavior similar to kubectl rolling-update.
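A minimal sketch of such a Deployment (extensions/v1beta1, which is still experimental at this point and may need to be enabled on your cluster; names and image are placeholders):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate         # replace pods gradually, like kubectl rolling-update
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myrepo/my-app:v2   # change this and PUT the object to trigger a rollout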
###Instruction: After a quick search of the API docs I found out that in Kubernetes there is no REST API provided for kubectl rolling-update. Is there any other alternative for performing a rolling update, by calling several APIs or so? Thanks in advance. ### Response: I think the Kubernetes Deployment object is what you are looking for. It is an object in the Kubernetes REST API (as opposed to the client-side magic in kubectl rolling-update). You can specify .spec.strategy.type==RollingUpdate as your Deployment strategy to get behavior similar to kubectl rolling-update.
After a quick search of the API docs I found out that in Kubernetes there is no REST API provided for kubectl rolling-update. Is there any other alternative for performing a rolling update, by calling several APIs or so? Thanks in advance.
It seems like the documentation has an out-of-date kube-apiserver.yaml file. I ran into this issue with another deployment guide. You should file this as a bug on their GitHub page. The image for the API server, gcr.io/google_containers/kube-apiserver:9680e782e08a1a1c94c656190011bd02, is at v0.18.0 from several months ago. You will need to replace the "image" line in kube-apiserver.yaml on each machine with the current image. I'm not sure what the current image is, but I'll keep digging.
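For illustration, the line to change on each master would look something like this (the tag shown is purely hypothetical; use whatever the current release is):

containers:
- name: kube-apiserver
  image: gcr.io/google_containers/kube-apiserver:SOME_CURRENT_TAG  # hypothetical tag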
###Instruction: I'm currently trying to configure a highly available master cluster. I followed the proper documentation but I'm facing the following issue. My kubectl version is v1.1.2 kubectl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"} error: Failed to negotiate an api version. Server supports: map[v1beta1:{} v1beta2:{} v1beta3:{}]. Client supports: [v1 extensions/v1beta1]. And my apiserver doesn't match the same version: curl -ku kube:changeme https://10.115.99.31/version { "major": "0", "minor": "18+", "gitVersion": "v0.18.0-71-g0bb78fe6c53ce3-dirty", "gitCommit": "0bb78fe6c53ce38198cc3805c78308cdd4805ac8", "gitTreeState": "dirty" } I didn't find a way to list the tags for the kube-apiserver docker images from the google repository. How can I do that, please? Regards, Smana ### Response: It seems like the documentation has an out-of-date kube-apiserver.yaml file. I ran into this issue with another deployment guide. You should file this as a bug on their GitHub page. The image for the API server, gcr.io/google_containers/kube-apiserver:9680e782e08a1a1c94c656190011bd02, is at v0.18.0 from several months ago. You will need to replace the "image" line in kube-apiserver.yaml on each machine with the current image. I'm not sure what the current image is, but I'll keep digging.
I'm currently trying to configure a highly available master cluster. I followed the proper documentation but I'm facing the following issue. My kubectl version is v1.1.2 kubectl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"} error: Failed to negotiate an api version. Server supports: map[v1beta1:{} v1beta2:{} v1beta3:{}]. Client supports: [v1 extensions/v1beta1]. And my apiserver doesn't match the same version: curl -ku kube:changeme https://10.115.99.31/version { "major": "0", "minor": "18+", "gitVersion": "v0.18.0-71-g0bb78fe6c53ce3-dirty", "gitCommit": "0bb78fe6c53ce38198cc3805c78308cdd4805ac8", "gitTreeState": "dirty" } I didn't find a way to list the tags for the kube-apiserver docker images from the google repository. How can I do that, please? Regards, Smana
I forgot to open port 10250 (tcp) (in my aws security group). This was the only issue for me.
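For anyone hitting the same thing, a sketch of opening it with the AWS CLI (the security group ID and CIDR are placeholders for your own master security group and cluster network):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 10250 \
    --cidr 10.0.0.0/16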
###Instruction: I'm working on OpenShift Origin 1.1 (which is using kubernetes as its orchestration tool for docker containers). I'm creating pods, but I'm unable to see the build-logs. [user@ip master]# oc get pods NAME READY STATUS RESTARTS AGE test-1-build 0/1 Completed 0 14m test-1-iok8n 1/1 Running 0 12m [user@ip master]# oc logs test-1-iok8n Error from server: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout My /var/logs/messages shows: Dec 4 13:28:24 ip-10-0-x-x origin-master: E1204 13:28:24.579794 32518 apiserver.go:440] apiserver was unable to write a JSON response: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout Dec 4 13:28:24 ip-10-0-x-x origin-master: E1204 13:28:24.579822 32518 errors.go:62] apiserver received an error that is not an unversioned.Status: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout My versions are: origin v1.1.0.1-1-g2c6ff4b kubernetes v1.1.0-origin-1107-g4c8e6f4 etcd 2.1.2 ### Response: I forgot to open port 10250 (tcp) (in my aws security group). This was the only issue for me.
I'm working on OpenShift Origin 1.1 (which is using kubernetes as its orchestration tool for docker containers). I'm creating pods, but I'm unable to see the build-logs. [user@ip master]# oc get pods NAME READY STATUS RESTARTS AGE test-1-build 0/1 Completed 0 14m test-1-iok8n 1/1 Running 0 12m [user@ip master]# oc logs test-1-iok8n Error from server: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout My /var/logs/messages shows: Dec 4 13:28:24 ip-10-0-x-x origin-master: E1204 13:28:24.579794 32518 apiserver.go:440] apiserver was unable to write a JSON response: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout Dec 4 13:28:24 ip-10-0-x-x origin-master: E1204 13:28:24.579822 32518 errors.go:62] apiserver received an error that is not an unversioned.Status: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout My versions are: origin v1.1.0.1-1-g2c6ff4b kubernetes v1.1.0-origin-1107-g4c8e6f4 etcd 2.1.2
It's actually the Pod or Replication Controller that is having the issue, because it is not forwarding to the service. Perhaps post that config, or make sure it has the port specified and its containers' processes are listening on the right port. Original: It's the NodePort that is actually exposed outside of the pod. Port is the port on the NAT network within the node, and TargetPort is what the process inside the container should bind to, usually found via service discovery. Other pods will talk to that pod through the service's Port. If you want to set the NodePort explicitly for, say, a web server, then in your Pod's definition or a replication controller or service definition, explicitly set NodePort to the desired port. Therefore, Port: 80 would be, say, nginx inside a container listening on port 80, and NodePort: 4980 would be the exposed port. So you would wget <Node IP>:4980. As far as fixing your particular situation, I recommend not complicating it as much and explicitly setting TargetPort and NodePort.
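To make the three port fields concrete, a sketch of a service that sets all of them explicitly (values are examples only; note nodePort must fall inside the cluster's service node port range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 80    # port the container process actually listens on
    nodePort: 30480   # externally reachable port on every node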
###Instruction: I have created a kubernetes service: [root@Infra-1 kubernetes]# kubectl describe service gitlab Name: gitlab Namespace: default Labels: name=gitlab Selector: name=gitlab Type: NodePort IP: 10.254.101.207 Port: http 80/TCP NodePort: http 31982/TCP Endpoints: 172.17.0.4:80 Port: ssh 22/TCP NodePort: ssh 30394/TCP Endpoints: 172.17.0.4:22 Session Affinity: None No events. However, I am unable to connect to the Endpoint, not even from the shell on the node host: [root@Infra-2 ~]# wget 172.17.0.4:80 --2015-12-08 20:22:27-- http://172.17.0.4:80/ Connecting to 172.17.0.4:80... failed: Connection refused. Calling wget localhost:31982 on the NodePort also gives a Recv failure: Connection reset by peer and the kube-proxy logs error messages: Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973209 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973294 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973376 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973482 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973494 26410 proxysocket.go:134] Failed to connect to balancer: failed to connect to an endpoint. What could be the reason for this failure? Here is my service configuration file http://pastebin.com/RriYPRg7, a slight modification of https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml ### Response: It's actually the Pod or Replication Controller that is having the issue, because it is not forwarding to the service. Perhaps post that config, or make sure it has the port specified and its containers' processes are listening on the right port. Original: It's the NodePort that is actually exposed outside of the pod. Port is the port on the NAT network within the node, and TargetPort is what the process inside the container should bind to, usually found via service discovery. Other pods will talk to that pod through the service's Port. If you want to set the NodePort explicitly for, say, a web server, then in your Pod's definition or a replication controller or service definition, explicitly set NodePort to the desired port. Therefore, Port: 80 would be, say, nginx inside a container listening on port 80, and NodePort: 4980 would be the exposed port. So you would wget <Node IP>:4980. As far as fixing your particular situation, I recommend not complicating it as much and explicitly setting TargetPort and NodePort.
I have created a kubernetes service: [root@Infra-1 kubernetes]# kubectl describe service gitlab Name: gitlab Namespace: default Labels: name=gitlab Selector: name=gitlab Type: NodePort IP: 10.254.101.207 Port: http 80/TCP NodePort: http 31982/TCP Endpoints: 172.17.0.4:80 Port: ssh 22/TCP NodePort: ssh 30394/TCP Endpoints: 172.17.0.4:22 Session Affinity: None No events. However, I am unable to connect to the Endpoint, not even from the shell on the node host: [root@Infra-2 ~]# wget 172.17.0.4:80 --2015-12-08 20:22:27-- http://172.17.0.4:80/ Connecting to 172.17.0.4:80... failed: Connection refused. Calling wget localhost:31982 on the NodePort also gives a Recv failure: Connection reset by peer and the kube-proxy logs error messages: Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973209 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973294 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973376 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973482 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973494 26410 proxysocket.go:134] Failed to connect to balancer: failed to connect to an endpoint. What could be the reason for this failure? Here is my service configuration file http://pastebin.com/RriYPRg7, a slight modification of https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml
Both sysdig and Datadog provide this functionality as well.
###Instruction: Is it possible to somehow send alerts (to email / slack) based on events that occur within a Kubernetes cluster? In particular, it would be useful to get an alert if a pod has restarted unexpectedly or if a pod cannot start. Similarly it would be useful to know if a pod's CPU usage was over a certain threshold and get an alert. We have Heapster (with InfluxDB / Grafana backend) installed. While this gives useful data, it unfortunately does not provide us with alerting. ### Response: Both sysdig and Datadog provide this functionality as well.
Is it possible to somehow send alerts (to email / slack) based on events that occur within a Kubernetes cluster? In particular, it would be useful to get an alert if a pod has restarted unexpectedly or if a pod cannot start. Similarly it would be useful to know if a pod's CPU usage was over a certain threshold and get an alert. We have Heapster (with InfluxDB / Grafana backend) installed. While this gives useful data, it unfortunately does not provide us with alerting.
/16 means a subnet mask of 255.255.0.0 (instead of 255.255.255.0 with /24). The error message comes from pkg/registry/service/ipallocator/allocator_test.go#L196-L198 if !network.IP.Equal(cidr.IP) || network.Mask.String() != cidr.Mask.String() { t.Fatalf("mismatched networks: %s : %s", network, cidr) } It might be that the host network mask (seen in ipconfig if the host is Windows, or ifconfig as in this script) is different from the CIDR mask used by kubernetes. Try with /24 just for testing. See also issue 5 (Network comportment). In the end, the OP batazor confirmed in the comments an issue on the Kubernetes side: kubernetes updated from version 1.0.3 to 1.0.6 and got the docker0 mask to 255.255.255.0. This is some sort of magic.
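In practice this means making the apiserver's configured service range agree with the rest of the config. A sketch of the relevant flag (where it lives depends on your install; the values mirror the question's):

# in the kube-apiserver invocation / its config file
--service-cluster-ip-range=10.3.0.0/24
# /24 -> mask 255.255.255.0, /16 -> mask 255.255.0.0; both sides must agree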
###Instruction: Can anyone explain what causes this message in kubernetes: The Service "skudns" is invalid:spec.clusterIP: invalid value '': the provided range does not match the current range Note that DNS_SERVICE_IP = 10.3.0.10 and SERVICE_IP_RANGE = 10.3.0.0/16. My IP addresses: K8S_SERVICE_IP: 10.3.0.1 MASTER_HOST: 192.168.0.139 ETCD_ENDPOINT=ETCD_CLUSTER=http://192.169.0.139:2379,http://192.168.0.107:2379 POD_NETWORK: 10.2.0.0/16 SERVICE_IP_RANGE: 10.3.0.0/24 DNS_SERVICE_IP: 10.3.0.10 ADVERTISE_IP: 192.168.0.139 ### Response: /16 means a subnet mask of 255.255.0.0 (instead of 255.255.255.0 with /24). The error message comes from pkg/registry/service/ipallocator/allocator_test.go#L196-L198 if !network.IP.Equal(cidr.IP) || network.Mask.String() != cidr.Mask.String() { t.Fatalf("mismatched networks: %s : %s", network, cidr) } It might be that the host network mask (seen in ipconfig if the host is Windows, or ifconfig as in this script) is different from the CIDR mask used by kubernetes. Try with /24 just for testing. See also issue 5 (Network comportment). In the end, the OP batazor confirmed in the comments an issue on the Kubernetes side: kubernetes updated from version 1.0.3 to 1.0.6 and got the docker0 mask to 255.255.255.0. This is some sort of magic.
Can anyone explain what causes this message in kubernetes: The Service "skudns" is invalid:spec.clusterIP: invalid value '': the provided range does not match the current range Note that DNS_SERVICE_IP = 10.3.0.10 and SERVICE_IP_RANGE = 10.3.0.0/16. My IP addresses: K8S_SERVICE_IP: 10.3.0.1 MASTER_HOST: 192.168.0.139 ETCD_ENDPOINT=ETCD_CLUSTER=http://192.169.0.139:2379,http://192.168.0.107:2379 POD_NETWORK: 10.2.0.0/16 SERVICE_IP_RANGE: 10.3.0.0/24 DNS_SERVICE_IP: 10.3.0.10 ADVERTISE_IP: 192.168.0.139
Google has developer documentation for using OAuth 2.0 to Access Google APIs (and the Google Container Engine is one such API). It explains the various authentication flows and how to get access tokens.
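For quick experiments, a sketch that sidesteps the manual refresh-token exchange by letting gcloud mint the token (the project ID follows the question, the zone needs a suffix such as us-central1-a, and gcloud must already be authenticated):

TOKEN=$(gcloud auth print-access-token)
curl -H "Authorization: Bearer ${TOKEN}" \
    "https://container.googleapis.com/v1/projects/virtual-cycling-11111/zones/us-central1-a/clusters"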
###Instruction: I want to access the container engine REST APIs given here - http://kubernetes.io/third_party/swagger-ui/#/ To access the above APIs, I did the following: 1) I created a container cluster with project Id virtual-cycling-11111, zone us-central1, and API name serverconfig 2) I created an OAuth 2.0 client Id and secret key. I am using the following method to generate an access token: curl -H "Content-Type: application/json" -d' { "client_id": "757054420263-09g36ip2jdt6kcl6cvlfl17faaaaaaa.apps.googleusercontent.com", "client_secret": "NyZ0YwvEQAMaeNTD4dfgtht", "refresh_token": "1/6BMfW9j53gdGIasdfUH5kU5RsR4zwI9lUVX-tqf8JXQ", "grant_type": "refresh_token" } ' https://www.googleapis.com/oauth2/v4/token How do I generate an access token and use it to access the REST API? Also, can I use this in a browser to get output? ### Response: Google has developer documentation for using OAuth 2.0 to Access Google APIs (and the Google Container Engine is one such API). It explains the various authentication flows and how to get access tokens.
I want to access the container engine REST APIs given here - http://kubernetes.io/third_party/swagger-ui/#/ To access the above APIs, I did the following: 1) I created a container cluster with project Id virtual-cycling-11111, zone us-central1, and API name serverconfig 2) I created an OAuth 2.0 client Id and secret key. I am using the following method to generate an access token: curl -H "Content-Type: application/json" -d' { "client_id": "757054420263-09g36ip2jdt6kcl6cvlfl17faaaaaaa.apps.googleusercontent.com", "client_secret": "NyZ0YwvEQAMaeNTD4dfgtht", "refresh_token": "1/6BMfW9j53gdGIasdfUH5kU5RsR4zwI9lUVX-tqf8JXQ", "grant_type": "refresh_token" } ' https://www.googleapis.com/oauth2/v4/token How do I generate an access token and use it to access the REST API? Also, can I use this in a browser to get output?
The cluster autoscaler generally sets the number of nodes based on the target utilization level of your VMs. It doesn't know anything about what you are running on the VMs (pods or otherwise) and only looks at the utilization. The Google Container Engine / Kubernetes scheduler looks at the resource requests for each pod and finds an available node on which to run the pod. If there isn't space available, then the pod will stay in the Pending state rather than start running. It sounds like you are experiencing a situation where the pods that are running aren't using sufficient CPU to cause the autoscaler to add new nodes to your cluster, but the existing nodes don't have enough capacity for the pods that you want to schedule. When configuring the VM autoscaler, you can set the minimum number of VMs (see https://cloud.google.com/compute/docs/reference/latest/autoscalers#resource) based on the minimum pod footprint that you want to always be running in your cluster. Then the autoscaler won't delete the VMs that are necessary for all of your pods to run. You can also look at the Horizontal Pod Autoscaler in Kubernetes 1.1 to increase the number of pod replicas in your replication controller based on their observed CPU usage.
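On the pod side, a sketch of driving the (beta, at this point) horizontal pod autoscaler from kubectl; the controller name and thresholds are just examples:

# keep between 1 and 10 replicas of a replication controller,
# targeting 80% CPU utilization per pod
kubectl autoscale rc my-frontend --min=1 --max=10 --cpu-percent=80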
###Instruction: We had a GKE cluster with 3 nodes. On those nodes one ReplicationController was set to run 3 pods of type A and another ReplicationController was set to run 4 pods of type B. We set up an instance group manager to autoscale the nodes on CPU. Since there was no load on the cluster it scaled down to 1 node. Now that node was running only 2 pods of type B and 0 of type A. I was kinda expecting it to at least have 1 pod of A and 1 of B left after the scale down, but that didn't happen. Is there a way to configure Kubernetes (or GKE) to always have at least 1 of each pod? ### Response: The cluster autoscaler generally sets the number of nodes based on the target utilization level of your VMs. It doesn't know anything about what you are running on the VMs (pods or otherwise) and only looks at the utilization. The Google Container Engine / Kubernetes scheduler looks at the resource requests for each pod and finds an available node on which to run the pod. If there isn't space available, then the pod will stay in the Pending state rather than start running. It sounds like you are experiencing a situation where the pods that are running aren't using sufficient CPU to cause the autoscaler to add new nodes to your cluster, but the existing nodes don't have enough capacity for the pods that you want to schedule. When configuring the VM autoscaler, you can set the minimum number of VMs (see https://cloud.google.com/compute/docs/reference/latest/autoscalers#resource) based on the minimum pod footprint that you want to always be running in your cluster. Then the autoscaler won't delete the VMs that are necessary for all of your pods to run. You can also look at the Horizontal Pod Autoscaler in Kubernetes 1.1 to increase the number of pod replicas in your replication controller based on their observed CPU usage.
We had a GKE cluster with 3 nodes. On those nodes one ReplicationController was set to run 3 pods of type A and another ReplicationController was set to run 4 pods of type B. We set up an instance group manager to autoscale the nodes on CPU. Since there was no load on the cluster it scaled down to 1 node. Now that node was running only 2 pods of type B and 0 of type A. I was kinda expecting it to at least have 1 pod of A and 1 of B left after the scale down, but that didn't happen. Is there a way to configure Kubernetes (or GKE) to always have at least 1 of each pod?
is it currently possible (not necessarily recommended) to span a single K8/OpenShift cluster across multiple internal corporate datacenters? Yes, it is currently possible. Nodes are given the address of an apiserver and client credentials and then register themselves into the cluster. Nodes don't know (or care) whether the apiserver is local or remote, and the apiserver allows any node to register as long as it has valid credentials, regardless of where the node exists on the network. Additionally assuming that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent. This is important, as many of the settings in Kubernetes assume (either implicitly or explicitly) a high-bandwidth, low-latency network between the apiserver and nodes. Example: Given 3 corporate DC's, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC with pods/rc's/services/... being spun up across all 3 DC's. The downside of this approach is that if you have one global cluster, you have one global point of failure. Even if you have replicated, HA master components, data corruption can still take your entire cluster offline. And a bad config propagated to all pods in a replication controller can take your entire service offline. A bad node image push can take all of your nodes offline. And so on. This is one of the reasons that we encourage folks to use a cluster per failure domain rather than a single global cluster.
###Instruction: With the understanding that Ubernetes is designed to fully solve this problem, is it currently possible (not necessarily recommended) to span a single K8/OpenShift cluster across multiple internal corporate datacenters? Additionally assuming that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent. Example: Given 3 corporate DC's, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC with pods/rc's/services/... being spun up across all 3 DC's. Has someone implemented something like this as a stopgap solution before Ubernetes drops and, if so, how has it worked and what would be some considerations to take into account when running like this? ### Response: is it currently possible (not necessarily recommended) to span a single K8/OpenShift cluster across multiple internal corporate datacenters? Yes, it is currently possible. Nodes are given the address of an apiserver and client credentials and then register themselves into the cluster. Nodes don't know (or care) whether the apiserver is local or remote, and the apiserver allows any node to register as long as it has valid credentials, regardless of where the node exists on the network. Additionally assuming that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent. This is important, as many of the settings in Kubernetes assume (either implicitly or explicitly) a high-bandwidth, low-latency network between the apiserver and nodes. Example: Given 3 corporate DC's, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC with pods/rc's/services/... being spun up across all 3 DC's. The downside of this approach is that if you have one global cluster, you have one global point of failure. Even if you have replicated, HA master components, data corruption can still take your entire cluster offline. And a bad config propagated to all pods in a replication controller can take your entire service offline. A bad node image push can take all of your nodes offline. And so on. This is one of the reasons that we encourage folks to use a cluster per failure domain rather than a single global cluster.
With the understanding that Ubernetes is designed to fully solve this problem, is it currently possible (not necessarily recommended) to span a single K8/OpenShift cluster across multiple internal corporate datacenters? Additionally assuming that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent. Example: Given 3 corporate DC's, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC with pods/rc's/services/... being spun up across all 3 DC's. Has someone implemented something like this as a stopgap solution before Ubernetes drops and, if so, how has it worked and what would be some considerations to take into account when running like this?
I do that same exact thing in my Kubernetes cluster. Our solution is for the application to handle the web socket disconnect with consistent state kept intact. Other options you have are to mount a volume to serve from the host; however, you cannot guarantee all nginx pods will have that volume across multiple hosts unless you use a Kubernetes persistent volume http://kubernetes.io/v1.1/docs/user-guide/persistent-volumes.html. Another option you have is to keep your static content on an object store like S3, Google Cloud Storage or Ceph, and then proxy the object store through nginx along with the websocket.
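A sketch of the persistent-volume option (the claim name is a placeholder for a PVC you have already created); content on the volume can then change without restarting the pods:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-static
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html   # nginx default docroot
  volumes:
  - name: content
    persistentVolumeClaim:
      claimName: static-content           # placeholder, pre-created PVC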
###Instruction: I'm currently building a Kubernetes cluster. I plan on using Nginx containers as a server for static content, and to act as a web socket proxy. If you restart Nginx, you lose your web socket connection, so I do not want to restart the containers. But I will want to update the content within the container. ### Response: I do that same exact thing in my Kubernetes cluster. Our solution is for the application to handle the web socket disconnect with consistent state kept intact. Other options you have are to mount a volume to serve from the host; however, you cannot guarantee all nginx pods will have that volume across multiple hosts unless you use a Kubernetes persistent volume http://kubernetes.io/v1.1/docs/user-guide/persistent-volumes.html. Another option you have is to keep your static content on an object store like S3, Google Cloud Storage or Ceph, and then proxy the object store through nginx along with the websocket.
I'm currently building a Kubernetes cluster. I plan on using Nginx containers as a server for static content, and to act as a web socket proxy. If you restart Nginx, you lose your web socket connection, so I do not want to restart the containers. But I will want to update the content within the container.
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes. Here are a few alternatives (that may solve your problem): Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way with which you are already familiar, but the disadvantage that you can no longer parameterize your container without rebuilding it. Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects). Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume. Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
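A sketch of the secret-based option (names and the base64 payload are placeholders; the sample payload decodes to "user nginx;"):

apiVersion: v1
kind: Secret
metadata:
  name: nginx-conf
data:
  nginx.conf: dXNlciBuZ2lueDsK      # base64 of your config file

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx-from-secret   # nginx.conf appears as a file here
  volumes:
  - name: conf
    secret:
      secretName: nginx-conf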
###Instruction: I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through kubernetes, e.g. as we do with the ADD command in a Dockerfile... ### Response: There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes. Here are a few alternatives (that may solve your problem): Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way with which you are already familiar, but the disadvantage that you can no longer parameterize your container without rebuilding it. Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects). Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume. Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through kubernetes, e.g. as we do with the ADD command in a Dockerfile...
The state of the pod doesn't change while preStop hooks are run -- the preStop hook is run in the container, and then the container is stopped.
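One related caveat: exec hooks are not run through a shell, so the backticks in a command like the one above won't be expanded. A sketch that invokes a shell explicitly (assuming /bin/sh and pidof exist in the image):

lifecycle:
  preStop:
    exec:
      # run via a shell so command substitution works
      command: ["/bin/sh", "-c", "kill -SIGTERM $(pidof java)"]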
###Instruction: I'm using preStop command to gracefully shutdown my server application when I delete a pod. What is the state of the pod/ container when it runs preStop command? For example, does it stop the network interfaces before running the preStop command? lifecycle: preStop: exec: command: ["kill", "-SIGTERM", "`pidof java`"] ### Response: The state of the pod doesn't change while preStop hooks are run -- the preStop hook is run in the container, and then the container is stopped.
I'm using preStop command to gracefully shutdown my server application when I delete a pod. What is the state of the pod/ container when it runs preStop command? For example, does it stop the network interfaces before running the preStop command? lifecycle: preStop: exec: command: ["kill", "-SIGTERM", "`pidof java`"]
It looks like this bug has been fixed. I just copied an instance template for my Google Container Engine cluster in the UI, modified only the startup script field, and created a new instance template. Please try this again, as I believe it will now work for you as you expect.
###Instruction: Trying to set up a pilot in GCE to try out GKE. I'm trying to create a new instance template from a copy of the one created by "gcloud container clusters create", with more space per instance, and the create is just hanging??? Is there something obvious that I'm not doing? ### Response: It looks like this bug has been fixed. I just copied an instance template for my Google Container Engine cluster in the UI, modified only the startup script field, and created a new instance template. Please try this again, as I believe it will now work for you as you expect.
Trying to set up a pilot in GCE to try out GKE. I'm trying to create a new instance template from a copy of the one created by "gcloud container clusters create", with more space per instance, and the create is just hanging??? Is there something obvious that I'm not doing?
Port 5000 is usually used by the local docker registry. It is an add-on, though. Is your cluster pulling images from that local registry? If so, is it working? How is it set up? This link may help you figure out your config issues: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry
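A quick sketch of verifying the add-on (the names below follow the add-on's defaults and may differ in your setup):

# the add-on registry normally lives in the kube-system namespace
kubectl get pods --namespace=kube-system | grep registry
kubectl get svc kube-registry --namespace=kube-system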
###Instruction: All of the nodes in our AWS kubernetes cluster (Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}) are getting the following messages sent to /var/log/syslog which are filling the disk very quickly (32GB in about 24 hours). Dec 4 03:13:36 ubuntu kube-proxy[15171]: I1204 03:13:36.961584 15171 proxysocket.go:130] Accepted TCP connection from 172.30.0.164:58063 to 172.30.0.39:33570 Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.961775 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.961888 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962104 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962275 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962299 15171 proxysocket.go:133] Failed to connect to balancer: failed to connect to an endpoint. Dec 4 03:13:36 ubuntu kube-proxy[15171]: I1204 03:13:36.962380 15171 proxysocket.go:130] Accepted TCP connection from 172.30.0.87:29540 to 172.30.0.39:33570 Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962630 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962746 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962958 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.963084 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.963105 15171 proxysocket.go:133] Failed to connect to balancer: failed to connect to an endpoint. We created the cluster using export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash if that is relevant. Can anyone point me into the right direction as to the cause? ### Response: port 5000 is used by the local docker registry usually. It is an add-on though. Is your cluster pulling images from that local registry? If so, is it working? how is it setup? this link may help figure your config issues: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry
All of the nodes in our AWS kubernetes cluster (Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}) are getting the following messages sent to /var/log/syslog which are filling the disk very quickly (32GB in about 24 hours). Dec 4 03:13:36 ubuntu kube-proxy[15171]: I1204 03:13:36.961584 15171 proxysocket.go:130] Accepted TCP connection from 172.30.0.164:58063 to 172.30.0.39:33570 Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.961775 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.961888 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962104 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962275 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962299 15171 proxysocket.go:133] Failed to connect to balancer: failed to connect to an endpoint. Dec 4 03:13:36 ubuntu kube-proxy[15171]: I1204 03:13:36.962380 15171 proxysocket.go:130] Accepted TCP connection from 172.30.0.87:29540 to 172.30.0.39:33570 Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962630 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962746 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962958 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.963084 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.963105 15171 proxysocket.go:133] Failed to connect to balancer: failed to connect to an endpoint. We created the cluster using export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash if that is relevant. Can anyone point me into the right direction as to the cause?
It happens when the service endpoint is not ready. To verify that, please check the output of kubectl get endpoints against the Cassandra service. If it is blank, it means KubernetesSeedProvider is not able to deserialize the output received from the Kubernetes API server, as the endpoint's address is in the notReadyAddresses state. One possible work-around to this problem is to create a Cassandra pod before creating the Cassandra service.
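A sketch of that check (the output shown is illustrative, not literal):

kubectl get endpoints cassandra
# healthy case shows a ready address, e.g.
# NAME        ENDPOINTS          AGE
# cassandra   10.244.1.5:9042    1m
# an empty ENDPOINTS column means the pod is not ready yet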
###Instruction: I have a Cassandra image that worked with a GKE cluster v1.0.7 but has occassional issues starting on a new GKE cluster at v1.1.1 (no changes to the image or how it is created with kubectl just pointing to a new cluster). I am using kubernetes-cassandra.jar from the kubernetes Cassandra example on github. I see the following in kubectl logs. INFO 21:57:01 Getting endpoints from https://kubernetes.default.cluster.local/api/v1/namespaces/default/endpoints/cassandra ERROR 21:57:01 Fatal error during configuration loading java.lang.NullPointerException: null at io.k8s.cassandra.KubernetesSeedProvider.getSeeds(KubernetesSeedProvider.java:129) ~[kubernetes-cassandra.jar:na] at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:659) ~[apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:136) ~[apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:168) [apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.11.jar:2.1.11] null Fatal error during configuration loading; unable to start. See log for stacktrace. $ kubectl get pods NAME READY STATUS RESTARTS AGE cassandra 0/1 CrashLoopBackOff 8 13m Has anyone seen this error or have ideas on how to troubleshoot? ### Response: It happens when service endpoint is not ready. To verify that, please check output of kubectl get endpoints against Cassandra service. If it is blank, then it means KubernetesSeedProvider is not able to deserialize the output received from Kubernetes API server as the endpoint's address is in notReadyAddresses state. One possible work-around to this problem is to create a Cassandra pod before creating Cassandra service.
I have a Cassandra image that worked with a GKE cluster v1.0.7 but has occassional issues starting on a new GKE cluster at v1.1.1 (no changes to the image or how it is created with kubectl just pointing to a new cluster). I am using kubernetes-cassandra.jar from the kubernetes Cassandra example on github. I see the following in kubectl logs. INFO 21:57:01 Getting endpoints from https://kubernetes.default.cluster.local/api/v1/namespaces/default/endpoints/cassandra ERROR 21:57:01 Fatal error during configuration loading java.lang.NullPointerException: null at io.k8s.cassandra.KubernetesSeedProvider.getSeeds(KubernetesSeedProvider.java:129) ~[kubernetes-cassandra.jar:na] at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:659) ~[apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:136) ~[apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:168) [apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.11.jar:2.1.11] null Fatal error during configuration loading; unable to start. See log for stacktrace. $ kubectl get pods NAME READY STATUS RESTARTS AGE cassandra 0/1 CrashLoopBackOff 8 13m Has anyone seen this error or have ideas on how to troubleshoot?
In addition to "NodePort" types of services there are some additional ways to be able to interact with kubernetes services from outside of cluster. Maybe they will be more "natural" and easy: Use service type "LoadBalancer". It works only for some cloud providers and will not work for virtualbox for example, but I think it will be good to know about that feature. In that case you will get not only "internal cluster-only" ip address for your service but also externally configured load balancer to access it (in aws/gce etc.) Link to the documentation Use one of the latest features called "ingress". Here is description from manual "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation If kubernetes is not strict requirements and you can switch to latest openshift origin (which is "kubernetes on steroids") you can use origin feature called "router". Information about openshift origin. Information about openshift origin routes
###Instruction: I have created a kubernetes service: [root@Infra-1 kubernetes]# kubectl describe service gitlab Name: gitlab Namespace: default Labels: name=gitlab Selector: name=gitlab Type: NodePort IP: 10.254.101.207 Port: http 80/TCP NodePort: http 31982/TCP Endpoints: 172.17.0.4:80 Port: ssh 22/TCP NodePort: ssh 30394/TCP Endpoints: 172.17.0.4:22 Session Affinity: None No events. However, I am unable to connect to the Endpoint, not even from the shell on the node host: [root@Infra-2 ~]# wget 172.17.0.4:80 --2015-12-08 20:22:27-- http://172.17.0.4:80/ Connecting to 172.17.0.4:80... failed: Connection refused. Calling wget localhost:31982 on the NodePort also gives a Recv failure: Connection reset by peer and the kube-proxy logs error messages: Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973209 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973294 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973376 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973482 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973494 26410 proxysocket.go:134] Failed to connect to balancer: failed to connect to an endpoint. What could be the reason for this failure? Here is my service configuration file http://pastebin.com/RriYPRg7, a slight modification of https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml ### Response: In addition to "NodePort" types of services, there are some additional ways to interact with kubernetes services from outside the cluster. Maybe they will be more "natural" and easy: Use service type "LoadBalancer". It works only for some cloud providers and will not work for VirtualBox, for example, but I think it is good to know about that feature. In that case you will get not only an "internal cluster-only" IP address for your service but also an externally configured load balancer to access it (on AWS/GCE etc.). Link to the documentation. Use one of the latest features, called "ingress". Here is the description from the manual: "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation. If kubernetes is not a strict requirement and you can switch to the latest openshift origin (which is "kubernetes on steroids"), you can use the origin feature called "router". Information about openshift origin. Information about openshift origin routes.
I have created a kubernetes service: [root@Infra-1 kubernetes]# kubectl describe service gitlab Name: gitlab Namespace: default Labels: name=gitlab Selector: name=gitlab Type: NodePort IP: 10.254.101.207 Port: http 80/TCP NodePort: http 31982/TCP Endpoints: 172.17.0.4:80 Port: ssh 22/TCP NodePort: ssh 30394/TCP Endpoints: 172.17.0.4:22 Session Affinity: None No events. However, am unable to connect to connect to the Endpoint, not even from the shell on the node host: [root@Infra-2 ~]# wget 172.17.0.4:80 --2015-12-08 20:22:27-- http://172.17.0.4:80/ Connecting to 172.17.0.4:80... failed: Connection refused. Calling wget localhost:31982 on the NodePort also gives a Recv failure: Connection reset by peer and the kube-proxy logs error messages: Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973209 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973294 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973376 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973482 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973494 26410 proxysocket.go:134] Failed to connect to balancer: failed to connect to an endpoint. What could be the reason for this failure? Here is my service configuration file http://pastebin.com/RriYPRg7, a slight modification of https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml
It sounds like you don't actually want command line arguments, but environment variables - and you can use env for that: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 env: - name: spring.profiles.dynamic value: local You can use command line arguments: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 args: - foo - bar - "String containing:colons:"
###Instruction: Need to pass command line arguments for the docker containers appContainer1 & appContainer2 in the pod.yaml. pod.yaml apiVersion: v1 kind: Pod metadata: name: microservices labels: app: apps spec: containers: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 - name: appContainer2 image: b.gcr.io/mybucket/appContainerImage2 ports: - containerPort: 9090 In docker, I can pass the command line arguments via environment variable(-e) docker run --rm -it -p 9090:9090 -e spring.profiles.dynamic=local applicationimage1 Similarly, I need to pass command line arguments when the containers run inside kubernetes. ### Response: It sounds like you don't actually want command line arguments, but environment variables - and you can use env for that: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 env: - name: spring.profiles.dynamic value: local You can use command line arguments: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 args: - foo - bar - "String containing:colons:"
Need to pass command line arguments for the docker containers appContainer1 & appContainer2 in the pod.yaml. pod.yaml apiVersion: v1 kind: Pod metadata: name: microservices labels: app: apps spec: containers: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 - name: appContainer2 image: b.gcr.io/mybucket/appContainerImage2 ports: - containerPort: 9090 In docker, I can pass the command line arguments via environment variable(-e) docker run --rm -it -p 9090:9090 -e spring.profiles.dynamic=local applicationimage1 Similarly, I need to pass command line arguments when the containers run inside kubernetes.
Solution is to enable "DirectUpload", so instead of writing storage.objects().insert(bucket.getName(), so, content).execute(); you have to write: Storage.Objects.Insert insert = storage.objects().insert(bucket.getName(), so, content); insert.getMediaHttpUploader().setDirectUploadEnabled(true); insert.execute(); Performance I get with this solution: took {13s515ms}, that's about ~{7.6923076923076925}MB/s JavaDoc for the setDirectUploadEnabled: Sets whether direct media upload is enabled or disabled. If value is set to true then a direct upload will be done where the whole media content is uploaded in a single request. If value is set to false then the upload uses the resumable media upload protocol to upload in data chunks. Direct upload is recommended if the content size falls below a certain minimum limit. This is because there's minimum block write size for some Google APIs, so if the resumable request fails in the space of that first block, the client will have to restart from the beginning anyway. Defaults to false.
###Instruction: Uploading to GCE from a pod inside GKE takes really long. I hoped the upgrade to kubernetes 1.1 would help, but it didn't. It is faster, but not as fast as it should be. I made some benchmarks, uploading a single file with 100MiB: docker 1.7.2 local took {20m51s240ms}, that's about ~{0.07993605115907274}MB/s docker 1.8.3 local took {3m51s193ms}, that's about ~{0.4329004329004329}MB/s docker 1.9.0 local took {3m51s424ms}, that's about ~{0.4329004329004329}MB/s kubernetes 1.0 took {1h10s952ms}, that's about ~{0.027700831024930747}MB/s kubernetes 1.1.2 (docker 1.8.3) took {32m11s359ms}, that's about ~{0.05178663904712584}MB/s As you can see the thruput doubles with kubernetes 1.1.2, but is still really slow. If I want to upload 1GB I have to wait for ~5 hours, this can't be the expected behaviour. GKE runs inside the Google infrastructure, so I expect that it should be faster or at least as fast as uploading from local. I also noted a very high CPU load (70%) while uploading. It was tested with a n1-highmem-4 machine-type and a single RC/pod that was doing nothing then the upload. I'm using the java client with the GAV coordinates com.google.appengine.tools:appengine-gcs-client:0.5 The relevant code is as follows: InputStream inputStream = ...; // 100MB RandomData from RAM StorageObject so = new StorageObject().setContentType("text/plain").setName(objectName); AbstractInputStreamContent content = new InputStreamContent("text/plain", inputStream); Stopwatch watch = Stopwatch.createStarted(); storage.objects().insert(bucket.getName(), so, content).execute(); watch.stop(); Copying a 100MB file using a manually installed gcloud with gsutil cp took nearly no time (3 seconds). So it might be an issue with the java-library? The question still remains, how to improve the upload time using the java-library? ### Response: Solution is to enable "DirectUpload", so instead of writing storage.objects().insert(bucket.getName(), so, content).execute(); you have to write: Storage.Objects.Insert insert = storage.objects().insert(bucket.getName(), so, content); insert.getMediaHttpUploader().setDirectUploadEnabled(true); insert.execute(); Performance I get with this solution: took {13s515ms}, that's about ~{7.6923076923076925}MB/s JavaDoc for the setDirectUploadEnabled: Sets whether direct media upload is enabled or disabled. If value is set to true then a direct upload will be done where the whole media content is uploaded in a single request. If value is set to false then the upload uses the resumable media upload protocol to upload in data chunks. Direct upload is recommended if the content size falls below a certain minimum limit. This is because there's minimum block write size for some Google APIs, so if the resumable request fails in the space of that first block, the client will have to restart from the beginning anyway. Defaults to false.
Uploading to GCE from a pod inside GKE takes really long. I hoped the upgrade to kubernetes 1.1 would help, but it didn't. It is faster, but not as fast as it should be. I made some benchmarks, uploading a single file with 100MiB: docker 1.7.2 local took {20m51s240ms}, that's about ~{0.07993605115907274}MB/s docker 1.8.3 local took {3m51s193ms}, that's about ~{0.4329004329004329}MB/s docker 1.9.0 local took {3m51s424ms}, that's about ~{0.4329004329004329}MB/s kubernetes 1.0 took {1h10s952ms}, that's about ~{0.027700831024930747}MB/s kubernetes 1.1.2 (docker 1.8.3) took {32m11s359ms}, that's about ~{0.05178663904712584}MB/s As you can see the thruput doubles with kubernetes 1.1.2, but is still really slow. If I want to upload 1GB I have to wait for ~5 hours, this can't be the expected behaviour. GKE runs inside the Google infrastructure, so I expect that it should be faster or at least as fast as uploading from local. I also noted a very high CPU load (70%) while uploading. It was tested with a n1-highmem-4 machine-type and a single RC/pod that was doing nothing then the upload. I'm using the java client with the GAV coordinates com.google.appengine.tools:appengine-gcs-client:0.5 The relevant code is as follows: InputStream inputStream = ...; // 100MB RandomData from RAM StorageObject so = new StorageObject().setContentType("text/plain").setName(objectName); AbstractInputStreamContent content = new InputStreamContent("text/plain", inputStream); Stopwatch watch = Stopwatch.createStarted(); storage.objects().insert(bucket.getName(), so, content).execute(); watch.stop(); Copying a 100MB file using a manually installed gcloud with gsutil cp took nearly no time (3 seconds). So it might be an issue with the java-library? The question still remains, how to improve the upload time using the java-library?
Bash ignores SIGTERM when there are no traps. You can trap SIGTERM to force an exit. For example, trap 'exit 255' SIGTERM; sleep 3600
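Applied to the pod above, a sketch might look like the following. Note that bash postpones running a trap while a foreground command is executing, so the usual idiom is to background the sleep and wait on it (the exit code 255 is arbitrary):

apiVersion: v1
kind: Pod
metadata:
  labels:
    name: demo
  name: demo
spec:
  containers:
  - name: demo
    image: ubuntu:14.04
    command:
    - /bin/bash
    - -c
    # backgrounding the sleep lets bash run the SIGTERM trap immediately
    - trap 'exit 255' SIGTERM; sleep 3600 & wait

With this, the pod exits as soon as the kubelet sends SIGTERM instead of waiting out the 30-second grace period.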
###Instruction: I have a pod with the following config: apiVersion: v1 kind: Pod metadata: labels: name: demo name: demo spec: containers: - name: demo image: ubuntu:14.04 command: - sleep - "3600" When I try to stop it, the SIGTERM is ignored by the sleep command, and it takes 30 seconds (the full default grace period) to stop. I can also get on the pod and send the signal to the process (pid 1) manually, and it does not kill the pod. How can I get sleep to die when a signal is sent to it? ### Response: Bash ignores SIGTERM when there are no traps. You can trap SIGTERM to force an exit. For example, trap 'exit 255' SIGTERM; sleep 3600
I have a pod with the following config: apiVersion: v1 kind: Pod metadata: labels: name: demo name: demo spec: containers: - name: demo image: ubuntu:14.04 command: - sleep - "3600" When I try to stop it, the SIGTERM is ignored by the sleep command, and it takes 30 seconds (the full default grace period) to stop. I can also get on the pod and send the signal to the process (pid 1) manually, and it does not kill the pod. How can I get sleep to die when a signal is sent to it?
The documentation is out of date, in that it refers to .dockercfg instead of .docker/config.json. I will update it. When you use the new .docker/config.json format, you need to set type: kubernetes.io/dockerconfigjson instead of type: kubernetes.io/dockercfg. Support for type: kubernetes.io/dockerconfigjson was added in v1.1.0, so it is supported by your server, but not by your client (v1.1.0-alpha, which predates v1.1.0). When you use type: kubernetes.io/dockerconfigjson, it should validate your secret contents. With type: kubernetes.io/dockerconfigjson, you do want to keep the auths wrapper.
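For reference, a sketch of the corrected secret; the data key for this type must be .dockerconfigjson, and the value (shortened here for illustration) is the base64 encoding of your full .docker/config.json, auths wrapper included:

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
data:
  # base64-encoded contents of ~/.docker/config.json
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsi...
type: kubernetes.io/dockerconfigjson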
###Instruction: I'm struggling to get Kubernetes to work with my private hub.docker.com registry image. I am using kubectl version: Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.1588+e44c8e6661c931", GitCommit:"e44c8e6661c931f7fd434911b0d3bca140e1df3a", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"} and Vagrant 1.7.4 on Mac OS X Yosemite 10.10.5 I followed the instructions given here: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images In a nutshell, it says you should login to the registry then base64 encode the contents of the resulting .docker/config.json, and use that in a yaml document as follows: apiVersion: v1 kind: Secret metadata: name: myregistrykey data: .dockercfg: eyAiYXV0aHMiOiB7ICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7ICJhdXRoIjogImFXNTBjbWx1YzJsak9tSTJVVTR5Z...h1YkBpbnRyaW5zaWMud29ybGQiIH0gfSB9Cg== type: kubernetes.io/dockercfg Then feed that to kubectl. I then used the resulting key (here called myregistrykey) in my pod definition: apiVersion: v1 kind: Pod metadata: name: authorities-backend spec: containers: - name: authorities-backend image: intrinsic/authorities-backend:latest imagePullSecrets: - name: myregistrykey and kubectl created it. However, kubectl keeps failing to retrieve the image: [root@kubernetes-master intrinsic]# kubectl get pods NAME READY STATUS RESTARTS AGE authorities-backend 0/1 PullImageError 0 7m docker pull on the Kubernetes master worked however. What am I missing? UPDATE In the pod definition above, I had omitted to specify the registry host, i.e. docker.io. Fixing it, it becomes: image: docker.io/intrinsic/authorities-backend:latest However, the problem persists. Doing kubectl get events -w gets me: 6s 0s 2 authorities-backend Pod spec.containers{authorities-backend} Failed {kubelet 10.245.1.3} Failed to pull image "docker.io/intrinsic/authorities-backend": image pull failed for docker.io/intrinsic/authorities-backend, this may be because there are no credentials on this request. details: (Error: image intrinsic/authorities-backend:latest not found) I know the secret has been properly registered, as I have it under kubectl get secrets: NAME TYPE DATA AGE default-token-a7s5n kubernetes.io/service-account-token 2 51m myregistrykey kubernetes.io/dockercfg 1 50m Still confused... Candide ### Response: The documentation is out of date, in that it refers to .dockercfg instead of .docker/config.json. I will update it. When you use the new .docker/config.json format, you need to set type: kubernetes.io/dockerconfigjson instead of type: kubernetes.io/.dockercfg. Support for type: kubernetes.io/dockerconfigjson was added in v1.1.0 so it is supported by your server, but is not supported by your client (which is v1.1.0-alpha which predates v1.1.0). When you use type: kubernetes.io/dockerconfigjson, it should validate your secret contents. With type: kubernetes.io/dockerconfigjson, you do want to keep the auths wrapper.
I'm struggling to get Kubernetes to work with my private hub.docker.com registry image. I am using kubectl version: Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.1588+e44c8e6661c931", GitCommit:"e44c8e6661c931f7fd434911b0d3bca140e1df3a", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"} and Vagrant 1.7.4 on Mac OS X Yosemite 10.10.5 I followed the instructions given here: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images In a nutshell, it says you should login to the registry then base64 encode the contents of the resulting .docker/config.json, and use that in a yaml document as follows: apiVersion: v1 kind: Secret metadata: name: myregistrykey data: .dockercfg: eyAiYXV0aHMiOiB7ICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7ICJhdXRoIjogImFXNTBjbWx1YzJsak9tSTJVVTR5Z...h1YkBpbnRyaW5zaWMud29ybGQiIH0gfSB9Cg== type: kubernetes.io/dockercfg Then feed that to kubectl. I then used the resulting key (here called myregistrykey) in my pod definition: apiVersion: v1 kind: Pod metadata: name: authorities-backend spec: containers: - name: authorities-backend image: intrinsic/authorities-backend:latest imagePullSecrets: - name: myregistrykey and kubectl created it. However, kubectl keeps failing to retrieve the image: [root@kubernetes-master intrinsic]# kubectl get pods NAME READY STATUS RESTARTS AGE authorities-backend 0/1 PullImageError 0 7m docker pull on the Kubernetes master worked however. What am I missing? UPDATE In the pod definition above, I had omitted to specify the registry host, i.e. docker.io. Fixing it, it becomes: image: docker.io/intrinsic/authorities-backend:latest However, the problem persists. Doing kubectl get events -w gets me: 6s 0s 2 authorities-backend Pod spec.containers{authorities-backend} Failed {kubelet 10.245.1.3} Failed to pull image "docker.io/intrinsic/authorities-backend": image pull failed for docker.io/intrinsic/authorities-backend, this may be because there are no credentials on this request. details: (Error: image intrinsic/authorities-backend:latest not found) I know the secret has been properly registered, as I have it under kubectl get secrets: NAME TYPE DATA AGE default-token-a7s5n kubernetes.io/service-account-token 2 51m myregistrykey kubernetes.io/dockercfg 1 50m Still confused... Candide
The two things that contain the master's identity are its IP address and a few different pieces of data from its local disk. To make upgrades work on Google Compute Engine, we mount the four important directories listed below to a persistent disk and reserve the IP address of the VM, then detach the disk and IP and put them on a new VM. The directories saved are: /var/etcd /srv/kubernetes /srv/sshproxy /srv/salt-overlay
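If you are migrating by hand rather than relying on the GCE upgrade machinery, a rough sketch of carrying that state over (directory paths as listed above; this assumes you can stop the master services and move the reserved IP yourself):

# on the old master: stop the master services, then archive the identity-bearing state
sudo tar czf k8s-master-state.tar.gz /var/etcd /srv/kubernetes /srv/sshproxy /srv/salt-overlay
# copy the archive to the new master and unpack it at the filesystem root
sudo tar xzf k8s-master-state.tar.gz -C /
# finally bring the new master up on the old master's IP address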
###Instruction: I need to migrate my kubernetes master to another server. How can we do that with all datas of the current running services, pod, rc ... What do we need to backup , because kubernetes doc don't talk about this. Thank you. ### Response: The two things that contain the master's identity are its IP address and a few different pieces of data from its local disk. To make upgrades work on Google Compute Engine, we mount the four important directories listed below to a persistent disk and reserve the IP address of the VM, then detach the disk and IP and put them on a new VM. The directories saved are: /var/etcd /srv/kubernetes /srv/sshproxy /srv/salt-overlay
I need to migrate my kubernetes master to another server. How can we do that with all datas of the current running services, pod, rc ... What do we need to backup , because kubernetes doc don't talk about this. Thank you.
So, I kept researching the web for an answer to my problem and eventually found this: https://github.com/kubernetes/kubernetes/issues/7954#issuecomment-115241561

At the very end of the thread, jjw27 has nailed it. The Kubernetes documentation mentions the .dockercfg file just to say that its contents need to be base64-encoded. There are actually two issues with this file:

it has actually morphed into another file, i.e. .docker/config.json

the auth info in this file is wrapped in an additional auths object, which you have to get rid of.

Quoting jjw27:

Did not work:

{
  "auths": {
    "hub.example.com:1024": {
      "auth": "asdf=",
      "email": "example@example.com"
    }
  }
}

Worked:

{
  "hub.example.com:1024": {
    "auth": "asdf=",
    "email": "example@example.com"
  }
}

Google, please update this doc!!

Message to Kubernetes devs #2: Also, not complaining about a malformed base64-encoded secret is very misleading. Please validate user input and complain if it contains errors.
###Instruction: I'm struggling to get Kubernetes to work with my private hub.docker.com registry image. I am using kubectl version: Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.1588+e44c8e6661c931", GitCommit:"e44c8e6661c931f7fd434911b0d3bca140e1df3a", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"} and Vagrant 1.7.4 on Mac OS X Yosemite 10.10.5 I followed the instructions given here: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images In a nutshell, it says you should login to the registry then base64 encode the contents of the resulting .docker/config.json, and use that in a yaml document as follows: apiVersion: v1 kind: Secret metadata: name: myregistrykey data: .dockercfg: eyAiYXV0aHMiOiB7ICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7ICJhdXRoIjogImFXNTBjbWx1YzJsak9tSTJVVTR5Z...h1YkBpbnRyaW5zaWMud29ybGQiIH0gfSB9Cg== type: kubernetes.io/dockercfg Then feed that to kubectl. I then used the resulting key (here called myregistrykey) in my pod definition: apiVersion: v1 kind: Pod metadata: name: authorities-backend spec: containers: - name: authorities-backend image: intrinsic/authorities-backend:latest imagePullSecrets: - name: myregistrykey and kubectl created it. However, kubectl keeps failing to retrieve the image: [root@kubernetes-master intrinsic]# kubectl get pods NAME READY STATUS RESTARTS AGE authorities-backend 0/1 PullImageError 0 7m docker pull on the Kubernetes master worked however. What am I missing? UPDATE In the pod definition above, I had omitted to specify the registry host, i.e. docker.io. Fixing it, it becomes: image: docker.io/intrinsic/authorities-backend:latest However, the problem persists. Doing kubectl get events -w gets me: 6s 0s 2 authorities-backend Pod spec.containers{authorities-backend} Failed {kubelet 10.245.1.3} Failed to pull image "docker.io/intrinsic/authorities-backend": image pull failed for docker.io/intrinsic/authorities-backend, this may be because there are no credentials on this request. details: (Error: image intrinsic/authorities-backend:latest not found) I know the secret has been properly registered, as I have it under kubectl get secrets: NAME TYPE DATA AGE default-token-a7s5n kubernetes.io/service-account-token 2 51m myregistrykey kubernetes.io/dockercfg 1 50m Still confused... Candide ### Response: So, I kept researching the web for an answer to my problem and eventually found this: https://github.com/kubernetes/kubernetes/issues/7954#issuecomment-115241561 At the very end of the thread, jjw27 has nailed it. The kubernetes documentation mentions the .dockercfg.json file just to say that its contents needs to be base64-encoded. There are actually two issues with this file: it looks like it morphed into another file actually, i.e. .docker/config.json the auth info in this file is wrapped by an additional auths objects, which you have to get rid of. Quoting jjw27 Did not work: { "auths": { "hub.example.com:1024": { "auth": "asdf=", "email": "example@example.com" } } } Worked: { "hub.example.com:1024": { "auth": "asdf=", "email": "example@example.com" } } Google, please update this doc!! Message to Kubernetes devs #2: Also, not complaining with a malformed base64-encoded secret is very misleading. Please validate user input and complain if it contains errors.
I'm struggling to get Kubernetes to work with my private hub.docker.com registry image. I am using kubectl version: Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.1588+e44c8e6661c931", GitCommit:"e44c8e6661c931f7fd434911b0d3bca140e1df3a", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"} and Vagrant 1.7.4 on Mac OS X Yosemite 10.10.5 I followed the instructions given here: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images In a nutshell, it says you should login to the registry then base64 encode the contents of the resulting .docker/config.json, and use that in a yaml document as follows: apiVersion: v1 kind: Secret metadata: name: myregistrykey data: .dockercfg: eyAiYXV0aHMiOiB7ICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7ICJhdXRoIjogImFXNTBjbWx1YzJsak9tSTJVVTR5Z...h1YkBpbnRyaW5zaWMud29ybGQiIH0gfSB9Cg== type: kubernetes.io/dockercfg Then feed that to kubectl. I then used the resulting key (here called myregistrykey) in my pod definition: apiVersion: v1 kind: Pod metadata: name: authorities-backend spec: containers: - name: authorities-backend image: intrinsic/authorities-backend:latest imagePullSecrets: - name: myregistrykey and kubectl created it. However, kubectl keeps failing to retrieve the image: [root@kubernetes-master intrinsic]# kubectl get pods NAME READY STATUS RESTARTS AGE authorities-backend 0/1 PullImageError 0 7m docker pull on the Kubernetes master worked however. What am I missing? UPDATE In the pod definition above, I had omitted to specify the registry host, i.e. docker.io. Fixing it, it becomes: image: docker.io/intrinsic/authorities-backend:latest However, the problem persists. Doing kubectl get events -w gets me: 6s 0s 2 authorities-backend Pod spec.containers{authorities-backend} Failed {kubelet 10.245.1.3} Failed to pull image "docker.io/intrinsic/authorities-backend": image pull failed for docker.io/intrinsic/authorities-backend, this may be because there are no credentials on this request. details: (Error: image intrinsic/authorities-backend:latest not found) I know the secret has been properly registered, as I have it under kubectl get secrets: NAME TYPE DATA AGE default-token-a7s5n kubernetes.io/service-account-token 2 51m myregistrykey kubernetes.io/dockercfg 1 50m Still confused... Candide
The part that says Error syncing pod, skipping: API error means that kubelet got an error when trying to start a container for your Pod. Since you use CoreOS, I think you are using rkt, not docker. I think that rkt uses systemd to start containers. And I think systemd crashes when the "unit" name starts with an underscore: https://github.com/coreos/go-systemd/pull/49 So, maybe one of your pods or containers has a name that starts with an underscore. Change that.
###Instruction: I got some error when scheduling pod though ReplicationController: failedSync {kubelet 10.9.8.21} Error syncing pod, skipping: API error (500): Cannot start container 20c2fe3a3e5b5204db4475d1ce6ea37b3aea6da0762a214b9fdb3d624fd5c32c: [8] System error: Activation of org.freedesktop.systemd1 timed out The pod is scheduled but cannot run unless I re-deploy it with another image. I'm using kubelet 1.0.1, CoreOS v773.1.0 ### Response: The part that says Error syncing pod, skipping: API error means that kubelet got an error when trying to start a container for your Pod. Since you use CoreOS, I think you are using rkt, not docker. I think that rkt uses systemd to start containers. And I think systemd crashes when the "unit" name starts with an underscore: https://github.com/coreos/go-systemd/pull/49 So, maybe one of your pods or containers has a name that starts with an underscore. Change that.
I got some error when scheduling pod though ReplicationController: failedSync {kubelet 10.9.8.21} Error syncing pod, skipping: API error (500): Cannot start container 20c2fe3a3e5b5204db4475d1ce6ea37b3aea6da0762a214b9fdb3d624fd5c32c: [8] System error: Activation of org.freedesktop.systemd1 timed out The pod is scheduled but cannot run unless I re-deploy it with another image. I'm using kubelet 1.0.1, CoreOS v773.1.0
You can use kubectl edit to edit a resource: Usage: kubectl edit (RESOURCE/NAME | -f FILENAME) [flags] Examples: # Edit the service named 'docker-registry': $ kubectl edit svc/docker-registry # Use an alternative editor $ KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Edit the service 'docker-registry' in JSON using the v1 API format: $ kubectl edit svc/docker-registry --output-version=v1 -o json ^^ from the kubectl help
###Instruction: I run pods with replication controller, now i want to edit config like change value of environment and keep name of rc. apiVersion: v1 kind: ReplicationController metadata: name: backend spec: replicas: 3 template: spec: containers: - name: backend image: myproject/backend ports: - containerPort: 8080 env: - name: USER_ENDPOINT value: "http://10.0.7.29:10000" For example I move service to new server and just want change value of env USER_ENDPOINT to http://10.0.7.30:30100 Now I just know follow delete rc and recreate rc, but in production i don't stop it. I try rolling-update but not work because i want keep name of replication controller. What can i do ? Please suggest a solution, thanks. ### Response: You can use kubectl edit to edit a resource: Usage: kubectl edit (RESOURCE/NAME | -f FILENAME) [flags] Examples: # Edit the service named 'docker-registry': $ kubectl edit svc/docker-registry # Use an alternative editor $ KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Edit the service 'docker-registry' in JSON using the v1 API format: $ kubectl edit svc/docker-registry --output-version=v1 -o json ^^ from the kubectl help
I run pods with replication controller, now i want to edit config like change value of environment and keep name of rc. apiVersion: v1 kind: ReplicationController metadata: name: backend spec: replicas: 3 template: spec: containers: - name: backend image: myproject/backend ports: - containerPort: 8080 env: - name: USER_ENDPOINT value: "http://10.0.7.29:10000" For example I move service to new server and just want change value of env USER_ENDPOINT to http://10.0.7.30:30100 Now I just know follow delete rc and recreate rc, but in production i don't stop it. I try rolling-update but not work because i want keep name of replication controller. What can i do ? Please suggest a solution, thanks.
There is a way to pre-bind PVs to PVCs today, here is an example showing how: Create a PV object with a ClaimRef field referencing a PVC that you will subsequently create: $ kubectl create -f pv.yaml persistentvolume "pv0003" created where pv.yaml contains: apiVersion: v1 kind: PersistentVolume metadata: name: pv0003 spec: storageClassName: "" capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain claimRef: namespace: default name: myclaim nfs: path: /tmp server: 172.17.0.2 Then create the PVC with the same name: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: storageClassName: "" accessModes: - ReadWriteOnce resources: requests: storage: 5Gi The PV and PVC should be bound immediately: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES AGE myclaim Bound pv0003 5Gi RWO 4s $ ./cluster/kubectl.sh get pv NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0003 5Gi RWO Bound default/myclaim 57s
###Instruction: This was discussed by k8s maintainers in https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195: Allowing users to ask for a specific PV breaks the separation between them I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason. How did it end? What's the intended way to have >1 PV's and PVC's like the one in https://github.com/kubernetes/kubernetes/tree/master/examples/nfs? We use NFS, and PersistentVolume is a handy abstraction because we can keep the server IP and the path there. But a PersistentVolumeClaim gets any PV with sufficient size, preventing path reuse. Can set volumeName in a PVC spec block (see https://github.com/kubernetes/kubernetes/pull/7529) but it makes no difference. ### Response: There is a way to pre-bind PVs to PVCs today, here is an example showing how: Create a PV object with a ClaimRef field referencing a PVC that you will subsequently create: $ kubectl create -f pv.yaml persistentvolume "pv0003" created where pv.yaml contains: apiVersion: v1 kind: PersistentVolume metadata: name: pv0003 spec: storageClassName: "" capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain claimRef: namespace: default name: myclaim nfs: path: /tmp server: 172.17.0.2 Then create the PVC with the same name: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: storageClassName: "" accessModes: - ReadWriteOnce resources: requests: storage: 5Gi The PV and PVC should be bound immediately: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES AGE myclaim Bound pv0003 5Gi RWO 4s $ ./cluster/kubectl.sh get pv NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0003 5Gi RWO Bound default/myclaim 57s
This was discussed by k8s maintainers in https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195: Allowing users to ask for a specific PV breaks the separation between them I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason. How did it end? What's the intended way to have >1 PV's and PVC's like the one in https://github.com/kubernetes/kubernetes/tree/master/examples/nfs? We use NFS, and PersistentVolume is a handy abstraction because we can keep the server IP and the path there. But a PersistentVolumeClaim gets any PV with sufficient size, preventing path reuse. Can set volumeName in a PVC spec block (see https://github.com/kubernetes/kubernetes/pull/7529) but it makes no difference.
You should be able to run a second cluster by setting INSTANCE_PREFIX before running kube-up. That variable in turn sets CLUSTER_ID which should parameterize everything in the cluster/aws/* scripts.
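For example, a sketch (the prefix value is arbitrary; KUBERNETES_PROVIDER is whatever you already use):

# bring up a second, independently named cluster alongside the first
export KUBERNETES_PROVIDER=aws
export INSTANCE_PREFIX=kubernetes-test
cluster/kube-up.sh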
###Instruction: We're currently running a Kubernetes 1.0 cluster on AWS in production, and we'd like to spin up a second cluster to test out 1.1. Based on the AWS helper functions, it looks like multiple clusters aren't supported, but I wanted to be sure. There's a doc that describes running multiple clusters, but it's fairly brief. In general, we'd like to have a second cluster continuously running for testing purposes. It seems like this would be a fairly common need. ### Response: You should be able to run a second cluster by setting INSTANCE_PREFIX before running kube-up. That variable in turn sets CLUSTER_ID which should parameterize everything in the cluster/aws/* scripts.
We're currently running a Kubernetes 1.0 cluster on AWS in production, and we'd like to spin up a second cluster to test out 1.1. Based on the AWS helper functions, it looks like multiple clusters aren't supported, but I wanted to be sure. There's a doc that describes running multiple clusters, but it's fairly brief. In general, we'd like to have a second cluster continuously running for testing purposes. It seems like this would be a fairly common need.
I've seen lots of cases where Docker images on-disk get random bits of corruption (causing weird inconsistencies like the one you describe here), and deleting and re-pulling the image "fixes" the problem. To test this, you'll want to make sure you not only docker rmi gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a (which will minimally output Untagged: gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a), but also delete the individual layers (and any other tags they may have) so that they're forced to be re-pulled.
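With Docker 1.x, where each layer is itself an image, a hedged sketch of forcing a truly clean re-pull might look like this (the second command assumes no other images you care about are currently dangling):

# remove every tag pointing at the image so its layers become dangling
docker rmi weird-image gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a
# remove the now-dangling layer images themselves
docker rmi $(docker images -q --filter dangling=true)
# re-pull from scratch
docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a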
###Instruction: Initial Post I have the same docker image running on two different CoreOS servers. (They're in a Kubernetes cluster, but I think that is irrelevant to the current problem.) They both are running image hash 01e95e0a93af. They both should have curl. One does not. This seems... impossible. Good Server core@ip-10-0-0-61 ~ $ docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a Digest: sha256:5d8bf456ad2d08ce3cd15f05b62fddc07fda3955267ee0d3ef73ee1a96b98e68 [cut] Status: Image is up to date for gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a core@ip-10-0-0-61 ~ $ docker run -it --rm gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a /bin/bash root@d29cb8783830:/app/bundle# curl curl: try 'curl --help' or 'curl --manual' for more information root@d29cb8783830:/app/bundle# Bad Server core@ip-10-0-0-212 ~ $ docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a [cut] Digest: sha256:5d8bf456ad2d08ce3cd15f05b62fddc07fda3955267ee0d3ef73ee1a96b98e68 Status: Image is up to date for gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a core@ip-10-0-0-212 ~ $ docker run -it --rm gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a /bin/bash root@fe6a536393f8:/app/bundle# curl bash: curl: command not found root@fe6a536393f8:/app/bundle# Full logs available on this gist. I took the bad server out of our production cluster but still have it running if anyone wants me to do any other research. Added 2015-12-04 I've run docker tag gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a weird-image on both servers to make everything more readable. which curl Can you do a which curl in the first component to check where it finds its curl? And see if that file exists in the second component. – VonC Seems to not exist at all on the bad server. Good Server core@ip-10-0-0-61 ~ $ docker run -it --rm weird-image /bin/bash root@529b8f20a610:/app/bundle# which curl /usr/bin/curl Bad Server core@ip-10-0-0-212 ~ $ docker run -it --rm weird-image /bin/bash root@ff98c850dbaa:/app/bundle# ls /usr/bin/curl ls: cannot access /usr/bin/curl: No such file or directory root@ff98c850dbaa:/app/bundle# alias docker Any chance you have set up an alias on the bad box? Run alias docker to check – morloch Nope. Good Server core@ip-10-0-0-61 ~ $ alias docker -bash: alias: docker: not found Bad Server core@ip-10-0-0-212 ~ $ alias docker -bash: alias: docker: not found time More weirdness: it takes a lot longer to run the container on the bad server. Good Server core@ip-10-0-0-61 ~ $ time docker run weird-image echo "Done" Done real 0m0.422s user 0m0.015s sys 0m0.015s Bad Server core@ip-10-0-0-212 ~ $ time docker run weird-image echo "Done" Done real 0m4.602s user 0m0.010s sys 0m0.010s ### Response: I've seen lots of cases where Docker images on-disk get random bits of corruption (causing weird inconsistencies like the one you describe here), and deleting and re-pulling the image "fixes" the problem. To test this, you'll want to make sure you not only docker rmi gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a (which will minimally output Untagged: gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a), but also delete the individual layers (and any other tags they may have) so that they're forced to be re-pulled.
Initial Post I have the same docker image running on two different CoreOS servers. (They're in a Kubernetes cluster, but I think that is irrelevant to the current problem.) They both are running image hash 01e95e0a93af. They both should have curl. One does not. This seems... impossible. Good Server core@ip-10-0-0-61 ~ $ docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a Digest: sha256:5d8bf456ad2d08ce3cd15f05b62fddc07fda3955267ee0d3ef73ee1a96b98e68 [cut] Status: Image is up to date for gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a core@ip-10-0-0-61 ~ $ docker run -it --rm gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a /bin/bash root@d29cb8783830:/app/bundle# curl curl: try 'curl --help' or 'curl --manual' for more information root@d29cb8783830:/app/bundle# Bad Server core@ip-10-0-0-212 ~ $ docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a [cut] Digest: sha256:5d8bf456ad2d08ce3cd15f05b62fddc07fda3955267ee0d3ef73ee1a96b98e68 Status: Image is up to date for gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a core@ip-10-0-0-212 ~ $ docker run -it --rm gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a /bin/bash root@fe6a536393f8:/app/bundle# curl bash: curl: command not found root@fe6a536393f8:/app/bundle# Full logs available on this gist. I took the bad server out of our production cluster but still have it running if anyone wants me to do any other research. Added 2015-12-04 I've run docker tag gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a weird-image on both servers to make everything more readable. which curl Can you do a which curl in the first component to check where it finds its curl? And see if that file exists in the second component. – VonC Seems to not exist at all on the bad server. Good Server core@ip-10-0-0-61 ~ $ docker run -it --rm weird-image /bin/bash root@529b8f20a610:/app/bundle# which curl /usr/bin/curl Bad Server core@ip-10-0-0-212 ~ $ docker run -it --rm weird-image /bin/bash root@ff98c850dbaa:/app/bundle# ls /usr/bin/curl ls: cannot access /usr/bin/curl: No such file or directory root@ff98c850dbaa:/app/bundle# alias docker Any chance you have set up an alias on the bad box? Run alias docker to check – morloch Nope. Good Server core@ip-10-0-0-61 ~ $ alias docker -bash: alias: docker: not found Bad Server core@ip-10-0-0-212 ~ $ alias docker -bash: alias: docker: not found time More weirdness: it takes a lot longer to run the container on the bad server. Good Server core@ip-10-0-0-61 ~ $ time docker run weird-image echo "Done" Done real 0m0.422s user 0m0.015s sys 0m0.015s Bad Server core@ip-10-0-0-212 ~ $ time docker run weird-image echo "Done" Done real 0m4.602s user 0m0.010s sys 0m0.010s
The Extending the API doc describes how you can add resources to the Kubernetes API. If you just want a REST API for rolling update, you should check out the Deployment object.
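As a sketch of the second suggestion, here is a minimal Deployment as it looked in the 1.1-era extensions/v1beta1 API (the names and image are placeholders). Once created, a rolling update is just an update to this object through the REST API — for example, PATCHing a new image into the pod template:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry/my-app:v2   # changing this field rolls the pods over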
###Instruction: I need to add a REST API for Kubernetes rolling update feature. Is there a way we can write extensions for Kubernetes API? If that so is there any documentation about writing them? ### Response: The Extending the API doc describes how you can add resources to the Kubernetes API. If you just want a REST API for rolling update, you should check out the Deployment object.
I need to add a REST API for Kubernetes rolling update feature. Is there a way we can write extensions for Kubernetes API? If that so is there any documentation about writing them?
Did you specify --token-auth-file=<file> and/or --basic-auth-file=<otherfile> or one of the other authentication modes? I don't think the https endpoint will work without one of these (maybe it should, but apparently it doesn't). Check out https://kubernetes.io/docs/admin/authentication/
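For instance, with token authentication — the file format is per the authentication docs, and the token and user here are made up:

# tokens.csv, passed to the apiserver via --token-auth-file=tokens.csv
# format: token,user,uid
31ada4fd-adec-460c-809a-9e56ceb75269,admin,1

# then authenticate the request with a bearer token
curl https://192.168.0.139 --cacert /home/mongeo/ku-certs/ca.pem \
  -H "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269"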
###Instruction: Kubernetes API request curl https://192.168.0.139 --cacert /home/mongeo/ku-certs/ca.pem return Unauthorized Request curl localhost:8080 worked good. My kube-proxy and kube-apiserver standart (coreos+k8s tutorial) How do I get data on HTTPS? ### Response: Did you specify --token-auth-file=<file> and/or --basic-auth-file=<otherfile> or one of the other authentication modes? I don't know that https endpoint will work without one of these (maybe it should, but it doesn't, apparently). Check out https://kubernetes.io/docs/admin/authentication/
Kubernetes API request curl https://192.168.0.139 --cacert /home/mongeo/ku-certs/ca.pem return Unauthorized Request curl localhost:8080 worked good. My kube-proxy and kube-apiserver standart (coreos+k8s tutorial) How do I get data on HTTPS?
Install the Golang plugin for IDEA: https://github.com/go-lang-plugin-org/go-lang-idea-plugin/wiki/Documentation
###Instruction: I want to run/build Kubernetes from source. I normally use Intelij idea to open sources but I can't see that there is a source support for 'Go' in my ide. My main concern is to write an extension for Kubernetes API. How can I easily setup source in idea to develop and test that extension? I also have a locally installed Kubernetes API. ### Response: Install Golang plugin for IDEA: https://github.com/go-lang-plugin-org/go-lang-idea-plugin/wiki/Documentation
I want to run/build Kubernetes from source. I normally use Intelij idea to open sources but I can't see that there is a source support for 'Go' in my ide. My main concern is to write an extension for Kubernetes API. How can I easily setup source in idea to develop and test that extension? I also have a locally installed Kubernetes API.
What version of Kubernetes is this? A long time ago (~1.0.4) we had a bug in the kube-proxy where it leaked sockets/file-descriptors. If you aren't running a 1.1.3 binary, consider upgrading. Also, you should be able to use lsof to figure out who has all of the files open.
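For the lsof part, a quick sketch:

# count the file descriptors kube-proxy currently holds open
lsof -p $(pgrep kube-proxy) | wc -l
# or inspect them to see what kind of files are leaking
lsof -p $(pgrep kube-proxy)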
###Instruction: I have installed K8S on OpenStack following this guide. The installation went fine and I was able to run pods but after some time my applications stops working. I can still create pods but request won't reach the services from outside the cluster and also from within the pods. Basically, something in networking gets messed up. The iptables -L -vnt nat still shows the proper configuration but things won't work. To make it working, I have to rebuild cluster, removing all services and replication controllers doesn't work. I tried to look into the logs. Below is the journal for kube-proxy: Dec 20 02:12:18 minion01.novalocal systemd[1]: Started Kubernetes Proxy. Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.269784 1030 proxier.go:487] Opened iptables from-containers public port for service "default/opensips:sipt" on TCP port 5060 Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.278952 1030 proxier.go:498] Opened iptables from-host public port for service "default/opensips:sipt" on TCP port 5060 Dec 20 03:05:11 minion01.novalocal kube-proxy[1030]: W1220 03:05:11.806927 1030 api.go:224] Got error status on WatchEndpoints channel: &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1433/544]) [2432] Reason: Details:<nil> Code:0} Dec 20 03:06:08 minion01.novalocal kube-proxy[1030]: W1220 03:06:08.177225 1030 api.go:153] Got error status on WatchServices channel: &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1476/207]) [2475] Reason: Details:<nil> Code:0} .. .. .. Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448570 1030 proxier.go:161] Failed to ensure iptables: error creating chain "KUBE-PORTALS-CONTAINER": fork/exec /usr/sbin/iptables: too many open files: Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.448749 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448868 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/kubernetes:" Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448906 1030 proxier.go:176] Failed to ensure portal for "default/kubernetes:": error checking rule: fork/exec /usr/sbin/iptables: too many open files: Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.449006 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.449133 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/repo-client:" I found few posts relating to "failed to install iptables" but they don't seem to be relevant as initially everything works but after few hours it gets messed up. ### Response: What version of Kubernetes is this? A long time ago (~1.0.4) we had a bug in the kube-proxy where it leaked sockets/file-descriptors. If you aren't running a 1.1.3 binary, consider upgrading. Also, you should be able to use lsof to figure out who has all of the files open.
I have installed K8S on OpenStack following this guide. The installation went fine and I was able to run pods but after some time my applications stops working. I can still create pods but request won't reach the services from outside the cluster and also from within the pods. Basically, something in networking gets messed up. The iptables -L -vnt nat still shows the proper configuration but things won't work. To make it working, I have to rebuild cluster, removing all services and replication controllers doesn't work. I tried to look into the logs. Below is the journal for kube-proxy: Dec 20 02:12:18 minion01.novalocal systemd[1]: Started Kubernetes Proxy. Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.269784 1030 proxier.go:487] Opened iptables from-containers public port for service "default/opensips:sipt" on TCP port 5060 Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.278952 1030 proxier.go:498] Opened iptables from-host public port for service "default/opensips:sipt" on TCP port 5060 Dec 20 03:05:11 minion01.novalocal kube-proxy[1030]: W1220 03:05:11.806927 1030 api.go:224] Got error status on WatchEndpoints channel: &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1433/544]) [2432] Reason: Details:<nil> Code:0} Dec 20 03:06:08 minion01.novalocal kube-proxy[1030]: W1220 03:06:08.177225 1030 api.go:153] Got error status on WatchServices channel: &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1476/207]) [2475] Reason: Details:<nil> Code:0} .. .. .. Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448570 1030 proxier.go:161] Failed to ensure iptables: error creating chain "KUBE-PORTALS-CONTAINER": fork/exec /usr/sbin/iptables: too many open files: Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.448749 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448868 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/kubernetes:" Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448906 1030 proxier.go:176] Failed to ensure portal for "default/kubernetes:": error checking rule: fork/exec /usr/sbin/iptables: too many open files: Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.449006 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.449133 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/repo-client:" I found few posts relating to "failed to install iptables" but they don't seem to be relevant as initially everything works but after few hours it gets messed up.
If the new pods have labels matching the service's label selector, they should be added to the service as soon as they come up. If you want to experiment with different logic for a rolling update, you can write a client-side controller using the Kubernetes API client libraries, or create a server-side object by extending the API.
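To illustrate the first point — membership in a service is purely a label match — here is a sketch (the names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
    track: stable   # only pods labeled track=stable receive traffic
  ports:
  - port: 80

A client-side controller implementing your scheme could create the new rc with a different track label, wait until all of its replicas are ready, and only then flip the service selector (or relabel the pods) so that traffic cuts over to the new rc all at once.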
###Instruction: Currently kubernetes rolling update creates a new pod to a terminated pod and add it to the service. At the moment of rolling update there could be two types of pods registered (old ones and new ones) for a service. However I need to enforce the consistency. For example when a rolling update request comes to Kubernetes, first it creates a new rc but pods created under that rc is not added to the service. Once all replications of that rc becomes available, all the traffic came to the service is routed to that rc. Finally the old rc is deleted. Can we currently do this using Kubernetes? If not is there a way I can write an extension to Kubernetes to implement this functionality? ### Response: If the new pods have labels matching the service's label selector, they should be added to the service as soon as they come up. If you want to experiment with different logic for a rolling update, you can write a client-side controller using the Kubernetes API client libraries, or create a server-side object by extending the API.
Currently kubernetes rolling update creates a new pod to a terminated pod and add it to the service. At the moment of rolling update there could be two types of pods registered (old ones and new ones) for a service. However I need to enforce the consistency. For example when a rolling update request comes to Kubernetes, first it creates a new rc but pods created under that rc is not added to the service. Once all replications of that rc becomes available, all the traffic came to the service is routed to that rc. Finally the old rc is deleted. Can we currently do this using Kubernetes? If not is there a way I can write an extension to Kubernetes to implement this functionality?
The simplest answer is to ensure that your pod or replication controller yaml/json files add the pod IP as an environment variable by adding the config block defined below. (The block below additionally makes the name and namespace available to the pod.)

env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP

Recreate the pod/rc and then try echo $MY_POD_IP; also run env to see what else Kubernetes provides you with.
###Instruction: Kubernetes assigns an IP address for each container, but how can I acquire the IP address from a container in the Pod? I couldn't find the way from documentations. Edit: I'm going to run Aerospike cluster in Kubernetes. and the config files need its own IP address. And I'm attempting to use confd to set the hostname. I would use the environment variable if it was set. ### Response: The simplest answer is to ensure that your pod or replication controller yaml/json files add the pod IP as an environment variable by adding the config block defined below. (the block below additionally makes the name and namespace available to the pod) env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP Recreate the pod/rc and then try echo $MY_POD_IP also run env to see what else kubernetes provides you with.
Kubernetes assigns an IP address for each container, but how can I acquire the IP address from a container in the Pod? I couldn't find the way from documentations. Edit: I'm going to run Aerospike cluster in Kubernetes. and the config files need its own IP address. And I'm attempting to use confd to set the hostname. I would use the environment variable if it was set.
You can add the --flatten flag, which the documentation describes as follows: "flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files)"
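For example:

# write a self-contained kubeconfig with certificate data inlined
kubectl config view --minify --flatten > shared-kubeconfig
# another admin can then use the file directly
kubectl --kubeconfig=shared-kubeconfig get nodes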
###Instruction: There are multiple admins who accesses k8s clusters. What is the recommended way to share the config file? I know, kubectl config view --minify but certification part is REDACTED by this command. ### Response: You can add the --flatten flag, which is described in the document to "flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files)"
There are multiple admins who accesses k8s clusters. What is the recommended way to share the config file? I know, kubectl config view --minify but certification part is REDACTED by this command.
Nothing prevents you from placing an external load balancer in front of your nodes and using the NodePort option. The LoadBalancer option is only used to additionally ask your cloud provider for a new software LB instance, automatically in the background. I'm not up to date on which cloud providers are supported yet, but I have seen it working on Compute Engine and OpenStack already.
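For example, a sketch of a NodePort service pinned to a known port that an externally managed load balancer could target on every node (the values are placeholders; the nodePort must fall within the cluster's service node port range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  ports:
  - port: 80          # cluster-internal service port
    nodePort: 30080   # the external LB forwards to <NodeIP>:30080 on each node
  selector:
    app: my-app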
###Instruction: I have just started with Kubernetes and I am confused about the difference between NodePort and LoadBalancer type of service. The difference I understand is that LoadBalancer does not support UDP but apart from that whenever we create a service either Nodeport or Loadbalancer we get a service IP and port, a NodePort, and endpoints. From Kubernetes docs: NodePort: on top of having a cluster-internal IP, expose the service on a port on each node of the cluster (the same port on each node). You'll be able to contact the service on any NodeIP:NodePort address. LoadBalancer: on top of having a cluster-internal IP and exposing service on a NodePort also, ask the cloud provider for a load balancer which forwards to the Service exposed as a NodeIP:NodePort for each Node. So, I will always access service on NodeIP:NodePort. My understanding is, whenever we access the node:NodePort, the kubeproxy will intercept the request and forward it to the respective pod. The other thing mentioned about LoadBalancer is that we can have an external LB which will LB between the Nodes. What prevents us to put a LB for services created as nodeport? I am really confused. Most of the docs or tutorials talk only about LoadBalancer service therefore I couldn't find much on internet. ### Response: Nothing prevents you from placing an external load balancer in front of your nodes and use the NodePort option. The LoadBalancer option is only used to additionally ask your cloud provider for a new software LB instance, automatically in the background. I'm not up to date which cloud providers are supported yet, but i saw it working for Compute Engine and OpenStack already.
I have just started with Kubernetes and I am confused about the difference between the NodePort and LoadBalancer types of service. The difference I understand is that LoadBalancer does not support UDP, but apart from that, whenever we create a service, either NodePort or LoadBalancer, we get a service IP and port, a NodePort, and endpoints. From the Kubernetes docs: NodePort: on top of having a cluster-internal IP, expose the service on a port on each node of the cluster (the same port on each node). You'll be able to contact the service on any NodeIP:NodePort address. LoadBalancer: on top of having a cluster-internal IP and exposing service on a NodePort also, ask the cloud provider for a load balancer which forwards to the Service exposed as a NodeIP:NodePort for each Node. So, I will always access the service on NodeIP:NodePort. My understanding is, whenever we access node:NodePort, the kube-proxy will intercept the request and forward it to the respective pod. The other thing mentioned about LoadBalancer is that we can have an external LB which will LB between the Nodes. What prevents us from putting a LB in front of services created as NodePort? I am really confused. Most of the docs or tutorials talk only about the LoadBalancer service, therefore I couldn't find much on the internet.
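As a sketch of the NodePort-plus-external-LB approach described in the answer (the service name, labels, and port numbers here are illustrative, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    nodePort: 30080   # must fall in the cluster's node-port range, 30000-32767 by default
Any external load balancer can then target NodeIP:30080 on every node, which is exactly what the cloud-provisioned LoadBalancer does behind the scenes.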
In my understanding, in GKE, I can only have a single type (instance template) of machine in each cluster.... Do I need to run separate clusters for different requirements? Yes, this is currently true. We are working on relaxing this restriction, but in the meantime you can copy the instance template to create another set of nodes with a different size.
###Instruction: I am trying to deploy a web application using Kubernetes and Google Container Engine. My application requires different types of machine. In my understanding, in GKE, I can only have a single type (instance template) of machine in each cluster, and mixing different pods in a single cluster wastes resources or money, because I need to match the machine type to the maximum requirement. Let's say the database requires 8 CPUs and 100GB RAM, and the application servers need 2 CPUs and 4GB RAM. I have to have at least an 8 CPU / 100GB machine in the cluster for database pods to be scheduled. Kubernetes will schedule 4 application pods on each machine, and it will waste 84GB of RAM of the machine. Is that correct? If it is, how can I solve the problem? Do I need to run separate clusters for different requirements? Connecting services between different clusters doesn't seem to be a trivial problem either. ### Response: In my understanding, in GKE, I can only have a single type (instance template) of machine in each cluster.... Do I need to run separate clusters for different requirements? Yes, this is currently true. We are working on relaxing this restriction, but in the meantime you can copy the instance template to create another set of nodes with a different size.
I am trying to deploy a web application using Kubernetes and Google Container Engine. My application requires different types of machine. In my understanding, in GKE, I can only have a single type (instance template) of machine in each cluster, and mixing different pods in a single cluster wastes resources or money, because I need to match the machine type to the maximum requirement. Let's say the database requires 8 CPUs and 100GB RAM, and the application servers need 2 CPUs and 4GB RAM. I have to have at least an 8 CPU / 100GB machine in the cluster for database pods to be scheduled. Kubernetes will schedule 4 application pods on each machine, and it will waste 84GB of RAM of the machine. Is that correct? If it is, how can I solve the problem? Do I need to run separate clusters for different requirements? Connecting services between different clusters doesn't seem to be a trivial problem either.
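Assuming you do end up with two node sizes, declaring resource requests lets the scheduler pack pods onto mixed nodes sensibly; a hedged sketch (the numbers are illustrative only, matching the question's app-server sizing):
resources:
  requests:
    cpu: 2
    memory: 4Gi
With requests like these on the app-server containers (and correspondingly larger requests on the database container), Kubernetes only places each pod on a node with enough free capacity; optionally a nodeSelector on the database pod can pin it to the nodes created from the copied big-machine template.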
So the issue here was that there's a missing piece of the puzzle when you use NodePort. I was also making a mistake with the commands. Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name:             nginx
Namespace:        default
Labels:           run=nginx
Selector:         run=nginx
Type:             NodePort
IP:               172.16.92.8
Port:             <unnamed>  80/TCP
NodePort:         <unnamed>  32033/TCP
Endpoints:        10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
###Instruction: I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Host node IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-px6uy   1/1     Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME    LABELS      SELECTOR    IP(S)   PORT(S)    AGE
nginx   run=nginx   run=nginx           9000/TCP   292y
I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think? ### Response: So the issue here was that there's a missing piece of the puzzle when you use NodePort. I was also making a mistake with the commands. Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name:             nginx
Namespace:        default
Labels:           run=nginx
Selector:         run=nginx
Type:             NodePort
IP:               172.16.92.8
Port:             <unnamed>  80/TCP
NodePort:         <unnamed>  32033/TCP
Endpoints:        10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Host node IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-px6uy   1/1     Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME    LABELS      SELECTOR    IP(S)   PORT(S)    AGE
nginx   run=nginx   run=nginx           9000/TCP   292y
I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?
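Putting the pieces together, the service should then be reachable like this (the IP comes from the question and the port from the describe output above; the randomly assigned NodePort on your cluster will differ):
curl http://192.168.4.57:32033/
i.e. any node's IP plus the NodePort reported by kubectl describe svc nginx.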
Kubernetes doesn't have a cluster lock object, but you can use an annotation on the replication controller to specify the lock holder and TTL. For example, each pod could read the annotation key "lock", and if empty (or if the TTL expired), try to write "lock": "pod-xyz: 2015-12-22T18:39:12+00:00". If multiple writes are attempted, kubernetes will accept one, and reject the others with a 409 because the resource version will not be correct. The lock holder would then continue updating the annotation to refresh the TTL. If you have a service that corresponds to this replication controller, it might make sense to put the lock annotation on the service instead of the RC. Then the locking semantics would survive software upgrades (e.g. rolling-update). The annotation can go on any object, so there's some flexibility to figure out what works best for you. podmaster.go has a good example of the logic you might use to implement this. It is running directly against etcd, which you could also do if you don't mind introducing another component.
###Instruction: I have an application running in a Kubernetes pod that is replicated using a replication controller. However, I need to do some critical tasks that should be done by a single application instance (one replica) at a time. Previously I used ZooKeeper to get a cluster lock to do such a task. Is there a way in Kubernetes to get a cluster lock for a particular replication controller? ### Response: Kubernetes doesn't have a cluster lock object, but you can use an annotation on the replication controller to specify the lock holder and TTL. For example, each pod could read the annotation key "lock", and if empty (or if the TTL expired), try to write "lock": "pod-xyz: 2015-12-22T18:39:12+00:00". If multiple writes are attempted, kubernetes will accept one, and reject the others with a 409 because the resource version will not be correct. The lock holder would then continue updating the annotation to refresh the TTL. If you have a service that corresponds to this replication controller, it might make sense to put the lock annotation on the service instead of the RC. Then the locking semantics would survive software upgrades (e.g. rolling-update). The annotation can go on any object, so there's some flexibility to figure out what works best for you. podmaster.go has a good example of the logic you might use to implement this. It is running directly against etcd, which you could also do if you don't mind introducing another component.
I have an application running in a Kubernetes pod that is replicated using a replication controller. However, I need to do some critical tasks that should be done by a single application instance (one replica) at a time. Previously I used ZooKeeper to get a cluster lock to do such a task. Is there a way in Kubernetes to get a cluster lock for a particular replication controller?
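A rough shell sketch of the compare-and-swap the answer describes; the service and holder names are hypothetical, and jsonpath output is assumed to be available in your kubectl. kubectl annotate accepts --resource-version for exactly this kind of optimistic update:
# read the current resourceVersion of the object holding the lock
RV=$(kubectl get svc my-service -o jsonpath='{.metadata.resourceVersion}')
# try to take the lock; the write is rejected with a 409 Conflict
# if anyone updated the object after we read RV
kubectl annotate svc my-service --overwrite --resource-version="$RV" \
  lock="pod-xyz: $(date -u +%Y-%m-%dT%H:%M:%S+00:00)"
The holder would re-run the annotate step periodically to refresh the TTL embedded in the value.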
There is a known issue with SELinux and Kubernetes Secrets as per the Atomic issue tracker, see ISSUE-117.
###Instruction: I followed the kubernetes documentation to manage secrets of my applications. http://kubernetes.io/v1.1/docs/user-guide/secrets.html When the pod starts, kubernetes mounts the secret at the right place, but the application is unable to read the secret data as described in the documentation.
root@quoter-controller-whw7k:/etc/quoter# whoami
root
root@quoter-controller-whw7k:/etc/quoter# ls -l
ls: cannot access local.py: Permission denied
total 0
-????????? ? ? ? ? ? local.py
root@quoter-controller-whw7k:/etc/quoter# cat local.py
cat: local.py: Permission denied
What is wrong with that? SELinux is configured with enforcing mode: SELINUX=enforcing Docker is started with the following command:
/usr/bin/docker daemon --registry-mirror=http://mirror.internal:5000 --selinux-enabled --insecure-registry registry.internal:5555 --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/atomicos-docker--pool --bip=10.16.16.1/24 --mtu=8951 ### Response: There is a known issue with SELinux and Kubernetes Secrets as per the Atomic issue tracker, see ISSUE-117.
I followed the kubernetes documentation to manage secrets of my applications. http://kubernetes.io/v1.1/docs/user-guide/secrets.html When the pod starts, kubernetes mounts the secret at the right place, but the application is unable to read the secret data as described in the documentation.
root@quoter-controller-whw7k:/etc/quoter# whoami
root
root@quoter-controller-whw7k:/etc/quoter# ls -l
ls: cannot access local.py: Permission denied
total 0
-????????? ? ? ? ? ? local.py
root@quoter-controller-whw7k:/etc/quoter# cat local.py
cat: local.py: Permission denied
What is wrong with that? SELinux is configured with enforcing mode: SELINUX=enforcing Docker is started with the following command:
/usr/bin/docker daemon --registry-mirror=http://mirror.internal:5000 --selinux-enabled --insecure-registry registry.internal:5555 --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/atomicos-docker--pool --bip=10.16.16.1/24 --mtu=8951
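To confirm the failure really is SELinux rather than ordinary file permissions, checks along these lines can help (standard CentOS/Atomic tooling; the path comes from the question, and ausearch requires auditd to be running):
# show the SELinux labels on the mounted secret
ls -Z /etc/quoter
# look for recent AVC denials
ausearch -m avc -ts recent
A denial mentioning the secret's path would match the known issue referenced in the answer.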
Today's Christmas, but I spent the whole day trying to get this to work :) This is what I did:
#1 FLANNEL
As mentioned, I'd set the flannel etcd key on the master with:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
but I got this error when trying to start flannel on the minion:
Failed to retrieve network config: 100: Key not found (/atomic.io)
So I edited the /etc/sysconfig/flanneld file on the minion from:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
to:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
i.e. changed the FLANNEL_ETCD_KEY value. After this, systemctl start flanneld worked.
###Instruction: I'm trying to set up a kubernetes cluster on 2 nodes, CentOS 7.1, using this guide. However, when I attempt to start the services on the minion like so:
for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
I get the following error:
-- Logs begin at Wed 2015-12-23 13:00:41 UTC, end at Wed 2015-12-23 16:03:54 UTC. --
Dec 23 16:03:47 sc-test2 systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=1/FAILURE
Dec 23 16:03:47 sc-test2 systemd[1]: Failed to start Docker Storage Setup.
-- Subject: Unit docker-storage-setup.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker-storage-setup.service has failed.
--
-- The result is failed.
Dec 23 16:03:47 sc-test2 systemd[1]: Unit docker-storage-setup.service entered failed state.
Dec 23 16:03:48 sc-test2 flanneld[36477]: E1223 16:03:48.187350 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:49 sc-test2 flanneld[36477]: E1223 16:03:49.189860 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:50 sc-test2 flanneld[36477]: E1223 16:03:50.192894 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:51 sc-test2 flanneld[36477]: E1223 16:03:51.194940 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:52 sc-test2 flanneld[36477]: E1223 16:03:52.197222 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:53 sc-test2 flanneld[36477]: E1223 16:03:53.199248 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:54 sc-test2 flanneld[36477]: E1223 16:03:54.201160 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
I'm sure I set the key on the master with:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
By far, installation seems to be the hardest bit of using kubernetes :( ### Response: Today's Christmas, but I spent the whole day trying to get this to work :) This is what I did:
#1 FLANNEL
As mentioned, I'd set the flannel etcd key on the master with:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
but I got this error when trying to start flannel on the minion:
Failed to retrieve network config: 100: Key not found (/atomic.io)
So I edited the /etc/sysconfig/flanneld file on the minion from:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
to:
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
i.e. changed the FLANNEL_ETCD_KEY value. After this, systemctl start flanneld worked.
#2 DOCKER
I didn't find a way to make the version installed as a dependency by kubernetes work, so I uninstalled it and, following the Docker docs for CentOS, installed docker-engine and manually created a docker.service file for systemctl in /usr/lib/systemd/system. The contents of docker.service:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket
Requires=flanneld.service
After=flanneld.service

[Service]
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/bin/docker daemon -H fd:// --bip=${FLANNEL_SUBNET}
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Then start and enable the daemon with systemctl, as well as query the status:
systemctl restart docker
systemctl enable docker
systemctl status docker
I'm trying to set up a kubernetes cluster on 2 nodes, CentOS 7.1, using this guide. However, when I attempt to start the services on the minion like so:
for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
I get the following error:
-- Logs begin at Wed 2015-12-23 13:00:41 UTC, end at Wed 2015-12-23 16:03:54 UTC. --
Dec 23 16:03:47 sc-test2 systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=1/FAILURE
Dec 23 16:03:47 sc-test2 systemd[1]: Failed to start Docker Storage Setup.
-- Subject: Unit docker-storage-setup.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker-storage-setup.service has failed.
--
-- The result is failed.
Dec 23 16:03:47 sc-test2 systemd[1]: Unit docker-storage-setup.service entered failed state.
Dec 23 16:03:48 sc-test2 flanneld[36477]: E1223 16:03:48.187350 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:49 sc-test2 flanneld[36477]: E1223 16:03:49.189860 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:50 sc-test2 flanneld[36477]: E1223 16:03:50.192894 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:51 sc-test2 flanneld[36477]: E1223 16:03:51.194940 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:52 sc-test2 flanneld[36477]: E1223 16:03:52.197222 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:53 sc-test2 flanneld[36477]: E1223 16:03:53.199248 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
Dec 23 16:03:54 sc-test2 flanneld[36477]: E1223 16:03:54.201160 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io)
I'm sure I set the key on the master with:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
By far, installation seems to be the hardest bit of using kubernetes :(
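One caveat worth checking after applying the answer's FLANNEL_ETCD_KEY change: flanneld only starts if a config value actually exists under the key it is told to read. A quick verification along these lines (etcdctl v2 syntax assumed, run on the master) can save debugging time:
# verify the config exists under the key flanneld is now configured to read
etcdctl get /atomic.io/network/config
# if it is missing, write the same value there that was set earlier under /coreos.com/network
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'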
No, you cannot specify multiple ports in kubectl run. But you can use kubectl create to create a replication controller, and specify multiple ports for the container. https://github.com/kubernetes/examples/blob/master/cassandra/cassandra-statefulset.yaml has an example:
ports:
- containerPort: 7000
  name: intra-node
- containerPort: 7001
  name: tls-intra-node
- containerPort: 7199
  name: jmx
- containerPort: 9042
  name: cql
###Instruction: Is it possible to create a Pod in the Google Container Engine where two ports are exposed: port 8080 is listening for incoming content and port 80 distributes this content to clients? The following command to create a Pod is given as example by Google: kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node --port=8080 I can't seem to define a listening port, and when adding a second "--port=" switch only one port is exposed. Is there a way to expose a second port or am I limited to one port per container? ### Response: No, you cannot specify multiple ports in kubectl run. But you can use kubectl create to create a replication controller, and specify multiple ports for the container. https://github.com/kubernetes/examples/blob/master/cassandra/cassandra-statefulset.yaml has an example:
ports:
- containerPort: 7000
  name: intra-node
- containerPort: 7001
  name: tls-intra-node
- containerPort: 7199
  name: jmx
- containerPort: 9042
  name: cql
Is it possible to create a Pod in the Google Container Engine where two ports are exposed: port 8080 is listening for incoming content and port 80 distributes this content to clients? The following command to create a Pod is given as example by Google: kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node --port=8080 I can't seem to define a listening port, and when adding a second "--port=" switch only one port is exposed. Is there a way to expose a second port or am I limited to one port per container?
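Adapted to the question's two-port scenario, the container section of such a replication controller might look like this sketch (the port names and the PROJECT_ID placeholder are illustrative):
containers:
- name: hello-node
  image: gcr.io/PROJECT_ID/hello-node   # placeholder project id
  ports:
  - containerPort: 8080
    name: ingest
  - containerPort: 80
    name: serve
A Service in front of it can then list both ports, each with its own port/targetPort pair.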
To get your setup working, you can do the same thing local-up-cluster.sh is doing:
Generate a signing key:
openssl genrsa -out /tmp/serviceaccount.key 2048
Update /etc/kubernetes/apiserver:
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
Update /etc/kubernetes/controller-manager:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"
From https://github.com/kubernetes/kubernetes/issues/11355#issuecomment-127378691
###Instruction: I have the following mysql.yaml file:
apiVersion: v1beta3
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 1
    image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      # change this
      value: yourpassword
    ports:
    - containerPort: 3306
      name: mysql
Running kubectl create -f mysql.yaml gives the error:
Error from server: error when creating "mysql.yaml": Pod "Unknown" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
I have a master and a node, both CentOS 7.1. ### Response: To get your setup working, you can do the same thing local-up-cluster.sh is doing:
Generate a signing key:
openssl genrsa -out /tmp/serviceaccount.key 2048
Update /etc/kubernetes/apiserver:
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
Update /etc/kubernetes/controller-manager:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"
From https://github.com/kubernetes/kubernetes/issues/11355#issuecomment-127378691
I have the following mysql.yaml file:
apiVersion: v1beta3
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 1
    image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      # change this
      value: yourpassword
    ports:
    - containerPort: 3306
      name: mysql
Running kubectl create -f mysql.yaml gives the error:
Error from server: error when creating "mysql.yaml": Pod "Unknown" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
I have a master and a node, both CentOS 7.1.
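After putting the key in place, the new flags only take effect once the components restart; with the service names of the stock CentOS kubernetes RPMs (assumed here), that would look roughly like:
systemctl restart kube-apiserver kube-controller-manager
The token controller should then create the default/default token, after which the original kubectl create -f mysql.yaml can be retried.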
From the error message and from the docs: A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. So you have two options to fix this:
Set the replicas in your ReplicationController to 1
Make the volume readOnly: true
gcePersistentDisk:
  pdName: my-data-disk
  fsType: ext4
  readOnly: true
###Instruction: I would like to use a persistent disk in my replication controller, but if I use a gcePersistentDisk the console returns the error: ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only. ### Response: From the error message and from the docs: A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed. So you have two options to fix this:
Set the replicas in your ReplicationController to 1
Make the volume readOnly: true
gcePersistentDisk:
  pdName: my-data-disk
  fsType: ext4
  readOnly: true
I would like to use a persistent disk in my replication controller, but if I use a gcePersistentDisk the console returns the error: ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only.
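For reference, a full volume stanza wiring the read-only option into a pod template might look like this (the volume and disk names are placeholders):
volumes:
- name: data
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
    readOnly: true
with a matching volumeMounts entry (readOnly: true there as well) in the container spec.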
The Ingress resource, which was added in version 1.1.0, was designed specifically for this use case. It allows you to put multiple services behind a single IP address, routing to them based on HTTP path. Check out the user guide on it for more details, but feel free to ask if you have more questions about it! edit: For a non-HTTP(S) service, you'll have to find a way to make sure all necessary ports get load balanced by the ELB and then properly routed by Kubernetes. On GCE, you could manually create the load balancer with the ports you need, and then put the load balancer's IP in the externalIPs field for each service. My memory's a little fuzzy, but I don't believe that'll work with an ELB due to its packet rewriting. You might instead want to create each service as a NodePort service, then configure your ELB to forward the packets from the correct external port to the node port for each service.
###Instruction: I have the following services hosted in my Kubernetes cluster on AWS. An nginx server, on ports 80 and 443. A Minecraft server, at port 25565. Both are working great. I currently have both of them set to type: LoadBalancer, so they both have Elastic Load Balancers that are providing ingress to the cluster. I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP. I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. Here's the setup on a gist. When I try and curl on the allocated nodePort from inside the cluster it just hangs. Is there a way to have one service balance to multiple services? ### Response: The Ingress resource, which was added in version 1.1.0, was designed specifically for this use case. It allows you to put multiple services behind a single IP address, routing to them based on HTTP path. Check out the user guide on it for more details, but feel free to ask if you have more questions about it! edit: For a non-HTTP(S) service, you'll have to find a way to make sure all necessary ports get load balanced by the ELB and then properly routed by Kubernetes. On GCE, you could manually create the load balancer with the ports you need, and then put the load balancer's IP in the externalIPs field for each service. My memory's a little fuzzy, but I don't believe that'll work with an ELB due to its packet rewriting. You might instead want to create each service as a NodePort service, then configure your ELB to forward the packets from the correct external port to the node port for each service.
I have the following services hosted in my Kubernetes cluster on AWS. An nginx server, on ports 80 and 443. A Minecraft server, at port 25565. Both are working great. I currently have both of them set to type: LoadBalancer, so they both have Elastic Load Balancers that are providing ingress to the cluster. I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP. I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. Here's the setup on a gist. When I try and curl on the allocated nodePort from inside the cluster it just hangs. Is there a way to have one service balance to multiple services?
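A minimal sketch of the Ingress shape for the HTTP side, using the v1.1-era extensions/v1beta1 field names; the host and service name are assumptions, not taken from the question's actual manifests:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shared-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx   # assumed name of the nginx Service
          servicePort: 80
Note that the Minecraft port (raw TCP, not HTTP) cannot be expressed here, matching the caveat in the answer's edit.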
I have a lot of experience working with containers and different container solutions, including Amazon ECS and Kubernetes, and I have found Kubernetes to be one of the most useful solutions for managing containers in different environments. The main benefit of Kubernetes is that it is a mature solution, originally developed by Google, yet completely open-source! That means that anyone may look under the hood and (if necessary) modify and update the source code for their own purposes. Another huge benefit of Kubernetes: it is completely free. That means that you may install it and run it on your own infrastructure, without paying any additional costs for Kubernetes itself. You may run Kubernetes on a huge number of different providers. It doesn't matter in what environment you run the Kubernetes cluster - you should only take care of the Kubernetes cluster itself. That allows you, for example, to run a development cluster locally on Vagrant, and a distributed production environment on a public cloud like AWS or GCE, a private cloud like OpenStack, or simply some libvirt solution (using CoreOS, for example). Again, from the point of view of Kubernetes it doesn't matter what infrastructure solution you use - the only requirement is that it be Kubernetes-enabled. Speaking about Amazon ECS - that is a proprietary, vendor-locked solution. It may give you the same performance as Kubernetes, but it won't give you the same flexibility. So, globally one may compare Amazon ECS and Kubernetes, but Kubernetes is the much more flexible and ready-to-customize solution.
###Instruction: What is the difference between the Amazon ECS and Kubernetes implementation architectures? I need to decide on a technology for container management in the cloud. What is the deciding factor when picking either of these technologies? I am using Docker for container creation and execution. ### Response: I have a lot of experience working with containers and different container solutions, including Amazon ECS and Kubernetes, and I have found Kubernetes to be one of the most useful solutions for managing containers in different environments. The main benefit of Kubernetes is that it is a mature solution, originally developed by Google, yet completely open-source! That means that anyone may look under the hood and (if necessary) modify and update the source code for their own purposes. Another huge benefit of Kubernetes: it is completely free. That means that you may install it and run it on your own infrastructure, without paying any additional costs for Kubernetes itself. You may run Kubernetes on a huge number of different providers. It doesn't matter in what environment you run the Kubernetes cluster - you should only take care of the Kubernetes cluster itself. That allows you, for example, to run a development cluster locally on Vagrant, and a distributed production environment on a public cloud like AWS or GCE, a private cloud like OpenStack, or simply some libvirt solution (using CoreOS, for example). Again, from the point of view of Kubernetes it doesn't matter what infrastructure solution you use - the only requirement is that it be Kubernetes-enabled. Speaking about Amazon ECS - that is a proprietary, vendor-locked solution. It may give you the same performance as Kubernetes, but it won't give you the same flexibility. So, globally one may compare Amazon ECS and Kubernetes, but Kubernetes is the much more flexible and ready-to-customize solution.
What is the difference between the Amazon ECS and Kubernetes implementation architectures? I need to decide on a technology for container management in the cloud. What is the deciding factor when picking either of these technologies? I am using Docker for container creation and execution.
You'll probably have the best time using the Kubernetes client library, which is the same code kubectl uses, but if for some reason that isn't an option, then my best suggestion is to look through the client library's code for executing remote commands and see what headers it sets.
###Instruction: I'm trying to execute a command in a container (in a Kubernetes POD on GKE with kubernetes 1.1.2). Reading the documentation I understood that I can use a GET or POST query to open a websocket connection on the API endpoint to execute a command. When I use GET, it does not work completely; it returns an error. When I try to use POST, something like this could probably work:
curl 'https://admin:xxx@IP/api/v1/namespaces/default/pods/hello-whue1/exec?stdout=1&stderr=1&command=ls' -H "Connection: upgrade" -k -X POST -H 'Upgrade: websocket'
The response for that is:
unable to upgrade: missing upgrade headers in request: http.Header{"User-Agent":[]string{"curl/7.44.0"}, "Content-Length":[]string{"0"}, "Accept":[]string{"*/*"}, "Authorization":[]string{"Basic xxx=="}, "Connection":[]string{"upgrade"}, "Upgrade":[]string{"websocket"}}
Looks like that should be enough to upgrade the POST request and start using websocket streams, right? What am I missing? It was also pointed out to me that opening a websocket with POST is probably a violation of the websocket protocol (only GET should work?). Also ### Response: You'll probably have the best time using the Kubernetes client library, which is the same code kubectl uses, but if for some reason that isn't an option, then my best suggestion is to look through the client library's code for executing remote commands and see what headers it sets.
I'm trying to execute a command in a container (in a Kubernetes POD on GKE with kubernetes 1.1.2). Reading the documentation I understood that I can use a GET or POST query to open a websocket connection on the API endpoint to execute a command. When I use GET, it does not work completely; it returns an error. When I try to use POST, something like this could probably work:
curl 'https://admin:xxx@IP/api/v1/namespaces/default/pods/hello-whue1/exec?stdout=1&stderr=1&command=ls' -H "Connection: upgrade" -k -X POST -H 'Upgrade: websocket'
The response for that is:
unable to upgrade: missing upgrade headers in request: http.Header{"User-Agent":[]string{"curl/7.44.0"}, "Content-Length":[]string{"0"}, "Accept":[]string{"*/*"}, "Authorization":[]string{"Basic xxx=="}, "Connection":[]string{"upgrade"}, "Upgrade":[]string{"websocket"}}
Looks like that should be enough to upgrade the POST request and start using websocket streams, right? What am I missing? It was also pointed out to me that opening a websocket with POST is probably a violation of the websocket protocol (only GET should work?). Also
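For comparison, the equivalent through kubectl, which negotiates the streaming protocol and sets the upgrade headers itself, is simply:
kubectl exec hello-whue1 -- ls
so another way to discover the exact headers the API server expects is to trace what kubectl sends, e.g. by running it with a high verbosity flag such as --v=8, which logs its HTTP requests.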
As you are running in google-container-engine you could set up a Compute Engine HTTP Load Balancer for your static IP. The Target proxy will add X-Forwarded- headers for you. Set up your kubernetes service with type NodePort and add a nodePort field. This way the nodePort is accessible via kubernetes-proxy on every node's IP address regardless of where the pod is running:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - nodePort: 30080
    port: 80
    targetPort: 80
  type: NodePort
Create a backend service with an HTTP health check on port 30080 for your instance group (nodes).
###Instruction: I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site. How do I link my service to a public (and reserved) IP address so that the webserver sees the client IP addresses? I tried creating an ingress. It provides the client IP through an extra http header. Unfortunately ingress does not support reserved IPs yet:
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          serviceName: example-web
          servicePort: 80
        path: /
I also tried creating a service with a reserved IP. This gives me a public IP address, but I think the client IP is lost:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - port: 80
    targetPort: 80
  loadBalancerIP: "10.10.10.10"
  type: LoadBalancer
I would set up the HTTP load balancer manually, but I didn't find a way to configure a cluster IP as a backend for the load balancer. This seems like a very basic use case to me and stands in the way of using container engine in production. What am I missing? Where am I wrong? ### Response: As you are running in google-container-engine you could set up a Compute Engine HTTP Load Balancer for your static IP. The Target proxy will add X-Forwarded- headers for you. Set up your kubernetes service with type NodePort and add a nodePort field. This way the nodePort is accessible via kubernetes-proxy on every node's IP address regardless of where the pod is running:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - nodePort: 30080
    port: 80
    targetPort: 80
  type: NodePort
Create a backend service with an HTTP health check on port 30080 for your instance group (nodes).
I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site. How do I link my service to a public (and reserved) IP address so that the webserver sees the client IP addresses? I tried creating an ingress. It provides the client IP through an extra http header. Unfortunately ingress does not support reserved IPs yet:
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.org
    http:
      paths:
      - backend:
          serviceName: example-web
          servicePort: 80
        path: /
I also tried creating a service with a reserved IP. This gives me a public IP address, but I think the client IP is lost:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
  - port: 80
    targetPort: 80
  loadBalancerIP: "10.10.10.10"
  type: LoadBalancer
I would set up the HTTP load balancer manually, but I didn't find a way to configure a cluster IP as a backend for the load balancer. This seems like a very basic use case to me and stands in the way of using container engine in production. What am I missing? Where am I wrong?
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
###Instruction: Hi, I am running a kubernetes cluster where I run a Logstash container. But I need to run it with my own docker run parameters. If I ran it in Docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like:
spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f", "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks. ### Response: Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
Hi, I am running a kubernetes cluster where I run a Logstash container. But I need to run it with my own docker run parameters. If I ran it in Docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like:
spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f", "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks.
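A sketch of the per-node workaround the answer suggests: set the default log driver on the docker daemon itself. The exact file and variable name vary by distro and salt template; on Debian-style setups it might look like this (the GELF endpoint is a placeholder):
# /etc/default/docker
DOCKER_OPTS="--log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201"
Restarting the docker daemon afterwards applies the driver to all containers on that node, not just the Logstash pod.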
The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (source code, if you're interested). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API. The output of successful probes isn't recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs. Feel free to file a feature request in a Github issue if you have ideas of what you'd like to be done with the output :)
###Instruction: We're using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging. We also use LivenessProbes on our containers to make sure they operate as expected. Our problem is that lines we send to STDOUT from the LivenessProbe do not appear to reach Elasticsearch. Is there a way to make fluentd ship LivenessProbe output like it does for regular containers in a pod? ### Response: The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (source code, if you're interested). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API. The output of successful probes isn't recorded anywhere unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs. Feel free to file a feature request in a Github issue if you have ideas of what you'd like to be done with the output :)
We're using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging. We also use LivenessProbes on our containers to make sure they operate as expected. Our problem is that lines we send to STDOUT from the LivenessProbe do not appear to reach Elasticsearch. Is there a way to make fluentd ship LivenessProbe output like it does for regular containers in a pod?
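A quick way to see the failure events the answer mentions (the pod name is a placeholder):
kubectl describe pod my-pod
Failed probes show up in the Events section at the bottom, including the probe's output; successful probe output only appears in the kubelet log when it runs with --v=4 or higher, as noted above.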
The mechanism for copying the configuration file necessary to get remote access to your cluster is described in Sharing Cluster Access. You may need to manually tweak ~/.kube/config after copying if it contains an endpoint that isn't remotely reachable (a non-routable IP) and/or open up firewall access to your apiserver running in Azure. Otherwise, once you get the local config file in place you should be all set.
###Instruction: I have provisioned a kubernetes cluster in Azure Cloud (using CoreOS) using the guide http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/azure/README.html It's working fine; now I want to run kubectl commands from my local machine (I use a Mac). For that I installed kubernetes-cli with brew, but I am not able to connect to the remote kubernetes cluster. When I run "kubectl version":
user$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2+3085895", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"not a git tree"}
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
How do I connect to the kubernetes cluster, via SSH or otherwise? Note: manually I am able to ssh to the kubernetes nodes and run kubectl commands there. ### Response: The mechanism for copying the configuration file necessary to get remote access to your cluster is described in Sharing Cluster Access. You may need to manually tweak ~/.kube/config after copying if it contains an endpoint that isn't remotely reachable (a non-routable IP) and/or open up firewall access to your apiserver running in Azure. Otherwise, once you get the local config file in place you should be all set.
I have provisioned a kubernetes cluster in Azure Cloud (using CoreOS) using the guide http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/azure/README.html It's working fine; now I want to run kubectl commands from my local machine (I use a Mac). For that I installed kubernetes-cli with brew, but I am not able to connect to the remote kubernetes cluster. When I run "kubectl version":
user$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2+3085895", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"not a git tree"}
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
How do I connect to the kubernetes cluster, via SSH or otherwise? Note: manually I am able to ssh to the kubernetes nodes and run kubectl commands there.
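If copying the whole config file is awkward, the same result can be built up locally with kubectl's config subcommands; a rough sketch with placeholder values (endpoint, credentials, and context names are all assumptions to fill in from your cluster):
kubectl config set-cluster azure --server=https://MASTER_IP:443 --insecure-skip-tls-verify=true
kubectl config set-credentials azure-admin --username=admin --password=PASSWORD
kubectl config set-context azure --cluster=azure --user=azure-admin
kubectl config use-context azure
After this, kubectl version should report both the client and the remote server versions.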
There are several options, but none that I know of does it all; you will need two things: autoscaling hosts according to signals, then autoscaling containers on the hosts. The following are solutions to deploy and scale containers on the hosts (not necessarily auto-scale though):
Kubernetes is an orchestration tool which allows you to schedule and (with the optional autoscaler) autoscale pods (groups of containers) in the cluster. It makes sure your containers are running somewhere if a host fails. Google Container Engine (GKE) offers this as a service, however I am not sure they have the same functionality to autoscale the number of VMs in the cluster as AWS does.
Mesos: somewhat similar to Kubernetes but not dedicated to running containers.
Docker Swarm: the Docker multi-host deployment solution, allows you to control many hosts as if they were a single Docker host. I don't believe it has any kind of 'autoscaling' capability, and I don't believe it takes care of making sure pods are always running somewhere: it's basically Docker for clusters. [EDIT] Docker supports restarting failing containers with the restart=always option; also, as of Docker 1.11, Docker Swarm is a mode in the Docker Daemon, and supports rescheduling containers on node failure: it will restart containers on a different node if a node is no longer available. Docker 1.11+ is becoming a lot like Kubernetes in terms of functionality. It has some nice features (like TLS between nodes by default), but still lacks things like static IPs and storage provisioning.
None of these solutions will autoscale the number of hosts for you, but they can scale the number of containers on the hosts. For autoscaling hosts, solutions are specific to your cloud provider, so these are dedicated solutions. The key part for you is to integrate the two: AWS allows deployment of Kubernetes on CoreOS; I don't think they offer this as a service, so you need to deploy your own CoreOS cluster and Kubernetes.
###Instruction: I have a few basic questions on scaling Docker containers: I have 5 different apps. They are not connected to each other. Before having containers I would run 1 app per VM and scale them up and down individually in the cloud. Now with containers I get the isolation on top of a VM, so now I can potentially run one host with 5 docker containers where each app is isolated in its own container. As long as I have enough resources on my host I can scale those containers up and down individually as my traffic grows or shrinks, e.g. I have 3 containers running app 1, but only 1 container running app 2. At peak times app 3 gets a lot of traffic and I need to launch a 2nd host which runs only containers for app 3. My first question is whether the above makes sense or whether I have misunderstood something. My second question is what technology is currently available to get this all done in an automated way. I need a load balancer and an auto scaling group which is capable of the above scenario without me having to do manual interventions. I looked into AWS ECS and am not quite sure if it can satisfy my needs as I outlined them above. Does anyone know how to achieve this, or is there a better way of managing and scaling my 5 apps which I am missing? UPDATE: Via Twitter I have been pointed to Kubernetes and specifically to the docs on the Horizontal Pod Autoscaler. Might be useful for others as well. I will update this question as I learn more. ### Response: There are several options, but none that I know of does it all; you will need two things: autoscaling hosts according to signals, then autoscaling containers on the hosts. The following are solutions to deploy and scale containers on the hosts (not necessarily auto-scale though):
Kubernetes is an orchestration tool which allows you to schedule and (with the optional autoscaler) autoscale pods (groups of containers) in the cluster. It makes sure your containers are running somewhere if a host fails. Google Container Engine (GKE) offers this as a service, however I am not sure they have the same functionality to autoscale the number of VMs in the cluster as AWS does.
Mesos: somewhat similar to Kubernetes but not dedicated to running containers.
Docker Swarm: the Docker multi-host deployment solution, allows you to control many hosts as if they were a single Docker host. I don't believe it has any kind of 'autoscaling' capability, and I don't believe it takes care of making sure pods are always running somewhere: it's basically Docker for clusters. [EDIT] Docker supports restarting failing containers with the restart=always option; also, as of Docker 1.11, Docker Swarm is a mode in the Docker Daemon, and supports rescheduling containers on node failure: it will restart containers on a different node if a node is no longer available. Docker 1.11+ is becoming a lot like Kubernetes in terms of functionality. It has some nice features (like TLS between nodes by default), but still lacks things like static IPs and storage provisioning.
None of these solutions will autoscale the number of hosts for you, but they can scale the number of containers on the hosts. For autoscaling hosts, solutions are specific to your cloud provider, so these are dedicated solutions. The key part for you is to integrate the two: AWS allows deployment of Kubernetes on CoreOS; I don't think they offer this as a service, so you need to deploy your own CoreOS cluster and Kubernetes.
Now, my personal opinion (and disclaimer): I have mostly used Kubernetes on GKE and bare-metal, as well as Swarm about 6 months ago, and I run an infra with ~35 services on GKE. Frankly, GKE with Kubernetes as a Service offers most of what you want, but it's not AWS. Scaling hosts is still a bit tricky and will require some work. Setting up your own Kubernetes or Mesos on AWS or bare metal is very feasible, but there is quite a learning curve: it all depends on whether you really strongly feel about being on AWS and are willing to spend the time. Swarm is probably the easiest to get working with, but more limited; however, a homebuilt script can well do the core job: use AWS APIs to scale hosts, and Swarm to deploy. The availability guarantee, though, would require you to monitor and take care of re-launching containers if a node fails. Other than that, there are also container hosting providers that may do the job for you: Scalingo is one I know of, but there are others. https://scalingo.com/ OVH Sail Above has this service in alpha. https://www.runabove.com/sailabove.xml
I have a few basic questions on scaling Docker containers: I have 5 different apps. They are not connected to each other. Before having containers I would run 1 app per VM and scale them up and down individually in the cloud. Now with containers I get the isolation on top of a VM, so now I can potentially run one host with 5 docker containers where each app is isolated in its own container. As long as I have enough resources on my host I can scale those containers up and down individually as my traffic grows or shrinks, e.g. I have 3 containers running app 1, but only 1 container running app 2. At peak times app 3 gets a lot of traffic and I need to launch a 2nd host which runs only containers for app 3. My first question is whether the above makes sense or whether I have misunderstood something. My second question is what technology is currently available to get this all done in an automated way. I need a load balancer and an auto scaling group which is capable of the above scenario without me having to do manual interventions. I looked into AWS ECS and am not quite sure if it can satisfy my needs as I outlined them above. Does anyone know how to achieve this, or is there a better way of managing and scaling my 5 apps which I am missing? UPDATE: Via Twitter I have been pointed to Kubernetes and specifically to the docs on the Horizontal Pod Autoscaler. Might be useful for others as well. I will update this question as I learn more.
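On the Kubernetes route mentioned in the question's update, per-app container autoscaling is a one-liner per app once resource requests are set; a sketch with illustrative controller name and thresholds (assuming a kubectl recent enough to have the autoscale subcommand):
kubectl autoscale rc app3 --min=1 --max=5 --cpu-percent=80
Host autoscaling still has to come from the cloud provider (e.g. an AWS auto scaling group), as the answer notes.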
In the case of Kubernetes, the Horizontal Pod Autoscaling documentation lists the exact use case described under Next steps (i.e. the feature is on the backlog and may be implemented after v1.1 of Kubernetes). The cited feature description (Unidling proposal) is as follows:
Scale the number of pods starting from 0. All pods can be turned-off, and then turned-on when there is a demand for them. When a request to service with no pods arrives, kube-proxy will generate an event for autoscaler to create a new pod.
So basically, it may be possible to do what I've described in future using Kubernetes, but it is not possible right now. This in itself does not address the requirement to only scale from 0 based on authenticated user activity. It's worth noting, as a cluster-agnostic aside, on-demand container activation based on systemd. This solution will of course not scale back down to 0 without a controlling process, but it's still worth noting.
###Instruction: There are applications and services in enterprises that do not need to run all the time and that have a limited user base (say a handful of people). These applications can be shut down and started either based on scheduling or, even better, on user activity. So, we are talking about on-demand services (say wrapped by a container) and node start-up and shut-down. Now, the reason I mention authenticated user activity is that it makes sense to start up and shut down on that basis (i.e. not based on lower-level network traffic). One can imagine corporate SSO (say OAuth 2 based) being involved. So, my question is whether anyone has attempted to implement what I have described using Consul or Kubernetes. In the case of Consul, it could be that the key-value store could be used to give "Micro" (i.e. small user base) class applications a TTL; each time an authenticated user requests access to a given "Micro" class application, its TTL is updated. During the TTL window we want to check the health of the node(s), containers and services - outside of the window we don't (since we want to save on opex). This question is similar to this autoscaling question, however it is different in the sense that this use case is about scaling from 0 nodes and then down to 0 based on an authenticated user base (most likely using SSO). ### Response: In the case of Kubernetes, the Horizontal Pod Autoscaling documentation lists the exact use case described under Next steps (i.e. the feature is on the backlog and may be implemented after v1.1 of Kubernetes). The cited feature description (Unidling proposal) is as follows: Scale the number of pods starting from 0. All pods can be turned-off, and then turned-on when there is a demand for them. When a request to service with no pods arrives, kube-proxy will generate an event for autoscaler to create a new pod. So basically, it may be possible to do what I've described in the future using Kubernetes, but it is not possible right now. This in itself does not address the requirement to only scale from 0 based on authenticated user activity. It's worth noting, as a cluster-agnostic aside, on-demand container activation based on systemd (e.g. socket activation). This solution will of course not scale back down to 0 without a controlling process, but it's still worth noting.
There are applications and services in enterprises that do not need to run all the time and that have a limited user base (say a handful of people). These applications can be shut down and started either based on scheduling or, even better, on user activity. So, we are talking about on-demand services (say wrapped by a container) and node start-up and shut-down. Now, the reason I mention authenticated user activity is that it makes sense to start up and shut down on that basis (i.e. not based on lower-level network traffic). One can imagine corporate SSO (say OAuth 2 based) being involved. So, my question is whether anyone has attempted to implement what I have described using Consul or Kubernetes. In the case of Consul, it could be that the key-value store could be used to give "Micro" (i.e. small user base) class applications a TTL; each time an authenticated user requests access to a given "Micro" class application, its TTL is updated. During the TTL window we want to check the health of the node(s), containers and services - outside of the window we don't (since we want to save on opex). This question is similar to this autoscaling question, however it is different in the sense that this use case is about scaling from 0 nodes and then down to 0 based on an authenticated user base (most likely using SSO).
You could also simply use nginx as a proxy for your Minecraft server, and forward traffic from ingress port 25565 to the Minecraft server. That way all traffic goes through one Service.
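For reference, here is a minimal sketch of what that nginx TCP proxy could look like. Assumptions on my part (not from the original answer): nginx 1.9+ (which ships the stream module) and a ClusterIP Service named minecraft in front of the Minecraft pods:

stream {
    server {
        # Accept raw TCP on the Minecraft port...
        listen 25565;
        # ...and hand it to the in-cluster Minecraft Service.
        proxy_pass minecraft:25565;
    }
}

Note that the stream block lives at the top level of nginx.conf, next to http, not inside it. With this in place, a single LoadBalancer Service exposing ports 80, 443 and 25565 on the nginx pods is enough.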
###Instruction: I have the following services hosted in my Kubernetes cluster on AWS. An nginx server, on ports 80 and 443. A Minecraft server, at port 25565. Both are working great. I currently have both of them set to type: LoadBalancer, so they both have Elastic Load Balancers that are providing ingress to the cluster. I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP. I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. Here's the setup on a gist. When I try to curl the allocated nodePort from inside the cluster it just hangs. Is there a way to have one service balance to multiple services? ### Response: You could also simply use nginx as a proxy for your Minecraft server, and forward traffic from ingress port 25565 to the Minecraft server. That way all traffic goes through one Service.
I have the following services hosted in my Kubernetes cluster on AWS. An nginx server, on ports 80 and 443. A Minecraft server, at port 25565. Both are working great. I currently have both of them set to type: LoadBalancer, so they both have Elastic Load Balancers that are providing ingress to the cluster. I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP. I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. Here's the setup on a gist. When I try and curl on the allocated nodePort from inside the cluster it just hangs. Is there a way to have one service balance to multiple services?
The guide you used doesn't include instructions for bare-metal machines. You need networking (e.g., flannel, Calico) that implements Kubernetes's networking model, in which every pod gets an IP address that is routable from every other pod, across nodes. You can check the table of solutions for getting-started guides for different IaaS/OS/Network combinations.
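As a rough sketch of the flannel route (assumptions: etcd is reachable from every node, and the 10.1.0.0/16 range here is arbitrary -- pick one that doesn't collide with your LAN), you store the overlay network config in etcd, run flanneld on each node, and point Docker at the subnet flannel hands out:

# once, from any machine that can reach etcd
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

# on every node
flanneld -etcd-endpoints=http://<etcd-host>:4001
source /run/flannel/subnet.env
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

After that, pods on different nodes should be able to reach each other's overlay addresses directly.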
###Instruction: I used the node.yaml and master.yaml files here: http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/coreos_multinode_cluster.html to create a multi-node cluster on 3 bare-metal machines running CoreOS. However, pods on different nodes can’t communicate with each other. I’d appreciate any pointers or suggestions. I’m at a loss. I have three pods running rabbitmq: thuey:~ thuey$ kbg pods | grep rabbitmq rabbitmq-bootstrap 1/1 Running 0 3h rabbitmq-jz2q7 1/1 Running 0 3h rabbitmq-mrnfc 1/1 Running 0 3h Two of the pods are on one machine: kbd node jolt-server-3 | grep rabbitmq thuey rabbitmq-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) thuey rabbitmq-jz2q7 0 (0%) 0 (0%) 0 (0%) 0 (0%) And the other pod is on another machine: thuey:~ thuey$ kbd node jolt-server-4 | grep rabbitmq thuey rabbitmq-mrnfc 0 (0%) 0 (0%) 0 (0%) 0 (0%) I can successfully ping from rabbitmq-bootstrap to rabbitmq-jz2q7: root@rabbitmq-bootstrap:/# ping 172.17.0.5 PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data. 64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.058 ms 64 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.035 ms 64 bytes from 172.17.0.5: icmp_seq=3 ttl=64 time=0.064 ms 64 bytes from 172.17.0.5: icmp_seq=4 ttl=64 time=0.055 ms ^C --- 172.17.0.5 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3000ms rtt min/avg/max/mdev = 0.035/0.053/0.064/0.010 ms But I can't ping rabbitmq-mrnfc: root@rabbitmq-bootstrap:/# ping 172.17.0.8 PING 172.17.0.8 (172.17.0.8) 56(84) bytes of data. From 172.17.0.2 icmp_seq=1 Destination Host Unreachable From 172.17.0.2 icmp_seq=2 Destination Host Unreachable From 172.17.0.2 icmp_seq=3 Destination Host Unreachable From 172.17.0.2 icmp_seq=4 Destination Host Unreachable ^C --- 172.17.0.8 ping statistics --- 5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4000ms pipe 4 ### Response: The guide you used doesn't include instructions for bare-metal machines. You need networking (e.g., flannel, Calico) that implements Kubernetes's networking model, in which every pod gets an IP address that is routable from every other pod, across nodes. You can check the table of solutions for getting-started guides for different IaaS/OS/Network combinations.
I used the node.yaml and master.yaml files here: http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/coreos_multinode_cluster.html to create a multi-node cluster on 3 bare-metal machines running CoreOS. However, pods on different nodes can’t communicate with each other. I’d appreciate any pointers or suggestions. I’m at a loss. I have three pods running rabbitmq: thuey:~ thuey$ kbg pods | grep rabbitmq rabbitmq-bootstrap 1/1 Running 0 3h rabbitmq-jz2q7 1/1 Running 0 3h rabbitmq-mrnfc 1/1 Running 0 3h Two of the pods are on one machine: kbd node jolt-server-3 | grep rabbitmq thuey rabbitmq-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) thuey rabbitmq-jz2q7 0 (0%) 0 (0%) 0 (0%) 0 (0%) And the other pod is on another machine: thuey:~ thuey$ kbd node jolt-server-4 | grep rabbitmq thuey rabbitmq-mrnfc 0 (0%) 0 (0%) 0 (0%) 0 (0%) I can successfully ping from rabbitmq-bootstrap to rabbitmq-jz2q7: root@rabbitmq-bootstrap:/# ping 172.17.0.5 PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data. 64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.058 ms 64 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.035 ms 64 bytes from 172.17.0.5: icmp_seq=3 ttl=64 time=0.064 ms 64 bytes from 172.17.0.5: icmp_seq=4 ttl=64 time=0.055 ms ^C --- 172.17.0.5 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3000ms rtt min/avg/max/mdev = 0.035/0.053/0.064/0.010 ms But I can't ping rabbitmq-mrnfc: root@rabbitmq-bootstrap:/# ping 172.17.0.8 PING 172.17.0.8 (172.17.0.8) 56(84) bytes of data. From 172.17.0.2 icmp_seq=1 Destination Host Unreachable From 172.17.0.2 icmp_seq=2 Destination Host Unreachable From 172.17.0.2 icmp_seq=3 Destination Host Unreachable From 172.17.0.2 icmp_seq=4 Destination Host Unreachable ^C --- 172.17.0.8 ping statistics --- 5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4000ms pipe 4
To list all images in your gcr.io private registry, you can use the docker search command, pointing at your registry, using your Google credentials:

gcloud docker search gcr.io/your-registry

Or in two steps, first configuring docker to use your Google credentials, then searching:

gcloud docker -a
docker search gcr.io/your-registry
###Instruction: I'm trying to read Images using the kubernetes API, but am not seeing an API for that. Is there an API to Read Images List from my google cloud account? ### Response: To list all images in your gcr.io private registry, you can use the docker search command, pointing at your registry, using your Google credentials:

gcloud docker search gcr.io/your-registry

Or in two steps, first configuring docker to use your Google credentials, then searching:

gcloud docker -a
docker search gcr.io/your-registry
I'm trying to read Images using the kubernetes API, but am not seeing an API for that. Is there an API to Read Images List from my google cloud account?
I resolved this issue by starting the docker daemon manually using: sudo service docker start
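If the daemon should also come back automatically after a reboot, something along these lines usually works -- which one applies depends on how Docker was installed and which init system the host runs, so treat both as assumptions to verify:

sudo update-rc.d docker defaults    # sysvinit/upstart-style hosts, e.g. Ubuntu 14.04
sudo systemctl enable docker        # systemd-based hosts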
###Instruction: I followed docker instructions to install and verify the docker installation (from http://docs.docker.com/linux/step_one/). I tried on 2 Ubuntu 14.04 machines and on both I got the following error when starting the docker daemon: $ sudo docker daemon INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) INFO[0000] [graphdriver] using prior storage driver "aufs" INFO[0000] Option DefaultDriver: bridge INFO[0000] Option DefaultNetwork: bridge WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found. , error: exit status 1 INFO[0000] Firewalld running: false WARN[0000] Your kernel does not support cgroup memory limit: mountpoint for memory not found WARN[0000] mountpoint for cpu not found FATA[0000] Error starting daemon: Devices cgroup isn't mounted I appreciate any help to resolve this issue. ### Response: I resolved this issue by starting the docker daemon manually using: sudo service docker start
I followed docker instructions to install and verify the docker installation (from http://docs.docker.com/linux/step_one/). I tried on 2 Ubuntu 14.04 machines and on both I got the following error when starting the docker daemon: $ sudo docker daemon INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) INFO[0000] [graphdriver] using prior storage driver "aufs" INFO[0000] Option DefaultDriver: bridge INFO[0000] Option DefaultNetwork: bridge WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found. , error: exit status 1 INFO[0000] Firewalld running: false WARN[0000] Your kernel does not support cgroup memory limit: mountpoint for memory not found WARN[0000] mountpoint for cpu not found FATA[0000] Error starting daemon: Devices cgroup isn't mounted I appreciate any help to resolve this issue.
You'd probably need to deploy Kubernetes with Heat and use autoscaling in your template. Example: http://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat and a template for Kubernetes to build on: https://github.com/metral/corekube/blob/master/corekube-openstack.yaml
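As a bare-bones sketch of what the scaling part of such a Heat template might look like (everything here -- resource names, the nested node template, the sizes -- is illustrative rather than taken from the linked examples):

heat_template_version: 2014-10-16
resources:
  kube_minions:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        # hypothetical nested template that boots a CoreOS node and
        # registers it with the Kubernetes master via cloud-config
        type: kube-minion.yaml
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: kube_minions }
      scaling_adjustment: 1
      cooldown: 60

You would pair the policy with a Ceilometer alarm (OS::Ceilometer::Alarm) on CPU or another metric to trigger the scale-up, mirroring the simple autoscaling article above.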
###Instruction: Is it possible to enable autoscaling of nodes (minions) in Kubernetes running on CoreOS in OpenStack? I only read about AWS and GCE. ### Response: You'd probably need to deploy Kubernetes with Heat and use autoscaling in your template. Example: http://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat and a template for Kubernetes to build on: https://github.com/metral/corekube/blob/master/corekube-openstack.yaml
Is it possible to enable autoscaling of nodes (minions) in Kubernetes running on CoreOS in OpenStack? I only read about AWS and GCE.
Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? All hosts (that are currently running a functional kube-proxy process) are capable of receiving and handling incoming requests for the externalized service. The requests will land on an arbitrary node VM in your cluster, match an iptables rule and be forwarded (by the kube-proxy process) to a pod whose labels match the service's selector. So the case where a healthchecker would prevent requests from being dropped is if you had a node VM that was running in a broken state. The VM would still have the target tag matching the forwarding rule but wouldn't be able to handle the incoming packets. In testing I've noticed that even though each VM Instance has an external IP address I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP. This is working as intended. Each service can use any port that it desires, meaning that multiple services can use ports 80 and 443. If a packet arrives on the host IP on port 80, the host has no way to know which of the (possibly many) services using port 80 the packet should be forwarded to. The iptables rules for services handle packets that are destined to the virtual internal cluster service IP and the external service IP, but not the host IP. This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually. If you want to set up a healthcheck to verify that a node is working properly, you can healthcheck the kubelet process that is running on port 10250 by installing a firewall rule:

$ gcloud compute firewall-rules create kubelet-healthchecks \
    --source-ranges 130.211.0.0/22 \
    --target-tags $TAG \
    --allow tcp:10250

(check out the Container Engine HTTP Load Balancer documentation to help find what you should be using for $TAG). It would be better to health check the kube-proxy process directly, but it only binds to localhost, whereas the kubelet process binds to all interfaces so it is reachable by the health checkers, and it should serve as a good indicator that the node is healthy enough to serve requests to your service.
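To then turn that into an actual health check on the load balancer, something like the following should work. One caveat that is my assumption rather than part of the answer above: this probes the kubelet's /healthz endpoint, and if port 10250 serves HTTPS in your cluster, point the check (and the firewall rule) at the read-only HTTP port 10255 instead:

$ gcloud compute http-health-checks create kubelet-health \
    --port 10250 \
    --request-path /healthz

$ gcloud compute target-pools add-health-checks <your-target-pool> \
    --http-health-check kubelet-health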
###Instruction: Question: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? "This target pool has no health check, so traffic will be sent to all instances regardless of their status." I have a service (NGINX reverse proxy) that targets specific pods and makes TCP: 80, 443 available. In my example only 1 NGINX pod is running within the instance pool. The Service type is "LoadBalancer". Using Google Container Engine this creates a new LoadBalancer (LB) that specifies target pools, specific VM Instances. Then an ephemeral external IP address for the LB and an associated Firewall rule that allows incoming traffic is created. My issue is that the Kubernetes auto-generated firewall rule description is "KubernetesAutoGenerated_OnlyAllowTrafficForDestinationIP_1.1.1.1" (IP is the LB external IP). In testing I've noticed that even though each VM Instance has an external IP address I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP. This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually. I have proper firewall rules so that any IP address may contact TCP 443, 80 on any instance within my pool, so that's not the issue. Can someone explain this to me because it makes me think that the LB is passing HTTP requests to both instances despite only one of those instances having the NGINX pod running on it. ### Response: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? All hosts (that are currently running a functional kube-proxy process) are capable of receiving and handling incoming requests for the externalized service. The requests will land on an arbitrary node VM in your cluster, match an iptables rule and be forwarded (by the kube-proxy process) to a pod whose labels match the service's selector. So the case where a healthchecker would prevent requests from being dropped is if you had a node VM that was running in a broken state. The VM would still have the target tag matching the forwarding rule but wouldn't be able to handle the incoming packets. In testing I've noticed that even though each VM Instance has an external IP address I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP. This is working as intended. Each service can use any port that it desires, meaning that multiple services can use ports 80 and 443. If a packet arrives on the host IP on port 80, the host has no way to know which of the (possibly many) services using port 80 the packet should be forwarded to. The iptables rules for services handle packets that are destined to the virtual internal cluster service IP and the external service IP, but not the host IP. This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually.
If you want to set up a healthcheck to verify that a node is working properly, you can healthcheck the kubelet process that is running on port 10250 by installing a firewall rule:

$ gcloud compute firewall-rules create kubelet-healthchecks \
    --source-ranges 130.211.0.0/22 \
    --target-tags $TAG \
    --allow tcp:10250

(check out the Container Engine HTTP Load Balancer documentation to help find what you should be using for $TAG). It would be better to health check the kube-proxy process directly, but it only binds to localhost, whereas the kubelet process binds to all interfaces so it is reachable by the health checkers, and it should serve as a good indicator that the node is healthy enough to serve requests to your service.
Question: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? "This target pool has no health check, so traffic will be sent to all instances regardless of their status." I have a service (NGINX reverse proxy) that targets specific pods and makes TCP: 80, 443 available. In my example only 1 NGINX pod is running within the instance pool. The Service type is "LoadBalancer". Using Google Container Engine this creates a new LoadBalancer (LB) that specifies target pools, specific VM Instances. Then an ephemeral external IP address for the LB and an associated Firewall rule that allows incoming traffic is created. My issue is that the Kubernetes auto-generated firewall rule description is "KubernetesAutoGenerated_OnlyAllowTrafficForDestinationIP_1.1.1.1" (IP is the LB external IP). In testing I've noticed that even though each VM Instance has an external IP address I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP. This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually. I have proper firewall rules so that any IP address may contact TCP 443, 80 on any instance within my pool, so that's not the issue. Can someone explain this to me because it makes me think that the LB is passing HTTP requests to both instances despite only one of those instances having the NGINX pod running on it.
Seems like a TCP problem. Try to isolate the problem by checking whether TCP/8080 is open:

telnet 127.0.0.1 8080

If you get a 'connection refused', you should probably look at the firewall/security settings of your machine.
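A couple of equivalent checks in case telnet isn't installed (nothing here is specific to your setup):

curl -v http://127.0.0.1:8080/api   # should return API versions if the apiserver is up
sudo ss -lntp | grep 8080           # shows whether anything is listening on 8080 at all

If nothing is listening at all, the kube-apiserver process itself is down and needs to be restarted; the kubectl error would then be expected.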
###Instruction: I installed Kubernetes on Linux using the steps here. Everything worked fine until I exited the terminal and opened a new terminal session. I got a permission denied error, and after restarting my machine I get the following error: > kubectl get pod error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused I am just getting started with Kubernetes; any help would be appreciated. ### Response: Seems like a TCP problem. Try to isolate the problem by checking whether TCP/8080 is open:

telnet 127.0.0.1 8080

If you get a 'connection refused', you should probably look at the firewall/security settings of your machine.
I installed Kubernetes on Linux using the steps here. Everything worked fine until I exited the terminal and opened a new terminal session. I got a permission denied error, and after restarting my machine I get the following error: > kubectl get pod error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused I am just getting started with Kubernetes; any help would be appreciated.
If you're running at least version 1.1.0 of Kubernetes (you most likely are), then if the logs you write are JSON formatted, they'll show up as structured logs in the Cloud Logging console. Then certain JSON keys are interpreted specially when imported into Cloud Logging, for example 'severity' will be used to set the log level in the console, or 'timestamp' can be used to set the time.
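As a concrete sketch (the record layout is my own illustration -- the only keys vouched for above are severity and timestamp), a Python service could emit one JSON object per line like this:

import json
import sys
from datetime import datetime, timezone

def log(severity, message, **fields):
    # One JSON object per line; fluentd forwards it to Cloud Logging,
    # which maps 'severity' to the log level and 'timestamp' to the entry time.
    record = {"severity": severity,
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "message": message}
    record.update(fields)
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()

log("ERROR", "upstream timed out", attempt=3)

Because each record is a single line of JSON, a multi-line payload (e.g. a traceback embedded in the message field) stays inside one log entry, which also addresses the multi-line problem.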
###Instruction: I have a python service running in a kubernetes container and writing logs to stdout. I can see the logs in the Cloud Logging Console, but they are not structured, meaning: 1. I can't filter log levels 2. A log record with multiple lines is interpreted as multiple log records 3. Dates are not parsed etc. How can I address this problem? Can I configure the fluentd daemon somehow? Or should I write in a specific format? Thanks ### Response: If you're running at least version 1.1.0 of Kubernetes (you most likely are), then if the logs you write are JSON formatted, they'll show up as structured logs in the Cloud Logging console. Then certain JSON keys are interpreted specially when imported into Cloud Logging, for example 'severity' will be used to set the log level in the console, or 'timestamp' can be used to set the time.
I have a python service running in a kubernetes container and writing logs to stdout. I can see the logs in the Cloud Logging Console, but they are not structured, meaning: 1. I can't filter log levels 2. A log record with multiple lines is interpreted as multiple log records 3. Dates are not parsed etc. How can I address this problem? Can I configure the fluentd daemon somehow? Or should I write in a specific format? Thanks
Here is the solution we came up with. When we are running against single node Kubernetes we realized by trial and error that when you expose a service the external IP does not come back via IngressIP; rather, it comes back via the clusterIP, which as mentioned above is publicly viewable. So, we just modified our code to work with that. We use the clusterIP in the single node case. Here is the code we use to establish a watch on the service to figure out when k8s has allocated our externally visible IP: First we use the fabric8 API to create the service configuration: case "Service" => val serviceConf = mapper.readValue(f, classOf[Service]) val service = kube.services().inNamespace(namespaceId).create(serviceConf) watchService(service) The 'watchService' method is defined below: private def watchService(service: Service) = { val namespace = service.getMetadata.getNamespace val name = service.getMetadata.getName logger.debug("start -> watching service -> namespace: " + namespace + " name: " + name) val kube = createClient() try { @volatile var complete = false val socket = kube.services().inNamespace(namespace).withName(name).watch(new Watcher[Service]() { def eventReceived(action: Action, resource: Service) { logger.info(action + ":" + resource) action match { case Action.MODIFIED => if (resource.getMetadata.getName == name) { complete = isServiceComplete(resource) } // case Action.DELETED => // complete = true case _ => } } }) while (!complete) { Thread.sleep(5000) complete = isServiceComplete(kube.services().inNamespace(namespace).withName(name).get) } logger.info("Closing socket connection") socket.close() } finally { logger.info("Closing client connection") kube.close() } logger.debug("complete -> watching services , namespace: " + namespace + " name: " + name) } The key hack we introduced was in the method 'isServiceComplete' .. when using single node k8s the value of 'isUsingMock' is true. so that makes us use clusterIP to determine if service configuration has completed or not. private def isServiceComplete(service: Service) = { !service.getStatus.getLoadBalancer.getIngress.isEmpty || mockServiceComplete(service) } def mockServiceComplete(service: Service): Boolean = { val clusterIP = service.getSpec.getClusterIP logger.trace(s"mockServiceComplete: $isUsingMock / $clusterIP / $KUBE_SERVER" ) isUsingMock && ! clusterIP.isEmpty } Sorry if there is not a lot of extra context here. Eventually our project should be open source and we can post a complete solution. -chris
###Instruction: I am trying to set up a single node kubernetes cluster for demo and testing purposes, and I want it to behave like a 'full blown' k8s cluster (like google container engine). My client has their own k8s installation, which for this discussion we can assume acts pretty much like google container engine's k8s installation. Getting the Ingress IP on Full Blown K8s I am creating a wordpress pod and exposing it as a service, as described in this tutorial: https://cloud.google.com/container-engine/docs/tutorials/hello-wordpress If you want to replicate the issue, you can just copy-paste the commands below, which I lifted from the tutorial: (This assumes you have a project called 'stellar-access-117903'... if not, please set it to the name of your Google Container Engine project.) # set up the cluster (this will take a while to provision) # gcloud config set project stellar-access-117903 gcloud config set compute/zone us-central1-b gcloud container clusters create hello-world \ --num-nodes 1 \ --machine-type g1-small # Create the pod, and expose it as a service # kubectl run wordpress --image=tutum/wordpress --port=80 kubectl expose rc wordpress --type=LoadBalancer # Describe the service kubectl describe services wordpress The output of the describe command contains a line 'LoadBalancer Ingress: {some-ip-address}' which is exactly what I'd expect. Now, when I do the same thing with the single node cluster setup I don't get that line. I am able to hit the wordpress service at the IP that appears in the output of the 'describe service' command.. But in 'single node' mode, the IP that is printed out is the >cluster IP< of the service, which typically (as I understand it) is not publicly accessible. For some reason it is publicly accessible in single node mode. We can replicate this with the following steps. 
NOT Getting the Ingress IP on Single Node K8s First setup single node k8s, as described in this tutorial: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md For easy reproducibility, I have included all the commands below, so you can just copy/paste: K8S_VERSION=1.1.1 sudo docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data sudo docker run \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/dev:/dev \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --pid=host \ --privileged=true \ -d \ gcr.io/google_containers/hyperkube:v${K8S_VERSION} \ /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v${K8S_VERSION} /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 # set your context to use the locally running k8s API server # kubectl config set-cluster dev --server=http://localhost:8080 kubectl config set-context dev --cluster=dev --namespace=$NS kubectl config use-context dev Now, execute the very same commands that you performed against Google Container Engine's k8s # Create the pod, and expose it as a service # kubectl run wordpress --image=tutum/wordpress --port=80 kubectl expose rc wordpress --type=LoadBalancer # Describe the service kubectl describe services wordpress The output of the last command (which you will see has no 'Ingress' information) is: Name: wordpress Namespace: default Labels: run=wordpress Selector: run=wordpress Type: LoadBalancer IP: 10.0.0.61 Port: <unnamed> 80/TCP NodePort: <unnamed> 31795/TCP Endpoints: 172.17.0.30:80 Session Affinity: None No events. In google container engine's k8s, I see events like ' Creating load balancer ', ' Load balancer created '. But nothing like that happens in the single node instance. I am wondering ... is there some configuration I need to do to get them to work identically ? It is very important that they work identically... differing only in their scalability, because we want to run tests against the single node version, and it will be very confusing if it behaves differently. Thanks in advance for your help -chris ### Response: Here is the solution we came up with. When we are running against single node Kubernetes we realized by trial and error that when you expose a service the external IP does not come back via IngressIP; rather, it comes back via the clusterIP, which as mentioned above is publicly viewable. So, we just modified our code to work with that. We use the clusterIP in the single node case. 
Here is the code we use to establish a watch on the service to figure out when k8s has allocated our externally visible IP: First we use the fabric8 API to create the service configuration: case "Service" => val serviceConf = mapper.readValue(f, classOf[Service]) val service = kube.services().inNamespace(namespaceId).create(serviceConf) watchService(service) The 'watchService' method is defined below: private def watchService(service: Service) = { val namespace = service.getMetadata.getNamespace val name = service.getMetadata.getName logger.debug("start -> watching service -> namespace: " + namespace + " name: " + name) val kube = createClient() try { @volatile var complete = false val socket = kube.services().inNamespace(namespace).withName(name).watch(new Watcher[Service]() { def eventReceived(action: Action, resource: Service) { logger.info(action + ":" + resource) action match { case Action.MODIFIED => if (resource.getMetadata.getName == name) { complete = isServiceComplete(resource) } // case Action.DELETED => // complete = true case _ => } } }) while (!complete) { Thread.sleep(5000) complete = isServiceComplete(kube.services().inNamespace(namespace).withName(name).get) } logger.info("Closing socket connection") socket.close() } finally { logger.info("Closing client connection") kube.close() } logger.debug("complete -> watching services , namespace: " + namespace + " name: " + name) } The key hack we introduced was in the method 'isServiceComplete' .. when using single node k8s the value of 'isUsingMock' is true. so that makes us use clusterIP to determine if service configuration has completed or not. private def isServiceComplete(service: Service) = { !service.getStatus.getLoadBalancer.getIngress.isEmpty || mockServiceComplete(service) } def mockServiceComplete(service: Service): Boolean = { val clusterIP = service.getSpec.getClusterIP logger.trace(s"mockServiceComplete: $isUsingMock / $clusterIP / $KUBE_SERVER" ) isUsingMock && ! clusterIP.isEmpty } Sorry if there is not a lot of extra context here. Eventually our project should be open source and we can post a complete solution. -chris
I am trying to set up a single node kubernetes cluster for demo and testing purposes, and I want it to behave like a 'full blown' k8s cluster (like google container engine). My client has their own k8s installation, which for this discussion we can assume acts pretty much like google container engine's k8s installation. Getting the Ingress IP on Full Blown K8s I am creating a wordpress pod and exposing it as a service, as described in this tutorial: https://cloud.google.com/container-engine/docs/tutorials/hello-wordpress If you want to replicate the issue, you can just copy-paste the commands below, which I lifted from the tutorial: (This assumes you have a project called 'stellar-access-117903'... if not, please set it to the name of your Google Container Engine project.) # set up the cluster (this will take a while to provision) # gcloud config set project stellar-access-117903 gcloud config set compute/zone us-central1-b gcloud container clusters create hello-world \ --num-nodes 1 \ --machine-type g1-small # Create the pod, and expose it as a service # kubectl run wordpress --image=tutum/wordpress --port=80 kubectl expose rc wordpress --type=LoadBalancer # Describe the service kubectl describe services wordpress The output of the describe command contains a line 'LoadBalancer Ingress: {some-ip-address}' which is exactly what I'd expect. Now, when I do the same thing with the single node cluster setup I don't get that line. I am able to hit the wordpress service at the IP that appears in the output of the 'describe service' command.. But in 'single node' mode, the IP that is printed out is the >cluster IP< of the service, which typically (as I understand it) is not publicly accessible. For some reason it is publicly accessible in single node mode. We can replicate this with the following steps. 
NOT Getting the Ingress IP on Single Node K8s First setup single node k8s, as described in this tutorial: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md For easy reproducibility, I have included all the commands below, so you can just copy/paste: K8S_VERSION=1.1.1 sudo docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data sudo docker run \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/dev:/dev \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --pid=host \ --privileged=true \ -d \ gcr.io/google_containers/hyperkube:v${K8S_VERSION} \ /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v${K8S_VERSION} /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 # set your context to use the locally running k8s API server # kubectl config set-cluster dev --server=http://localhost:8080 kubectl config set-context dev --cluster=dev --namespace=$NS kubectl config use-context dev Now, execute the very same commands that you performed against Google Container Engine's k8s # Create the pod, and expose it as a service # kubectl run wordpress --image=tutum/wordpress --port=80 kubectl expose rc wordpress --type=LoadBalancer # Describe the service kubectl describe services wordpress The output of the last command (which you will see has no 'Ingress' information) is: Name: wordpress Namespace: default Labels: run=wordpress Selector: run=wordpress Type: LoadBalancer IP: 10.0.0.61 Port: <unnamed> 80/TCP NodePort: <unnamed> 31795/TCP Endpoints: 172.17.0.30:80 Session Affinity: None No events. In google container engine's k8s, I see events like ' Creating load balancer ', ' Load balancer created '. But nothing like that happens in the single node instance. I am wondering ... is there some configuration I need to do to get them to work identically ? It is very important that they work identically... differing only in their scalability, because we want to run tests against the single node version, and it will be very confusing if it behaves differently. Thanks in advance for your help -chris
We've seen the GCE PD attach calls take upwards of 10 minutes to complete, so this is more or less expected. For example see https://github.com/kubernetes/kubernetes/issues/15382#issuecomment-153268655, where PD tests were timing out before GCE PD attach/detach calls could complete. We're working with the GCE team to improve performance and reduce latency. If the pod never gets out of pending state, then you might've hit a bug. In that case, grab your kubelet log and open an issue at https://github.com/kubernetes/kubernetes/
###Instruction: I start a kubernetes replication controller. When the corresponding container in the single pod in this replication controller has a gcePersistentDisk specified the pods will start very slow. After 5 minutes the pod is still in the Pending state. kubectl get po will tell me: NAME READY STATUS RESTARTS AGE app-1-a4ni7 0/1 Pending 0 5m Without the gcePersistentDisk the pod is Running in max 30 seconds. (I am using a 10 GB Google Cloud Storage disk and I know that these disks have lower performance for lower capacities, but I am not sure this is the issue.) What could be the cause of this? ### Response: We've seen the GCE PD attach calls take upwards of 10 minutes to complete, so this is more or less expected. For example see https://github.com/kubernetes/kubernetes/issues/15382#issuecomment-153268655, where PD tests were timing out before GCE PD attach/detach calls could complete. We're working with the GCE team to improve performance and reduce latency. If the pod never gets out of pending state, then you might've hit a bug. In that case, grab your kubelet log and open an issue at https://github.com/kubernetes/kubernetes/
I start a kubernetes replication controller. When the corresponding container in the single pod in this replication controller has a gcePersistentDisk specified the pods will start very slow. After 5 minutes the pod is still in the Pending state. kubectl get po will tell me: NAME READY STATUS RESTARTS AGE app-1-a4ni7 0/1 Pending 0 5m Without the gcePersistentDisk the pod is Running in max 30 seconds. (I am using a 10 GB Google Cloud Storage disk and I know that these disks have lower performance for lower capacities, but I am not sure this is the issue.) What could be the cause of this?
In Google Container Engine the master is managed for you and kept running by Google. According to the SLA for Google Container Engine the master should be available at least 99.5% of the time.
###Instruction: In GKE every cluster has a single master endpoint, which is managed by Google Container Engine. Is this master node highly available? I deploy a beautiful cluster of redundant nodes with kubernetes but what happens if the master node goes down? How can I test this situation? ### Response: In Google Container Engine the master is managed for you and kept running by Google. According to the SLA for Google Container Engine the master should be available at least 99.5% of the time.
In GKE every cluster has a single master endpoint, which is managed by Google Container Engine. Is this master node highly available? I deploy a beautiful cluster of redundant nodes with kubernetes but what happens if the master node goes down? How can I test this situation?
There isn't currently a way to inject packets from an App Engine application into the private GCP network where your Kubernetes cluster is running (e.g. coming from a 10.0.0.0/8 address). So the only way to connect your application to your Container Engine service is to use the external IPs (as you are currently doing).
###Instruction: I have a project with an App Engine part and a Google Container Engine cluster. The App Engine app needs to make http calls to a Service deployed to Container Engine. I know that I can assign an external IP to the Service, hardcode it into my App Engine app, and then make UrlFetch requests against that IP. That works. But I don't want to use the public network for such communication. I'm wondering whether I can also get access the way it's done between Pods inside a Kubernetes cluster: by specifying a service host name that resolves to an internal IP in the 10.x.x.x range. Is it possible to do the same from App Engine? Is there a special naming scheme resolvable to GKE services? ### Response: There isn't currently a way to inject packets from an App Engine application into the private GCP network where your Kubernetes cluster is running (e.g. coming from a 10.0.0.0/8 address). So the only way to connect your application to your Container Engine service is to use the external IPs (as you are currently doing).
I have a project with an App Engine part and a Google Container Engine cluster. The App Engine app needs to make http calls to a Service deployed to Container Engine. I know that I can assign an external IP to the Service, hardcode it into my App Engine app, and then make UrlFetch requests against that IP. That works. But I don't want to use the public network for such communication. I'm wondering whether I can also get access the way it's done between Pods inside a Kubernetes cluster: by specifying a service host name that resolves to an internal IP in the 10.x.x.x range. Is it possible to do the same from App Engine? Is there a special naming scheme resolvable to GKE services?
Yes, this is the expected behavior. More than one health checker is used to check the service, and each health checker independently obeys the interval specified in your configuration.
###Instruction: I have a health check with a 1-second check interval (http://s.drollette.com/0B2A3Z1w2X1G). It was created by the GLBC Ingress controller in kubernetes. But looking at the logs it appears that it is generating 3 requests per second (http://s.drollette.com/2U432C2f1d2f). Is this expected behavior from a Google Compute Health Check? Nothing else is configured to be hitting that route. ### Response: Yes, this is the expected behavior. More than one health checker is used to check the service, and each health checker independently obeys the interval specified in your configuration.
I have a health check with a 1-second check interval (http://s.drollette.com/0B2A3Z1w2X1G). It was created by the GLBC Ingress controller in kubernetes. But looking at the logs it appears that it is generating 3 requests per second (http://s.drollette.com/2U432C2f1d2f). Is this expected behavior from a Google Compute Health Check? Nothing else is configured to be hitting that route.
First, I would upgrade to a newer Docker version. But I think the problem has something to do with your kubelet configuration. The parameter --hostname-override allows you to override the hostname. I'm not 100% sure, but I think your node has to be accessible via that hostname from the kube-apiserver. If your apiserver is on another node, localhost won't work. See: http://kubernetes.io/v1.1/docs/admin/kubelet.html for more information
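Concretely, the kubelet line from the multinode guide would change roughly like this -- the hostname below is a placeholder for whatever DNS name or IP the apiserver can actually reach the node on:

/hyperkube kubelet \
    --containerized \
    --hostname-override="<node-routable-hostname-or-ip>" \
    --address="0.0.0.0" \
    --api-servers=http://<master-ip>:8080 \
    --config=/etc/kubernetes/manifests

The node then registers under a name the apiserver can resolve and route to, instead of localhost.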
###Instruction: tried to config one master node following the guide (http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#master-node) script master.sh ran successfully, but the api server failed to boot up. Software Version: K8S_VERSION=1.1.3 ETCD_VERSION=2.2.1 FLANNEL_VERSION=0.5.5 OS Version: VERSION="2015.09" ID="amzn" ID_LIKE="rhel fedora" VERSION_ID="2015.09" PRETTY_NAME="Amazon Linux AMI 2015.09" ANSI_COLOR="0;33" CPE_NAME="cpe:/o:amazon:linux:2015.09:ga" HOME_URL="http://aws.amazon.com/amazon-linux-ami/" Docker: 1.7.1 Kernel Version: Linux ip-172-0-11-22 4.1.10-17.31.amzn1.x86_64 #1 SMP Sat Oct 24 01:31:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux log message of kubelet: I0113 15:44:42.517777 7987 server.go:770] Started kubelet E0113 15:44:42.517812 7987 kubelet.go:756] Image garbage collection failed: unable to find data for container / E0113 15:44:42.518437 7987 event.go:197] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping) I0113 15:44:42.518460 7987 server.go:89] Starting to listen read-only on 0.0.0.0:10255 I0113 15:44:42.518885 7987 server.go:72] Starting to listen on 0.0.0.0:10250 I0113 15:44:42.524222 7987 kubelet.go:777] Running in container "/kubelet" I0113 15:44:42.696510 7987 factory.go:239] Registering Docker factory I0113 15:44:42.698516 7987 factory.go:93] Registering Raw factory I0113 15:44:42.698837 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:42.698862 7987 kubelet.go:2300] Recording NodeReady event message for node localhost I0113 15:44:42.698871 7987 kubelet.go:869] Attempting to register node localhost I0113 15:44:42.699523 7987 kubelet.go:872] Unable to register localhost with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.829361 7987 manager.go:1006] Started watching for new ooms in manager I0113 15:44:42.830001 7987 oomparser.go:183] oomparser using systemd I0113 15:44:42.842667 7987 manager.go:250] Starting recovery of all containers I0113 15:44:42.868829 7987 manager.go:255] Recovery completed I0113 15:44:42.880876 7987 container_manager_linux.go:215] Configure resource-only container /docker-daemon with memory limit: 2903034265 I0113 15:44:42.880910 7987 manager.go:104] Starting to sync pod status with apiserver I0113 15:44:42.880963 7987 kubelet.go:1960] Starting kubelet main sync loop. 
I0113 15:44:42.881004 7987 kubelet.go:2012] SyncLoop (ADD): "k8s-master-localhost_default" E0113 15:44:42.881457 7987 kubelet.go:1915] error getting node: node 'localhost' is not in cache E0113 15:44:42.884752 7987 kubelet.go:1356] Failed creating a mirror pod "k8s-master-localhost_default": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused E0113 15:44:42.884780 7987 kubelet.go:1361] Mirror pod not available I0113 15:44:42.884839 7987 manager.go:1707] Need to restart pod infra container for "k8s-master-localhost_default" because it is not found W0113 15:44:42.885688 7987 manager.go:108] Failed to updated pod status: error updating status for pod "k8s-master-localhost_default": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-localhost: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.900665 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:42.900693 7987 kubelet.go:2300] Recording NodeReady event message for node localhost I0113 15:44:42.900751 7987 kubelet.go:869] Attempting to register node localhost I0113 15:44:42.901194 7987 kubelet.go:872] Unable to register localhost with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.977270 7987 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider I0113 15:44:42.977458 7987 docker.go:159] Pulling image gcr.io/google_containers/pause:0.8.0 without credentials I0113 15:44:43.302487 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:43.302552 7987 kubelet.go:2300] Recording NodeReady event message for node localhost ### Response: First, I would upgrade to a newer Docker Version. But I think the Problem has something to do with your kubelet configuration. The parameter --hostname-override allows you to override the hostname. I'm not 100% sure, but I think your node has to be accessible via the hostname from the kube-api server. If you api-server is on another node, localhost won't work. See: http://kubernetes.io/v1.1/docs/admin/kubelet.html for more Information
tried to config one master node following the guide (http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#master-node) script master.sh ran successfully, but the api server failed to boot up. Software Version: K8S_VERSION=1.1.3 ETCD_VERSION=2.2.1 FLANNEL_VERSION=0.5.5 OS Version: VERSION="2015.09" ID="amzn" ID_LIKE="rhel fedora" VERSION_ID="2015.09" PRETTY_NAME="Amazon Linux AMI 2015.09" ANSI_COLOR="0;33" CPE_NAME="cpe:/o:amazon:linux:2015.09:ga" HOME_URL="http://aws.amazon.com/amazon-linux-ami/" Docker: 1.7.1 Kernel Version: Linux ip-172-0-11-22 4.1.10-17.31.amzn1.x86_64 #1 SMP Sat Oct 24 01:31:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux log message of kubelet: I0113 15:44:42.517777 7987 server.go:770] Started kubelet E0113 15:44:42.517812 7987 kubelet.go:756] Image garbage collection failed: unable to find data for container / E0113 15:44:42.518437 7987 event.go:197] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping) I0113 15:44:42.518460 7987 server.go:89] Starting to listen read-only on 0.0.0.0:10255 I0113 15:44:42.518885 7987 server.go:72] Starting to listen on 0.0.0.0:10250 I0113 15:44:42.524222 7987 kubelet.go:777] Running in container "/kubelet" I0113 15:44:42.696510 7987 factory.go:239] Registering Docker factory I0113 15:44:42.698516 7987 factory.go:93] Registering Raw factory I0113 15:44:42.698837 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:42.698862 7987 kubelet.go:2300] Recording NodeReady event message for node localhost I0113 15:44:42.698871 7987 kubelet.go:869] Attempting to register node localhost I0113 15:44:42.699523 7987 kubelet.go:872] Unable to register localhost with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.829361 7987 manager.go:1006] Started watching for new ooms in manager I0113 15:44:42.830001 7987 oomparser.go:183] oomparser using systemd I0113 15:44:42.842667 7987 manager.go:250] Starting recovery of all containers I0113 15:44:42.868829 7987 manager.go:255] Recovery completed I0113 15:44:42.880876 7987 container_manager_linux.go:215] Configure resource-only container /docker-daemon with memory limit: 2903034265 I0113 15:44:42.880910 7987 manager.go:104] Starting to sync pod status with apiserver I0113 15:44:42.880963 7987 kubelet.go:1960] Starting kubelet main sync loop. 
I0113 15:44:42.881004 7987 kubelet.go:2012] SyncLoop (ADD): "k8s-master-localhost_default" E0113 15:44:42.881457 7987 kubelet.go:1915] error getting node: node 'localhost' is not in cache E0113 15:44:42.884752 7987 kubelet.go:1356] Failed creating a mirror pod "k8s-master-localhost_default": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused E0113 15:44:42.884780 7987 kubelet.go:1361] Mirror pod not available I0113 15:44:42.884839 7987 manager.go:1707] Need to restart pod infra container for "k8s-master-localhost_default" because it is not found W0113 15:44:42.885688 7987 manager.go:108] Failed to updated pod status: error updating status for pod "k8s-master-localhost_default": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-localhost: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.900665 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:42.900693 7987 kubelet.go:2300] Recording NodeReady event message for node localhost I0113 15:44:42.900751 7987 kubelet.go:869] Attempting to register node localhost I0113 15:44:42.901194 7987 kubelet.go:872] Unable to register localhost with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.977270 7987 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider I0113 15:44:42.977458 7987 docker.go:159] Pulling image gcr.io/google_containers/pause:0.8.0 without credentials I0113 15:44:43.302487 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:43.302552 7987 kubelet.go:2300] Recording NodeReady event message for node localhost
fluentd-elasticsearch pods are static pods, which are created by placing pod manifest files (fluentd-es.yaml) in a directory watched by the Kubelet. The corresponding pod (a.k.a. the mirror pod) with the same name and namespace in the API server is created automatically for the purpose of introspection -- it reflects the status of the static pod. Kubernetes treats the static pod (the pod manifest file) in the directory as the source of truth; operations (deletion/update, etc.) on the mirror pod will not have any effect on the static pod. You are encouraged to move away from static pods and use DaemonSets, except for a few particular use cases (e.g., standalone Kubelets). The system add-on pods such as fluentd-elasticsearch will be converted to DaemonSets eventually.
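So to actually purge one of these pods, remove the manifest on the node itself rather than deleting the mirror pod. The path below is the common default for these add-ons, but check the directory your kubelet is started with (its --config / manifest directory flag):

ssh <node>
sudo rm /etc/kubernetes/manifests/fluentd-es.yaml

The kubelet notices the file is gone, stops the static pod, and the mirror pod disappears from the API server shortly afterwards.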
###Instruction: I recently had cause to restart a fluentd-elasticsearch pod for all my nodes. Out of the 7 nodes where the pods were deleted, only 1 of them was deleted and came back as "Running". Is there a way to completely purge a pod in k8s? ### Response: fluentd-elasticsearch pods are static pods, which are created by placing pod manifest files (fluentd-es.yaml) in a directory watched by the Kubelet. The corresponding pod (a.k.a. the mirror pod) with the same name and namespace in the API server is created automatically for the purpose of introspection -- it reflects the status of the static pod. Kubernetes treats the static pod (the pod manifest file) in the directory as the source of truth; operations (deletion/update, etc.) on the mirror pod will not have any effect on the static pod. You are encouraged to move away from static pods and use DaemonSets, except for a few particular use cases (e.g., standalone Kubelets). The system add-on pods such as fluentd-elasticsearch will be converted to DaemonSets eventually.
I recently had cause to restart a fluentd-elasticsearch pod for all my nodes. Out of the 7 nodes where the pods were deleted, only 1 of them was deleted and came back as "Running". Is there a way to completely purge a pod in k8s?
Containerising db for each app seems to be "the docker way" and provide better isolation and portability The docker way isn't a db per app but a service per container. MySQL is a service at soon as you don't run in the mysql container an another service (app/ssh/monitoring...) it's the way to go. So the decision between one db per app or one db for all is up to you. My personal choice is the third: Deploy a separate database container into the cluster and link this container to the various application containers in the cluster. I'm using kubernetes with a postgres container that is used as a DB server for all applications. I prefer this choice because it's easier as an OP point of view to backup/replicate/apply maintenance than having 30 differents db servers + 30*slaves + 30*external pool + 30*monitoring tools etc... Also in my case I have a better hw resources usage. But I conserve the possibility to move a database to another dedicated db-server container in case an application is using too much resource or if too many app are already using the DB.
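For illustration, the "one shared DB server" wiring in Kubernetes boils down to a single Service that all app pods resolve by name (the names here are made up):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - port: 5432
  selector:
    app: postgres

Each application container then connects to postgres:5432 (or uses the POSTGRES_SERVICE_HOST / POSTGRES_SERVICE_PORT environment variables injected into pods), with one database or schema per app inside the shared server.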
###Instruction: Most of the articles online regarding setting up Docker containers seem to be written around the idea of breaking an application into microservices and allocating them into various containers and deploying them into a cluster. I would like to find out the best way to handle databases (e.g. MySQL) for multiple unrelated applications, written for different clients, and deployed into the same cluster. Say I have 10 unrelated small applications (like WordPress), all requiring access to MySQL database. I could: Deploy the applications as containers into the cluster, containing just the application code, and setting up a dedicated MySQL server or a Google Cloud SQL instance and asking each of the application containers to connect to the database as 3rd party services. Deploy the applications as containers into the cluster. For each applications, also deploy a separate database container into the cluster and link the two. Deploy a separate database container into the cluster and link this container to the various application containers in the cluster. Which of these solutions is the best in terms of application architecture design and which of these is the best use of computer resources? I have the feeling that deploying multiple MySQL containers (one for each application) may be the best design but it might not be the most resource-efficient as we will have a bunch of MySQL containers running. ### Response: Containerising the db for each app seems to be "the docker way" and provides better isolation and portability. The docker way isn't a db per app but a service per container. MySQL is a service; as long as you don't run another service (app/ssh/monitoring...) in the mysql container, it's the way to go. So the decision between one db per app or one db for all is up to you. My personal choice is the third: Deploy a separate database container into the cluster and link this container to the various application containers in the cluster. I'm using kubernetes with a postgres container that is used as the DB server for all applications. I prefer this choice because, from an Ops point of view, it's easier to back up/replicate/apply maintenance than having 30 different db servers + 30*slaves + 30*external pools + 30*monitoring tools etc... Also, in my case, I get better hw resource usage. But I keep the option of moving a database to a dedicated db-server container in case an application uses too many resources or too many apps are already hitting the DB.
Most of the articles online regarding setting up Docker containers seem to be written around the idea of breaking an application into microservices and allocating them into various containers and deploying them into a cluster. I would like to find out the best way to handle databases (e.g. MySQL) for multiple unrelated applications, written for different clients, and deployed into the same cluster. Say I have 10 unrelated small applications (like WordPress), all requiring access to MySQL database. I could: Deploy the applications as containers into the cluster, containing just the application code, and setting up a dedicated MySQL server or a Google Cloud SQL instance and asking each of the application containers to connect to the database as 3rd party services. Deploy the applications as containers into the cluster. For each applications, also deploy a separate database container into the cluster and link the two. Deploy a separate database container into the cluster and link this container to the various application containers in the cluster. Which of these solutions is the best in terms of application architecture design and which of these is the best use of computer resources? I have the feeling that deploying multiple MySQL containers (one for each application) may be the best design but it might not be the most resource-efficient as we will have a bunch of MySQL containers running.
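If you go with the single shared database container, a sketch of the Kubernetes wiring (the names and port are illustrative assumptions): a Service gives every application a stable address for the one database pod.

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - port: 5432          # default postgres port
  selector:
    app: postgres       # matches the label on the single database pod

Each application container then reaches the database through the service name or the injected POSTGRES_SERVICE_HOST / POSTGRES_SERVICE_PORT environment variables, so nothing is hard-coded against a pod IP, and the database can later be moved to a dedicated server without touching the apps.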
You can view statistics for container(s) using the docker stats command. For example; docker stats containera containerb CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O containera 0.00% 24.15 MB / 1.041 GB 2.32% 1.8 MB / 79.37 kB 0 B / 81.92 kB containerb 0.00% 24.95 MB / 1.041 GB 2.40% 1.798 MB / 80.72 kB 0 B / 81.92 kB Or, see processes running in a container using docker top <container> docker top containera UID PID PPID C STIME TTY TIME CMD root 4558 2850 0 21:13 ? 00:00:00 sh -c npm install http-server -g && mkdir -p /public && echo "welcome to containera" > /public/index.html && http-server -a 0.0.0.0 -p 4200 root 4647 4558 0 21:13 ? 00:00:00 node /usr/local/bin/http-server -a 0.0.0.0 -p 4200 Limiting resources Docker compose (like docker itself) allows you to set limits on resources for a container, for example, limiting the maximum amount of memory used, cpu-shares, etc. Read this section in the docker-compose yaml reference, and the docker run reference on "Runtime constraints on resources"
###Instruction: I'm composing yaml file for scripts running in docker and orchestrated by kubernetes. Is there a way to evaluate the resource utilization for a specicific command or docker, or what's the best practice to set the limit of cpu and mem for pods? Edit Most of these scripts will run in a short time, so it's hard to get the resource info. I just wanna try to find a tool to get the maximum usage of cpu and mem, the tool works like time, to print out the execution time. ### Response: You can view statistics for container(s) using the docker stats command. For example; docker stats containera containerb CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O containera 0.00% 24.15 MB / 1.041 GB 2.32% 1.8 MB / 79.37 kB 0 B / 81.92 kB containerb 0.00% 24.95 MB / 1.041 GB 2.40% 1.798 MB / 80.72 kB 0 B / 81.92 kB Or, see processes running in a container using docker top <container> docker top containera UID PID PPID C STIME TTY TIME CMD root 4558 2850 0 21:13 ? 00:00:00 sh -c npm install http-server -g && mkdir -p /public && echo "welcome to containera" > /public/index.html && http-server -a 0.0.0.0 -p 4200 root 4647 4558 0 21:13 ? 00:00:00 node /usr/local/bin/http-server -a 0.0.0.0 -p 4200 Limiting resources Docker compose (like docker itself) allows you to set limits on resources for a container, for example, limiting the maximum amount of memory used, cpu-shares, etc. Read this section in the docker-compose yaml reference, and the docker run reference on "Runtime constraints on resources"
I'm composing yaml file for scripts running in docker and orchestrated by kubernetes. Is there a way to evaluate the resource utilization for a specicific command or docker, or what's the best practice to set the limit of cpu and mem for pods? Edit Most of these scripts will run in a short time, so it's hard to get the resource info. I just wanna try to find a tool to get the maximum usage of cpu and mem, the tool works like time, to print out the execution time.
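For short-lived scripts specifically, GNU time works much like the time builtin but can also report peak memory: /usr/bin/time -v ./myscript.sh (the script name is a placeholder) prints "Maximum resident set size" along with the elapsed time. Once you have numbers, a sketch of encoding them as limits in a Kubernetes pod spec (image, command, and values are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: script-runner
spec:
  restartPolicy: Never
  containers:
  - name: script
    image: busybox                              # assumed image
    command: ["sh", "-c", "./myscript.sh"]      # hypothetical script
    resources:
      limits:
        cpu: "500m"        # half a core
        memory: "256Mi"    # measured peak RSS plus some headroom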
I have seen this error when the liveness probe is timing out. Try lengthening the timeoutSeconds on your livenessProbe and see if the problem goes away.
###Instruction: I've introduced a bunch of readiness and liveness checks in our Kubernetes pods. And apart from currently being fairly CPU-heavy. They appear to work as expected. But then we started to run some load-testing on our solution. And almost immediately pods gets killed and event like this show up: Liveness probe errored: read tcp 10.244.27.123:8080: use of closed network connection There appear to have been an issue with keep-alive and the http probe (issue 15643). But that also appears to have been fixed by disabling keep-alive in the probe in Kubernetes 1.1.1 (which is what we are running) So does anyone have any idea what could be going on? ### Response: I have seen this error when the liveness probe is timing out. Try lengthening the timeoutSeconds on your livenessProbe and see if the problem goes away.
I've introduced a bunch of readiness and liveness checks in our Kubernetes pods. And apart from currently being fairly CPU-heavy. They appear to work as expected. But then we started to run some load-testing on our solution. And almost immediately pods gets killed and event like this show up: Liveness probe errored: read tcp 10.244.27.123:8080: use of closed network connection There appear to have been an issue with keep-alive and the http probe (issue 15643). But that also appears to have been fixed by disabling keep-alive in the probe in Kubernetes 1.1.1 (which is what we are running) So does anyone have any idea what could be going on?
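A sketch of what the suggested change looks like in the pod spec (the path, port, and values here are illustrative assumptions, not your actual probe):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 5     # the default of 1s is easy to exceed on a node under load-test pressure

If probes still fail under load, serving the health check from a lighter code path than the main request handlers also helps, since the probe then competes less with real traffic.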
The current implementation does not allow your PersistentVolumeClaim to target specific PersistentVolumes. Claims bind to volumes based on their capabilities (access modes) and capacity. In the works is the next iteration of PersistentVolumes, which includes a PersistentVolumeSelector on the claim. This would work exactly like a NodeSelector on a Pod works. The volume would have to match the label selector in order to bind. This is the targeting you are looking for. Please see https://github.com/kubernetes/kubernetes/pull/17056 for the proposal containing PersistentVolumeSelector.
###Instruction: I have multiple volumes and one claim. How can I tell the claim to which volume to bind to? How does a PersistentVolumeClaim know to which volume to bind? Can I controls this using some other parameters or metadata? I have the following PersistentVolumeClaim: { "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "default-drive-claim" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } } } } { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "default-drive-disk", "labels": { "name": "default-drive-disk" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "gcePersistentDisk": { "pdName": "a1-drive", "fsType": "ext4" } } } If I create the claim and the volume using: kubectl create -f pvc.json -f pv.json I get the following listing of the volumes and claims: NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE default-drive-disk name=default-drive-disk 10Gi RWO Bound default/default-drive-claim 2s NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE default-drive-claim <none> Bound default-drive-disk 10Gi RWO 2s How does the claim know to which volume to bind? ### Response: The current implementation does not allow your PersistentVolumeClaim to target specific PersistentVolumes. Claims bind to volumes based on its capabilities (access modes) and capacity. In the works is the next iteration of PersistentVolumes, which includes a PersistentVolumeSelector on the claim. This would work exactly like a NodeSelector on Pod works. The volume would have to match the label selector in order to bind. This is the targeting you are looking for. Please see https://github.com/kubernetes/kubernetes/pull/17056 for the proposal containing PersistentVolumeSelector.
I have multiple volumes and one claim. How can I tell the claim to which volume to bind to? How does a PersistentVolumeClaim know to which volume to bind? Can I controls this using some other parameters or metadata? I have the following PersistentVolumeClaim: { "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "default-drive-claim" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } } } } { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "default-drive-disk", "labels": { "name": "default-drive-disk" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "gcePersistentDisk": { "pdName": "a1-drive", "fsType": "ext4" } } } If I create the claim and the volume using: kubectl create -f pvc.json -f pv.json I get the following listing of the volumes and claims: NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE default-drive-disk name=default-drive-disk 10Gi RWO Bound default/default-drive-claim 2s NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE default-drive-claim <none> Bound default-drive-disk 10Gi RWO 2s How does the claim know to which volume to bind?
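For illustration, a sketch of how the proposed selector from that PR would look on the claim (hypothetical syntax from the proposal; current releases do not accept this field):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-drive-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:                       # proposed field
    matchLabels:
      name: default-drive-disk    # would only bind volumes carrying this label

Until then, the practical workaround is to make each volume's capacity and access modes unambiguous, so only the intended volume can satisfy a given claim.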
Making the request from requests and from the command line sends it to different URLs. The requests call in your Python code really tries to use the URL including the quotes, while curl on the command line strips the quotes (in other cases it escapes them). I am unable to test your real URL with real requests, but I guess the following might work: url = 'http://10.69.117.136:8080/api/v1/namespaces/default/pods/tas-core/' json = requests.get(url) print json.content
###Instruction: i met a problem by using requests.get() on kubernetes api url = 'http://10.69.117.136:8080/api/v1/namespaces/"default"/pods/tas-core/' json = requests.get(url) print json.content error code 404 will be returned as: {"kind": "Status","apiVersion": "v1","metadata": {},"status": "Failure","message": "pods \"tas-core\" not found","reason": "NotFound","details": {"name": "tas-core","kind": "pods"},"code": 404} but if i use GET/curl, the response could be returned successfully: curl http://10.69.117.136:8080/api/v1/namespaces/"default"/pods/tas-core/ {"kind": "Pod","apiVersion": "v1","metadata": {"name": "tas-core","namespace":"default","selfLink": "/api/v1/namespaces/default/pods/tas-core","uid": "a264ce8e-a956-11e5-8293-0050569761f2","resourceVersion": "158546","creationTimestamp": "2015-12-23T09:22:06Z","labels": {"app": "tas-core"},"annotations": {"ctrl": "dynamic","oam": "dynamic"}},"spec": {"volumes":[ ... further more shorter url works fine url = 'http://10.69.117.136:8080/api/v1/namespaces/' json = requests.get(url) print json.content {"kind":"NamespaceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/","resourceVersion":"220452"},"items":[{"metadata":{"name":"default","selfLink":"/api/v1/namespaces/default","uid":"74f89440-a94a-11e5-9afd-0050569761f2","resourceVersion":"6","creationTimestamp":"2015-12-23T07:54:55Z"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}]} where did i wrong? ### Response: Making the request from requests and from the command line sends it to different URLs. The requests call in your Python code really tries to use the URL including the quotes, while curl on the command line strips the quotes (in other cases it escapes them). I am unable to test your real URL with real requests, but I guess the following might work: url = 'http://10.69.117.136:8080/api/v1/namespaces/default/pods/tas-core/' json = requests.get(url) print json.content
i met a problem by using requests.get() on kubernetes api url = 'http://10.69.117.136:8080/api/v1/namespaces/"default"/pods/tas-core/' json = requests.get(url) print json.content error code 404 will be returned as: {"kind": "Status","apiVersion": "v1","metadata": {},"status": "Failure","message": "pods \"tas-core\" not found","reason": "NotFound","details": {"name": "tas-core","kind": "pods"},"code": 404} but if i use GET/curl, the response could be returned successfully: curl http://10.69.117.136:8080/api/v1/namespaces/"default"/pods/tas-core/ {"kind": "Pod","apiVersion": "v1","metadata": {"name": "tas-core","namespace":"default","selfLink": "/api/v1/namespaces/default/pods/tas-core","uid": "a264ce8e-a956-11e5-8293-0050569761f2","resourceVersion": "158546","creationTimestamp": "2015-12-23T09:22:06Z","labels": {"app": "tas-core"},"annotations": {"ctrl": "dynamic","oam": "dynamic"}},"spec": {"volumes":[ ... further more shorter url works fine url = 'http://10.69.117.136:8080/api/v1/namespaces/' json = requests.get(url) print json.content {"kind":"NamespaceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/","resourceVersion":"220452"},"items":[{"metadata":{"name":"default","selfLink":"/api/v1/namespaces/default","uid":"74f89440-a94a-11e5-9afd-0050569761f2","resourceVersion":"6","creationTimestamp":"2015-12-23T07:54:55Z"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}]} where did i wrong?
Ok, it seems I can answer my own question: the problem lay in the NFS share that was being used as the persistent volume. I had it set to 'all_squash' in the export, but it needs 'no_root_squash' to allow root inside the Docker container to chown files on the NFS-backed volume.
###Instruction: I am currently experimenting with Kubernetes and have installed a small cluster on ESX infra I had running here locally. I installed two slave nodes with a master node using Project Atomic with Fedora. The cluster is all installed fine and seems to be running. However I first want to get a MySQL container up and running, but no matter what I try i cannot get it to run. apiVersion: v1 kind: Pod metadata: name: mysql labels: name: mysql spec: containers: - resources: limits : cpu: 0.5 image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD value: myPassw0rd ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage nfs: server: 10.0.0.2 path: "/export/mysql" For the volume I already tried all kinds of solutions, I tried using persistent volume with and without claim. I tried using host volume and emptyDir, but I always end up with this error when the container starts: chown: changing ownership of '/var/lib/mysql/': Operation not permitted I must be doing something stupid, but no idea what to do here? ### Response: Ok, it seems I can answer my own question: the problem lay in the NFS share that was being used as the persistent volume. I had it set to 'all_squash' in the export, but it needs 'no_root_squash' to allow root inside the Docker container to chown files on the NFS-backed volume.
I am currently experimenting with Kubernetes and have installed a small cluster on ESX infra I had running here locally. I installed two slave nodes with a master node using Project Atomic with Fedora. The cluster is all installed fine and seems to be running. However I first want to get a MySQL container up and running, but no matter what I try i cannot get it to run. apiVersion: v1 kind: Pod metadata: name: mysql labels: name: mysql spec: containers: - resources: limits : cpu: 0.5 image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD value: myPassw0rd ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage nfs: server: 10.0.0.2 path: "/export/mysql" For the volume I already tried all kinds of solutions, I tried using persistent volume with and without claim. I tried using host volume and emptyDir, but I always end up with this error when the container starts: chown: changing ownership of '/var/lib/mysql/': Operation not permitted I must be doing something stupid, but no idea what to do here?
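For anyone hitting the same chown error, a sketch of the relevant export line on the NFS server (the path and subnet are assumptions based on the question's setup):

# /etc/exports on the NFS server (10.0.0.2)
/export/mysql 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)

After editing, re-export with exportfs -ra. With root squashing in effect, root inside the container is mapped to an unprivileged user on the server, which is exactly why mysql's chown of /var/lib/mysql was denied.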
GCE (along with AWS and OpenStack) must first attach a disk/volume to the node before it can be mounted and exposed to your pod. The time required for attachment is dependent on the cloud provider. In the case of pods created by a ReplicationController, there is an additional detach operation that has to happen. The same disk cannot be attached to more than one node (at least not in read/write mode). Detaching and pod cleanup happen in a different thread than attaching. To be specific, Kubelet running on a node has to reconcile the pods it currently has (and the sum of their volumes) with the volumes currently present on the node. Orphaned volumes are unmounted and detached. If your pod was scheduled on a different node, it must wait until the original node detaches the volume. The cluster eventually reaches the correct state, but it might take time for each component to get there. This is your wait time.
###Instruction: I have a kubernetes pod to which I attach a GCE persistent volume using a persistence volume claim. (For the even worse issue without a volume claim see: Mounting a gcePersistentDisk kubernetes volume is very slow) When there is no volume attached, the pod starts in no time (max 2 seconds). But when the pod has a GCE persistent volume mount, the Running state is reached somewhere between 20 and 60 seconds. I was testing with different disk sizes (10, 200, 500 GiB) and multiple pod creations and the size does not seem to be correlated with the delay. And this delay is not only happening in the beginning but also when rolling updates are performed with the replication controllers or when the code crashes during runtime. Below I have the kubernetes specifications: The replication controller { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "a1" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "app": "a1" } }, "spec": { "containers": [ { "name": "a1-setup", "image": "nginx", "ports": [ { "containerPort": 80 }, { "containerPort": 443 } ] } ] } } } } The volume claim { "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "myclaim" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } } } } And the volume { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "mydisk", "labels": { "name": "mydisk" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "gcePersistentDisk": { "pdName": "a1-drive", "fsType": "ext4" } } } Also ### Response: GCE (along with AWS and OpenStack) must first attach a disk/volume to the node before it can be mounted and exposed to your pod. The time required for attachment is dependent on the cloud provider. In the case of pods created by a ReplicationController, there is an additional detach operation that has to happen. The same disk cannot be attached to more than one node (at least not in read/write mode). Detaching and pod cleanup happen in a different thread than attaching. To be specific, Kubelet running on a node has to reconcile the pods it currently has (and the sum of their volumes) with the volumes currently present on the node. Orphaned volumes are unmounted and detached. If your pod was scheduled on a different node, it must wait until the original node detaches the volume. The cluster eventually reaches the correct state, but it might take time for each component to get there. This is your wait time.
I have a kubernetes pod to which I attach a GCE persistent volume using a persistence volume claim. (For the even worse issue without a volume claim see: Mounting a gcePersistentDisk kubernetes volume is very slow) When there is no volume attached, the pod starts in no time (max 2 seconds). But when the pod has a GCE persistent volume mount, the Running state is reached somewhere between 20 and 60 seconds. I was testing with different disk sizes (10, 200, 500 GiB) and multiple pod creations and the size does not seem to be correlated with the delay. And this delay is not only happening in the beginning but also when rolling updates are performed with the replication controllers or when the code crashes during runtime. Below I have the kubernetes specifications: The replication controller { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "a1" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "app": "a1" } }, "spec": { "containers": [ { "name": "a1-setup", "image": "nginx", "ports": [ { "containerPort": 80 }, { "containerPort": 443 } ] } ] } } } } The volume claim { "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "myclaim" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } } } } And the volume { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "mydisk", "labels": { "name": "mydisk" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "gcePersistentDisk": { "pdName": "a1-drive", "fsType": "ext4" } } } Also
The way the documentation states to export environment variables is a bit inaccurate. They need to be added to the environment that is referenced by the systemd unit file or the node needs to be granted appropriate IAM permissions. For configuring the credentials in the environment for the node, add the following to /etc/sysconfig/origin-node (assuming Origin 1.1): AWS_ACCESS_KEY_ID=<key id> AWS_SECRET_ACCESS_KEY=<secret key> Alternatively, the nodes can be assigned an IAM role with the appropriate permissions. The following cloudformation resource snippet creates a role with the appropriate permissions for a node: "NodeIAMRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] }, "Policies": [ { "PolicyName": "demo-node-1", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" } ] } }, { "PolicyName": "demo-node-2", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:AttachVolume", "Resource": "*" } ] } }, { "PolicyName": "demo-node-3", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:DetachVolume", "Resource": "*" } ] } } ] } }
###Instruction: We are having issues with our openshift aws deployment when trying to use persistent volumes. These are some of the there errors when trying to deploy the mysql-persistent instance. -Unable to mount volumes for pod "mysql-4-uizxn_persistent-test": Cloud provider does not support volumes -Error syncing pod, skipping: Cloud provider does not support volumes We added the following on each of our nodes node-config.yaml kubeletArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" and also added the following to our master-config.yaml kubernetesMasterConfig: apiServerArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" controllerArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" Not sure if we are just missing something or if there is a known issue/work around. Also a question is how does openshift or kubernetes know that the config files have been changed? Also just to give you some context we used openshift-ansible to deploy our environment. ### Response: The way the documentation states to export environment variables is a bit inaccurate. They need to be added to the environment that is referenced by the systemd unit file or the node needs to be granted appropriate IAM permissions. For configuring the credentials in the environment for the node, add the following to /etc/sysconfig/origin-node (assuming Origin 1.1): AWS_ACCESS_KEY_ID=<key id> AWS_SECRET_ACCESS_KEY=<secret key> Alternatively, the nodes can be assigned an IAM role with the appropriate permissions. The following cloudformation resource snippet creates a role with the appropriate permissions for a node: "NodeIAMRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] }, "Policies": [ { "PolicyName": "demo-node-1", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" } ] } }, { "PolicyName": "demo-node-2", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:AttachVolume", "Resource": "*" } ] } }, { "PolicyName": "demo-node-3", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:DetachVolume", "Resource": "*" } ] } } ] } }
We are having issues with our openshift aws deployment when trying to use persistent volumes. These are some of the there errors when trying to deploy the mysql-persistent instance. -Unable to mount volumes for pod "mysql-4-uizxn_persistent-test": Cloud provider does not support volumes -Error syncing pod, skipping: Cloud provider does not support volumes We added the following on each of our nodes node-config.yaml kubeletArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" and also added the following to our master-config.yaml kubernetesMasterConfig: apiServerArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" controllerArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" Not sure if we are just missing something or if there is a known issue/work around. Also a question is how does openshift or kubernetes know that the config files have been changed? Also just to give you some context we used openshift-ansible to deploy our environment.
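On the last sub-question: neither OpenShift nor Kubernetes watches these config files for changes; you restart the services after editing them. A sketch (unit names assume Origin 1.1, matching the answer above):

# on each node
systemctl restart origin-node
# on the master
systemctl restart origin-master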
Are you using Kubernetes Services for your applications? With a Service in front of each of your Pods, you have a proxy for the Pods. Even if the pod is not started, as long as the Service exists nginx will find it when looking it up, because the Service already has an IP assigned. So you start the Services first, then start nginx and whatever Pods you want, in the order you want.
###Instruction: I'm trying to build a Kubernetes cluster with following services inside: Docker-registry (which will contain my django Docker image) Nginx listenning both on port 80 and 443 PostgreSQL Several django applications served with gunicorn letsencrypt container to generate and automatically renew signed SSL certificates My problem is a chicken and egg problem that occurs during the creation of the cluster: My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are owner of the domain name and this is done by validating a file is accessible from the server name (basically this consist of Nginx being able to serve a staticfile over port 80) So here occurs my first problem: To serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted and the secret is generated only when let's encrypt succeed... So, a simple solution could be to have 2 Nginx containers: One listening only on port 80 that will be started first, then letsencrypt then we start a second Nginx container listening on port 443 -> This kind of look like a waste of resources in my opinion, but why not. Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https. So in my nginx configuration, I'll have a docker-registry.conf file looking like: upstream docker-registry { server registry:5000; } server { listen 443; server_name docker.thedivernetwork.net; # SSL ssl on; ssl_certificate /etc/nginx/conf.d/cacert.pem; ssl_certificate_key /etc/nginx/conf.d/privkey.pem; # disable any limits to avoid HTTP 413 for large image uploads client_max_body_size 0; # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486) chunked_transfer_encoding on; location /v2/ { # Do not allow connections from docker 1.5 and earlier # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) { return 404; } # To add basic authentication to v2 use auth_basic setting plus add_header auth_basic "registry.localhost"; auth_basic_user_file /etc/nginx/conf.d/registry.password; add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always; proxy_pass http://docker-registry; proxy_set_header Host $http_host; # required for docker client's sake proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_read_timeout 900; } } The important part is the proxy_pass that redirect toward the registry container. 
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder django.conf: upstream django { server django:5000; } server { listen 443 ssl; server_name example.com; charset utf-8; ssl on; ssl_certificate /etc/nginx/conf.d/cacert.pem; ssl_certificate_key /etc/nginx/conf.d/privkey.pem; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; client_max_body_size 20M; location / { # checks for static file, if not found proxy to app try_files $uri @proxy_to_django; } location @proxy_to_django { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; #proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 65; proxy_read_timeout 65; proxy_pass http://django; } } So nginx will successfully start only under 3 conditions: secret is mounted (this could be addressed by splitting Nginx into 2 separate containers) registry service is started django service is started The problem is that django image is pulling its image from the registry service, so we are in a dead-lock situation again. I didn't mention it but both registry and django have different ServerName so nginx is able to both serve them The solution I though about it (but it's quite dirty!) would be to reload nginx several time with more and more configurations: I start docker registry service I start Nginx with only the registry.conf I create my django rc and service I reload nginx with both registry.conf and django.conf If there was a way to make nginx start ignoring failing configuration, that would probably solve my issues as well. How can I cleanly achieve this setup? Thanks for your help Thibault ### Response: Are you using Kubernetes Services for your applications? With a Service to each of your Pods, you have a proxy for the Pods. Even if the pod is not started, as long as the Service is started nginx will find it when looking it up as the Service has an IP assigned. So you start the Services, then start nginx and whatever Pod you want in the order you want.
I'm trying to build a Kubernetes cluster with following services inside: Docker-registry (which will contain my django Docker image) Nginx listenning both on port 80 and 443 PostgreSQL Several django applications served with gunicorn letsencrypt container to generate and automatically renew signed SSL certificates My problem is a chicken and egg problem that occurs during the creation of the cluster: My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are owner of the domain name and this is done by validating a file is accessible from the server name (basically this consist of Nginx being able to serve a staticfile over port 80) So here occurs my first problem: To serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted and the secret is generated only when let's encrypt succeed... So, a simple solution could be to have 2 Nginx containers: One listening only on port 80 that will be started first, then letsencrypt then we start a second Nginx container listening on port 443 -> This kind of look like a waste of resources in my opinion, but why not. Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https. So in my nginx configuration, I'll have a docker-registry.conf file looking like: upstream docker-registry { server registry:5000; } server { listen 443; server_name docker.thedivernetwork.net; # SSL ssl on; ssl_certificate /etc/nginx/conf.d/cacert.pem; ssl_certificate_key /etc/nginx/conf.d/privkey.pem; # disable any limits to avoid HTTP 413 for large image uploads client_max_body_size 0; # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486) chunked_transfer_encoding on; location /v2/ { # Do not allow connections from docker 1.5 and earlier # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) { return 404; } # To add basic authentication to v2 use auth_basic setting plus add_header auth_basic "registry.localhost"; auth_basic_user_file /etc/nginx/conf.d/registry.password; add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always; proxy_pass http://docker-registry; proxy_set_header Host $http_host; # required for docker client's sake proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_read_timeout 900; } } The important part is the proxy_pass that redirect toward the registry container. 
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder django.conf: upstream django { server django:5000; } server { listen 443 ssl; server_name example.com; charset utf-8; ssl on; ssl_certificate /etc/nginx/conf.d/cacert.pem; ssl_certificate_key /etc/nginx/conf.d/privkey.pem; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; client_max_body_size 20M; location / { # checks for static file, if not found proxy to app try_files $uri @proxy_to_django; } location @proxy_to_django { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; #proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 65; proxy_read_timeout 65; proxy_pass http://django; } } So nginx will successfully start only under 3 conditions: secret is mounted (this could be addressed by splitting Nginx into 2 separate containers) registry service is started django service is started The problem is that django image is pulling its image from the registry service, so we are in a dead-lock situation again. I didn't mention it but both registry and django have different ServerName so nginx is able to both serve them The solution I though about it (but it's quite dirty!) would be to reload nginx several time with more and more configurations: I start docker registry service I start Nginx with only the registry.conf I create my django rc and service I reload nginx with both registry.conf and django.conf If there was a way to make nginx start ignoring failing configuration, that would probably solve my issues as well. How can I cleanly achieve this setup? Thanks for your help Thibault
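Building on the answer above, a sketch of the Services that would decouple nginx from pod start order (ports and selectors are assumptions matching the configs in the question):

apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  ports:
  - port: 5000
  selector:
    app: registry
---
apiVersion: v1
kind: Service
metadata:
  name: django
spec:
  ports:
  - port: 5000
  selector:
    app: django

With these created first, the upstream lines "server registry:5000;" and "server django:5000;" resolve via cluster DNS to stable service IPs whether or not the pods behind them exist yet, so nginx can start with both config files in place.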
kubectl describe pod example will give you more info on what's going on. kubectl get events can also get you more details, although the events are not dedicated to the given pod.
###Instruction: I'm setting up a Kubernetes cluster and am testing a small container. This is my YAML file for the pod: apiVersion: v1 kind: Pod metadata: name: example spec: restartPolicy: Never containers: - name: node image: 'node:5' command: ['node'] args: ['-e', 'console.log(1234)'] I deploy it with kubectl create -f example.yml and sure enough it runs as expected: $ kubectl logs example 1234 However, the pod's status is "Error": $ kubectl get po example NAME READY STATUS RESTARTS AGE example 0/1 Error 0 16m How can I investigate why the status is "Error"? ### Response: kubectl describe pod example will give you more info on what's going on. kubectl get events can also get you more details, although the events are not dedicated to the given pod.
I'm setting up a Kubernetes cluster and am testing a small container. This is my YAML file for the pod: apiVersion: v1 kind: Pod metadata: name: example spec: restartPolicy: Never containers: - name: node image: 'node:5' command: ['node'] args: ['-e', 'console.log(1234)'] I deploy it with kubectl create -f example.yml and sure enough it runs as expected: $ kubectl logs example 1234 However, the pod's status is "Error": $ kubectl get po example NAME READY STATUS RESTARTS AGE example 0/1 Error 0 16m How can I investigate why the status is "Error"?
Yes, it is. To attach an autoscaler to your existing GKE cluster: Find the name of your cluster's instance group: $ gcloud compute instance-groups managed list NAME ZONE BASE_INSTANCE_NAME SIZE TARGET_SIZE INSTANCE_TEMPLATE AUTOSCALED gke-buildlets-69898e2d-group us-central1-f gke-buildlets-69898e2d-node 1 1 gke-buildlets-69898e2d-1-1-3 yes Here I have a GKE cluster named buildlets, and its instance group is named gke-buildlets-69898e2d-group. Enable autoscaling. This particular example will scale on a target CPU utilization of 70%: gcloud compute instance-groups managed set-autoscaling YOUR_INSTANCE_GROUP_NAME \ --zone=YOUR_INSTANCE_GROUP_ZONE \ --min-num-replicas=1 \ --max-num-replicas=8 \ --scale-based-on-cpu \ --target-cpu-utilization=.7 You can also use Google Cloud Deployment manager to create your GKE cluster, and create/attach an autoscaler right along with it: resources: - name: buildlets type: container.v1.cluster properties: zone: us-central1-f cluster: initial_node_count: 1 network: "default" logging_service: "logging.googleapis.com" monitoring_service: "monitoring.googleapis.com" node_config: machine_type: n1-standard-1 oauth_scopes: - "https://www.googleapis.com/auth/cloud-platform" master_auth: username: admin password: password123 - name: autoscaler type: compute.v1.autoscaler properties: zone: us-central1-f name: buildlets target: "$(ref.buildlets.instanceGroupUrls[0])" autoscalingPolicy: minNumReplicas: 2 maxNumReplicas: 8 coolDownPeriodSec: 600 cpuUtilization: utilizationTarget: .7
###Instruction: When you create a Google Container Engine (GKE) cluster you specify what the number and what types of machines you want to use in the cluster. Is it possible to auto-scale the number of cluster machines based on (for example) CPU load? If this is not supported, is there a reason why or is Google working on something like this for the future? ### Response: Yes, it is. To attach an autoscaler to your existing GKE cluster: Find the name of your cluster's instance group: $ gcloud compute instance-groups managed list NAME ZONE BASE_INSTANCE_NAME SIZE TARGET_SIZE INSTANCE_TEMPLATE AUTOSCALED gke-buildlets-69898e2d-group us-central1-f gke-buildlets-69898e2d-node 1 1 gke-buildlets-69898e2d-1-1-3 yes Here I have a GKE cluster named buildlets, and its instance group is named gke-buildlets-69898e2d-group. Enable autoscaling. This particular example will scale on a target CPU utilization of 70%: gcloud compute instance-groups managed set-autoscaling YOUR_INSTANCE_GROUP_NAME \ --zone=YOUR_INSTANCE_GROUP_ZONE \ --min-num-replicas=1 \ --max-num-replicas=8 \ --scale-based-on-cpu \ --target-cpu-utilization=.7 You can also use Google Cloud Deployment manager to create your GKE cluster, and create/attach an autoscaler right along with it: resources: - name: buildlets type: container.v1.cluster properties: zone: us-central1-f cluster: initial_node_count: 1 network: "default" logging_service: "logging.googleapis.com" monitoring_service: "monitoring.googleapis.com" node_config: machine_type: n1-standard-1 oauth_scopes: - "https://www.googleapis.com/auth/cloud-platform" master_auth: username: admin password: password123 - name: autoscaler type: compute.v1.autoscaler properties: zone: us-central1-f name: buildlets target: "$(ref.buildlets.instanceGroupUrls[0])" autoscalingPolicy: minNumReplicas: 2 maxNumReplicas: 8 coolDownPeriodSec: 600 cpuUtilization: utilizationTarget: .7
When you create a Google Container Engine (GKE) cluster you specify what the number and what types of machines you want to use in the cluster. Is it possible to auto-scale the number of cluster machines based on (for example) CPU load? If this is not supported, is there a reason why or is Google working on something like this for the future?
I was also getting the same error - "Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: net/http: TLS handshake timeout". I just tried "./cluster/kube-push.sh" and this time the cluster was created and validated successfully. Environment details: Host Machine - Ubuntu 14.04, Vagrant - 1.8.1, VirtualBox - 5.0.14, Kubernetes - 1.1.4. Just to add: after setting up the cluster with the default VM memory (1024MB) I was not able to run any pod (I tried NGINX); it was always in a pending state. So I increased the memory and restarted, and now it runs fine.
###Instruction: I used the instructions in the official getting started guide (http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html) to get started with kubernetes on vagrant with the vmware fusion provider on OS X. When running export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash everything seems to work fine, but in the end i get the following error: Validating minion-1 ...... Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: net/http: TLS handshake timeout I've found the following github issues: https://github.com/kubernetes/kubernetes/issues/13382 https://github.com/kubernetes/kubernetes/issues/17426 Because it seems that both do not post the question on SO as recommended, i decided to do so. My Environment: OS X 10.11.1 Vagrant 1.7.4 VMWare Fusion 7.1.3 I'm new to Kubernetes, if you need more information i will provide them. ### Response: I was also getting the same error - "Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: net/http: TLS handshake timeout". I just tried "./cluster/kube-push.sh" and this time the cluster was created and validated successfully. Environment details: Host Machine - Ubuntu 14.04, Vagrant - 1.8.1, VirtualBox - 5.0.14, Kubernetes - 1.1.4. Just to add: after setting up the cluster with the default VM memory (1024MB) I was not able to run any pod (I tried NGINX); it was always in a pending state. So I increased the memory and restarted, and now it runs fine.
I used the instructions in the official getting started guide (http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html) to get started with kubernetes on vagrant with the vmware fusion provider on OS X. When running export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash everything seems to work fine, but in the end i get the following error: Validating minion-1 ...... Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: net/http: TLS handshake timeout I've found the following github issues: https://github.com/kubernetes/kubernetes/issues/13382 https://github.com/kubernetes/kubernetes/issues/17426 Because it seems that both do not post the question on SO as recommended, i decided to do so. My Environment: OS X 10.11.1 Vagrant 1.7.4 VMWare Fusion 7.1.3 I'm new to Kubernetes, if you need more information i will provide them.
Prashanth B was right! Having no command causes the container to exit immediately, so the pod restarts in a crash loop. Thanks and good day.
###Instruction: We are running a Jetty service on the Google container engine. This one service runs just fine in a pod with a rc. We can shut it down, rebuild it and do all manner of things to it and it will still work. Now we want to extend our infrastructure with a debian image that runs something else. Locally, the docker works fine and we can access the debian commandline. Once we try to run the pod in the cloud, we get issues. The Dockerfile we use contains: FROM debian:latest Then we run the next commands: docker build -t eu.gcr.io/project_id/debstable:stable . gcloud docker push eu.gcr.io/project_id/debstable:stable kubectl run debstable --image=eu.gcr.io/project_id/debstable:stable The pod receives the CrashLoopBackOff STATUS and keeps on restarting. Part of the logs show this: I0120 14:19:58.438979 3479 kubelet.go:2012] SyncLoop (ADD): "debstable-blvdi_default" I0120 14:19:58.478235 3479 manager.go:1707] Need to restart pod infra container for "debstable-blvdi_default" because it is not found I0120 14:20:00.025467 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (100.384µs) 404 [[Go 1.1 package http] 10.0.0.3:40650] I0120 14:20:05.017006 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (56.159µs) 404 [[Go 1.1 package http] 10.0.0.3:40694] I0120 14:20:10.015072 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (66.802µs) 404 [[Go 1.1 package http] 10.0.0.3:40708] I0120 14:20:15.017521 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (32.91µs) 404 [[Go 1.1 package http] 10.0.0.3:40566] I0120 14:20:18.530030 3479 manager.go:2022] Back-off 10s restarting failed container=debstable pod=debstable-blvdi_default The docker info shows the versions, which may be relevant: $ sudo docker info Containers: 24 Images: 68 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 116 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.16.0-0.bpo.4-amd64 Operating System: Debian GNU/Linux 7 (wheezy) CPUs: 1 Total Memory: 3.625 GiB WARNING: No swap limit support Thanks and good day ### Response: Prashanth B was right! Having no command causes the container to exit immediately, so the pod restarts in a crash loop. Thanks and good day.
We are running a Jetty service on the Google container engine. This one service runs just fine in a pod with a rc. We can shut it down, rebuild it and do all manner of things to it and it will still work. Now we want to extend our infrastructure with a debian image that runs something else. Locally, the docker works fine and we can access the debian commandline. Once we try to run the pod in the cloud, we get issues. The Dockerfile we use contains: FROM debian:latest Then we run the next commands: docker build -t eu.gcr.io/project_id/debstable:stable . gcloud docker push eu.gcr.io/project_id/debstable:stable kubectl run debstable --image=eu.gcr.io/project_id/debstable:stable The pod receives the CrashLoopBackOff STATUS and keeps on restarting. Part of the logs show this: I0120 14:19:58.438979 3479 kubelet.go:2012] SyncLoop (ADD): "debstable-blvdi_default" I0120 14:19:58.478235 3479 manager.go:1707] Need to restart pod infra container for "debstable-blvdi_default" because it is not found I0120 14:20:00.025467 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (100.384µs) 404 [[Go 1.1 package http] 10.0.0.3:40650] I0120 14:20:05.017006 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (56.159µs) 404 [[Go 1.1 package http] 10.0.0.3:40694] I0120 14:20:10.015072 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (66.802µs) 404 [[Go 1.1 package http] 10.0.0.3:40708] I0120 14:20:15.017521 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (32.91µs) 404 [[Go 1.1 package http] 10.0.0.3:40566] I0120 14:20:18.530030 3479 manager.go:2022] Back-off 10s restarting failed container=debstable pod=debstable-blvdi_default The docker info shows the versions, which may be relevant: $ sudo docker info Containers: 24 Images: 68 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 116 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.16.0-0.bpo.4-amd64 Operating System: Debian GNU/Linux 7 (wheezy) CPUs: 1 Total Memory: 3.625 GiB WARNING: No swap limit support Thanks and good day
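For anyone else hitting this: a bare debian:latest image has no long-running process (its default command exits immediately without a TTY), so Kubernetes restarts it in a loop. A sketch of giving the pod something to run (the sleep loop is a placeholder assumption; in practice you would run your real workload):

apiVersion: v1
kind: Pod
metadata:
  name: debstable
spec:
  containers:
  - name: debstable
    image: eu.gcr.io/project_id/debstable:stable
    command: ["/bin/bash", "-c", "while true; do sleep 3600; done"]   # keeps the container alive

Equivalently, bake a CMD or ENTRYPOINT for the actual workload into the Dockerfile so the image does something on start.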
finally my config looks like this: docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -N \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN-SEARCH cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" -u for the VPN server address and port -n for all the DNS servers to use -s to define the VPN subnet (as it defaults to 10.2.0.0 which is used by Kubernetes already) -d to disable NAT -p to push options to the client -N to enable NAT: it seems critical for this setup on Kubernetes the last part, pushing the search domains to the client, was the key to getting nslookup etc.. to work. note that curl didn't work at first, but seems to start working after a few seconds. So it does work but it takes a bit of time for curl to be able to resolve.
###Instruction: I spinned a docker-openvpn container in my (local) Kubernetes cluster to access my Services securely and debug dependent services locally. I can connect to the cluster via the openVPN server. However I can't resolve my Services via DNS. I managed to get to the point where after setting routes on the VPN server: I can ping a Pod by IP (subnet 10.2.0.0/16) I can ping a Service by IP (subnet 10.3.0.0/16 like the DNS which is at 10.3.0.10) I can curl to a Services by IP and get the data I need. but when i nslookup kubernetes or any Service, I get: nslookup kubernetes ;; Got recursion not available from 10.3.0.10, trying next server ;; Got SERVFAIL reply from 10.3.0.10, trying next server I am still missing something for the data to return from the DNS server, but can't figure what I need to do. How do I debug this SERVFAIL issue in Kubernetes DNS? EDIT: Things I have noticed and am looking to understand: nslookup works to resolve Service name in any pod except the openvpn Pod while nslookup works in those other Pods, ping does not. similarly traceroute in those other Pods leads to the flannel layer 10.0.2.2 and then stops there. from this I guess ICMP must be blocked at the flannel layer, and that doesn't help me figure where DNS is blocked. EDIT2: I finally figured how to get nslookup to work: I had to push the DNS search domain to the client with push "dhcp-option DOMAIN-SEARCH cluster.local" push "dhcp-option DOMAIN-SEARCH svc.cluster.local" push "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" add with the -p option in the docker-openvpn image so i end up with docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -d \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" Now, nslookup works but curl still does not ### Response: finally my config looks like this: docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -N \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN-SEARCH cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" -u for the VPN server address and port -n for all the DNS servers to use -s to define the VPN subnet (as it defaults to 10.2.0.0 which is used by Kubernetes already) -d to disable NAT -p to push options to the client -N to enable NAT: it seems critical for this setup on Kubernetes the last part, pushing the search domains to the client, was the key to getting nslookup etc.. to work. note that curl didn't work at first, but seems to start working after a few seconds. So it does work but it takes a bit of time for curl to be able to resolve.
I spinned a docker-openvpn container in my (local) Kubernetes cluster to access my Services securely and debug dependent services locally. I can connect to the cluster via the openVPN server. However I can't resolve my Services via DNS. I managed to get to the point where after setting routes on the VPN server: I can ping a Pod by IP (subnet 10.2.0.0/16) I can ping a Service by IP (subnet 10.3.0.0/16 like the DNS which is at 10.3.0.10) I can curl to a Services by IP and get the data I need. but when i nslookup kubernetes or any Service, I get: nslookup kubernetes ;; Got recursion not available from 10.3.0.10, trying next server ;; Got SERVFAIL reply from 10.3.0.10, trying next server I am still missing something for the data to return from the DNS server, but can't figure what I need to do. How do I debug this SERVFAIL issue in Kubernetes DNS? EDIT: Things I have noticed and am looking to understand: nslookup works to resolve Service name in any pod except the openvpn Pod while nslookup works in those other Pods, ping does not. similarly traceroute in those other Pods leads to the flannel layer 10.0.2.2 and then stops there. from this I guess ICMP must be blocked at the flannel layer, and that doesn't help me figure where DNS is blocked. EDIT2: I finally figured how to get nslookup to work: I had to push the DNS search domain to the client with push "dhcp-option DOMAIN-SEARCH cluster.local" push "dhcp-option DOMAIN-SEARCH svc.cluster.local" push "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" add with the -p option in the docker-openvpn image so i end up with docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -d \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" Now, nslookup works but curl still does not
For what you want to do you should be using a Headless Service http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services This means setting clusterIP: None in your Service; with that, there won't be a virtual IP associated with the service, but a DNS lookup of it will return the IPs of all the Pods selected by the selector.
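A minimal sketch of such a per-broker headless Service (the name, label and port are assumptions for illustration, not taken from the question):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: kafka1
spec:
  clusterIP: None    # headless: DNS resolves straight to the Pod IP, nothing is proxied
  selector:
    app: kafka1      # assumed label on the kafka1 broker pod
  ports:
  - port: 9092       # assumed broker port
EOF

Because connections then go directly to the pod IP rather than through the kube-proxy virtual IP, a broker can also reach itself via the address it advertises.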
###Instruction: I'm trying to create 3 instances of Kafka and deploy them to a local Kubernetes setup. Because each instance needs some specific configuration, I'm creating one RC and one service for each - eagerly waiting for #18016 ;) However, I'm having problems because Kafka can't establish a network connection to itself when it uses the service IP (a Kafka broker tries to do this when it is exchanging replication messages with other brokers). For example, let's say I have two worker hosts (172.17.8.201 and 172.17.8.202) and my pods are scheduled like this: Host 1 (172.17.8.201) kafka1 pod (10.2.16.1) Host 2 (172.17.8.202) kafka2 pod (10.2.68.1) kafka3 pod (10.2.68.2) In addition, let's say I have the following service IPs: kafka1 cluster IP: 11.1.2.96 kafka2 cluster IP: 11.1.2.120 kafka3 cluster IP: 11.1.2.123 The problem happens when the kafka1 pod (container) tries to send a message (to itself) using the kafka1 cluster IP (11.1.2.96). For some reason, the connection cannot be established and the message is not sent. Some more information: If I manually connect to the kafka1 pod, I can correctly telnet to kafka2 and kafka3 pods using their respective cluster IPs (11.1.2.120 / 11.1.2.123). Also, if I'm in the kafka2 pod, I can connect to both kafka1 and kafka3 pods using 11.1.2.96 and 11.1.2.123. Finally, I can connect to all pods (from all pods) if I use the pod IPs. It is important to emphasize that I shouldn't tell the kafka brokers to use the pod IPs instead of the cluster IPs for replication. As it is right now, Kafka uses for replication whatever IP you configure to be "advertised" - which is the IP that your client uses to connect to the brokers. Even if I could, I believe this problem may appear with other software as well. This problem seems to happen only with the combination I am using, because the exact same files work correctly in GCE. Right now, I'm running: Kubernetes 1.1.2 coreos 928.0.0 network setup with flannel everything on Vagrant + VirtualBox After some debugging, I'm not sure if the problem is in the workers' iptables rules, in kube-proxy, or in flannel. PS: I posted this question originally as an Issue on their github, but I have been redirected here by the Kubernetes team. I reworded the text a bit because it sounded like a "support request", but actually I believe it is some sort of bug. Anyway, sorry about that, Kubernetes team! Edit: This problem has been confirmed as a bug https://github.com/kubernetes/kubernetes/issues/20391 ### Response: For what you want to do you should be using a Headless Service http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services This means setting clusterIP: None in your Service; with that, there won't be a virtual IP associated with the service, but a DNS lookup of it will return the IPs of all the Pods selected by the selector.
I'm trying to create 3 instances of Kafka and deploy them to a local Kubernetes setup. Because each instance needs some specific configuration, I'm creating one RC and one service for each - eagerly waiting for #18016 ;) However, I'm having problems because Kafka can't establish a network connection to itself when it uses the service IP (a Kafka broker tries to do this when it is exchanging replication messages with other brokers). For example, let's say I have two worker hosts (172.17.8.201 and 172.17.8.202) and my pods are scheduled like this: Host 1 (172.17.8.201) kafka1 pod (10.2.16.1) Host 2 (172.17.8.202) kafka2 pod (10.2.68.1) kafka3 pod (10.2.68.2) In addition, let's say I have the following service IPs: kafka1 cluster IP: 11.1.2.96 kafka2 cluster IP: 11.1.2.120 kafka3 cluster IP: 11.1.2.123 The problem happens when the kafka1 pod (container) tries to send a message (to itself) using the kafka1 cluster IP (11.1.2.96). For some reason, the connection cannot be established and the message is not sent. Some more information: If I manually connect to the kafka1 pod, I can correctly telnet to kafka2 and kafka3 pods using their respective cluster IPs (11.1.2.120 / 11.1.2.123). Also, if I'm in the kafka2 pod, I can connect to both kafka1 and kafka3 pods using 11.1.2.96 and 11.1.2.123. Finally, I can connect to all pods (from all pods) if I use the pod IPs. It is important to emphasize that I shouldn't tell the kafka brokers to use the pod IPs instead of the cluster IPs for replication. As it is right now, Kafka uses for replication whatever IP you configure to be "advertised" - which is the IP that your client uses to connect to the brokers. Even if I could, I believe this problem may appear with other software as well. This problem seems to happen only with the combination I am using, because the exact same files work correctly in GCE. Right now, I'm running: Kubernetes 1.1.2 coreos 928.0.0 network setup with flannel everything on Vagrant + VirtualBox After some debugging, I'm not sure if the problem is in the workers' iptables rules, in kube-proxy, or in flannel. PS: I posted this question originally as an Issue on their github, but I have been redirected here by the Kubernetes team. I reworded the text a bit because it sounded like a "support request", but actually I believe it is some sort of bug. Anyway, sorry about that, Kubernetes team! Edit: This problem has been confirmed as a bug https://github.com/kubernetes/kubernetes/issues/20391
Exit code 0 means the process terminated normally. Exit codes can be used when you chain or pipe to another process, so that process knows what to do next (if the previous process failed, do this; otherwise, do something with the data passed along...)
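To see the mapping for yourself, a throwaway pair of one-shot pods works (the names and image here are made up; it is the same kubectl run --restart=Never mechanism as in the question):

kubectl run ok-demo --restart=Never --image=busybox -- sh -c 'exit 0'
kubectl run fail-demo --restart=Never --image=busybox -- sh -c 'exit 1'
kubectl get pods
# ok-demo should show ExitCode:0 (Succeeded); fail-demo should show Error for its non-zero exit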
###Instruction: There are two kinds of status codes for one-shot pods, run from the API or the command: kubectl run --restart=Never --image test:v0.1 .... The pods produce output files to an NFS server, and I've received the files successfully. kubectl get pods -ao wide: NAME READY STATUS RESTARTS AGE test-90 0/1 ExitCode:0 0 23m 192.168.1.43 test-91 0/1 ExitCode:0 0 23m 192.168.1.43 test-92 0/1 ExitCode:0 0 23m 192.168.1.43 test-93 0/1 ExitCode:0 0 23m 192.168.1.43 test-94 0/1 Error 0 23m 192.168.1.46 test-95 0/1 Error 0 23m 192.168.1.46 test-96 0/1 Error 0 23m 192.168.1.46 test-97 0/1 Error 0 23m 192.168.1.46 test-98 0/1 Error 0 23m 192.168.1.46 test-99 0/1 ExitCode:0 0 23m 192.168.1.43 The description of the ExitCode:0 pod: Name: test-99 Namespace: default Image(s): test:v0.1 Node: 192.168.1.43/192.168.1.43 Status: Succeeded Replication Controllers: <none> Containers: test: State: Terminated Exit Code: 0 Ready: False Restart Count: 0 The description of the Error pod: Name: test-98 Namespace: default Image(s): test:v0.1 Node: 192.168.1.46/192.168.1.46 Status: Succeeded Replication Controllers: <none> Containers: test: State: Terminated Reason: Error Exit Code: 0 Ready: False Restart Count: 0 Their NFS volumes: Volumes: input: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.1.46 Path: /srv/nfs4/input ReadOnly: false output: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.1.46 Path: /srv/nfs4/output ReadOnly: false default-token-nmviv: Type: Secret (a secret that should populate this volume) SecretName: default-token-nmviv kubectl logs returns nothing, since the container just produces output files. Thanks in advance! ### Response: Exit code 0 means the process terminated normally. Exit codes can be used when you chain or pipe to another process, so that process knows what to do next (if the previous process failed, do this; otherwise, do something with the data passed along...)
There are two kinds of status codes for one-shot pods, run from the API or the command: kubectl run --restart=Never --image test:v0.1 .... The pods produce output files to an NFS server, and I've received the files successfully. kubectl get pods -ao wide: NAME READY STATUS RESTARTS AGE test-90 0/1 ExitCode:0 0 23m 192.168.1.43 test-91 0/1 ExitCode:0 0 23m 192.168.1.43 test-92 0/1 ExitCode:0 0 23m 192.168.1.43 test-93 0/1 ExitCode:0 0 23m 192.168.1.43 test-94 0/1 Error 0 23m 192.168.1.46 test-95 0/1 Error 0 23m 192.168.1.46 test-96 0/1 Error 0 23m 192.168.1.46 test-97 0/1 Error 0 23m 192.168.1.46 test-98 0/1 Error 0 23m 192.168.1.46 test-99 0/1 ExitCode:0 0 23m 192.168.1.43 The description of the ExitCode:0 pod: Name: test-99 Namespace: default Image(s): test:v0.1 Node: 192.168.1.43/192.168.1.43 Status: Succeeded Replication Controllers: <none> Containers: test: State: Terminated Exit Code: 0 Ready: False Restart Count: 0 The description of the Error pod: Name: test-98 Namespace: default Image(s): test:v0.1 Node: 192.168.1.46/192.168.1.46 Status: Succeeded Replication Controllers: <none> Containers: test: State: Terminated Reason: Error Exit Code: 0 Ready: False Restart Count: 0 Their NFS volumes: Volumes: input: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.1.46 Path: /srv/nfs4/input ReadOnly: false output: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.1.46 Path: /srv/nfs4/output ReadOnly: false default-token-nmviv: Type: Secret (a secret that should populate this volume) SecretName: default-token-nmviv kubectl logs returns nothing, since the container just produces output files. Thanks in advance!
There is a resource named DaemonSet that spawns a pod on every node; when a new node is added, the pod will be spawned on it automatically. As for updates: change the image in the DaemonSet resource and all the pods will be updated.
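A rough sketch of what that could look like (node name, labels and image are hypothetical; the nodeSelector line shows how to target only a labelled subset of machines, which also covers the "tag each machine" question):

# label the nodes that should run Pod A
kubectl label node rhel-host-01 app=pod-a

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1   # the DaemonSet API group in Kubernetes 1.1/1.2
kind: DaemonSet
metadata:
  name: pod-a
spec:
  template:
    metadata:
      labels:
        name: pod-a
    spec:
      nodeSelector:
        app: pod-a               # schedule only onto nodes carrying this label
      containers:
      - name: pod-a
        image: myrepo/pod-a:v1   # hypothetical image
EOF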
###Instruction: I am new to Kubernetes and have been looking at it as an option for a specific solution. We have a scenario where we have 100+ physical machines running RHEL, distributed across different locations. There is a plan to deploy and manage Docker-based containers on each of these machines. Let's group these containers as Pod A. Now, each of these machines requires an instance of Pod A running on it, and it should automatically synchronise if there are any changes. Over time new machines may be added, and they will need to automatically get Pod A running on them as well. I understand the idea behind Kubernetes is to abstract the machine and OS layer, but in this case we can't do that. So I guess I have a few questions around this: - Is Kubernetes the correct choice here? Are we breaking the fundamental concept behind it? Is it possible to tag each machine as an identifiable node? Target a specific Pod to a subset of nodes? Are there any similar examples available? ### Response: There is a resource named DaemonSet that spawns a pod on every node; when a new node is added, the pod will be spawned on it automatically. As for updates: change the image in the DaemonSet resource and all the pods will be updated.
I am new to Kubernetes and have been looking at it as an option for a specific solution. We have a scenario where we have 100+ physical machines running RHEL, distributed across different locations. There is a plan to deploy and manage Docker-based containers on each of these machines. Let's group these containers as Pod A. Now, each of these machines requires an instance of Pod A running on it, and it should automatically synchronise if there are any changes. Over time new machines may be added, and they will need to automatically get Pod A running on them as well. I understand the idea behind Kubernetes is to abstract the machine and OS layer, but in this case we can't do that. So I guess I have a few questions around this: - Is Kubernetes the correct choice here? Are we breaking the fundamental concept behind it? Is it possible to tag each machine as an identifiable node? Target a specific Pod to a subset of nodes? Are there any similar examples available?
It seems that there were two problems I had: Cert Creation My implementation is based off the ansible deployment found here: https://github.com/kubernetes/contrib/tree/master/ansible This deployment seems to generate the certs for all networking interfaces. It also adds IP: in front of them, and then the script that generates the certs (make-ca-cert.sh) prepends IP again. Not 100% sure if that is okay. However, I changed it to just generate certs for the networking interface and removed the additional IP:, and that seems to have resolved the issue. Very good thread explaining certs, how to create them and how they work with Kubernetes: https://github.com/kubernetes/kubernetes/issues/11000 APIServer Setting --advertise-address Also, apparently I needed to set --advertise-address as well for the apiserver. Adjusting these two things seemed to have resolved the issue.
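Two checks that can make these problems visible (the cert path, config file and example address below are assumptions based on the contrib/ansible layout, not confirmed from the question):

# inspect which IPs/names the generated apiserver cert is actually valid for
openssl x509 -in /etc/kubernetes/certs/server.crt -noout -text | grep -A1 'Subject Alternative Name'

# make sure the apiserver advertises an address other nodes can reach,
# e.g. via KUBE_API_ARGS in /etc/kubernetes/apiserver:
# KUBE_API_ARGS="--advertise-address=172.28.129.1"

--advertise-address is what the apiserver publishes as the endpoint of the kubernetes service, so in-cluster clients such as kube2sky depend on it pointing at a reachable, cert-valid IP.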
###Instruction: Whenever DNS gets run on a kubelet other than the one that resides on the master node, the Liveness and Readiness probes for skydns keep failing. I am deploying the add-ons as a service similar to what is used in the salt cluster. I have configured my system to use tokens and have verified that a token gets generated for system:dns and gets configured correctly for the kubelet. Is there something additional I need to do inside the skydns rc/svc yamls as well because of this? Salt Cluster: https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase/salt/kube-addons Ansible Deployment: https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons/files I am using the standard skydns rc/svc yamls. Pod Description: Name: kube-dns-v10-pgqig Namespace: kube-system Image(s): gcr.io/google_containers/etcd:2.0.9,gcr.io/google_containers/kube2sky:1.12,gcr.io/google_containers/skydns:2015-10-13-8c72f8c,gcr.io/google_containers/exechealthz:1.0 Node: minion-1/172.28.129.2 Start Time: Thu, 21 Jan 2016 08:54:50 -0800 Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v10 Status: Running Reason: Message: IP: 18.16.18.9 Replication Controllers: kube-dns-v10 (1/1 replicas created) Containers: etcd: Container ID: docker://49216f478c99fcd3c25763e99bb18861d31025a0cadd538f9590295e78846f69 Image: gcr.io/google_containers/etcd:2.0.9 Image ID: docker://b6b9a86dc06aa1361357ca1b105feba961f6a4145adca6c54e142c0be0fe87b0 Command: /usr/local/bin/etcd -data-dir /var/etcd/data -listen-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001 -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001 -initial-cluster-token skydns-etcd QoS Tier: cpu: Guaranteed memory: Guaranteed Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: kube2sky: Container ID: docker://4cbdf45e1ba0a6a820120c934473e61bf74af49d1ff42a0da01abd593516f4ee Image: gcr.io/google_containers/kube2sky:1.12 Image ID: docker://b8f3273706d3fc51375779110828379bdbb663e556cca3925e87fbc614725bb1 Args: -domain=cluster.local -kube_master_url=http://master:8080 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: memory: 50Mi cpu: 100m Requests: memory: 50Mi cpu: 100m State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: skydns: Container ID: docker://bd3103f514dcc4e42ff2c126446d963d03ef1101833239926c84d5c0ba577929 Image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c Image ID: docker://763c92e53f311c40a922628a34daf0be4397463589a7d148cea8291f02c12a5d Args: -machines=http://127.0.0.1:4001 -addr=0.0.0.0:53 -ns-rotate=false -domain=cluster.local.
QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Running Started: Thu, 21 Jan 2016 09:13:50 -0800 Last Termination State: Terminated Reason: Error Exit Code: 2 Started: Thu, 21 Jan 2016 09:13:14 -0800 Finished: Thu, 21 Jan 2016 09:13:50 -0800 Ready: False Restart Count: 28 Environment Variables: healthz: Container ID: docker://b46d2bb06a72cda25565b4f40ce956f252dce5df7f590217b3307126ec29e7c7 Image: gcr.io/google_containers/exechealthz:1.0 Image ID: docker://4f3d04b1d47b64834d494f9416d1f17a5f93a3e2035ad604fee47cfbba62be60 Args: -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null -port=8080 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 10m memory: 20Mi Requests: cpu: 10m memory: 20Mi State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: etcd-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-62irv: Type: Secret (a secret that should populate this volume) SecretName: default-token-62irv Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Created Created container with docker id 49216f478c99 19m 19m 1 {scheduler } Normal Scheduled Successfully assigned kube-dns-v10-pgqig to minion-1 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Pulled Container image "gcr.io/google_containers/etcd:2.0.9" already present on machine 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Created Created container with docker id 4cbdf45e1ba0 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Started Started container with docker id 4cbdf45e1ba0 19m 19m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id fdb1278aaf93 19m 19m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id fdb1278aaf93 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz:1.0" already present on machine 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Created Created container with docker id b46d2bb06a72 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Started Started container with docker id b46d2bb06a72 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Started Started container with docker id 49216f478c99 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Pulled Container image "gcr.io/google_containers/kube2sky:1.12" already present on machine 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id fdb1278aaf93: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 70474f1ca315 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 70474f1ca315 17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 70474f1ca315: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 
17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 8e18a0b404dd 17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 8e18a0b404dd 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 00b4e2a46779 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 8e18a0b404dd: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 00b4e2a46779 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 3df9a304e09a 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 00b4e2a46779: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 3df9a304e09a 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 3df9a304e09a: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 4b3ee7fccfd2 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 4b3ee7fccfd2 14m 14m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 4b3ee7fccfd2: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 14m 14m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id d1100cb0a5be: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 13m 13m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 19e2bbda4f80: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 12m 12m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id c424c0ad713a: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 
19m 1s 29 {kubelet minion-1} spec.containers{skydns} Normal Pulled Container image "gcr.io/google_containers/skydns:2015-10-13-8c72f8c" already present on machine 12m 1s 19 {kubelet minion-1} spec.containers{skydns} Normal Killing (events with common reason combined) 14m 1s 23 {kubelet minion-1} spec.containers{skydns} Normal Created (events with common reason combined) 14m 1s 23 {kubelet minion-1} spec.containers{skydns} Normal Started (events with common reason combined) 18m 1s 30 {kubelet minion-1} spec.containers{skydns} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 503 18m 1s 114 {kubelet minion-1} spec.containers{skydns} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 503 (etcd) $ kubectl logs kube-dns-v10-0biid skydns --namespace=kube-system 2016/01/22 00:23:03 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [2] 2016/01/22 00:23:03 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/01/22 00:23:03 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/01/22 00:23:09 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:13 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:17 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:21 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:25 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:29 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:33 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:37 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:41 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" [vagrant@kubernetes-master ~]$ kubectl logs kube-dns-v10-0biid etcd --namespace=kube-system 2016/01/21 23:28:10 etcd: listening for peers on http://localhost:2380 2016/01/21 23:28:10 etcd: listening for peers on http://localhost:7001 2016/01/21 23:28:10 etcd: listening for client requests on http://127.0.0.1:2379 2016/01/21 23:28:10 etcd: listening for client requests on http://127.0.0.1:4001 2016/01/21 23:28:10 etcdserver: datadir is valid for the 2.0.1 format 2016/01/21 23:28:10 etcdserver: name = default 2016/01/21 23:28:10 etcdserver: data dir = /var/etcd/data 2016/01/21 23:28:10 etcdserver: member dir = /var/etcd/data/member 2016/01/21 23:28:10 etcdserver: heartbeat = 100ms 2016/01/21 23:28:10 etcdserver: election = 1000ms 2016/01/21 23:28:10 etcdserver: snapshot count = 10000 2016/01/21 23:28:10 etcdserver: advertise client URLs = http://127.0.0.1:2379,http://127.0.0.1:4001 2016/01/21 23:28:10 etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001 2016/01/21 23:28:10 etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001 2016/01/21 23:28:10 etcdserver: start member 6a5871dbdd12c17c in cluster f68652439e3f8f2a 2016/01/21 23:28:10 raft: 6a5871dbdd12c17c became follower at term 0 2016/01/21 23:28:10 raft: newRaft 6a5871dbdd12c17c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] 2016/01/21 23:28:10 raft: 6a5871dbdd12c17c became follower at term 1 2016/01/21 23:28:10 etcdserver: added local member 6a5871dbdd12c17c [http://localhost:2380 http://localhost:7001] to cluster 
f68652439e3f8f2a 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c is starting a new election at term 1 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c became candidate at term 2 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c received vote from 6a5871dbdd12c17c at term 2 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c became leader at term 2 2016/01/21 23:28:12 raft.node: 6a5871dbdd12c17c elected leader 6a5871dbdd12c17c at term 2 2016/01/21 23:28:12 etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379 http://127.0.0.1:4001]} to cluster f68652439e3f8f2a (kube2sky) I0121 23:28:19.352170 1 kube2sky.go:436] Etcd server found: http://127.0.0.1:4001 I0121 23:28:20.354200 1 kube2sky.go:503] Using https://10.254.0.1:443 for kubernetes master I0121 23:28:20.354248 1 kube2sky.go:504] Using kubernetes API <nil> (skydns) kubectl logs kube-dns-v10-0biid skydns --namespace=kube-system 2016/01/22 00:27:43 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [2] 2016/01/22 00:27:43 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/01/22 00:27:43 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/01/22 00:27:49 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:27:53 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:27:57 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:01 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:05 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:09 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:13 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:17 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" The service endpoint IP does NOT seem to be getting set: kubectl describe svc kube-dns --namespace=kube-system Name: kube-dns Namespace: kube-system Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS Selector: k8s-app=kube-dns Type: ClusterIP IP: 10.254.0.10 Port: dns 53/UDP Endpoints: Port: dns-tcp 53/TCP Endpoints: Session Affinity: None No events. I have double checked the serviceaccounts and that all seems configured correctly: kubectl get secrets --all-namespaces NAMESPACE NAME TYPE DATA AGE default default-token-z71xj kubernetes.io/service-account-token 2 1h kube-system default-token-wce74 kubernetes.io/service-account-token 2 1h kube-system token-system-controller-manager-master Opaque 1 1h kube-system token-system-dns Opaque 1 1h kube-system token-system-kubectl-master Opaque 1 1h kube-system token-system-kubelet-minion-1 Opaque 1 1h kube-system token-system-logging Opaque 1 1h kube-system token-system-monitoring Opaque 1 1h kube-system token-system-proxy-minion-1 Opaque 1 1h kube-system token-system-scheduler-master Opaque 1 1h The default secret for kube-system namespaces which matches the one the POD is using. 
kubectl describe secrets default-token-wce74 --namespace=kube-system Name: default-token-wce74 Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name=default,kubernetes.io/service-account.uid=70da0a10-c096-11e5-aa7b-08002771c788 Type: kubernetes.io/service-account-token Data ==== token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXdjZTc0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MGRhMGExMC1jMDk2LTExZTUtYWE3Yi0wODAwMjc3MWM3ODgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.sykf8qmh9ekAEHnSPAMLPz04zebvDJhb72A2YC1Y8_BXoA57U7KRAVDVyyxQHrEUSlHsSfxzqHHOcLniPQbqWZxc0bK4taV6zdBKIgndEthz0HGJQJdfZJKxurP5dhI6TOIpeLYpUE6BN6ubsVQiJksVLK_Lfq_c1posqAUi8eXD-KsqRDA98JMUZyirRGRXzZfF7-KscIqys7AiHAURHHwDibjmXIdYKBpDwc6hOIATpS3r6rLj30R1hNYy4u2GkpNsIYo83zIt515rnfCH9Yq1syT6-qho0SaPnj3us-uT8ZXF0x_7SlChV9Wx5Mo6kW3EHg6-A6q6m3R0KlsHjQ ca.crt: 1387 bytes I have also kubectl exec'd into the kube2sky container and the ca.crt matches the one on the server. ### Response: It seems that there were two problems I had: Cert Creation My implementation is based off the ansible deployment found here: https://github.com/kubernetes/contrib/tree/master/ansible This deployment seems to generate the certs for all networking interfaces. It also adds IP: in front of them, and then the script that generates the certs (make-ca-cert.sh) prepends IP again. Not 100% sure if that is okay. However, I changed it to just generate certs for the networking interface and removed the additional IP:, and that seems to have resolved the issue. Very good thread explaining certs, how to create them and how they work with Kubernetes: https://github.com/kubernetes/kubernetes/issues/11000 APIServer Setting --advertise-address Also, apparently I needed to set --advertise-address as well for the apiserver. Adjusting these two things seemed to have resolved the issue.
Whenever DNS gets run on a kubelet other than the one that resides on the master node, the Liveness and Readiness probes for skydns keep failing. I am deploying the add-ons as a service similar to what is used in the salt cluster. I have configured my system to use tokens and have verified that a token gets generated for system:dns and gets configured correctly for the kubelet. Is there something additional I need to do inside the skydns rc/svc yamls as well because of this? Salt Cluster: https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase/salt/kube-addons Ansible Deployment: https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons/files I am using the standard skydns rc/svc yamls. Pod Description: Name: kube-dns-v10-pgqig Namespace: kube-system Image(s): gcr.io/google_containers/etcd:2.0.9,gcr.io/google_containers/kube2sky:1.12,gcr.io/google_containers/skydns:2015-10-13-8c72f8c,gcr.io/google_containers/exechealthz:1.0 Node: minion-1/172.28.129.2 Start Time: Thu, 21 Jan 2016 08:54:50 -0800 Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v10 Status: Running Reason: Message: IP: 18.16.18.9 Replication Controllers: kube-dns-v10 (1/1 replicas created) Containers: etcd: Container ID: docker://49216f478c99fcd3c25763e99bb18861d31025a0cadd538f9590295e78846f69 Image: gcr.io/google_containers/etcd:2.0.9 Image ID: docker://b6b9a86dc06aa1361357ca1b105feba961f6a4145adca6c54e142c0be0fe87b0 Command: /usr/local/bin/etcd -data-dir /var/etcd/data -listen-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001 -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001 -initial-cluster-token skydns-etcd QoS Tier: cpu: Guaranteed memory: Guaranteed Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: kube2sky: Container ID: docker://4cbdf45e1ba0a6a820120c934473e61bf74af49d1ff42a0da01abd593516f4ee Image: gcr.io/google_containers/kube2sky:1.12 Image ID: docker://b8f3273706d3fc51375779110828379bdbb663e556cca3925e87fbc614725bb1 Args: -domain=cluster.local -kube_master_url=http://master:8080 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: memory: 50Mi cpu: 100m Requests: memory: 50Mi cpu: 100m State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: skydns: Container ID: docker://bd3103f514dcc4e42ff2c126446d963d03ef1101833239926c84d5c0ba577929 Image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c Image ID: docker://763c92e53f311c40a922628a34daf0be4397463589a7d148cea8291f02c12a5d Args: -machines=http://127.0.0.1:4001 -addr=0.0.0.0:53 -ns-rotate=false -domain=cluster.local.
QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Running Started: Thu, 21 Jan 2016 09:13:50 -0800 Last Termination State: Terminated Reason: Error Exit Code: 2 Started: Thu, 21 Jan 2016 09:13:14 -0800 Finished: Thu, 21 Jan 2016 09:13:50 -0800 Ready: False Restart Count: 28 Environment Variables: healthz: Container ID: docker://b46d2bb06a72cda25565b4f40ce956f252dce5df7f590217b3307126ec29e7c7 Image: gcr.io/google_containers/exechealthz:1.0 Image ID: docker://4f3d04b1d47b64834d494f9416d1f17a5f93a3e2035ad604fee47cfbba62be60 Args: -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null -port=8080 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 10m memory: 20Mi Requests: cpu: 10m memory: 20Mi State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: etcd-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-62irv: Type: Secret (a secret that should populate this volume) SecretName: default-token-62irv Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Created Created container with docker id 49216f478c99 19m 19m 1 {scheduler } Normal Scheduled Successfully assigned kube-dns-v10-pgqig to minion-1 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Pulled Container image "gcr.io/google_containers/etcd:2.0.9" already present on machine 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Created Created container with docker id 4cbdf45e1ba0 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Started Started container with docker id 4cbdf45e1ba0 19m 19m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id fdb1278aaf93 19m 19m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id fdb1278aaf93 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz:1.0" already present on machine 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Created Created container with docker id b46d2bb06a72 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Started Started container with docker id b46d2bb06a72 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Started Started container with docker id 49216f478c99 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Pulled Container image "gcr.io/google_containers/kube2sky:1.12" already present on machine 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id fdb1278aaf93: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 70474f1ca315 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 70474f1ca315 17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 70474f1ca315: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 
17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 8e18a0b404dd 17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 8e18a0b404dd 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 00b4e2a46779 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 8e18a0b404dd: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 00b4e2a46779 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 3df9a304e09a 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 00b4e2a46779: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 3df9a304e09a 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 3df9a304e09a: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 4b3ee7fccfd2 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 4b3ee7fccfd2 14m 14m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 4b3ee7fccfd2: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 14m 14m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id d1100cb0a5be: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 13m 13m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 19e2bbda4f80: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 12m 12m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id c424c0ad713a: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 
19m 1s 29 {kubelet minion-1} spec.containers{skydns} Normal Pulled Container image "gcr.io/google_containers/skydns:2015-10-13-8c72f8c" already present on machine 12m 1s 19 {kubelet minion-1} spec.containers{skydns} Normal Killing (events with common reason combined) 14m 1s 23 {kubelet minion-1} spec.containers{skydns} Normal Created (events with common reason combined) 14m 1s 23 {kubelet minion-1} spec.containers{skydns} Normal Started (events with common reason combined) 18m 1s 30 {kubelet minion-1} spec.containers{skydns} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 503 18m 1s 114 {kubelet minion-1} spec.containers{skydns} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 503 (etcd) $ kubectl logs kube-dns-v10-0biid skydns --namespace=kube-system 2016/01/22 00:23:03 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [2] 2016/01/22 00:23:03 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/01/22 00:23:03 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/01/22 00:23:09 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:13 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:17 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:21 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:25 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:29 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:33 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:37 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:41 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" [vagrant@kubernetes-master ~]$ kubectl logs kube-dns-v10-0biid etcd --namespace=kube-system 2016/01/21 23:28:10 etcd: listening for peers on http://localhost:2380 2016/01/21 23:28:10 etcd: listening for peers on http://localhost:7001 2016/01/21 23:28:10 etcd: listening for client requests on http://127.0.0.1:2379 2016/01/21 23:28:10 etcd: listening for client requests on http://127.0.0.1:4001 2016/01/21 23:28:10 etcdserver: datadir is valid for the 2.0.1 format 2016/01/21 23:28:10 etcdserver: name = default 2016/01/21 23:28:10 etcdserver: data dir = /var/etcd/data 2016/01/21 23:28:10 etcdserver: member dir = /var/etcd/data/member 2016/01/21 23:28:10 etcdserver: heartbeat = 100ms 2016/01/21 23:28:10 etcdserver: election = 1000ms 2016/01/21 23:28:10 etcdserver: snapshot count = 10000 2016/01/21 23:28:10 etcdserver: advertise client URLs = http://127.0.0.1:2379,http://127.0.0.1:4001 2016/01/21 23:28:10 etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001 2016/01/21 23:28:10 etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001 2016/01/21 23:28:10 etcdserver: start member 6a5871dbdd12c17c in cluster f68652439e3f8f2a 2016/01/21 23:28:10 raft: 6a5871dbdd12c17c became follower at term 0 2016/01/21 23:28:10 raft: newRaft 6a5871dbdd12c17c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] 2016/01/21 23:28:10 raft: 6a5871dbdd12c17c became follower at term 1 2016/01/21 23:28:10 etcdserver: added local member 6a5871dbdd12c17c [http://localhost:2380 http://localhost:7001] to cluster 
f68652439e3f8f2a 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c is starting a new election at term 1 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c became candidate at term 2 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c received vote from 6a5871dbdd12c17c at term 2 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c became leader at term 2 2016/01/21 23:28:12 raft.node: 6a5871dbdd12c17c elected leader 6a5871dbdd12c17c at term 2 2016/01/21 23:28:12 etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379 http://127.0.0.1:4001]} to cluster f68652439e3f8f2a (kube2sky) I0121 23:28:19.352170 1 kube2sky.go:436] Etcd server found: http://127.0.0.1:4001 I0121 23:28:20.354200 1 kube2sky.go:503] Using https://10.254.0.1:443 for kubernetes master I0121 23:28:20.354248 1 kube2sky.go:504] Using kubernetes API <nil> (skydns) kubectl logs kube-dns-v10-0biid skydns --namespace=kube-system 2016/01/22 00:27:43 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [2] 2016/01/22 00:27:43 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/01/22 00:27:43 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/01/22 00:27:49 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:27:53 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:27:57 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:01 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:05 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:09 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:13 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:17 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" The service endpoint IP does NOT seem to be getting set: kubectl describe svc kube-dns --namespace=kube-system Name: kube-dns Namespace: kube-system Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS Selector: k8s-app=kube-dns Type: ClusterIP IP: 10.254.0.10 Port: dns 53/UDP Endpoints: Port: dns-tcp 53/TCP Endpoints: Session Affinity: None No events. I have double checked the serviceaccounts and that all seems configured correctly: kubectl get secrets --all-namespaces NAMESPACE NAME TYPE DATA AGE default default-token-z71xj kubernetes.io/service-account-token 2 1h kube-system default-token-wce74 kubernetes.io/service-account-token 2 1h kube-system token-system-controller-manager-master Opaque 1 1h kube-system token-system-dns Opaque 1 1h kube-system token-system-kubectl-master Opaque 1 1h kube-system token-system-kubelet-minion-1 Opaque 1 1h kube-system token-system-logging Opaque 1 1h kube-system token-system-monitoring Opaque 1 1h kube-system token-system-proxy-minion-1 Opaque 1 1h kube-system token-system-scheduler-master Opaque 1 1h The default secret for kube-system namespaces which matches the one the POD is using. 
kubectl describe secrets default-token-wce74 --namespace=kube-system Name: default-token-wce74 Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name=default,kubernetes.io/service-account.uid=70da0a10-c096-11e5-aa7b-08002771c788 Type: kubernetes.io/service-account-token Data ==== token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXdjZTc0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MGRhMGExMC1jMDk2LTExZTUtYWE3Yi0wODAwMjc3MWM3ODgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.sykf8qmh9ekAEHnSPAMLPz04zebvDJhb72A2YC1Y8_BXoA57U7KRAVDVyyxQHrEUSlHsSfxzqHHOcLniPQbqWZxc0bK4taV6zdBKIgndEthz0HGJQJdfZJKxurP5dhI6TOIpeLYpUE6BN6ubsVQiJksVLK_Lfq_c1posqAUi8eXD-KsqRDA98JMUZyirRGRXzZfF7-KscIqys7AiHAURHHwDibjmXIdYKBpDwc6hOIATpS3r6rLj30R1hNYy4u2GkpNsIYo83zIt515rnfCH9Yq1syT6-qho0SaPnj3us-uT8ZXF0x_7SlChV9Wx5Mo6kW3EHg6-A6q6m3R0KlsHjQ ca.crt: 1387 bytes I have also kubectl exec'd into the kube2sky container and the ca.crt matches the one on the server.