<p>How does communication between two different pods happen in Kubernetes?</p> <p>In my case I have two pods, <em>frontend</em> and <em>backend</em>, each with different containers. I want my frontend pod to communicate with the backend pod, but I don't want to use the backend pod's IP (i.e. hard-coded).</p> <p>Is it possible through services?</p>
<blockquote> <p>Is it possible through services?</p> </blockquote> <p>Yes, <a href="https://kubernetes.io/docs/concepts/services-networking/" rel="noreferrer">services</a> are the recommended way to handle this. Once you have your services set up for each pod (or replication controller, as is recommended), you can find the service IP via the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="noreferrer">service environment variable</a>, e.g. <code>BACKEND_SERVICE_HOST</code> and <code>BACKEND_SERVICE_PORT</code> for a "backend" service.</p>
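<p>As a hedged sketch (the name <code>backend</code>, the label <code>app: backend</code> and port <code>8080</code> are assumptions for illustration, not taken from the question), a minimal Service manifest for the backend could look like the following. Once it exists, the frontend can also reach it by its DNS name -- <code>backend</code>, or <code>backend.default.svc.cluster.local</code> assuming the default namespace -- if the cluster runs the DNS add-on, instead of relying on the environment variables:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # must match the labels on the backend pods
  ports:
    - port: 8080        # port the frontend connects to (on the service IP / DNS name)
      targetPort: 8080  # port the backend container actually listens on
</code></pre>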
<p>Instead of <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/managing-deployments.md#organizing-resource-configurations" rel="nofollow">this</a> YAML file, I want to pass a JSON file. What is the equivalent JSON for it? I want to use it in the <code>kubectl create -f ...</code> command:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-nginx-svc labels: app: nginx spec: type: LoadBalancer ports: - port: 80 selector: app: nginx --- apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre>
<p>There are a lot of online <code>YAML</code> to <code>JSON</code> (and vice versa) converters covering the 1.1 and 1.2 specs.</p> <p>I haven't used <code>Kubernetes</code> before, but I can see that you can pass multiple documents. Basically the <code>YAML</code> structure that you use is a shorthand for two documents. <code>JSON</code> doesn't have an equivalent to this, so you have to break it into two separate documents (files).</p> <p>The three dashes in <code>YAML</code> are a way of defining multiple documents. So basically the above is not one <code>JSON</code> object/file, but two.</p> <p>The first:</p> <pre><code>{ "apiVersion": "v1", "kind": "Service", "metadata": { "name": "my-nginx-svc", "labels": { "app": "nginx" } }, "spec": { "type": "LoadBalancer", "ports": [ { "port": 80 } ], "selector": { "app": "nginx" } } } </code></pre> <p>And the second:</p> <pre><code>{ "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "my-nginx" }, "spec": { "replicas": 2, "template": { "metadata": { "labels": { "app": "nginx" } }, "spec": { "containers": [ { "name": "nginx", "image": "nginx", "ports": [ { "containerPort": 80 } ] } ] } } } } </code></pre> <p>As a side note (though this is not useful for your purpose), to represent them as one <code>JSON</code> object you would need an array. But this would mean that the <code>YAML</code> would have to change too. So in order to have this</p> <pre><code>[ { "apiVersion": "v1", "kind": "Service", "metadata": { "name": "my-nginx-svc", "labels": { "app": "nginx" } }, "spec": { "type": "LoadBalancer", "ports": [ { "port": 80 } ], "selector": { "app": "nginx" } } }, { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "my-nginx" }, "spec": { "replicas": 2, "template": { "metadata": { "labels": { "app": "nginx" } }, "spec": { "containers": [ { "name": "nginx", "image": "nginx", "ports": [ { "containerPort": 80 } ] } ] } } } } ] </code></pre> <p>the <code>YAML</code> equivalent would be this:</p> <pre><code>--- - apiVersion: v1 kind: Service metadata: name: my-nginx-svc labels: app: nginx spec: type: LoadBalancer ports: - port: 80 selector: app: nginx - apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre>
<p>I run the CoreOS k8s cluster on Mac OSX, which means it's running inside VirtualBox + Vagrant</p> <p>I have in my service.yaml file:</p> <pre><code>spec: type: NodePort </code></pre> <p>When I type:</p> <pre><code>kubectl get services </code></pre> <p>I see:</p> <pre><code>NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR kubernetes 10.100.0.1 &lt;none&gt; 443/TCP &lt;none&gt; my-frontend 10.100.250.90 nodes 8000/TCP name=my-app </code></pre> <p>What is the "nodes" external IP? How do I access my-frontend externally?</p>
<p>In addition to "NodePort" services, there are a few other ways to interact with Kubernetes services from outside the cluster:</p> <ul> <li>Use the service type "LoadBalancer". It works only with some cloud providers and will not work with VirtualBox, but it is good to know that the feature exists. <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Link to the documentation</a></li> <li>Use one of the newer features called "ingress". Here is the description from the manual: <strong>"An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc."</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Link to the documentation</a></li> <li>If Kubernetes is not a strict requirement and you can switch to the latest OpenShift Origin (which is "Kubernetes on steroids"), you can use the Origin feature called "router". <ul> <li><a href="https://docs.openshift.org/latest/architecture/index.html" rel="nofollow noreferrer">Information about OpenShift Origin</a></li> <li><a href="https://docs.openshift.org/latest/architecture/core_concepts/routes.html" rel="nofollow noreferrer">Information about OpenShift Origin routes</a></li> </ul></li> </ul>
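<p>For the NodePort case in the question itself, the service is reachable on every node's IP at the allocated node port, so with the VirtualBox setup that means the VM's IP (e.g. the address Vagrant assigned to the CoreOS box) on that port. As a hedged sketch, with the selector and the ports other than 8000 being assumptions rather than taken from the question's full manifest, the node port can also be pinned explicitly:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-frontend
spec:
  type: NodePort
  selector:
    name: my-app
  ports:
    - port: 8000        # cluster-internal service port
      targetPort: 8000  # port the container listens on
      nodePort: 30080   # must fall in the cluster's node-port range (30000-32767 by default)
</code></pre>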
<p>Can I run multiple services on port 80 in Kubernetes on Google Container Engine? Each service has a dedicated external IP address, so theoretically the router should be able to route to each service based on its IP address.</p> <p>So far I have created pods for "frontend-1" and "frontend-2" on Container Engine. I tried to create separate services for them, both running on port 80 with unique external IPs, but it didn't work. Is there another way to accomplish this in Kubernetes without using a custom routing service?</p> <p>service-1.yaml:</p> <pre><code>id: service-1 port: 80 containerPort: 8080 selector: name: frontend-1 createExternalLoadBalancer: true </code></pre> <p>service-2.yaml:</p> <pre><code>id: service-2 port: 80 containerPort: 8081 selector: name: frontend-2 createExternalLoadBalancer: true </code></pre>
<p>Kubernetes 1.1 has an <code>Ingress</code> type, which allows you to route different DNS names/IPs to different services. From <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/ingress.md" rel="noreferrer">GitHub</a>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test spec: rules: - host: foo.bar.com http: paths: - backend: serviceName: s1 servicePort: 80 - host: bar.foo.com http: paths: - backend: serviceName: s2 servicePort: 80 </code></pre>
<p>What is detached mode in the Docker world? I read this article <a href="https://docs.docker.com/engine/reference/run/#detached-vs-foreground" rel="noreferrer">Link</a>, but it does not explain exactly what detached mode means.</p>
<p>You can start a Docker container in detached mode with the <code>-d</code> option, so the container starts up and runs in the background. That means you start the container and can use the console for other commands after startup.</p> <p>The opposite of detached mode is foreground mode, which is the default when the <code>-d</code> option is not used. In this mode, the console you are using to execute <code>docker run</code> is attached to standard input, output and error. That means your console is attached to the container's process.</p> <p>In detached mode, you can follow the standard output of your Docker container with <code>docker logs -f &lt;container_ID&gt;</code>.</p> <p>Just try both options. I always use detached mode to run my containers. I hope that makes it a little clearer.</p>
<p>Kubernetes is an orchestration tool for the management of containers. Kubernetes creates pods, which contain containers, instead of managing containers directly.</p> <p>I read this about <a href="http://kubernetes.io/v1.0/docs/user-guide/pods.html" rel="noreferrer">pods</a>.</p> <p>I'm working with OpenShift V3, which uses pods. But in my apps, all demos and all examples I see, one pod contains one container (it's possible to contain more, and that could be an advantage of using pods). In an OpenShift environment I don't see the advantage of these pods.</p> <p>Can someone explain why OpenShift V3 uses Kubernetes with pods and containers, instead of an orchestration tool that works with containers directly (without pods)?</p>
<p>There are many cases where our users want to run pods with multiple containers within OpenShift. A common use-case for running multiple containers is where a pod has a 'primary' container that does some job, and a 'side-car' container that does something like write logs to a logging agent.</p> <p>The motivation for pods is twofold -- to make it easier to share resources between containers, and to enable deploying and replicating groups of containers that share resources. You can read more about them in the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pods.md" rel="noreferrer">user-guide</a>.</p> <p>The reason we still use a Pod when there is only a single container is that containers do not have all the notions that are attached to pods. For example, pods have IP addresses. Containers do not -- they share the IP address associated with the pod's network namespace.</p> <p>Hope that helps. Let me know if you'd like more clarification, or we can discuss on Slack.</p>
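<p>As a hedged illustration of the 'primary' plus 'side-car' pattern described above (the image names and the shared volume are invented for the example, not taken from OpenShift), a single pod with two containers sharing a log volume might look like this; both containers also share the pod's IP address and can reach each other over <code>localhost</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}              # scratch volume shared by both containers
  containers:
    - name: app                 # 'primary' container doing the real work
      image: example/app:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper         # 'side-car' that forwards the logs
      image: example/log-shipper:latest
      volumeMounts:
        - name: logs
          mountPath: /logs
</code></pre>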
<p>When I build a Kubernetes service in two steps (1. replication controller; 2. expose the replication controller) my exposed service gets an external IP address:</p> <pre><code>initially: NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE app-1 10.67.241.95 80/TCP app=app-1 7s </code></pre> <p>and after about 30s:</p> <pre><code>NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE app-1 10.67.241.95 104.155.93.79 80/TCP app=app-1 35s </code></pre> <p>But when I do it in one step, providing the <code>Service</code> and the <code>ReplicationController</code> to <code>kubectl create -f dir_with_2_files</code>, the service gets created but it does not get an external IP:</p> <pre><code>NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE app-1 10.67.251.171 &lt;none&gt; 80/TCP app=app-1 2m </code></pre> <p>The <code>&lt;none&gt;</code> under External IP worries me.</p> <p>For the Service I use the JSON file:</p> <pre><code>{ "apiVersion": "v1", "kind": "Service", "metadata": { "name": "app-1" }, "spec": { "selector": { "app": "app-1" }, "ports": [ { "port": 80, "targetPort": 8000 } ] } } </code></pre> <p>and for the ReplicationController:</p> <pre><code>{ "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "app-1" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "app": "app-1" } }, "spec": { "containers": [ { "name": "service", "image": "gcr.io/sigma-cairn-99810/service:latest", "ports": [ { "containerPort": 8000 } ] } ] } } } } </code></pre> <p>and to expose the Service manually I use the command:</p> <pre><code>kubectl expose rc app-1 --port 80 --target-port=8000 --type="LoadBalancer" </code></pre>
<p>If you don't specify the type of a Service it defaults to ClusterIP. If you want the equivalent of <code>expose</code> you must:</p> <ol> <li>Make sure your Service selects pods from the RC via matching label selectors</li> <li>Make the Service <code>type=LoadBalancer</code></li> </ol>
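<p>Concretely, a hedged YAML version of the question's service with point 2 applied might look like this (everything except <code>type: LoadBalancer</code> is carried over from the JSON above, and the selector already matches the RC's pod template labels, satisfying point 1):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-1
spec:
  type: LoadBalancer    # what `kubectl expose ... --type="LoadBalancer"` sets for you
  selector:
    app: app-1          # must match the labels in the ReplicationController's pod template
  ports:
    - port: 80
      targetPort: 8000
</code></pre>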
<p>My OS is <code>ubuntu 14.04.3</code> Server, and I want to build <code>kubernetes</code>.</p> <p>Firstly, I use "<code>apt-get</code>" command install <code>Golang</code>, but the version is <code>1.2.1</code>, so I use <code>apt-get --purge autoremove</code> command to remove it.And install the newest <code>1.5.1</code> from golang website.</p> <p>But executing <code>make</code> command, it seems <code>kubernetes</code> always "think" current <code>golang</code> is <code>1.2.1</code>:</p> <pre><code>$ make hack/build-go.sh +++ [1203 06:20:30] Building go targets for linux/amd64: cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/kubelet cmd/kubemark cmd/hyperkube cmd/linkcheck plugin/cmd/kube-scheduler cmd/kubectl cmd/integration cmd/gendocs cmd/genkubedocs cmd/genman cmd/mungedocs cmd/genbashcomp cmd/genconversion cmd/gendeepcopy cmd/genswaggertypedocs examples/k8petstore/web-server/src github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test +++ [1203 06:20:30] +++ Warning: stdlib pkg with cgo flag not found. +++ [1203 06:20:30] +++ Warning: stdlib pkg cannot be rebuilt since /usr/local/go/pkg is not writable by nan +++ [1203 06:20:30] +++ Warning: Make /usr/local/go/pkg writable for nan for a one-time stdlib install, Or +++ [1203 06:20:30] +++ Warning: Rebuild stdlib using the command 'CGO_ENABLED=0 go install -a -installsuffix cgo std' +++ [1203 06:20:30] +++ Falling back to go build, which is slower # k8s.io/kubernetes/pkg/util/yaml _output/local/go/src/k8s.io/kubernetes/pkg/util/yaml/decoder.go:26: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/ghodss/yaml.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] # k8s.io/kubernetes/pkg/util/validation _output/local/go/src/k8s.io/kubernetes/pkg/util/validation/errors.go:23: import /home/nan/kubernetes/_output/local/go/pkg/linux_amd64/k8s.io/kubernetes/pkg/util/errors.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] # k8s.io/kubernetes/pkg/api/resource _output/local/go/src/k8s.io/kubernetes/pkg/api/resource/quantity.go:27: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/speter.net/go/exp/math/dec/inf.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] # github.com/spf13/cobra Godeps/_workspace/src/github.com/spf13/cobra/command.go:27: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/inconshreveable/mousetrap.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] # k8s.io/kubernetes/pkg/util/iptables _output/local/go/src/k8s.io/kubernetes/pkg/util/iptables/iptables.go:27: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/coreos/go-semver/semver.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] # github.com/prometheus/common/expfmt Godeps/_workspace/src/github.com/prometheus/common/expfmt/decode.go:23: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/prometheus/client_model/go.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] # github.com/emicklei/go-restful Godeps/_workspace/src/github.com/emicklei/go-restful/container.go:16: import /home/nan/kubernetes/Godeps/_workspace/pkg/linux_amd64/github.com/emicklei/go-restful/log.a: object is [linux amd64 go1.2.1 X:none] expected [linux amd64 go1.5.1 X:none] !!! 
Error in /home/nan/kubernetes/hack/lib/golang.sh:376 'CGO_ENABLED=0 go build -o "${outfile}" "${goflags[@]:+${goflags[@]}}" -ldflags "${goldflags}" "${binary}"' exited with status 2 Call stack: 1: /home/nan/kubernetes/hack/lib/golang.sh:376 kube::golang::build_binaries_for_platform(...) 2: /home/nan/kubernetes/hack/lib/golang.sh:535 kube::golang::build_binaries(...) 3: hack/build-go.sh:26 main(...) Exiting with status 1 !!! Error in /home/nan/kubernetes/hack/lib/golang.sh:456 '( kube::golang::setup_env; local host_platform; host_platform=$(kube::golang::host_platform); local goflags goldflags; eval "goflags=(${KUBE_GOFLAGS:-})"; goldflags="${KUBE_GOLDFLAGS:-} $(kube::version::ldflags)"; local use_go_build; local -a targets=(); local arg; for arg in "$@"; do if [[ "${arg}" == "--use_go_build" ]]; then use_go_build=true; else if [[ "${arg}" == -* ]]; then goflags+=("${arg}"); else targets+=("${arg}"); fi; fi; done; if [[ ${#targets[@]} -eq 0 ]]; then targets=("${KUBE_ALL_TARGETS[@]}"); fi; local -a platforms=("${KUBE_BUILD_PLATFORMS[@]:+${KUBE_BUILD_PLATFORMS[@]}}"); if [[ ${#platforms[@]} -eq 0 ]]; then platforms=("${host_platform}"); fi; local binaries; binaries=($(kube::golang::binaries_from_targets "${targets[@]}")); local parallel=false; if [[ ${#platforms[@]} -gt 1 ]]; then local gigs; gigs=$(kube::golang::get_physmem); if [[ ${gigs} -ge ${KUBE_PARALLEL_BUILD_MEMORY} ]]; then kube::log::status "Multiple platforms requested and available ${gigs}G &gt;= threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in parallel"; parallel=true; else kube::log::status "Multiple platforms requested, but available ${gigs}G &lt; threshold ${KUBE_PARALLEL_BUILD_MEMORY}G, building platforms in serial"; parallel=false; fi; fi; if [[ "${parallel}" == "true" ]]; then kube::log::status "Building go targets for ${platforms[@]} in parallel (output will appear in a burst when complete):" "${targets[@]}"; local platform; for platform in "${platforms[@]}"; do ( kube::golang::set_platform_envs "${platform}"; kube::log::status "${platform}: go build started"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-}; kube::log::status "${platform}: go build finished" ) &amp;&gt; "/tmp//${platform//\//_}.build" &amp; done; local fails=0; for job in $(jobs -p); do wait ${job} || let "fails+=1"; done; for platform in "${platforms[@]}"; do cat "/tmp//${platform//\//_}.build"; done; exit ${fails}; else for platform in "${platforms[@]}"; do kube::log::status "Building go targets for ${platform}:" "${targets[@]}"; kube::golang::set_platform_envs "${platform}"; kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-}; done; fi )' exited with status 1 Call stack: 1: /home/nan/kubernetes/hack/lib/golang.sh:456 kube::golang::build_binaries(...) 2: hack/build-go.sh:26 main(...) Exiting with status 1 make: *** [all] Error 1 </code></pre> <p>But the <code>go</code> in "<code>PATH</code>" is <code>1.5.1</code>:</p> <pre><code>$ echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin $ which go /usr/local/go/bin/go $ go version go version go1.5.1 linux/amd64 </code></pre> <p>So how can I fix this issue? </p>
<p>As explained in "<a href="https://askubuntu.com/a/151943/5470">How can you completely remove a package?</a>", a <code>sudo apt-get --purge autoremove</code> might have removed go completely.<br> But that doesn't mean it has cleaned what was compiled before</p> <p>Make sure that <code>/home/nan/kubernetes/Godeps/_workspace</code> and <code>/home/nan/kubernetes/_output</code> are deleted after a <strong><code>make clean</code></strong> (See <a href="https://github.com/kubernetes/kubernetes/issues/16771" rel="nofollow noreferrer">issue 16771</a>).<br> Make sure <code>/usr/local/go/pkg</code> is writable for the user <code>nan</code>. (See <a href="https://github.com/kubernetes/kubernetes/blob/a5100ef05726237e4e1f44997852963c955b608c/hack/lib/golang.sh#L323-L325" rel="nofollow noreferrer"><code>hack/lib/golang.sh</code></a>)</p> <p>Finally, <a href="https://github.com/kubernetes/kubernetes/issues/16229" rel="nofollow noreferrer">issue 16229</a> mentions:</p> <blockquote> <p>Would be nice to at least update the docs to indicate that you can't use go>1.4.</p> </blockquote> <p>So try and <strong>install go 1.4.x only</strong>.</p> <p>Update: the <a href="https://stackoverflow.com/users/2106207/nan-xiao">OP Nan Xiao</a> reports <a href="https://stackoverflow.com/questions/34080574/the-kubernetes-cant-find-correct-go-in-ubuntu/34081489#comment56305801_34081489">in the comments</a> having managed to build it with go 1.5.1 or 1.5.2 without any more issue.</p>
<p>I have a replication controller that keeps starting a pod but it's never up. How do I get to the replication controller logs so I can debug this? <code>$ kubectl describe rc</code>:</p> <pre><code>Name: jenkins-leader-restored Namespace: default Image(s): gcr.io/cloud-solutions-images/jenkins-gcp-leader:master-5ca73a6 Selector: name=jenkins,role=leader Labels: name=jenkins,role=leader Replicas: 0 current / 1 desired Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed No volumes. Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 15m 15m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-xxr93 12m 12m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-1e44w 11m 11m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-y3llu 8m 8m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-wfd70 8m 8m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-8ji09 5m 5m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-p4wbc 4m 4m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-tvreo 1m 1m 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-l6rpy 56s 56s 1 {replication-controller } SuccessfulCreate Created pod: jenkins-leader-restored-4asg5 </code></pre> <p>Using the Automated Image Builds with Jenkins, Packer, and Kubernetes <a href="https://github.com/GoogleCloudPlatform/kube-jenkins-imager" rel="nofollow">repo</a>, the 'Practice Restoring a Backup' section. </p>
<p>Prashanth B. identified the root cause of my issue which was that there were two replication controllers using the same selectors, with different replica values running at the same time.</p> <p>The log location for kubelets (which run the pod) on the Google Compute Instance is, <code>/var/log/kubelet.log</code>. Looking here would have helped point out that the pod was immediately being removed.</p> <p>My troubleshooting could have gone like this:</p> <ol> <li><p>Identify that pod isn't running as intended: <code>kubectl get pods</code></p></li> <li><p>Check the replication controller: <code>kubectl describe rc</code></p></li> <li><p>Search logs for the pod that was created, as seen in the previous command: <code>grep xxr93 /var/log/kubelet.log</code></p> <pre><code>user@gke-stuff-d9adf8e28-node-13cl:~$ grep xxr93 /var/log/kubelet.log I1203 16:59:09.337110 3366 kubelet.go:2005] SyncLoop (ADD): "jenkins-leader-restored-xxr93_default" I1203 16:59:09.345356 3366 kubelet.go:2008] SyncLoop (UPDATE): "jenkins-leader-restored-xxr93_default" I1203 16:59:09.345423 3366 kubelet.go:2011] SyncLoop (REMOVE): "jenkins-leader-restored-xxr93_default" I1203 16:59:09.345503 3366 kubelet.go:2101] Failed to delete pod "jenkins-leader-restored-xxr93_default", err: pod not found I1203 16:59:09.483104 3366 manager.go:1707] Need to restart pod infra container for "jenkins-leader-restored-xxr93_default" because it is not found I1203 16:59:13.695134 3366 kubelet.go:1823] Killing unwanted pod "jenkins-leader-restored-xxr93" E1203 17:00:47.026865 3366 manager.go:1920] Error running pod "jenkins-leader-restored-xxr93_default" container "jenkins": impossible: cannot find the mounted volumes for pod "jenkins-leader-restored-xxr93_default" </code></pre></li> </ol>
<p>I am testing Openshift Origin v3. I installed it as a docker container following the instructions. I also deployed all the streams in roles/openshift_examples/files/examples/image-streams/image-streams-centos7.json.</p> <p>I am now testing the installation by deploying a dummy php application from Github. I am able to create the project and application. However the builds are stuck in status "pending". In the events tab, I see plenty of messages like this one:</p> <pre><code>"Unable to mount volumes for pod "hello-world-1-build_php1": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/9377d3b4-9887- 11e5-81fe-00215abe5482/volumes/kubernetes.io~secret/builder-dockercfg-x2ijq- push"): file does not exist (5 times in the last 40 seconds)" </code></pre> <p>I tried also with a java application and the tomcat docker image, but got the same error messages. Looks like a Kubernetes configuration issue.</p> <p>Any ideas?</p> <p>Thanks for your help</p> <p>UPDATE1: logs from the origin container show a bit more information about the error:</p> <pre><code>Unable to mount volumes for pod "deployment-example-2-deploy_test1": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/deployer-token-8cfv8"): file does not exist; skipping pod E1202 09:12:24.269145 4396 pod_workers.go:113] Error syncing pod 70f69f8c-98d3-11e5-8d98-00215abe5482, skipping: IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/70f69f8c-98d3-11e5-8d98-00215abe5482/volumes/kubernetes.io~secret/deployer-token-8cfv8"): file does not exist W1202 09:12:34.229374 4396 kubelet.go:1690] Orphaned volume "ac11a2b5-9880-11e5-81fe-00215abe5482/builder-dockercfg-va0cl-push" found, tearing down volume E1202 09:12:34.287847 4396 kubelet.go:1696] Could not tear down volume "ac11a2b5-9880-11e5-81fe-00215abe5482/builder-dockercfg-va0cl-push": IsLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes/pods/ac11a2b5-9880-11e5-81fe-00215abe5482/volumes/kubernetes.io~secret/builder-dockercfg-va0cl-push"): file does not exist </code></pre> <p>The log entries of the start of the origin container:</p> <pre><code>202 09:12:13.992293 4396 start_master.go:278] assetConfig.loggingPublicURL: invalid value '', Details: required to view aggregated container logs in the console W1202 09:12:13.992442 4396 start_master.go:278] assetConfig.metricsPublicURL: invalid value '', Details: required to view cluster metrics in the console I1202 09:12:14.784026 4396 plugins.go:71] No cloud provider specified. 
I1202 09:12:14.981775 4396 start_master.go:388] Starting master on 0.0.0.0:8443 (v1.1-270-ge592c18) I1202 09:12:14.981825 4396 start_master.go:389] Public master address is https://192.168.178.55:8443 I1202 09:12:14.981855 4396 start_master.go:393] Using images from "openshift/origin-&lt;component&gt;:v1.1" 2015-12-02 09:12:15.574421 I | etcdserver: name = openshift.local 2015-12-02 09:12:15.574455 I | etcdserver: data dir = openshift.local.etcd 2015-12-02 09:12:15.574465 I | etcdserver: member dir = openshift.local.etcd/member 2015-12-02 09:12:15.574472 I | etcdserver: heartbeat = 100ms 2015-12-02 09:12:15.574480 I | etcdserver: election = 1000ms 2015-12-02 09:12:15.574489 I | etcdserver: snapshot count = 0 2015-12-02 09:12:15.574505 I | etcdserver: advertise client URLs = https://192.168.178.55:4001 2015-12-02 09:12:15.606296 I | etcdserver: restarting member 2041635cb479cd3a in cluster 6a5d0422e654089a at commit index 3846 2015-12-02 09:12:15.609623 I | raft: 2041635cb479cd3a became follower at term 2 2015-12-02 09:12:15.609663 I | raft: newRaft 2041635cb479cd3a [peers: [], term: 2, commit: 3846, applied: 0, lastindex: 3846, lastterm: 2] 2015-12-02 09:12:15.609815 I | etcdserver: set snapshot count to default 10000 2015-12-02 09:12:15.609829 I | etcdserver: starting server... [version: 2.1.2, cluster version: to_be_decided] I1202 09:12:15.611196 4396 etcd.go:68] Started etcd at 192.168.178.55:4001 2015-12-02 09:12:15.624029 N | etcdserver: added local member 2041635cb479cd3a [https://192.168.178.55:7001] to cluster 6a5d0422e654089a 2015-12-02 09:12:15.624349 N | etcdserver: set the initial cluster version to 2.1.0 I1202 09:12:15.645761 4396 run_components.go:181] Using default project node label selector: 2015-12-02 09:12:17.009875 I | raft: 2041635cb479cd3a is starting a new election at term 2 2015-12-02 09:12:17.009915 I | raft: 2041635cb479cd3a became candidate at term 3 2015-12-02 09:12:17.009970 I | raft: 2041635cb479cd3a received vote from 2041635cb479cd3a at term 3 2015-12-02 09:12:17.009995 I | raft: 2041635cb479cd3a became leader at term 3 2015-12-02 09:12:17.010011 I | raft: raft.node: 2041635cb479cd3a elected leader 2041635cb479cd3a at term 3 2015-12-02 09:12:17.059445 I | etcdserver: published {Name:openshift.local ClientURLs:[https://192.168.178.55:4001]} to cluster 6a5d0422e654089a W1202 09:12:17.111262 4396 controller.go:290] Resetting endpoints for master service "kubernetes" to &amp;{{ } {kubernetes default c10e12cf-98d0-11e5-8d98-00215abe5482 8 0 2015-12-02 08:43:26 +0000 UTC &lt;nil&gt; &lt;nil&gt; map[] map[]} [{[{192.168.178.55 &lt;nil&gt;}] [] [{https 8443 TCP} {dns 53 UDP} {dns-tcp 53 TCP}]}]} I1202 09:12:17.524735 4396 master.go:232] Started Kubernetes API at 0.0.0.0:8443/api/v1 I1202 09:12:17.524914 4396 master.go:232] Started Kubernetes API Extensions at 0.0.0.0:8443/apis/extensions/v1beta1 I1202 09:12:17.525038 4396 master.go:232] Started Origin API at 0.0.0.0:8443/oapi/v1 I1202 09:12:17.525049 4396 master.go:232] Started OAuth2 API at 0.0.0.0:8443/oauth I1202 09:12:17.525055 4396 master.go:232] Started Login endpoint at 0.0.0.0:8443/login I1202 09:12:17.525061 4396 master.go:232] Started Web Console 0.0.0.0:8443/console/ I1202 09:12:17.525067 4396 master.go:232] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/ 2015-12-02 09:12:18.523290 I | http: TLS handshake error from 192.168.178.21:50932: EOF 2015-12-02 09:12:18.537124 I | http: TLS handshake error from 192.168.178.21:50933: EOF 2015-12-02 09:12:18.549780 I | http: TLS handshake error from 
192.168.178.21:50934: EOF 2015-12-02 09:12:18.556966 I | http: TLS handshake error from 192.168.178.21:50935: EOF 2015-12-02 09:12:20.117727 I | skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:53 [rcache 0] 2015-12-02 09:12:20.117804 I | skydns: ready for queries on cluster.local. for udp4://0.0.0.0:53 [rcache 0] I1202 09:12:20.217891 4396 run_components.go:176] DNS listening at 0.0.0.0:53 I1202 09:12:20.225439 4396 start_master.go:519] Controllers starting (*) E1202 09:12:20.702335 4396 serviceaccounts_controller.go:218] serviceaccounts "default" already exists I1202 09:12:21.505391 4396 nodecontroller.go:133] Sending events to api server. I1202 09:12:21.507690 4396 start_master.go:563] Started Kubernetes Controllers W1202 09:12:21.944254 4396 nodecontroller.go:572] Missing timestamp for Node intweb3. Assuming now as a timestamp. I1202 09:12:21.944570 4396 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"intweb3", UID:"intweb3", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'RegisteredNode' Node intweb3 event: Registered Node intweb3 in NodeController I1202 09:12:22.662116 4396 start_node.go:179] Starting a node connected to https://192.168.178.55:8443 I1202 09:12:22.670163 4396 plugins.go:71] No cloud provider specified. I1202 09:12:22.670239 4396 start_node.go:284] Starting node intweb3 (v1.1-270-ge592c18) W1202 09:12:22.681308 4396 node.go:121] Error running 'chcon' to set the kubelet volume root directory SELinux context: exit status 1 I1202 09:12:22.698136 4396 node.go:56] Connecting to Docker at unix:///var/run/docker.sock I1202 09:12:22.717904 4396 manager.go:128] cAdvisor running in container: "/docker/f80b92397b6eb9052cf318d7225d21eb66941fcb333f16fe2b0330af629f73dd" I1202 09:12:22.932096 4396 fs.go:108] Filesystem partitions: map[/dev/sda1:{mountpoint:/rootfs/boot major:8 minor:1 fsType: blockSize:0} /dev/mapper/intweb3--vg-root:{mountpoint:/rootfs major:252 minor:0 fsType: blockSize:0}] I1202 09:12:22.949204 4396 node.go:251] Started Kubernetes Proxy on 0.0.0.0 I1202 09:12:22.974678 4396 start_master.go:582] Started Origin Controllers I1202 09:12:22.999204 4396 machine.go:48] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" I1202 09:12:22.999311 4396 manager.go:163] Machine: {NumCores:2 CpuFrequency:2667000 MemoryCapacity:1010421760 MachineID: SystemUUID:26A5835E-1781-DD11-BBDA-5ABE54820021 BootID:6cbd9dcc-5d4d-414d-96e7-c8a41de013f7 Filesystems:[{Device:/dev/mapper/intweb3--vg-root Capacity:156112113664} {Device:/dev/sda1 Capacity:246755328}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:158737629184 Scheduler:none} 252:1:{Name:dm-1 Major:252 Minor:1 Size:1044381696 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:160041885696 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:00:21:5a:be:54:82 Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:1010421760 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown} I1202 09:12:23.010686 4396 manager.go:169] Version: {KernelVersion:3.19.0-25-generic ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:} I1202 09:12:23.011734 4396 server.go:820] Watching apiserver I1202 09:12:23.253556 4396 manager.go:191] Setting dockerRoot to /var/lib/docker I1202 09:12:23.270558 4396 plugins.go:56] 
Registering credential provider: .dockercfg I1202 09:12:23.363525 4396 server.go:779] Started kubelet E1202 09:12:23.363724 4396 kubelet.go:812] Image garbage collection failed: unable to find data for container / I1202 09:12:23.370771 4396 kubelet.go:833] Running in container "/kubelet" I1202 09:12:23.370860 4396 server.go:104] Starting to listen on 0.0.0.0:10250 I1202 09:12:23.734095 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.154869743 +0000 UTC): [579.19167ms] [579.19167ms] Decoded 1 nodes [579.193136ms] [1.466Β΅s] END I1202 09:12:23.734149 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.154865413 +0000 UTC): [3.352Β΅s] [3.352Β΅s] Decoding dir /openshift.io/imagestreams/test1 START [579.252571ms] [579.249219ms] Decoding dir /openshift.io/imagestreams/test1 END [579.255504ms] [2.933Β΅s] Decoded 1 nodes [579.257181ms] [1.677Β΅s] END I1202 09:12:23.734204 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.001854335 +0000 UTC): [1.676Β΅s] [1.676Β΅s] About to list directory [732.327694ms] [732.326018ms] List extracted [732.330138ms] [2.444Β΅s] END I1202 09:12:23.773150 4396 factory.go:236] Registering Docker factory I1202 09:12:23.779446 4396 factory.go:93] Registering Raw factory I1202 09:12:24.069082 4396 manager.go:1006] Started watching for new ooms in manager I1202 09:12:24.074624 4396 oomparser.go:183] oomparser using systemd I1202 09:12:24.111389 4396 kubelet.go:944] Node intweb3 was previously registered I1202 09:12:24.112362 4396 manager.go:250] Starting recovery of all containers I1202 09:12:24.166309 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.155013407 +0000 UTC): [1.011259672s] [1.011259672s] Decoded 1 nodes [1.011261767s] [2.095Β΅s] END I1202 09:12:24.166422 4396 trace.go:57] Trace "decodeNodeList *[]api.ImageStream" (started 2015-12-02 09:12:23.155011032 +0000 UTC): [1.327Β΅s] [1.327Β΅s] Decoding dir /openshift.io/imagestreams/test1 START [1.01138385s] [1.011382523s] Decoding dir /openshift.io/imagestreams/test1 END [1.011386853s] [3.003Β΅s] Decoded 1 nodes [1.01138839s] [1.537Β΅s] END I1202 09:12:24.166561 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.002949866 +0000 UTC): [3.142Β΅s] [3.142Β΅s] About to list etcd node [152.060049ms] [152.056907ms] Etcd node listed [1.163577016s] [1.011516967s] Node list decoded [1.163587911s] [10.895Β΅s] END I1202 09:12:24.166656 4396 trace.go:57] Trace "List *api.ImageStreamList" (started 2015-12-02 09:12:23.002947281 +0000 UTC): [1.188Β΅s] [1.188Β΅s] About to list directory [1.16368555s] [1.163684362s] List extracted [1.163687576s] [2.026Β΅s] END I1202 09:12:24.196265 4396 manager.go:255] Recovery completed I1202 09:12:24.215711 4396 manager.go:118] Starting to sync pod status with apiserver I1202 09:12:24.215792 4396 kubelet.go:2056] Starting kubelet main sync loop. </code></pre> <p>UPDATE2</p> <pre><code>$sudo docker -v Docker version 1.9.1, build a34a1d5 $ cat /etc/*release* DISTRIB_ID=Ubuntu DISTRIB_RELEASE=14.04 DISTRIB_CODENAME=trusty DISTRIB_DESCRIPTION="Ubuntu 14.04.3 LTS" NAME="Ubuntu" VERSION="14.04.3 LTS, Trusty Tahr" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 14.04.3 LTS" VERSION_ID="14.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" </code></pre> <p>@Clayton: Can you share any link about the mount bug that is supposed to be fixed in 1.10? Thanks!</p>
<p>What OS and Docker version are you using? There are known issues with volume mounting on older versions of Docker - OpenShift/Kube mounts volumes into containers, but when Docker is running OpenShift, the stock Docker version hides access to the root mounts. Red Hat OSes carry a patch that makes this work, and supposedly Docker 1.10 will carry the right fix. The alternative is to download the binary directly and simply start it with <code>sudo openshift start</code>.</p>
<p>Any idea how to view the log files of a crashed pod in Kubernetes? My pod lists its state as "CrashLoopBackOff" after starting the replication controller. I searched the available docs and couldn't find anything.</p>
<p>Assuming that your pod <strong>still exists:</strong></p> <p><code>kubectl logs &lt;podname&gt; --previous</code></p> <blockquote> <p>$ kubectl logs -h<br /> -p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod <strong>if it exists</strong>.</p> </blockquote>
<p>Is there a way I can enable CORS on Kubernetes API so that I can send ajax requests to Kubernetes API with a different domain?</p>
<p>Fixed by adding the <code>--cors-allowed-origins=["http://*"]</code> argument to the <code>/etc/default/kube-apiserver</code> file, then restarting the kube-apiserver.</p>
<p>Does anyone have success with, or a pointer on, using Kubernetes to create a pod that mounts a Ceph RBD within the Docker container?</p> <p>The following example from Kubernetes uses Fedora 21 with an installation of the Ceph binaries, which won't work in CoreOS.</p> <p><a href="http://kubernetes.io/v1.0/examples/rbd/" rel="nofollow">http://kubernetes.io/v1.0/examples/rbd/</a> <br> or <br> <a href="http://www.sebastien-han.fr/blog/2015/06/29/bring-persistent-storage-for-your-containers-with-krbd-on-kubernetes/" rel="nofollow">http://www.sebastien-han.fr/blog/2015/06/29/bring-persistent-storage-for-your-containers-with-krbd-on-kubernetes/</a></p>
<p>The project has evolved quite a bit. You can find the Docker containers for CoreOS at</p> <p><a href="https://github.com/ceph/ceph-docker" rel="nofollow">https://github.com/ceph/ceph-docker</a></p>
<p>Is this a thing?</p> <p>I have some legacy services which will never run in Kubernetes that I currently make available to my cluster by defining a service and manually uploading an endpoints object.</p> <p>However, the service is horizontally sharded and we often need to restart one of the endpoints. My google-fu might be weak, but I can't figure out whether Kubernetes is clever enough to prevent the Service from repeatedly trying the dead endpoint.</p> <p>The ideal behavior is that the proxy should detect the outage, mark the endpoint as failed, and at some point when the endpoint comes back re-admit it into the full list of working endpoints.</p> <p>BTW, I understand that at present, liveness probes are HTTP only. This would need to be a TCP probe because it's a replicated database service that doesn't grok HTTP.</p>
<p>I think the design is for the thing managing the endpoint addresses to add/remove them based on liveness. For services backed by pods, the pod IPs are added to endpoints based on the pod's readiness check. If a pod's liveness check fails, it is deleted and its IP removed from the endpoint. </p> <p>If you are manually managing endpoint addresses, the burden is currently on you (or your external health checker) to maintain the addresses/notReadyAddresses in the endpoint. </p>
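<p>For reference, a hedged sketch of such a manually maintained Endpoints object (the service name, IPs and port are placeholders): removing an address from <code>addresses</code>, or parking it under <code>notReadyAddresses</code>, is what keeps the Service from sending traffic to a dead shard until your external health checker puts it back.</p> <pre><code>apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db            # must match the name of the selector-less Service
subsets:
  - addresses:               # shards currently taking traffic
      - ip: 10.0.1.11
      - ip: 10.0.1.12
    notReadyAddresses:       # known shards that are currently down
      - ip: 10.0.1.13
    ports:
      - port: 5432
</code></pre>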
<p>How do I get a web app on HTTPS on Google Cloud Container Engine using HTTPS load balancing? I created an SslCertificate resource. And I created a Kubernetes service that has port 443 externally open:</p> <pre><code>{ "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"app", "labels":{ "app":"app" } }, "spec":{ "ports": [ { "port":443, "name":"app-server" } ], "selector":{ "app":"app" }, "type": "LoadBalancer" } } </code></pre> <p>But that's not enough, or is it?</p>
<p>When you create a service externalized on Google's cloud with the "LoadBalancer" directive, it creates an <a href="https://cloud.google.com/compute/docs/load-balancing/network/" rel="nofollow">L3 load balancer</a>. You can also use the new ingress directive to create an <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow">L7 (e.g. HTTP) balancer</a>, but that doesn't yet support SSL. </p> <p>To enable SSL, you should follow the <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="nofollow">HTTP Load Balancing</a> instructions but create an HTTPS service (with an SSL certificate) when configuring the cloud load balancer. </p>
<p>After a quick search of the API docs I found out that in Kubernetes there is no REST API provided for <code>kubectl rolling-update</code>. Is there an alternative for performing a rolling update by calling several APIs or so? Thanks in advance.</p>
<p>I think the Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/deployments.md" rel="nofollow">Deployment</a> object is what you are looking for. It is an object in the Kubernetes REST API (as opposed to the client-side magic in <code>kubectl rolling-update</code>).</p> <p>You can specify <code>.spec.strategy.type==RollingUpdate</code> as your Deployment <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/deployments.md#strategy" rel="nofollow">Strategy</a> to get similar behavior to <code>kubectl rolling-update</code></p>
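<p>A minimal sketch of such a Deployment is below; the name, image and replica count are placeholders, and note that in the 1.1/1.2 timeframe Deployments lived under the <code>extensions/v1beta1</code> API group (the group has moved in later releases). Updating the pod template (e.g. the image) through the REST API -- a <code>PUT</code> or <code>PATCH</code> on the Deployment object -- then triggers the rolling update server-side, with no client-side magic needed:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate       # replace pods gradually, like kubectl rolling-update
    rollingUpdate:
      maxUnavailable: 1       # at most one pod down during the rollout
      maxSurge: 1             # at most one extra pod created during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:v2
          ports:
            - containerPort: 8080
</code></pre>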
<p>I'm currently trying to configure a highly available master cluster. I followed the <a href="http://kubernetes.io/v1.1/docs/admin/high-availability.html" rel="nofollow">proper documentation</a> but I'm facing the following issue. My kubectl version is v1.1.2:</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"} error: Failed to negotiate an api version. Server supports: map[v1beta1:{} v1beta2:{} v1beta3:{}]. Client supports: [v1 extensions/v1beta1]. </code></pre> <p>And my apiserver is not on the same version:</p> <pre><code>curl -ku kube:changeme https://10.115.99.31/version { "major": "0", "minor": "18+", "gitVersion": "v0.18.0-71-g0bb78fe6c53ce3-dirty", "gitCommit": "0bb78fe6c53ce38198cc3805c78308cdd4805ac8", "gitTreeState": "dirty" } </code></pre> <p>I didn't find a way to list the tags for the kube-apiserver Docker images from the Google repository. How can I do that, please?</p> <p>Regards, Smana</p>
<p>It seems like the documentation has an out-of-date kube-apiserver.yaml file. I ran into this issue with another deployment guide. You should file this as a bug on <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow">their GitHub page</a>.</p> <p>The image for the API server</p> <pre><code>gcr.io/google_containers/kube-apiserver:9680e782e08a1a1c94c656190011bd02 </code></pre> <p>is at v0.18.0 from several months ago.</p> <p>You will need to replace the "image" line in kube-apiserver.yaml on each machine with the current image. I'm not sure what the current image is, but I'll keep digging.</p>
<p>I'm working on OpenShift Origin 1.1 (which is using kubernetes as its orchestration tool for docker containers). I'm creating pods, but I'm unable to see the build-logs.</p> <pre><code>[user@ip master]# oc get pods NAME READY STATUS RESTARTS AGE test-1-build 0/1 Completed 0 14m test-1-iok8n 1/1 Running 0 12m [user@ip master]# oc logs test-1-iok8n Error from server: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout </code></pre> <p>My <code>/var/logs/messages</code> shows:</p> <pre><code>Dec 4 13:28:24 ip-10-0-x-x origin-master: E1204 13:28:24.579794 32518 apiserver.go:440] apiserver was unable to write a JSON response: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout Dec 4 13:28:24 ip-10-0-x-x origin-master: E1204 13:28:24.579822 32518 errors.go:62] apiserver received an error that is not an unversioned.Status: Get https://ip-10-0-x-x.compute.internal:10250/containerLogs/test/test-1-iok8n/test: dial tcp 10.0.x.x:10250: i/o timeout </code></pre> <p>My versions are:</p> <pre><code>origin v1.1.0.1-1-g2c6ff4b kubernetes v1.1.0-origin-1107-g4c8e6f4 etcd 2.1.2 </code></pre>
<p>I forgot to open port 10250 (tcp) (in my aws security group). This was the only issue for me.</p>
<p>I have created a Kubernetes service:</p> <pre><code>[root@Infra-1 kubernetes]# kubectl describe service gitlab Name: gitlab Namespace: default Labels: name=gitlab Selector: name=gitlab Type: NodePort IP: 10.254.101.207 Port: http 80/TCP NodePort: http 31982/TCP Endpoints: 172.17.0.4:80 Port: ssh 22/TCP NodePort: ssh 30394/TCP Endpoints: 172.17.0.4:22 Session Affinity: None No events. </code></pre> <p>However, I am unable to connect to the Endpoint, not even from the shell on the node host:</p> <pre><code> [root@Infra-2 ~]# wget 172.17.0.4:80 --2015-12-08 20:22:27-- http://172.17.0.4:80/ Connecting to 172.17.0.4:80... failed: Connection refused. </code></pre> <p>Calling <code>wget localhost:31982</code> on the NodePort also gives a <code>Recv failure: Connection reset by peer</code> and the kube-proxy logs error messages:</p> <pre><code> Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973209 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973294 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973376 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973482 26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973494 26410 proxysocket.go:134] Failed to connect to balancer: failed to connect to an endpoint. </code></pre> <p>What could be the reason for this failure?</p> <p>Here is my service configuration file <a href="http://pastebin.com/RriYPRg7" rel="nofollow">http://pastebin.com/RriYPRg7</a>, a slight modification of <a href="https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml" rel="nofollow">https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml</a></p>
<p>It's actually the Pod or Replication Controller that is having the issue, because the service has nothing healthy to forward to. Perhaps post that config, or make sure it has the port specified and its containers' processes are listening on the right port.</p> <p><strong>Original</strong></p> <p>It's the <code>NodePort</code> that is actually exposed outside of the pod. <code>Port</code> is the service's port on the cluster network within the node, and <code>TargetPort</code> is what the process inside the container should bind to. Other pods talk to that pod through the service's <code>Port</code>, usually via service discovery, while external clients use the <code>NodePort</code>. If you want to set the <code>NodePort</code> explicitly for, say, a web server, then in your service definition explicitly set <code>NodePort</code> to the desired port.</p> <p>Therefore <code>TargetPort: 80</code> would be, say, nginx inside a container listening on port 80, and <code>NodePort: 4980</code> would be the exposed port. So you would <code>wget &lt;Node IP&gt;:4980</code>.</p> <p>As far as fixing your particular situation, I recommend not complicating it as much and explicitly setting <code>TargetPort</code> and <code>NodePort</code>.</p>
<p>Is it possible to somehow send alerts (to email / slack) based on events that occur within a Kubernetes cluster?</p> <p>In particular, it would be useful to get an alert if a pod has restarted unexpectedly or if a pod cannot start. Similarly it would be useful to know if a pod's CPU usage was over a certain threshold and get an alert.</p> <p>We have Heapster (with InfluxDB / Grafana backend) installed. While this gives useful data, it unfortunately does not provide us with alerting.</p>
<p>Both <a href="http://thenewstack.io/sysdig-adds-kubernetes-container-monitoring-cloud/" rel="nofollow">sysdig</a> and <a href="http://docs.datadoghq.com/integrations/kubernetes/" rel="nofollow">Datadog</a> provide this functionality as well.</p>
<p>Can you tell me what this Kubernetes message is related to:</p> <pre><code>The Service "skudns" is invalid:spec.clusterIP: invalid value '': the provided range does not match the current range </code></pre> <p>This is with <code>DNS_SERVICE_IP = 10.3.0.10</code> and <code>SERVICE_IP_RANGE = 10.3.0.0/16</code>.</p> <p>My IP addresses:</p> <pre><code>K8S_SERVICE_IP: 10.3.0.1 MASTER_HOST: 192.168.0.139 ETCD_ENDPOINT=ETCD_CLUSTER=http://192.169.0.139:2379,http://192.168.0.107:2379 POD_NETWORK: 10.2.0.0/16 SERVICE_IP_RANGE: 10.3.0.0/24 DNS_SERVICE_IP: 10.3.0.10 ADVERTISE_IP: 192.168.0.139 </code></pre>
<p><code>/16</code> <a href="http://doc.m0n0.ch/quickstartpc/intro-CIDR.html" rel="nofollow noreferrer">means a subnet mask</a> of 255.255.0.0 (instead of 255.255.255.0 with <code>/24</code>).</p> <p>The error message comes from <a href="https://github.com/kubernetes/kubernetes/blob/b9cfab87e33ea649bdd13a1bd243c502d76e5d22/pkg/registry/service/ipallocator/allocator_test.go#L196-L198" rel="nofollow noreferrer"><code>pkg/registry/service/ipallocator/allocator_test.go#L196-L198</code></a>:</p> <pre class="lang-go prettyprint-override"><code>if !network.IP.Equal(cidr.IP) || network.Mask.String() != cidr.Mask.String() { t.Fatalf("mismatched networks: %s : %s", network, cidr) } </code></pre> <p>It is possible that the host network mask (seen in <code>ipconfig</code> if the host is Windows, or <a href="https://raw.githubusercontent.com/Juniper/contrail-kubernetes/vrouter-manifest/cluster/provision_minion.sh" rel="nofollow noreferrer"><code>ifconfig</code> as in this script</a>) is different from the CIDR mask used by Kubernetes.<br> Try with <code>/24</code> just for testing.<br> See also <a href="https://github.com/tdeheurles/homecores/issues/5#issue-109825651" rel="nofollow noreferrer">issue 5 (Network comportment)</a>.</p> <p>In the end, the <a href="https://stackoverflow.com/users/3197412/batazor">OP batazor</a> confirms <a href="https://stackoverflow.com/questions/34173253/kubernetes-spec-clusterip-invalid-value/34173688#comment56136110_34173688">in the comments</a> an issue on the Kubernetes side:</p> <blockquote> <p>kubernetes updated from version 1.0.3 to 1.0.6 and got <code>docker0</code> mask to 255.255.255.0 This is some sort of magic.</p> </blockquote>
<p>I want to access the Container Engine REST APIs given here - <a href="http://kubernetes.io/third_party/swagger-ui/#/" rel="nofollow">http://kubernetes.io/third_party/swagger-ui/#/</a></p> <p>To access the above APIs, I did the following:</p> <p>1) I created a container cluster with project ID virtual-cycling-11111, zone us-central1 and API name serverconfig.</p> <p>2) I created an OAuth 2.0 client ID and secret key. I am using the following method to generate an access token:</p> <pre><code>curl -H "Content-Type: application/json" -d' { "client_id": "757054420263-09g36ip2jdt6kcl6cvlfl17faaaaaaa.apps.googleusercontent.com", "client_secret": "NyZ0YwvEQAMaeNTD4dfgtht", "refresh_token": "1/6BMfW9j53gdGIasdfUH5kU5RsR4zwI9lUVX-tqf8JXQ", "grant_type": "refresh_token" } ' https://www.googleapis.com/oauth2/v4/token </code></pre> <p>How do I generate an access token and use it to access the REST API? Also, can I use this in the browser to get output?</p>
<p>Google has developer documentation for using <a href="https://developers.google.com/identity/protocols/OAuth2" rel="nofollow">OAuth 2.0 to Access Google APIs</a> (and the Google Container Engine is one such API). It explains the various authentication flows and how to get access tokens. </p>
<p>We had a GKE cluster with 3 nodes.</p> <p>On those nodes one ReplicationController was set to run 3 pods of type A and another ReplicationController was set to run 4 pods of type B.</p> <p>We set up an instance group manager to autoscale the nodes on CPU.</p> <p>Since there was no load on the cluster it scaled down to 1 node. Now that node was running only 2 pods of type B and 0 of type A.</p> <p>I was kinda expecting it to at least have 1 pod of A and 1 of B left after the scale down, but that didn't happen. Is there a way to configure Kubernetes (or GKE) to always have at least 1 of each pod?</p>
<p>The cluster autoscaler generally sets the number of nodes based on the <a href="https://cloud.google.com/compute/docs/autoscaler/#target_utilization_level" rel="nofollow">target utilization level of your VMs</a>. It doesn't know anything about what you are running on the VMs (pods or otherwise) and only looks at the utilization. </p> <p>The Google Container Engine / Kubernetes scheduler looks at the resource requests for each pod and finds an available node on which to run the pod. If there isn't space available, then the pod will stay in the Pending state rather than start running. </p> <p>It sounds like you are experiencing a situation where the pods that are running aren't using sufficient CPU to cause the autoscaler to add new nodes to your cluster, but the existing nodes don't have enough capacity for the pods that you want to schedule. </p> <p>When configuring the VM autoscaler, you can set the minimum number of VMs (see <a href="https://cloud.google.com/compute/docs/reference/latest/autoscalers#resource" rel="nofollow">https://cloud.google.com/compute/docs/reference/latest/autoscalers#resource</a>) based on the minimum pod footprint that you want to always be running in your cluster. Then the autoscaler won't delete the VMs that are necessary for all of your pods to run. </p> <p>You can also look at the <a href="http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaling/README.html" rel="nofollow">Horizontal Pod Autoscaler</a> in Kubernetes 1.1 to increase the number of pod replicas in your replication controller based on their observed CPU usage. </p>
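<p>As a hedged illustration of the resource requests the scheduler looks at (the names and values are placeholders), each container in the pod template can declare them as below. The autoscaler's minimum instance count then just needs to be large enough that the nodes can hold the sum of the requests for the pods you always want running:</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: type-a
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: type-a
    spec:
      containers:
        - name: type-a
          image: example/type-a:latest
          resources:
            requests:
              cpu: 100m       # scheduler reserves this much CPU per pod
              memory: 128Mi   # and this much memory
</code></pre>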
<p>With the understanding that <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md" rel="nofollow">Ubernetes</a> is designed to fully solve this problem, is it <strong>currently</strong> possible (not necessarily recommended) to span a single K8/OpenShift cluster across multiple <em>internal</em> corporate datacenters?</p> <p>Additionally, assume that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent.</p> <p>Example: Given 3 corporate DC's, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC with pods/rc's/services/... being spun up across all 3 DC's.</p> <p>Has someone implemented something like this as a stopgap solution before Ubernetes lands, and if so, how has it worked and what considerations should be taken into account when running like this?</p>
<blockquote> <p>is it currently possible (not necessarily recommended) to span a single K8/OpenShift cluster across multiple internal corporate datacenters?</p> </blockquote> <p>Yes, it is currently possible. Nodes are given the address of an apiserver and client credentials and then register themselves into the cluster. Nodes don't know (or care) whether the apiserver is local or remote, and the apiserver allows any node to register as long as it has valid credentials, regardless of where the node exists on the network.</p> <blockquote> <p>Additionally, assume that latency between data centers is relatively low and that infrastructure across the corporate data centers is relatively consistent.</p> </blockquote> <p>This is important, as many of the settings in Kubernetes assume (either implicitly or explicitly) a high-bandwidth, low-latency network between the apiserver and nodes.</p> <blockquote> <p>Example: Given 3 corporate DC's, deploy 1..* masters at each datacenter (as a single cluster) and have 1..* nodes at each DC with pods/rc's/services/... being spun up across all 3 DC's.</p> </blockquote> <p>The downside of this approach is that if you have one global cluster you have one global point of failure. Even if you have replicated, HA master components, data corruption can still take your entire cluster offline. And a bad config propagated to all pods in a replication controller can take your entire service offline. A bad node image push can take all of your nodes offline. And so on. This is one of the reasons that we encourage folks to use a cluster per failure domain rather than a single global cluster.</p>
<p>I'm currently building a Kubernetes cluster. I plan on using Nginx containers as a server for static content, and to act as a web socket proxy. If you restart Nginx, you lose your web socket connection, so I do not want to restart the containers. But I will want to update the content within the container.</p>
<p>I do that exact same thing in my Kubernetes cluster. Our solution is for the application to handle the web socket disconnect while keeping its state consistent. </p> <p>Another option is to mount a volume and serve the static content from the host; however, you cannot guarantee that all nginx pods will have that volume across multiple hosts unless you use a Kubernetes persistent volume (see the sketch below): <a href="http://kubernetes.io/v1.1/docs/user-guide/persistent-volumes.html" rel="nofollow">http://kubernetes.io/v1.1/docs/user-guide/persistent-volumes.html</a>. </p> <p>A further option is to keep your static content in an object store like S3, Google Cloud Storage or Ceph, and then proxy the object store through nginx along with the websocket.</p>
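<p>If you go the persistent-volume route, a minimal sketch of the relevant part of the pod spec might look like this (the claim name <code>static-content</code> is hypothetical):</p> <pre><code>containers:
- name: nginx
  image: nginx
  volumeMounts:
  - name: static
    mountPath: /usr/share/nginx/html   # nginx's default document root
    readOnly: true
volumes:
- name: static
  persistentVolumeClaim:
    claimName: static-content
</code></pre>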
<p>I am trying to pass a configuration file (which is located on the master) to an nginx container at the time the replication controller is created through Kubernetes, similar to how we would use the ADD command in a Dockerfile.</p>
<p>There isn't a way to dynamically add file to a pod specification when instantiating it in Kubernetes.</p> <p>Here are a couple of alternatives (that may solve your problem):</p> <ol> <li><p>Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way which you are already familiar but the disadvantage that you can no longer parameterize your container without rebuilding it.</p></li> <li><p>Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).</p></li> <li><p>Put the configuration file into a <a href="http://kubernetes.io/docs/user-guide/volumes/" rel="noreferrer">volume</a>. Mount this volume into your pod and read the configuration file from the volume. </p></li> <li><p>Use a <a href="http://kubernetes.io/docs/user-guide/secrets/" rel="noreferrer">secret</a>. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod. </p></li> </ol>
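<p>As an illustration of option 4, here is a minimal sketch (the secret name, key and mount path are hypothetical; the data value must be the base64-encoded file contents):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: nginx-conf
data:
  nginx.conf: &lt;base64-encoded file contents&gt;
</code></pre> <pre><code># pod spec fragment: the secret shows up as /etc/config/nginx.conf in the container
containers:
- name: nginx
  image: nginx
  volumeMounts:
  - name: conf
    mountPath: /etc/config
    readOnly: true
volumes:
- name: conf
  secret:
    secretName: nginx-conf
</code></pre>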
<p>I'm using preStop command to gracefully shutdown my server application when I delete a pod. What is the state of the pod/ container when it runs preStop command? For example, does it stop the network interfaces before running the preStop command? </p> <pre><code>lifecycle: preStop: exec: command: ["kill", "-SIGTERM", "`pidof java`"] </code></pre>
<p>The state of the pod doesn't change while preStop hooks are run -- the preStop hook is run in the container, and then the container is stopped.</p>
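<p>Note that exec hooks are not run through a shell, so the backticks in the question's command will not be expanded. A sketch that works around this (assuming <code>/bin/sh</code> and <code>pidof</code> exist in the image):</p> <pre><code>lifecycle:
  preStop:
    exec:
      # run via a shell so that $(pidof java) is expanded
      command: ["/bin/sh", "-c", "kill -TERM $(pidof java)"]
</code></pre>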
<p>Trying to set up a pilot in GCE to try out GKE. I'm trying to create a new instance template, with more disk space per instance, from a copy of the one created by "gcloud container clusters create", but the create operation just hangs. Is there something obvious that I'm not doing?</p> <p><a href="http://i.stack.imgur.com/uixOC.png" rel="nofollow">Screenshot of the hanging instance-template creation</a></p>
<p>It looks like this bug has been fixed. I just copied an instance template for my Google Container Engine cluster in the UI, modified only the startup script field, and created a new instance template. Please try this again, as I believe it will now work for you as you expect. </p>
<p>All of the nodes in our AWS kubernetes cluster (Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}) are getting the following messages sent to /var/log/syslog which are filling the disk very quickly (32GB in about 24 hours). </p> <pre><code>Dec 4 03:13:36 ubuntu kube-proxy[15171]: I1204 03:13:36.961584 15171 proxysocket.go:130] Accepted TCP connection from 172.30.0.164:58063 to 172.30.0.39:33570 Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.961775 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.961888 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962104 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962275 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962299 15171 proxysocket.go:133] Failed to connect to balancer: failed to connect to an endpoint. Dec 4 03:13:36 ubuntu kube-proxy[15171]: I1204 03:13:36.962380 15171 proxysocket.go:130] Accepted TCP connection from 172.30.0.87:29540 to 172.30.0.39:33570 Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962630 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962746 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.962958 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.0.7:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.963084 15171 proxysocket.go:99] Dial failed: dial tcp 10.244.2.9:5000: connection refused Dec 4 03:13:36 ubuntu kube-proxy[15171]: E1204 03:13:36.963105 15171 proxysocket.go:133] Failed to connect to balancer: failed to connect to an endpoint. </code></pre> <p>We created the cluster using <code>export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash</code> if that is relevant.</p> <p>Can anyone point me into the right direction as to the cause?</p>
<p>Port 5000 is usually used by the local Docker registry, which is an add-on. Is your cluster pulling images from that local registry? If so, is it working, and how is it set up?</p> <p>This link may help you figure out your configuration issues:</p> <p><a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry" rel="nofollow">https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry</a></p>
<p>I have a Cassandra image that worked with a GKE cluster v1.0.7 but has occassional issues starting on a new GKE cluster at v1.1.1 (no changes to the image or how it is created with kubectl just pointing to a new cluster).</p> <p>I am using kubernetes-cassandra.jar from the kubernetes Cassandra example on github.</p> <p>I see the following in kubectl logs.</p> <pre><code>INFO 21:57:01 Getting endpoints from https://kubernetes.default.cluster.local/api/v1/namespaces/default/endpoints/cassandra ERROR 21:57:01 Fatal error during configuration loading java.lang.NullPointerException: null at io.k8s.cassandra.KubernetesSeedProvider.getSeeds(KubernetesSeedProvider.java:129) ~[kubernetes-cassandra.jar:na] at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:659) ~[apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.config.DatabaseDescriptor.&lt;clinit&gt;(DatabaseDescriptor.java:136) ~[apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:168) [apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [apache-cassandra-2.1.11.jar:2.1.11] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.11.jar:2.1.11] null Fatal error during configuration loading; unable to start. See log for stacktrace. $ kubectl get pods NAME READY STATUS RESTARTS AGE cassandra 0/1 CrashLoopBackOff 8 13m </code></pre> <p>Has anyone seen this error or have ideas on how to troubleshoot?</p>
<p>This happens when the service endpoint is not ready. To verify that, check the output of <code>kubectl get endpoints</code> for the Cassandra service. If it is blank, it means <code>KubernetesSeedProvider</code> is not able to deserialize the output received from the Kubernetes API server, because the endpoint's address is still in the <code>notReadyAddresses</code> state.</p> <p>One possible work-around for this problem is to create the Cassandra pod before creating the Cassandra service.</p>
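<p>For example (service name as in the question; an empty ENDPOINTS column means there are no ready addresses yet):</p> <pre><code>kubectl get endpoints cassandra
kubectl describe endpoints cassandra   # shows not-ready addresses explicitly
</code></pre>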
<p>I have created a Kubernetes service: </p> <pre><code>[root@Infra-1 kubernetes]# kubectl describe service gitlab
Name:                   gitlab
Namespace:              default
Labels:                 name=gitlab
Selector:               name=gitlab
Type:                   NodePort
IP:                     10.254.101.207
Port:                   http    80/TCP
NodePort:               http    31982/TCP
Endpoints:              172.17.0.4:80
Port:                   ssh     22/TCP
NodePort:               ssh     30394/TCP
Endpoints:              172.17.0.4:22
Session Affinity:       None
No events.
</code></pre> <p>However, I am unable to connect to the Endpoint, not even from the shell on the node host:</p> <pre><code>[root@Infra-2 ~]# wget 172.17.0.4:80
--2015-12-08 20:22:27--  http://172.17.0.4:80/
Connecting to 172.17.0.4:80... failed: Connection refused.
</code></pre> <p>Calling <code>wget localhost:31982</code> on the NodePort also gives a <code>Recv failure: Connection reset by peer</code>, and the kube-proxy logs error messages:</p> <pre><code>Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973209   26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused
Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973294   26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused
Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973376   26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused
Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973482   26410 proxysocket.go:100] Dial failed: dial tcp 172.17.0.4:80: connection refused
Dec 08 20:13:41 Infra-2 kube-proxy[26410]: E1208 20:13:41.973494   26410 proxysocket.go:134] Failed to connect to balancer: failed to connect to an endpoint.
</code></pre> <p>What could be the reason for this failure? </p> <p>Here is my service configuration file <a href="http://pastebin.com/RriYPRg7" rel="nofollow">http://pastebin.com/RriYPRg7</a>, a slight modification of <a href="https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml" rel="nofollow">https://github.com/sameersbn/docker-gitlab/blob/master/kubernetes/gitlab-service.yml</a></p>
<p>In addition to "NodePort" services there are some additional ways to interact with Kubernetes services from outside the cluster. They may feel more "natural" and easier:</p> <ul> <li>Use the service type "LoadBalancer" (see the sketch after this list). It works only for some cloud providers and will not work for VirtualBox, for example, but I think it is good to know about that feature. In that case you get not only an "internal cluster-only" IP address for your service but also an externally configured load balancer to access it (on AWS/GCE etc.). <a href="http://kubernetes.io/v1.1/docs/user-guide/services.html#type-loadbalancer" rel="nofollow">Link to the documentation</a></li> <li>Use one of the latest features, called "ingress". Here is the description from the manual: <strong>"An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc."</strong> <a href="http://kubernetes.io/v1.1/docs/user-guide/ingress.html" rel="nofollow">Link to the documentation</a></li> <li>If Kubernetes is not a strict requirement and you can switch to the latest OpenShift Origin (which is "Kubernetes on steroids"), you can use the Origin feature called "router". <ul> <li><a href="https://docs.openshift.org/latest/architecture/index.html" rel="nofollow">Information about OpenShift Origin</a></li> <li><a href="https://docs.openshift.org/latest/architecture/core_concepts/routes.html" rel="nofollow">Information about OpenShift Origin routes</a></li> </ul></li> </ul>
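<p>A minimal sketch of the LoadBalancer variant, reusing the selector from the gitlab service in the question (this only provisions an external IP on providers with load-balancer support):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: gitlab
spec:
  type: LoadBalancer
  selector:
    name: gitlab
  ports:
  - name: http
    port: 80
    targetPort: 80
</code></pre>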
<p>Need to pass command line arguments for the docker containers appContainer1 &amp; appContainer2 in the pod.yaml.</p> <p>pod.yaml</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: microservices labels: app: apps spec: containers: - name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 - name: appContainer2 image: b.gcr.io/mybucket/appContainerImage2 ports: - containerPort: 9090 </code></pre> <p>In docker, I can pass the command line arguments via environment variable(-e)</p> <pre><code>docker run --rm -it -p 9090:9090 -e spring.profiles.dynamic=local applicationimage1 </code></pre> <p>Similarly, I need to pass command line arguments when the containers run inside kubernetes. </p>
<p>It sounds like you don't actually want command line arguments, but environment variables - and you can use <code>env</code> for that:</p> <pre><code>- name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 env: - name: spring.profiles.dynamic value: local </code></pre> <p>You <em>can</em> use command line arguments:</p> <pre><code>- name: appContainer1 image: gcr.io/mybucket/appContainerImage1 ports: - containerPort: 8080 args: - foo - bar - "String containing:colons:" </code></pre>
<p>Uploading to GCE from a pod inside GKE takes really long. I hoped the upgrade to kubernetes 1.1 would help, but it didn't. It is faster, but not as fast as it should be. I made some benchmarks, uploading a single file with 100MiB:</p> <ul> <li><p>docker 1.7.2 local</p> <p>took {20m51s240ms}, that's about ~{0.07993605115907274}MB/s</p></li> <li><p>docker 1.8.3 local</p> <p>took {3m51s193ms}, that's about ~{0.4329004329004329}MB/s</p></li> <li><p>docker 1.9.0 local</p> <p>took {3m51s424ms}, that's about ~{0.4329004329004329}MB/s</p></li> <li><p>kubernetes 1.0</p> <p>took {1h10s952ms}, that's about ~{0.027700831024930747}MB/s</p></li> <li><p>kubernetes 1.1.2 (docker 1.8.3)</p> <p>took {32m11s359ms}, that's about ~{0.05178663904712584}MB/s</p></li> </ul> <p>As you can see the thruput doubles with kubernetes 1.1.2, but is still really slow. If I want to upload 1GB I have to wait for ~5 hours, this can't be the expected behaviour. GKE runs inside the Google infrastructure, so I expect that it should be faster or at least as fast as uploading from local.</p> <p>I also noted a very high CPU load (70%) while uploading. It was tested with a <code>n1-highmem-4</code> machine-type and a single RC/pod that was doing nothing then the upload.</p> <p>I'm using the java client with the GAV coordinates <code>com.google.appengine.tools:appengine-gcs-client:0.5</code></p> <p>The relevant code is as follows:</p> <pre><code>InputStream inputStream = ...; // 100MB RandomData from RAM StorageObject so = new StorageObject().setContentType("text/plain").setName(objectName); AbstractInputStreamContent content = new InputStreamContent("text/plain", inputStream); Stopwatch watch = Stopwatch.createStarted(); storage.objects().insert(bucket.getName(), so, content).execute(); watch.stop(); </code></pre> <p>Copying a 100MB file using a manually installed gcloud with <code>gsutil cp</code> took nearly no time (3 seconds). So it might be an issue with the java-library? The question still remains, how to improve the upload time using the java-library?</p>
<p>Solution is to enable "DirectUpload", so instead of writing</p> <pre><code>storage.objects().insert(bucket.getName(), so, content).execute(); </code></pre> <p>you have to write:</p> <pre><code> Storage.Objects.Insert insert = storage.objects().insert(bucket.getName(), so, content); insert.getMediaHttpUploader().setDirectUploadEnabled(true); insert.execute(); </code></pre> <p>Performance I get with this solution:</p> <ul> <li>took {13s515ms}, that's about ~{7.6923076923076925}MB/s</li> </ul> <p>JavaDoc for the <code>setDirectUploadEnabled</code>:</p> <blockquote> <p>Sets whether direct media upload is enabled or disabled.</p> <p>If value is set to true then a direct upload will be done where the whole media content is uploaded in a single request. If value is set to false then the upload uses the resumable media upload protocol to upload in data chunks.</p> <p>Direct upload is recommended if the content size falls below a certain minimum limit. This is because there's minimum block write size for some Google APIs, so if the resumable request fails in the space of that first block, the client will have to restart from the beginning anyway.</p> <p>Defaults to false.</p> </blockquote>
<p>I have a pod with the following config:</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: name: demo name: demo spec: containers: - name: demo image: ubuntu:14.04 command: - sleep - "3600" </code></pre> <p>When I try to stop it, the SIGTERM is ignored by the sleep command, and it takes 30 seconds (the full default grace period) to stop. I can also get on the pod and send the signal to the process (pid 1) manually, and it does not kill the pod. How can I get sleep to die when a signal is sent to it?</p>
<p>Bash <a href="https://www.gnu.org/software/bash/manual/html_node/Signals.html" rel="noreferrer">ignores SIGTERM when there are no traps</a>. You can <a href="http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.html" rel="noreferrer">trap</a> SIGTERM to force an exit. For example, <code>trap 'exit 255' SIGTERM; sleep 3600</code></p>
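<p>A sketch of how that might look in the pod spec. Bash only runs the trap after the foreground command returns, so backgrounding <code>sleep</code> and using <code>wait</code> lets the pod exit promptly on SIGTERM (the exit code here is arbitrary):</p> <pre><code>containers:
- name: demo
  image: ubuntu:14.04
  command:
  - /bin/bash
  - -c
  - "trap 'exit 0' SIGTERM; sleep 3600 &amp; wait $!"
</code></pre>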
<p>I'm struggling to get Kubernetes to work with my private hub.docker.com registry image.</p> <p>I am using kubectl version: <code>Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.1588+e44c8e6661c931", GitCommit:"e44c8e6661c931f7fd434911b0d3bca140e1df3a", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}</code></p> <p>and Vagrant <code>1.7.4</code> on Mac OS X <code>Yosemite 10.10.5</code></p> <p>I followed the instructions given here: <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images</a></p> <p>In a nutshell, it says you should login to the registry then base64 encode the contents of the resulting <code>.docker/config.json</code>, and use that in a yaml document as follows:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: myregistrykey data: .dockercfg: eyAiYXV0aHMiOiB7ICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7ICJhdXRoIjogImFXNTBjbWx1YzJsak9tSTJVVTR5Z...h1YkBpbnRyaW5zaWMud29ybGQiIH0gfSB9Cg== type: kubernetes.io/dockercfg </code></pre> <p>Then feed that to kubectl. I then used the resulting key (here called <code>myregistrykey</code>) in my pod definition:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: authorities-backend spec: containers: - name: authorities-backend image: intrinsic/authorities-backend:latest imagePullSecrets: - name: myregistrykey </code></pre> <p>and <code>kubectl create</code>d it.</p> <p>However, kubectl keeps failing to retrieve the image:</p> <pre><code>[root@kubernetes-master intrinsic]# kubectl get pods NAME READY STATUS RESTARTS AGE authorities-backend 0/1 PullImageError 0 7m </code></pre> <p>docker pull on the Kubernetes master worked however.</p> <p>What am I missing?</p> <h1>UPDATE</h1> <p><b>In the pod definition above, I had omitted to specify the registry host, i.e. docker.io. Fixing it, it becomes: <code>image: docker.io/intrinsic/authorities-backend:latest</code> However, the problem persists. Doing <code>kubectl get events -w</code> gets me: <code> 6s 0s 2 authorities-backend Pod spec.containers{authorities-backend} Failed {kubelet 10.245.1.3} Failed to pull image "docker.io/intrinsic/authorities-backend": image pull failed for docker.io/intrinsic/authorities-backend, this may be because there are no credentials on this request. details: (Error: image intrinsic/authorities-backend:latest not found) </code> I know the secret has been properly registered, as I have it under <code>kubectl get secrets</code>: <code> NAME TYPE DATA AGE default-token-a7s5n kubernetes.io/service-account-token 2 51m myregistrykey kubernetes.io/dockercfg 1 50m </code></p> <p>Still confused...</p> <p></b></p> <p>Candide</p>
<p>The documentation is out of date, in that it refers to <code>.dockercfg</code> instead of <code>.docker/config.json</code>. I will update it.</p> <p>When you use the new <code>.docker/config.json</code> format, you need to set <code>type: kubernetes.io/dockerconfigjson</code> instead of <code>type: kubernetes.io/.dockercfg</code>. </p> <p>Support for <code>type: kubernetes.io/dockerconfigjson</code> was added in v1.1.0 so it is supported by your server, but is not supported by your client (which is v1.1.0-alpha which predates v1.1.0). </p> <p>When you use <code>type: kubernetes.io/dockerconfigjson</code>, it should validate your secret contents.</p> <p>With <code>type: kubernetes.io/dockerconfigjson</code>, you do want to keep the <code>auths</code> wrapper. </p>
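<p>For reference, a sketch of the newer format (the base64 value is the entire <code>~/.docker/config.json</code>, <code>auths</code> wrapper included):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: &lt;base64 of ~/.docker/config.json&gt;
</code></pre>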
<p>I need to migrate my Kubernetes master to another server. How can we do that while keeping all the data of the currently running services, pods, RCs, and so on? What do we need to back up? The Kubernetes docs don't talk about this. Thank you.</p>
<p>The two things that contain the master's identity are its IP address and a few different pieces of data from its local disk.</p> <p>To make upgrades work on Google Compute Engine, <a href="https://github.com/kubernetes/kubernetes/blob/e264db43499cb14a319f6a78185b018b0958d314/cluster/gce/configure-vm.sh#L239" rel="nofollow">we mount the four important directories listed below to a persistent disk</a> and reserve the IP address of the VM, then detach the disk and IP and put them on a new VM. The directories saved are:</p> <pre><code>/var/etcd /srv/kubernetes /srv/sshproxy /srv/salt-overlay </code></pre>
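<p>As an illustration, a simple way to snapshot those directories before the move (the destination path is arbitrary):</p> <pre><code>tar czf /tmp/k8s-master-backup.tar.gz \
  /var/etcd /srv/kubernetes /srv/sshproxy /srv/salt-overlay
</code></pre>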
<p>I'm struggling to get Kubernetes to work with my private hub.docker.com registry image.</p> <p>I am using kubectl version: <code>Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.1588+e44c8e6661c931", GitCommit:"e44c8e6661c931f7fd434911b0d3bca140e1df3a", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}</code></p> <p>and Vagrant <code>1.7.4</code> on Mac OS X <code>Yosemite 10.10.5</code></p> <p>I followed the instructions given here: <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images</a></p> <p>In a nutshell, it says you should login to the registry then base64 encode the contents of the resulting <code>.docker/config.json</code>, and use that in a yaml document as follows:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: myregistrykey data: .dockercfg: eyAiYXV0aHMiOiB7ICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7ICJhdXRoIjogImFXNTBjbWx1YzJsak9tSTJVVTR5Z...h1YkBpbnRyaW5zaWMud29ybGQiIH0gfSB9Cg== type: kubernetes.io/dockercfg </code></pre> <p>Then feed that to kubectl. I then used the resulting key (here called <code>myregistrykey</code>) in my pod definition:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: authorities-backend spec: containers: - name: authorities-backend image: intrinsic/authorities-backend:latest imagePullSecrets: - name: myregistrykey </code></pre> <p>and <code>kubectl create</code>d it.</p> <p>However, kubectl keeps failing to retrieve the image:</p> <pre><code>[root@kubernetes-master intrinsic]# kubectl get pods NAME READY STATUS RESTARTS AGE authorities-backend 0/1 PullImageError 0 7m </code></pre> <p>docker pull on the Kubernetes master worked however.</p> <p>What am I missing?</p> <h1>UPDATE</h1> <p><b>In the pod definition above, I had omitted to specify the registry host, i.e. docker.io. Fixing it, it becomes: <code>image: docker.io/intrinsic/authorities-backend:latest</code> However, the problem persists. Doing <code>kubectl get events -w</code> gets me: <code> 6s 0s 2 authorities-backend Pod spec.containers{authorities-backend} Failed {kubelet 10.245.1.3} Failed to pull image "docker.io/intrinsic/authorities-backend": image pull failed for docker.io/intrinsic/authorities-backend, this may be because there are no credentials on this request. details: (Error: image intrinsic/authorities-backend:latest not found) </code> I know the secret has been properly registered, as I have it under <code>kubectl get secrets</code>: <code> NAME TYPE DATA AGE default-token-a7s5n kubernetes.io/service-account-token 2 51m myregistrykey kubernetes.io/dockercfg 1 50m </code></p> <p>Still confused...</p> <p></b></p> <p>Candide</p>
<p>So, I kept researching the web for an answer to my problem and eventually found this:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/7954#issuecomment-115241561" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/7954#issuecomment-115241561</a></p> <p>At the very end of the thread, jjw27 has nailed it. The <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/images.md#pre-pulling-images" rel="nofollow">kubernetes documentation</a> mentions the <code>.dockercfg.json</code> file just to say that its contents needs to be base64-encoded. There are actually two issues with this file:</p> <ol> <li>it looks like it morphed into another file actually, i.e. <code>.docker/config.json</code></li> <li>the auth info in this file is wrapped by an additional <code>auths</code> objects, which you have to get rid of.</li> </ol> <p>Quoting jjw27</p> <p>Did not work:</p> <pre><code>{ "auths": { "hub.example.com:1024": { "auth": "asdf=", "email": "example@example.com" } } } </code></pre> <p>Worked:</p> <pre><code>{ "hub.example.com:1024": { "auth": "asdf=", "email": "example@example.com" } } </code></pre> <p>Google, please update this doc!!</p> <p>Message to Kubernetes devs #2: Also, not complaining with a malformed base64-encoded secret is very misleading. Please validate user input and complain if it contains errors.</p>
<p>I get an error when scheduling a pod through a ReplicationController:</p> <pre><code>failedSync {kubelet 10.9.8.21} Error syncing pod, skipping: API error (500): Cannot start container 20c2fe3a3e5b5204db4475d1ce6ea37b3aea6da0762a214b9fdb3d624fd5c32c: [8] System error: Activation of org.freedesktop.systemd1 timed out
</code></pre> <p>The pod is scheduled but cannot run unless I re-deploy it with another image.</p> <p>I'm using kubelet 1.0.1 on CoreOS v773.1.0.</p>
<p>The part that says <code>Error syncing pod, skipping: API error</code> means that kubelet got an error when trying to start a container for your Pod.</p> <p>Since you use CoreOS, I think you are using rkt, not docker.</p> <p>I think that rkt uses systemd to start containers.</p> <p>And I think systemd crashes when the "unit" name starts with an underscore: <a href="https://github.com/coreos/go-systemd/pull/49" rel="nofollow">https://github.com/coreos/go-systemd/pull/49</a></p> <p>So, maybe one of your pods or containers has a name that starts with an underscore. Change that.</p>
<p>I run pods with a replication controller, and now I want to change its configuration, e.g. the value of an environment variable, while keeping the name of the RC.</p> <pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: backend
        image: myproject/backend
        ports:
        - containerPort: 8080
        env:
        - name: USER_ENDPOINT
          value: "http://10.0.7.29:10000"
</code></pre> <p>For example, I move a service to a new server and just want to change the value of the env <code>USER_ENDPOINT</code> to <code>http://10.0.7.30:30100</code>.</p> <p>So far the only way I know is to delete the RC and recreate it, but in production I don't want to stop it.</p> <p>I tried <code>rolling-update</code>, but it doesn't fit because I want to keep the name of the replication controller.</p> <p>What can I do?</p> <p>Please suggest a solution, thanks.</p>
<p>You can use <code>kubectl edit</code> to edit a resource:</p> <pre><code>Usage: kubectl edit (RESOURCE/NAME | -f FILENAME) [flags] Examples: # Edit the service named 'docker-registry': $ kubectl edit svc/docker-registry # Use an alternative editor $ KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Edit the service 'docker-registry' in JSON using the v1 API format: $ kubectl edit svc/docker-registry --output-version=v1 -o json </code></pre> <p>^^ from the kubectl help</p>
<p>This was discussed by k8s maintainers in <a href="https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195</a>:</p> <blockquote> <blockquote> <p>Allowing users to ask for a specific PV breaks the separation between them</p> </blockquote> <p>I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason.</p> </blockquote> <p>How did it end? What's the intended way to have &gt;1 PV's and PVC's like the one in <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/nfs" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/nfs</a>?</p> <p>We use NFS, and PersistentVolume is a handy abstraction because we can keep the <code>server</code> IP and the <code>path</code> there. But a PersistentVolumeClaim gets <em>any</em> PV with sufficient size, preventing <code>path</code> reuse.</p> <p>Can set <code>volumeName</code> in a PVC <code>spec</code> block (see <a href="https://github.com/kubernetes/kubernetes/pull/7529" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/7529</a>) but it makes no difference.</p>
<p>There is a way to pre-bind PVs to PVCs today, here is an example showing how:</p> <ol> <li>Create a PV object with a ClaimRef field referencing a PVC that you will subsequently create: <pre><code> $ kubectl create -f pv.yaml persistentvolume &quot;pv0003&quot; created </code></pre> where <code>pv.yaml</code> contains: <pre><code> apiVersion: v1 kind: PersistentVolume metadata: name: pv0003 spec: storageClassName: &quot;&quot; capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain claimRef: namespace: default name: myclaim nfs: path: /tmp server: 172.17.0.2 </code></pre> </li> <li>Then create the PVC with the same name: <pre><code> kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: storageClassName: &quot;&quot; accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> </li> <li>The PV and PVC should be bound immediately: <pre><code> $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES AGE myclaim Bound pv0003 5Gi RWO 4s $ ./cluster/kubectl.sh get pv NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0003 5Gi RWO Bound default/myclaim 57s </code></pre> </li> </ol>
<p>We're currently running a Kubernetes 1.0 cluster on AWS in production, and we'd like to spin up a second cluster to test out 1.1. Based on the <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/aws/util.sh#L91-L98" rel="nofollow">AWS helper functions</a>, it looks like multiple clusters aren't supported, but I wanted to be sure. There's <a href="http://kubernetes.io/v1.1/docs/admin/multi-cluster.html" rel="nofollow">a doc</a> that describes running multiple clusters, but it's fairly brief.</p> <p>In general, we'd like to have a second cluster continuously running for testing purposes. It seems like this would be a fairly common need.</p>
<p>You should be able to run a second cluster by setting <code>INSTANCE_PREFIX</code> before running <code>kube-up</code>. That variable in turn sets <code>CLUSTER_ID</code> which should parameterize everything in the <code>cluster/aws/*</code> scripts.</p>
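<p>A sketch of what that might look like, run from an unpacked Kubernetes release (depending on the release, the variable may be read as <code>INSTANCE_PREFIX</code> or <code>KUBE_AWS_INSTANCE_PREFIX</code> -- check <code>cluster/aws/config-default.sh</code>):</p> <pre><code>export KUBERNETES_PROVIDER=aws
export INSTANCE_PREFIX=kubernetes-test   # keeps the new cluster's AWS resources separate
cluster/kube-up.sh
</code></pre>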
<h1>Initial Post</h1> <p>I have the same docker image running on two different CoreOS servers. (They're in a Kubernetes cluster, but I think that is irrelevant to the current problem.)</p> <p>They both are running image hash <code>01e95e0a93af</code>. They both should have curl. One does not. This seems... impossible.</p> <p><em>Good Server</em></p> <pre><code>core@ip-10-0-0-61 ~ $ docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a Digest: sha256:5d8bf456ad2d08ce3cd15f05b62fddc07fda3955267ee0d3ef73ee1a96b98e68 [cut] Status: Image is up to date for gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a core@ip-10-0-0-61 ~ $ docker run -it --rm gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a /bin/bash root@d29cb8783830:/app/bundle# curl curl: try 'curl --help' or 'curl --manual' for more information root@d29cb8783830:/app/bundle# </code></pre> <p><em>Bad Server</em></p> <pre><code>core@ip-10-0-0-212 ~ $ docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a [cut] Digest: sha256:5d8bf456ad2d08ce3cd15f05b62fddc07fda3955267ee0d3ef73ee1a96b98e68 Status: Image is up to date for gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a core@ip-10-0-0-212 ~ $ docker run -it --rm gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a /bin/bash root@fe6a536393f8:/app/bundle# curl bash: curl: command not found root@fe6a536393f8:/app/bundle# </code></pre> <p><a href="https://gist.github.com/iameli/72506721b70dcba6d5b2" rel="nofollow noreferrer">Full logs available on this gist</a>. I took the bad server out of our production cluster but still have it running if anyone wants me to do any other research.</p> <h1>Added 2015-12-04</h1> <p>I've run <code>docker tag gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a weird-image</code> on both servers to make everything more readable.</p> <h2>which curl</h2> <blockquote> <p>Can you do a which curl in the first component to check where it finds its curl? And see if that file exists in the second component. – VonC</p> </blockquote> <p>Seems to not exist at all on the bad server.</p> <p><em>Good Server</em></p> <pre><code>core@ip-10-0-0-61 ~ $ docker run -it --rm weird-image /bin/bash root@529b8f20a610:/app/bundle# which curl /usr/bin/curl </code></pre> <p><em>Bad Server</em></p> <pre><code>core@ip-10-0-0-212 ~ $ docker run -it --rm weird-image /bin/bash root@ff98c850dbaa:/app/bundle# ls /usr/bin/curl ls: cannot access /usr/bin/curl: No such file or directory root@ff98c850dbaa:/app/bundle# </code></pre> <h2>alias docker</h2> <blockquote> <p>Any chance you have set up an alias on the bad box? Run alias docker to check – morloch</p> </blockquote> <p>Nope.</p> <p><em>Good Server</em></p> <pre><code>core@ip-10-0-0-61 ~ $ alias docker -bash: alias: docker: not found </code></pre> <p><em>Bad Server</em></p> <pre><code>core@ip-10-0-0-212 ~ $ alias docker -bash: alias: docker: not found </code></pre> <h2>time</h2> <p>More weirdness: it takes a lot longer to run the container on the bad server.</p> <p><em>Good Server</em></p> <pre><code>core@ip-10-0-0-61 ~ $ time docker run weird-image echo &quot;Done&quot; Done real 0m0.422s user 0m0.015s sys 0m0.015s </code></pre> <p><em>Bad Server</em></p> <pre><code>core@ip-10-0-0-212 ~ $ time docker run weird-image echo &quot;Done&quot; Done real 0m4.602s user 0m0.010s sys 0m0.010s </code></pre>
<p>I've seen lots of cases where Docker images on-disk get random bits of corruption (causing weird inconsistencies like the one you describe here), and deleting and re-pulling the image "fixes" the problem.</p> <p>To test this, you'll want to make sure you not only <code>docker rmi gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a</code> (which will minimally output <code>Untagged: gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a</code>), but also delete the individual layers (and any other tags they may have) so that they're forced to be re-pulled.</p>
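<p>A sketch of the clean-up on the bad node (removing the tags first makes their layers dangling, so the filter can sweep them before a fresh pull):</p> <pre><code>docker rmi weird-image
docker rmi gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a
docker images -q -f dangling=true | xargs docker rmi
docker pull gcr.io/surveyadmin-001/wolfgang:commit_e78e07eb6ce5727af6ffeb4ca3e903907e3ab83a
</code></pre>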
<p>I need to add a REST API for Kubernetes rolling update feature. Is there a way we can write extensions for Kubernetes API? If that so is there any documentation about writing them?</p>
<p>The <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/extending-api.md" rel="nofollow">Extending the API</a> doc describes how you can add resources to the Kubernetes API.</p> <p>If you just want a REST API for rolling update, you should check out the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/deployments.md" rel="nofollow">Deployment</a> object.</p>
<p>Kubernetes API request <code>curl https://192.168.0.139 --cacert /home/mongeo/ku-certs/ca.pem</code> return <code>Unauthorized</code></p> <p>Request <code>curl localhost:8080</code> worked good.</p> <p>My kube-proxy and kube-apiserver standart (<a href="https://coreos.com/kubernetes/docs/latest/deploy-master.html" rel="noreferrer">coreos+k8s tutorial</a>)</p> <p>How do I get data on HTTPS?</p>
<p>Did you specify <code>--token-auth-file=&lt;file&gt;</code> and/or <code>--basic-auth-file=&lt;otherfile&gt;</code> or one of the other authentication modes? I don't know that https endpoint will work without one of these (maybe it should, but it doesn't, apparently). Check out <a href="https://kubernetes.io/docs/admin/authentication/" rel="noreferrer">https://kubernetes.io/docs/admin/authentication/</a></p>
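<p>For example, with token authentication enabled (the token file and its contents are hypothetical):</p> <pre><code># apiserver started with --token-auth-file=/etc/kubernetes/tokens.csv
# containing a line like: mytoken,admin,admin
curl https://192.168.0.139 \
  --cacert /home/mongeo/ku-certs/ca.pem \
  -H "Authorization: Bearer mytoken"
</code></pre>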
<p>I want to run/build Kubernetes from source. I normally use IntelliJ IDEA to open sources, but I can't see any support for Go sources in my IDE. My main concern is to write an extension for the Kubernetes API. How can I easily set up the source in IDEA to develop and test that extension? I also have a locally installed Kubernetes API.</p>
<p>Install Golang plugin for IDEA: <a href="https://github.com/go-lang-plugin-org/go-lang-idea-plugin/wiki/Documentation" rel="nofollow">https://github.com/go-lang-plugin-org/go-lang-idea-plugin/wiki/Documentation</a></p>
<p>I have installed K8S on OpenStack following <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/coreos_multinode_cluster.md" rel="nofollow">this guide</a>. </p> <p>The installation went fine and I was able to run pods but after some time my applications stops working. I can still create pods but request won't reach the services from outside the cluster and also from within the pods. Basically, something in networking gets messed up. The <strong>iptables -L -vnt nat</strong> still shows the proper configuration but things won't work.</p> <p>To make it working, I have to rebuild cluster, removing all services and replication controllers doesn't work.</p> <p>I tried to look into the logs. Below is the journal for kube-proxy:</p> <pre><code>Dec 20 02:12:18 minion01.novalocal systemd[1]: Started Kubernetes Proxy. Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.269784 1030 proxier.go:487] Opened iptables from-containers public port for service "default/opensips:sipt" on TCP port 5060 Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.278952 1030 proxier.go:498] Opened iptables from-host public port for service "default/opensips:sipt" on TCP port 5060 Dec 20 03:05:11 minion01.novalocal kube-proxy[1030]: W1220 03:05:11.806927 1030 api.go:224] Got error status on WatchEndpoints channel: &amp;{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1433/544]) [2432] Reason: Details:&lt;nil&gt; Code:0} Dec 20 03:06:08 minion01.novalocal kube-proxy[1030]: W1220 03:06:08.177225 1030 api.go:153] Got error status on WatchServices channel: &amp;{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1476/207]) [2475] Reason: Details:&lt;nil&gt; Code:0} .. .. .. Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448570 1030 proxier.go:161] Failed to ensure iptables: error creating chain "KUBE-PORTALS-CONTAINER": fork/exec /usr/sbin/iptables: too many open files: Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.448749 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448868 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/kubernetes:" Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448906 1030 proxier.go:176] Failed to ensure portal for "default/kubernetes:": error checking rule: fork/exec /usr/sbin/iptables: too many open files: Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.449006 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.449133 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/repo-client:" </code></pre> <p>I found few posts relating to "failed to install iptables" but they don't seem to be relevant as initially everything works but after few hours it gets messed up.</p>
<p>What version of Kubernetes is this? A long time ago (~1.0.4) we had a bug in the kube-proxy where it leaked sockets/file-descriptors.</p> <p>If you aren't running a 1.1.3 binary, consider upgrading.</p> <p>Also, you should be able to use <code>lsof</code> to figure out who has all of the files open. </p>
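<p>For example, to count the file descriptors held by kube-proxy:</p> <pre><code>lsof -p $(pidof kube-proxy) | wc -l
</code></pre>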
<p>Currently kubernetes rolling update creates a new pod to a terminated pod and add it to the service. At the moment of rolling update there could be two types of pods registered (old ones and new ones) for a service. However I need to enforce the consistency. For example when a rolling update request comes to Kubernetes, first it creates a new rc but pods created under that rc is not added to the service. Once all replications of that rc becomes available, all the traffic came to the service is routed to that rc. Finally the old rc is deleted. Can we currently do this using Kubernetes? If not is there a way I can write an extension to Kubernetes to implement this functionality? </p>
<p>If the new pods have labels matching the service's label selector, they should be added to the service as soon as they come up.</p> <p>If you want to experiment with different logic for a rolling update, you can write a client-side controller using the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/devel/client-libraries.md" rel="nofollow">Kubernetes API client libraries</a>, or create a server-side object by <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/extending-api.md" rel="nofollow">extending the API</a>.</p>
<p>Kubernetes assigns an IP address for each container, but how can I acquire the IP address from a container in the Pod? I couldn't find a way in the documentation.</p> <p>Edit: I'm going to run an Aerospike cluster in Kubernetes, and the config file needs the pod's own IP address. I'm attempting to use confd to set the hostname. I would use the environment variable if it were set.</p>
<p>The simplest answer is to ensure that your pod or replication controller yaml/json files add the pod IP as an environment variable by adding the config block defined below. (the block below additionally makes the name and namespace available to the pod)</p> <pre><code>env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP </code></pre> <p>Recreate the pod/rc and then try</p> <pre><code>echo $MY_POD_IP </code></pre> <p>also run <code>env</code> to see what else kubernetes provides you with.</p>
<p>There are multiple admins who accesses k8s clusters. What is the recommended way to share the config file?</p> <p>I know,</p> <pre><code>kubectl config view --minify </code></pre> <p>but certification part is REDACTED by this command.</p>
<p>You can add the --flatten flag, which is described in the <a href="https://github.com/kubernetes/kubernetes/blob/25ddd24f9e3ac53a1944e02109db95787c7f1b4a/docs/user-guide/kubectl/kubectl_config_view.md" rel="noreferrer">document</a> to "flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files)"</p>
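<p>For example:</p> <pre><code>kubectl config view --minify --flatten &gt; shared-kubeconfig
</code></pre>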
<p>I have just started with Kubernetes and I am confused about the difference between NodePort and LoadBalancer type of service.</p> <p>The difference I understand is that LoadBalancer does not support UDP but apart from that whenever we create a service either <code>Nodeport</code> or <code>Loadbalancer</code> we get a service IP and port, a NodePort, and endpoints.</p> <p>From Kubernetes docs:</p> <blockquote> <p><strong>NodePort:</strong> on top of having a cluster-internal IP, expose the service on a port on each node of the cluster (the same port on each node). You'll be able to contact the service on any <em><strong>NodeIP:NodePort</strong></em> address.</p> </blockquote> <blockquote> <p><strong>LoadBalancer:</strong> on top of having a cluster-internal IP and exposing service on a NodePort also, ask the cloud provider for a load balancer which forwards to the Service exposed as a <em><strong>NodeIP:NodePort</strong></em> for each Node.</p> </blockquote> <p>So, I will always access service on NodeIP:NodePort. My understanding is, whenever we access the node:NodePort, the kubeproxy will intercept the request and forward it to the respective pod.</p> <p>The other thing mentioned about LoadBalancer is that we can have an external LB which will LB between the Nodes. What prevents us to put a LB for services created as nodeport?</p> <p>I am really confused. Most of the docs or tutorials talk only about LoadBalancer service therefore I couldn't find much on internet.</p>
<p>Nothing prevents you from placing an external load balancer in front of your nodes and using the NodePort option.</p> <p>The LoadBalancer option is only used to additionally ask your cloud provider for a new software LB instance, automatically in the background.</p> <p>I'm not up to date on which cloud providers are supported yet, but I saw it working for Compute Engine and OpenStack already.</p>
<p>I am trying to deploy a web application using Kubernetes and google container engine. My application requires different types of machine. In my understanding, in GKE, I can only have single type (instance template) of machines in each cluster, and it reduces to wasting resource or money to mix different pods in single cluster because I need to match machine type with maximum requirement.</p> <p>Let's say database requires 8 CPUs and 100GB ram, and application servers needs 2 CPUs and 4GB ram. I have to have at least 8 cpu / 100GB machine in the cluster for database pods to be scheduled. Kubernetes will schedule 4 application pods on each machine, and it will waste 84GB of ram of the machine.</p> <p>Is it correct? If it is, how can I solve the problem? Do I need to run separate clusters for different requirement? Connecting services between different clusters doesn't seem to be s trivial problem either.</p>
<blockquote> <p>In my understanding, in GKE, I can only have single type (instance template) of machines in each cluster.... Do I need to run separate clusters for different requirement?</p> </blockquote> <p>Yes, this is currently true. We are working on relaxing this restriction, but in the mean time you can <a href="https://stackoverflow.com/questions/31302233/resize-instance-types-on-container-engine-cluster/31303169#31303169">copy the instance template</a> to create another set of nodes with a different size. </p>
<p>I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:</p> <p>kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16</p> <p>flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}</p> <p>docker subnet setting: --bip=10.0.0.1/24</p> <p>Hostnode IP: 192.168.4.57</p> <p>I've got the nginx service running and I've tried to expose it like so:</p> <pre><code>[root@kubemaster ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-px6uy   1/1       Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   &lt;none&gt;      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m
</code></pre> <p>and then I exposed the service like this:</p> <pre><code>kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME      LABELS      SELECTOR    IP(S)     PORT(S)    AGE
nginx     run=nginx   run=nginx             9000/TCP   292y
</code></pre> <p>I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(</p> <p>Note: This is on physical hardware with no cloud-provider-provided load balancer, so NodePort is the only option I have, I think?</p>
<p>So the issue here was that there's a missing piece of the puzzle when you use nodePort.</p> <p>I was also making a mistake with the commands.</p> <p>Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:</p> <pre><code>kubectl expose rc nginx --port=80 --type=NodePort </code></pre> <p>Secondly, you need to use <code>kubectl describe svc nginx</code> and it'll show you the NodePort it's assigned on each node:</p> <pre><code>[root@kubemaster ~]# kubectl describe svc nginx Name: nginx Namespace: default Labels: run=nginx Selector: run=nginx Type: NodePort IP: 172.16.92.8 Port: &lt;unnamed&gt; 80/TCP NodePort: &lt;unnamed&gt; 32033/TCP Endpoints: 10.0.0.126:80,10.0.0.127:80,10.0.0.128:80 Session Affinity: None No events. </code></pre> <p>You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.</p>
<p>I have an application running in a Kubernetes pod that is replicated using a replication controller. However I need to some critical tasks that should be done by a single application (one replication) at a time. Previously I used zookeeper to get a cluster lock to do that task. Is there a way in Kubernetes to get a cluster lock for a particular replication controller?</p>
<p>Kubernetes doesn't have a cluster lock object, but you can use an <a href="http://kubernetes.io/v1.1/docs/user-guide/annotations.html" rel="noreferrer">annotation</a> on the replication controller to specify the lock holder and TTL.</p> <p>For example, each pod could read the the annotation key <code>"lock"</code>, and if empty (or if TTL expired), try to write <code>"lock": "pod-xyz: 2015-12-22T18:39:12+00:00"</code>. If multiple writes are attempted, kubernetes will accept one, and reject the others w/ a 409 because the resource version will not be correct. The lock holder would then continue updating the annotation to refresh the TTL.</p> <p>If you have a service that corresponds to this replication controller, it might make sense to put the lock annotation on the service instead of the RC. Then the locking semantics would survive software upgrades (e.g. rolling-update). The annotation can go on any object, so there's some flexibility to figure out what works best for you.</p> <p><a href="https://github.com/kubernetes/contrib/blob/master/pod-master/podmaster.go#L49" rel="noreferrer">podmaster.go</a> had a good example of the logic you might use to implement this. It is running directly against etcd, which you could also do if you don't mind introducing another component.</p>
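<p>As a rough sketch of the optimistic-concurrency part from the command line (the service name, annotation value and resource version are hypothetical; check that your kubectl has the <code>--resource-version</code> flag):</p> <pre><code># read the current resourceVersion and lock annotation
kubectl get svc my-service -o yaml | grep -E 'resourceVersion|lock'

# try to take the lock; the write is rejected with a conflict (409)
# if the object changed since the resourceVersion we read
kubectl annotate svc my-service \
  lock='pod-xyz: 2015-12-22T18:39:12+00:00' \
  --resource-version=4711 --overwrite
</code></pre>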
<p>I followed the Kubernetes documentation to manage my applications' secrets.</p> <p><a href="http://kubernetes.io/v1.1/docs/user-guide/secrets.html" rel="nofollow">http://kubernetes.io/v1.1/docs/user-guide/secrets.html</a></p> <p>When the pod starts, Kubernetes mounts the secret in the right place, but the application is unable to read the secret data as described in the documentation.</p> <pre><code>root@quoter-controller-whw7k:/etc/quoter# whoami
root
root@quoter-controller-whw7k:/etc/quoter# ls -l
ls: cannot access local.py: Permission denied
total 0
-????????? ? ? ? ? ? local.py
root@quoter-controller-whw7k:/etc/quoter# cat local.py
cat: local.py: Permission denied
</code></pre> <p>What is wrong with that?</p> <hr> <p>SELinux is configured in enforcing mode:</p> <p><code>SELINUX=enforcing</code></p> <p>Docker is started with the following command:</p> <pre><code>/usr/bin/docker daemon --registry-mirror=http://mirror.internal:5000 --selinux-enabled --insecure-registry registry.internal:5555 --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/atomicos-docker--pool --bip=10.16.16.1/24 --mtu=8951
</code></pre>
<p>There is a known issue with SELinux and Kubernetes Secrets as per the Atomic issue tracker, see <a href="https://github.com/projectatomic/adb-atomic-developer-bundle/issues/117" rel="nofollow">ISSUE-117</a>.</p>
<p>I'm trying to set up a kubernetes cluster on 2 nodes , centos 7.1 using this <a href="http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services" rel="nofollow noreferrer">guide</a>. However when I attempt to start the services on the minion like so:</p> <pre><code>for SERVICES in kube-proxy kubelet docker flanneld; do systemctl restart $SERVICES systemctl enable $SERVICES systemctl status $SERVICES done </code></pre> <p>I get the following error:</p> <pre><code>-- Logs begin at Wed 2015-12-23 13:00:41 UTC, end at Wed 2015-12-23 16:03:54 UTC. -- Dec 23 16:03:47 sc-test2 systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=1/FAILURE Dec 23 16:03:47 sc-test2 systemd[1]: Failed to start Docker Storage Setup. -- Subject: Unit docker-storage-setup.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit docker-storage-setup.service has failed. -- -- The result is failed. Dec 23 16:03:47 sc-test2 systemd[1]: Unit docker-storage-setup.service entered failed state. Dec 23 16:03:48 sc-test2 flanneld[36477]: E1223 16:03:48.187350 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) Dec 23 16:03:49 sc-test2 flanneld[36477]: E1223 16:03:49.189860 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) Dec 23 16:03:50 sc-test2 flanneld[36477]: E1223 16:03:50.192894 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) Dec 23 16:03:51 sc-test2 flanneld[36477]: E1223 16:03:51.194940 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) Dec 23 16:03:52 sc-test2 flanneld[36477]: E1223 16:03:52.197222 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) Dec 23 16:03:53 sc-test2 flanneld[36477]: E1223 16:03:53.199248 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) Dec 23 16:03:54 sc-test2 flanneld[36477]: E1223 16:03:54.201160 36477 network.go:53] Failed to retrieve network config: 100: Key not found (/atomic.io) </code></pre> <p>I'm sure I set the key on the master with : etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'</p> <p>By far installation seems to be the hardest bit on using kubernetes :(</p>
<p>Today's christmas but I spent the whole day trying to get this to work :) This is what I did:</p> <h2>#1 FLANNEL</h2> <p>As mentioned I'd set the flannel etcd key on the master with:</p> <p><code>etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'</code></p> <p>but I got this error when trying to start flannel on the minion:</p> <p><code>Failed to retrieve network config: 100: Key not found (/atomic.io)</code></p> <p>So I edited the <code>/etc/sysconfig/flanneld</code> file on the minion from:</p> <pre><code># Flanneld configuration options # etcd url location. Point this to the server where etcd runs FLANNEL_ETCD="http://master:2379" # etcd config key. This is the configuration key that flannel queries # For address range assignment FLANNEL_ETCD_KEY="/coreos.com/network" # Any additional options that you want to pass #FLANNEL_OPTIONS="" to: # Flanneld configuration options # etcd url location. Point this to the server where etcd runs FLANNEL_ETCD="http://master:2379" # etcd config key. This is the configuration key that flannel queries # For address range assignment FLANNEL_ETCD_KEY="/atomic.io/network" # Any additional options that you want to pass #FLANNEL_OPTIONS="" </code></pre> <p>i.e. changed the FLANNEL_ETCD key.</p> <p>After this <code>systemctl start flanneld</code> worked.</p> <h2>#2 DOCKER</h2> <p>I didn't find a way to make the version installed as a dependency by kubernetes work so I uninstalled it and following the docker docs for Centos installed docker-engine and manually created a docker.service file for systemctl.</p> <p><code>cd /usr/lib/systemd/system</code></p> <p>and the contents of the docker.service:</p> <pre><code>[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target docker.socket Requires=docker.socket Requires=flanneld.service After=flanneld.service [Service] EnvironmentFile=/etc/sysconfig/flanneld ExecStart=/usr/bin/docker daemon -H fd:// --bip=${FLANNEL_SUBNET} Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target </code></pre> <p>then start and enable the daemon with systemctl as well as query the status.</p> <pre><code>systemctl restart docker systemctl enable docker systemctl status docker </code></pre>
<p>Is it possible to create a Pod in the Google Container Engine where two ports are exposed: port 8080 is listening for incoming content and port 80 distributes this content to clients?</p> <p>The following command to create a Pod is given as example by Google:</p> <pre><code>kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node --port=8080 </code></pre> <p>I can't seem to define a listening port, and when adding a second "--port=" switch only one port is exposed. Is there a way to expose a second port or am I limited to one port per container?</p>
<p>No, you cannot specify multiple ports in <code>kubectl run</code>. But you can use <code>kubectl create</code> to create a replication controller, and specify multiple ports for the container.</p> <p><a href="https://github.com/kubernetes/examples/blob/master/cassandra/cassandra-statefulset.yaml" rel="noreferrer">https://github.com/kubernetes/examples/blob/master/cassandra/cassandra-statefulset.yaml</a> has an example:</p> <pre><code>ports: - containerPort: 7000 name: intra-node - containerPort: 7001 name: tls-intra-node - containerPort: 7199 name: jmx - containerPort: 9042 name: cql </code></pre>
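<p>If all you need is a single pod rather than a replication controller, a minimal manifest along these lines should also work (treat the image name and port numbers as placeholders for your own):</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-node
spec:
  containers:
  - name: hello-node
    image: gcr.io/PROJECT_ID/hello-node
    ports:
    - containerPort: 8080
      name: ingest
    - containerPort: 80
      name: serve
</code></pre>

<p>and then create it with <code>kubectl create -f hello-node-pod.yaml</code>.</p>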
<p>I have the following mysql.yaml file:</p>

<pre><code>apiVersion: v1beta3
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits :
          cpu: 1
      image: mysql
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          # change this
          value: yourpassword
      ports:
        - containerPort: 3306
          name: mysql
</code></pre>

<p>Running <code>kubectl create -f mysql.yaml</code> gives the error:</p>

<pre><code>Error from server: error when creating "mysql.yaml": Pod "Unknown" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
</code></pre>

<p>I have a master and a node, both centos 7.1.</p>
<p>To get your setup working, you can do the same thing local-up-cluster.sh is doing:</p> <ol> <li>Generate a signing key: </li> </ol> <p><code>openssl genrsa -out /tmp/serviceaccount.key 2048</code></p> <ol start="2"> <li>Update <code>/etc/kubernetes/apiserver</code>:</li> </ol> <p><code>KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"</code></p> <ol start="3"> <li>Update <code>/etc/kubernetes/controller-manager</code>:</li> </ol> <p><code>KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"</code></p> <p>From <a href="https://github.com/kubernetes/kubernetes/issues/11355#issuecomment-127378691" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/11355#issuecomment-127378691</a></p>
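<p>After updating those files you will most likely need to restart the affected services for the new flags to take effect; assuming the systemd units from the standard CentOS packages, something like:</p>

<pre><code>systemctl restart kube-apiserver kube-controller-manager
</code></pre>

<p>and then retry creating the pod.</p>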
<p>I would like to use a persistent disk in my replication controller but if I use a <code>gcePersistentDisk</code> the console returns an error:</p>

<blockquote>
  <p>ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only.</p>
</blockquote>
<p>From the error message and from the <a href="http://kubernetes.io/v1.1/docs/user-guide/volumes.html#gcepersistentdisk" rel="nofollow">docs</a>:</p> <blockquote> <p>A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous readers allowed. </p> </blockquote> <p>So you have two options to fix this:</p> <ol> <li>Set the <code>replicas</code> in your ReplicationController to 1</li> <li><p>Make the volume <code>readOnly: true</code></p> <pre><code>gcePersistentDisk: pdName: my-data-disk fsType: ext4 readOnly: true </code></pre></li> </ol>
<p>I have the following services hosted in my Kubernetes cluster on AWS.</p> <ul> <li>An nginx server, on ports 80 and 443.</li> <li>A Minecraft server, at port 25565.</li> </ul> <p>Both are working great. I currently have both of them set to <code>type: LoadBalancer</code>, so they both have Elastic Load Balancers that are providing ingress to the cluster. </p> <p>I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP.</p> <p>I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. <a href="https://gist.github.com/iameli/1c8f82ecabeeacb870a3" rel="noreferrer">Here's the setup on a gist</a>. When I try and <code>curl</code> on the allocated <code>nodePort</code> from inside the cluster it just hangs.</p> <p>Is there a way to have one service balance to multiple services?</p>
<p>The Ingress resource, which was added in version 1.1.0, was designed specifically for this use case. It allows you to put multiple services behind a single IP address, routing to them based on HTTP path. Check out <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/ingress.md" rel="noreferrer">the user guide on it</a> for more details, but feel free to ask if you have more questions about it!</p>

<p>edit: For a non-HTTP(S) service, you'll have to find a way to make sure all necessary ports get load balanced by the ELB and then properly routed by Kubernetes. On GCE, you could manually create the load balancer with the ports you need, and then put the load balancer's IP in the <code>externalIPs</code> field for each service. My memory's a little fuzzy, but I don't believe that'll work with an ELB due to its packet rewriting. You might instead want to create each service as a <code>NodePort</code> service, then configure your ELB to forward the packets from the correct external port to the node port for each service.</p>
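<p>A rough sketch of the <code>NodePort</code> approach for the Minecraft service would be something like this (the nodePort value is just an example and must fall within the cluster's node port range, 30000-32767 by default):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: minecraft
spec:
  type: NodePort
  selector:
    app: minecraft
  ports:
  - port: 25565
    targetPort: 25565
    nodePort: 30565
</code></pre>

<p>You would then point a TCP listener on your ELB at port 30565 of the nodes.</p>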
<p>What is the difference between Amazon ECS and Kubernetes implementation architecture?</p> <p>I need to decide to pick a technology for container management in cloud. What is the deciding factor while picking any of these technology?</p> <p>I am using Docker for container creation and execution.</p>
<p>I have extensive experience working with containers and different container solutions, including Amazon ECS and Kubernetes, and I have found Kubernetes to be one of the most useful solutions for managing containers across different environments.</p>

<p>The main benefit of Kubernetes is that it is a mature solution, originally developed by Google, and completely open source. That means anyone may look under the hood and, if necessary, modify and update the source code for their own purposes.</p>

<p>Another huge benefit of Kubernetes is that it is completely free: you may install and run it on your own infrastructure without paying any additional costs for Kubernetes itself.</p>

<p>You may run Kubernetes on a huge number of <a href="http://kubernetes.io/v1.1/docs/getting-started-guides/README.html" rel="noreferrer">different providers</a>. It doesn't matter in what environment you run the Kubernetes cluster - you only need to take care of the Kubernetes cluster itself. That allows you to run, for example, a development cluster locally on Vagrant, while building the distributed production environment on a public cloud like AWS or GCE, a private cloud like OpenStack, or simply some libvirt solution (using CoreOS, for example). Again, from the point of view of Kubernetes it doesn't matter what infrastructure solution you use - the only requirement is for it to be Kubernetes-enabled.</p>

<p>Amazon ECS, on the other hand, is a proprietary, vendor-locked solution. It may give you the same performance as Kubernetes, but it won't give you the same flexibility.</p>

<p>So, overall the two are comparable, but Kubernetes is a much more flexible and customizable solution.</p>
<p>I'm trying to execute a command in a container (in a Kubernetes pod on GKE with Kubernetes 1.1.2).</p>

<p>Reading the documentation I understood that I can use a GET or POST query to open a websocket connection on the API endpoint to execute a command. When I use GET, it does not work completely and returns an error. When I try to use POST, something like this could probably work (but it doesn't):</p>

<pre><code>curl 'https://admin:xxx@IP/api/v1/namespaces/default/pods/hello-whue1/exec?stdout=1&amp;stderr=1&amp;command=ls' -H "Connection: upgrade" -k -X POST -H 'Upgrade: websocket'
</code></pre>

<p>The response for that is:</p>

<pre><code>unable to upgrade: missing upgrade headers in request: http.Header{"User-Agent":[]string{"curl/7.44.0"}, "Content-Length":[]string{"0"}, "Accept":[]string{"*/*"}, "Authorization":[]string{"Basic xxx=="}, "Connection":[]string{"upgrade"}, "Upgrade":[]string{"websocket"}}
</code></pre>

<p>Looks like that should be enough to upgrade the POST request and start using websocket streams, right? What am I missing?</p>

<p>I was also told that opening a websocket with POST is probably a violation of the websocket protocol (only GET should work?).</p>
<p>You'll probably have the best time using the <a href="https://github.com/kubernetes/kubernetes/tree/release-1.1/pkg/client/unversioned" rel="nofollow">Kubernetes client library</a>, which is the same code the Kubectl uses, but if for some reason that isn't an option, than my best suggestion is to look through <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/pkg/client/unversioned/remotecommand/remotecommand.go" rel="nofollow">the client library's code for executing remote commands</a> and seeing what headers it sets.</p>
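<p>Also, if you just need to run a command and don't care about driving the websocket yourself, <code>kubectl exec</code> already wraps all of this for you, e.g.:</p>

<pre><code>kubectl exec hello-whue1 -- ls /
</code></pre>

<p>(the exact flag syntax may differ slightly depending on your kubectl version).</p>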
<p>I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site.</p> <p>How do I link my service to a public (and reserved) IP Address so that the webserver sees the client IP addresses? </p> <p>I tried creating an ingress. It provides the client IP through an extra http header. Unfortunately ingress does not support reserved IPs yet:</p> <pre><code>kind: Ingress metadata: name: example-ingress spec: rules: - host: example.org http: paths: - backend: serviceName: example-web servicePort: 80 path: / </code></pre> <p>I also tried creating a service with a reserved IP. This gives me a public IP address but I think the client IP is lost:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: 'example-web' spec: selector: app: example-web ports: - port: 80 targetPort: 80 loadBalancerIP: "10.10.10.10" type: LoadBalancer </code></pre> <p>I would setup the HTTP Loadbalancer manually, but I didn't find a way to configure a cluster IP as a backend for the loadbalancer.</p> <p>This seems like a very basic use case to me and stands in the way of using container engine in production. What am I missing? Where am I wrong?</p>
<p>As you are running in google-container-engine you could set up a <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow">Compute Engine HTTP Load Balancer</a> for your static IP. The <a href="https://cloud.google.com/compute/docs/load-balancing/http/target-proxies" rel="nofollow">Target proxy</a> will add <code>X-Forwarded-</code> headers for you.</p> <p>Set up your kubernetes service with type <a href="http://kubernetes.io/v1.1/docs/user-guide/services.html#type-nodeport" rel="nofollow">NodePort</a> and add a <code>nodePort</code> field. This way <code>nodePort</code> is accessible via kubernetes-proxy on every nodes IP address regardless of where the pod is running:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: 'example-web' spec: selector: app: example-web ports: - nodePort: 30080 port: 80 targetPort: 80 type: NodePort </code></pre> <p>Create a <a href="https://cloud.google.com/compute/docs/load-balancing/http/backend-service" rel="nofollow">backend service</a> with HTTP health check on port 30080 for your instance group (nodes).</p>
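<p>For example, the health check itself can be created along these lines (the name is a placeholder and the flags may vary slightly with your gcloud version):</p>

<pre><code>gcloud compute http-health-checks create example-web-check \
    --port 30080 \
    --request-path /
</code></pre>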
<p>Hi, I am running a Kubernetes cluster where I run a Logstash container.</p>

<p>But I need to run it with my own docker run parameters. If I were running it in Docker directly, I would use the command:</p>

<pre><code>docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
</code></pre>

<p>But I need to run it via a Kubernetes pod. My pod looks like:</p>

<pre><code>spec:
  containers:
  - name: logstash-logging
    image: "logstash:latest"
    command: ["logstash", "-f" , "/config-dir/logstash.conf"]
    volumeMounts:
    - name: configs
      mountPath: /config-dir/logstash.conf
</code></pre>

<p>How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks.</p>
<p>Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in <a href="https://github.com/kubernetes/kubernetes/issues/15478" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/15478</a>, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.</p>
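<p>As a node-level workaround you can change the logging driver of the Docker daemon itself; on a sysconfig-style distribution a sketch of this would be (the gelf address is a placeholder, and note this affects every container on that node, not just Logstash):</p>

<pre><code># /etc/sysconfig/docker
OPTIONS="--log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201"
</code></pre>

<p>followed by a restart of the Docker daemon.</p>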
<p>We're using Kubernetes 1.1.3 with its default fluentd-elasticsearch logging.</p>

<p>We also use LivenessProbes on our containers to make sure they operate as expected.</p>

<p>Our problem is that the lines we send out to STDOUT from the LivenessProbe do not appear to reach Elasticsearch.</p>

<p>Is there a way to make fluentd ship LivenessProbe output like it does for regular containers in a pod?</p>
<p>The output from the probe is swallowed by the Kubelet component on the node, which is responsible for running the probes (<a href="https://github.com/kubernetes/kubernetes/blob/5828836e7c4abb4666a845adb607f1b2a65a1a76/pkg/kubelet/prober/prober.go#L89" rel="noreferrer">source code, if you're interested</a>). If a probe fails, its output will be recorded as an event associated with the pod, which should be accessible through the API.</p> <p>The output of successful probes isn't recorded anywhere <a href="https://github.com/kubernetes/kubernetes/blob/5828836e7c4abb4666a845adb607f1b2a65a1a76/pkg/probe/exec/exec.go#L38" rel="noreferrer">unless your Kubelet has a log level of at least --v=4, in which case it'll be in the Kubelet's logs</a>.</p> <p>Feel free to file a feature request in a Github issue if you have ideas of what you'd like to be done with the output :)</p>
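<p>To see those recorded events for failed probes, you can use for example:</p>

<pre><code>kubectl describe pod &lt;pod-name&gt;
kubectl get events
</code></pre>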
<p>I have provisioned a Kubernetes cluster in the Azure cloud (using CoreOS) using the guide <a href="http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/azure/README.html" rel="nofollow">http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/azure/README.html</a></p>

<p>It's working fine. Now I want to run kubectl commands from my local machine (I use a Mac). For that I installed kubernetes-cli with brew, but I am not able to connect to the remote Kubernetes cluster. When I run "kubectl version":</p>

<pre><code>user$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1",GitVersion:"v1.1.2+3085895",GitCommit:"3085895b8a70a3d985e9320a098e74f545546171",GitTreeState:"not a git tree"}
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
</code></pre>

<p>How do I connect to the Kubernetes cluster, via SSH or otherwise?</p>

<p>Note: I am able to manually ssh to the Kubernetes nodes and run kubectl commands there.</p>
<p>The mechanism for copying the configuration file necessary to get remote access to your cluster is described in <a href="https://kubernetes.io/docs/user-guide/sharing-clusters/" rel="nofollow noreferrer">Sharing Cluster Access</a>. You may need to manually tweak <code>~/.kube/config</code> after copying if it contains an endpoint that isn't remotely reachable (a non-routable IP) and/or open up firewall access to your apiserver running in Azure. Otherwise, once you get the local config file in place you should be all set. </p>
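<p>If you prefer to build the kubeconfig by hand instead of copying the file, a rough sketch would be (all values are placeholders for your own endpoint and credentials):</p>

<pre><code>kubectl config set-cluster azure --server=https://&lt;master-ip-or-dns&gt;:443 --insecure-skip-tls-verify=true
kubectl config set-credentials azure-admin --username=admin --password=&lt;password&gt;
kubectl config set-context azure --cluster=azure --user=azure-admin
kubectl config use-context azure
</code></pre>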
<p>I have a few basic questions on scaling Docker containers:</p> <p>I have 5 different apps. They are not connected to each other. Before having containers I would run 1 app per VM and scale them up and down individually in the cloud.</p> <p>Now with containers I get the isolation on top of a VM, so now I can potentially run one host with 5 docker containers where each app is isolated in its own container.</p> <p>As long as I have enough resources on my host I can scale up and down those containers individually as my traffic grows or shrinks. e.g. I have 3 containers running app 1, but only 1 container running app 2.</p> <p>At peak times app 3 gets a lot of traffic and I need to launch a 2nd host which runs only containers for app 3.</p> <p>My first question is if the above makes sense what I say or if I have misunderstood something. My second question is what technology is currently available to get this all done in an automated way. I need a load balancer and an auto scaling group which is capable of the above scenario without me having to do manual interventions.</p> <p>I looked into AWS ECS and am not quite sure if it can satisfy my needs as I outlined it above.</p> <p>Does anyone know how to achieve this, or is there a better way of managing and scaling my 5 apps which I am missing?</p> <p><strong>UPDATE:</strong></p> <p>Via Twitter I have been pointed to <a href="http://kubernetes.io/">Kubernetes</a> and specifically to the docs on the <a href="http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html">Horizontal Pod Autoscaler</a>.</p> <p>Might be useful for others as well. I will update this question as I learn more.</p>
<p>There are several options, but none that I know of does it all: you will need 2 things: autoscaling hosts according to signals, then autoscaling containers on the hosts.</p>

<p>The following are solutions to deploy and scale <em>containers</em> on the hosts (not necessarily <em>auto</em>-scale though):</p>

<p><strong>Kubernetes</strong> is an orchestration tool which allows you to schedule and (with the optional autoscaler) autoscale pods (groups of containers) in the cluster. It makes sure your containers are running somewhere if a host fails. Google Container Engine (GKE) offers this as a service, however I am not sure it has the same functionality to autoscale the number of VMs in the cluster as AWS does.</p>

<p><strong>Mesos</strong>: somewhat similar to Kubernetes but not dedicated to running containers.</p>

<p><strong>Docker Swarm</strong>: the Docker multi-host deployment solution, allows you to control many hosts as if they were a single Docker host. I don't believe it has any kind of 'autoscaling' capability, and I don't believe it takes care of making sure pods are always running somewhere: it's basically Docker for a cluster.</p>

<p><strong>[EDIT] Docker supports restarting failing containers with the <code>restart=always</code> option; also, as of Docker 1.11, Docker Swarm is a mode of the Docker daemon and supports rescheduling containers on node failure: it will restart containers on a different node if a node is no longer available.</strong></p>

<p><strong>Docker 1.11+ is becoming a lot like Kubernetes in terms of functionality. It has some nice features (like TLS between nodes by default), but still lacks things like static IPs and storage provisioning.</strong></p>

<p>None of these solutions will autoscale the number of hosts for you, but they can scale the number of containers on the hosts.</p>

<p>For autoscaling hosts, solutions are specific to your cloud provider, so these are dedicated solutions. The key part for you is to integrate the two: AWS allows deployment of Kubernetes on CoreOS; I don't think they offer this as a service, so you need to deploy your own CoreOS cluster and Kubernetes.</p>

<p>Now my personal opinion (and disclaimer):</p>

<p>I have mostly used Kubernetes on GKE and bare-metal, as well as Swarm about 6 months ago, and I run an infra with ~35 services on GKE:</p>

<p>Frankly, GKE with Kubernetes as a Service offers most of what you want, but it's not AWS. Scaling hosts is still a bit tricky and will require some work.</p>

<p>Setting up your own Kubernetes or Mesos on AWS or bare metal is very feasible, but there is quite a learning curve: it all depends on whether you really strongly feel about being on AWS and are willing to spend the time.</p>

<p>Swarm is probably the easiest to get working with, but more limited; however, a home-built script can do the core job well: use AWS APIs to scale hosts, and Swarm to deploy. The availability guarantee, though, would require you to monitor and take care of re-launching containers if a node fails.</p>

<p>Other than that, there are also container hosting providers that may do the job for you: </p>

<ul>
<li><p>Scalingo is one I know of but there are others. <a href="https://scalingo.com/" rel="nofollow">https://scalingo.com/</a></p></li>
<li><p>OVH Sail Above has this service in alpha. <a href="https://www.runabove.com/sailabove.xml" rel="nofollow">https://www.runabove.com/sailabove.xml</a></p></li>
</ul>
<p>There are applications and services in enterprises that do not need to run all the time and that have a limited user base (say a handful of people). </p>

<p>These applications can be shut down and started based either on scheduling or, even better, on user activity. So, we are talking about on-demand services (say wrapped by a container) and node start-up and shut-down. </p>

<p>Now, first to mention that the reason I mention authenticated user activity is because it makes sense to start up and shut down on that basis (i.e. not based on lower-level network traffic). One can imagine corporate SSO (say OAuth 2 based) being involved.</p>

<p>So, my question is whether anyone has attempted to implement what I have described using Consul or Kubernetes? </p>

<p>In the case of Consul, it could be that the key-value store could be used to give "Micro" (i.e. small user base) class applications a TTL; each time an authenticated user requests access to a given "Micro" class application, its TTL is updated. During the TTL window we want to check the health of the node(s), containers and services - outside of the window we don't (since we want to save on op ex).</p>

<p>This question is similar to <a href="https://stackoverflow.com/questions/33556707/dynamic-scalable-and-adaptive-architecture">this autoscaling question</a>, but different in the sense that this use case is about scaling from 0 nodes and then back down to 0 based on an authenticated user base (most likely using SSO).</p>
<p>In the case of <a href="http://kubernetes.io" rel="nofollow">Kubernetes</a>, the <a href="http://kubernetes.io/v1.1/docs/design/horizontal-pod-autoscaler.html#autoscaling-algorithm" rel="nofollow">Horizontal Pod Autoscaling documentation</a> lists the exact use case described under <strong><em>Next steps</em></strong> (i.e. the feature is on the backlog and may be implemented after v1.1. of Kubernetes). The cited feature description (<a href="https://github.com/kubernetes/kubernetes/pull/3247" rel="nofollow">Unidling proposal</a>) is as follows:</p> <p><em>Scale the number of pods starting from 0. All pods can be turned-off, and then turned-on when there is a demand for them. When a request to service with no pods arrives, kube-proxy will generate an event for autoscaler to create a new pod.</em></p> <p>So basically, it may be possible to do what I've described in future using Kubernetes, but it is not possible right now. This in itself does not address the requirement to only scale from 0 based on authenticated user activity.</p> <p>It's worth noting, as a cluster-agnostic aside, <a href="https://developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/" rel="nofollow">on-demand container activation based on systemd</a>. This solution will of course not scale back down to 0 without a controlling process, but it's still worth noting.</p>
<p>I have the following services hosted in my Kubernetes cluster on AWS.</p> <ul> <li>An nginx server, on ports 80 and 443.</li> <li>A Minecraft server, at port 25565.</li> </ul> <p>Both are working great. I currently have both of them set to <code>type: LoadBalancer</code>, so they both have Elastic Load Balancers that are providing ingress to the cluster. </p> <p>I would like to have only one ELB -- they cost money, and there's no reason not to have the Minecraft server and the HTTP(S) server on the same external IP.</p> <p>I tried to create a service without a selector, then tried to manually create an Endpoints object referencing that service, but it doesn't appear to be working. <a href="https://gist.github.com/iameli/1c8f82ecabeeacb870a3" rel="noreferrer">Here's the setup on a gist</a>. When I try and <code>curl</code> on the allocated <code>nodePort</code> from inside the cluster it just hangs.</p> <p>Is there a way to have one service balance to multiple services?</p>
<p>You could also simply use nginx as a proxy for your Minecraft server, and forward traffic from ingress port 25565 to the Minecraft server. That way all traffic goes through one Service.</p>
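<p>For the TCP part this relies on the nginx stream module (nginx 1.9+); a minimal sketch, assuming the Minecraft service is reachable as <code>minecraft</code> inside the cluster, would be a top-level block like:</p>

<pre><code>stream {
    server {
        listen 25565;
        proxy_pass minecraft:25565;
    }
}
</code></pre>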
<p>I used the node.yaml and master.yaml files here: <a href="http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/coreos_multinode_cluster.html" rel="nofollow">http://kubernetes.io/v1.1/docs/getting-started-guides/coreos/coreos_multinode_cluster.html</a> to create a multi-node cluster on 3 bare-metal machines running CoreOS. However, pods on different nodes can’t communicate with each other. I’d appreciate any pointers or suggestions. I’m at a loss.</p> <p>I have three pods running rabbitmq:</p> <pre><code>thuey:~ thuey$ kbg pods | grep rabbitmq rabbitmq-bootstrap 1/1 Running 0 3h rabbitmq-jz2q7 1/1 Running 0 3h rabbitmq-mrnfc 1/1 Running 0 3h </code></pre> <p>Two of the pods are on one machine:</p> <pre><code>kbd node jolt-server-3 | grep rabbitmq thuey rabbitmq-bootstrap 0 (0%) 0 (0%) 0 (0%) 0 (0%) thuey rabbitmq-jz2q7 0 (0%) 0 (0%) 0 (0%) 0 (0%) </code></pre> <p>And the other pod is on another machine:</p> <pre><code>thuey:~ thuey$ kbd node jolt-server-4 | grep rabbitmq thuey rabbitmq-mrnfc 0 (0%) 0 (0%) 0 (0%) 0 (0%) </code></pre> <p>I can successfully ping from rabbitmq-bootstrap to rabbitmq-jz2q7:</p> <pre><code>root@rabbitmq-bootstrap:/# ping 172.17.0.5 PING 172.17.0.5 (172.17.0.5) 56(84) bytes of data. 64 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.058 ms 64 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.035 ms 64 bytes from 172.17.0.5: icmp_seq=3 ttl=64 time=0.064 ms 64 bytes from 172.17.0.5: icmp_seq=4 ttl=64 time=0.055 ms ^C --- 172.17.0.5 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3000ms rtt min/avg/max/mdev = 0.035/0.053/0.064/0.010 ms </code></pre> <p>But I can't ping rabbitmq-mrnfc:</p> <pre><code>root@rabbitmq-bootstrap:/# ping 172.17.0.8 PING 172.17.0.8 (172.17.0.8) 56(84) bytes of data. From 172.17.0.2 icmp_seq=1 Destination Host Unreachable From 172.17.0.2 icmp_seq=2 Destination Host Unreachable From 172.17.0.2 icmp_seq=3 Destination Host Unreachable From 172.17.0.2 icmp_seq=4 Destination Host Unreachable ^C --- 172.17.0.8 ping statistics --- 5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4000ms pipe 4 </code></pre>
<p>The guide you used doesn't include instructions for bare-metal machines. You need a networking solution (e.g., flannel, Calico) that implements Kubernetes's <a href="https://github.com/kubernetes/kubernetes/blob/4ca66d2aefa20c27b670b2fa890052daadc05294/docs/admin/networking.md" rel="nofollow">networking model</a>. You can check the <a href="https://github.com/kubernetes/kubernetes/blob/4ca66d2aefa20c27b670b2fa890052daadc05294/docs/getting-started-guides/README.md#table-of-solutions" rel="nofollow">table of solutions</a> for getting-started guides for different IaaS/OS/Network combinations.</p>
<p>I'm trying to read images using the Kubernetes API, but am not seeing an API for that. Is there an API to read the image list from my Google Cloud account?</p>
<p>To list all images in your gcr.io private registry, you can use the <a href="https://docs.docker.com/engine/reference/commandline/search/">docker search</a> command, pointing at your registry, using your Google credentials:</p>

<pre><code>gcloud docker search gcr.io/your-registry
</code></pre>

<p>Or in two steps, configuring <code>docker</code> to use your Google credentials:</p>

<pre><code>gcloud docker -a
docker search gcr.io/your-registry
</code></pre>
<p>I followed docker instructions to install and verify the docker installation (from <a href="http://docs.docker.com/linux/step_one/">http://docs.docker.com/linux/step_one/</a>).</p> <p>I tried on 2 Ubuntu 14.04 machines and on both I got following error when starting docker daemon:</p> <pre><code>$ sudo docker daemon INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) INFO[0000] [graphdriver] using prior storage driver "aufs" INFO[0000] Option DefaultDriver: bridge INFO[0000] Option DefaultNetwork: bridge WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found. , error: exit status 1 INFO[0000] Firewalld running: false WARN[0000] Your kernel does not support cgroup memory limit: mountpoint for memory not found WARN[0000] mountpoint for cpu not found FATA[0000] Error starting daemon: Devices cgroup isn't mounted </code></pre> <p>I appreciate any help to resolve this issue.</p>
<p>I resolved this issue by starting the docker daemon manually using:</p>

<p><code>sudo service docker start</code></p>
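<p>You can then verify the daemon is running with, for example:</p>

<pre><code>sudo service docker status
sudo docker run hello-world
</code></pre>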
<p>Is it possible to enable autoscaling of nodes (minions) in Kubernetes running on CoreOS in OpenStack?</p> <p>I only read about AWS and GCE.</p>
<p>You'd probably need to deploy Kubernetes with Heat and use autoscale in your template. Example:</p> <p><a href="http://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat" rel="nofollow">http://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat</a></p> <p>and a template for Kubernetes to build on:</p> <p><a href="https://github.com/metral/corekube/blob/master/corekube-openstack.yaml" rel="nofollow">https://github.com/metral/corekube/blob/master/corekube-openstack.yaml</a></p>
<p>Question: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? "This target pool has no health check, so traffic will be sent to all instances regardless of their status."</p> <p>I have a service (NGINX reverse proxy) that targets specific pods and makes TCP: 80, 443 available. In my example only 1 NGINX pod is running within the instance pool. The Service type is "LoadBalancer". Using Google Container Engine this creates a new LoadBalancer (LB) that specifies target pools, specific VM Instances. Then a ephemeral external IP address for the LB and an associated Firewall rule that allows incoming traffic is created.</p> <p>My issue is that the Kubernetes auto-generated firewall rule description is "KubernetesAutoGenerated_OnlyAllowTrafficForDestinationIP_1.1.1.1" (IP is the LB external IP). In testing I've noticed that even though each VM Instance has a external IP address I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP. This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually. </p> <p>I have proper firewall rules so that any IP address may contact TCP 443, 80 on any instance within my pool, so that's not the issue.</p> <p>Can someone explain this to me because it makes me think that the LB is passing HTTP requests to both instances despite only one of those instances having the NGINX pod running on it.</p>
<blockquote> <p>Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening?</p> </blockquote> <p>All hosts (that are currently running a functional kube-proxy process) are capable of receiving and handling incoming requests for the externalized service. The requests will land on an arbitrary node VM in your cluster, match an iptables rule and be forwarded (by kube-proxy process) to a pod that has a label selector that matches the service. </p> <p>So the case where a healthchecker would prevent requests from being dropped is if you had a node VM that was running in a broken state. The VM would still have the target tag matching the forwarding rule but wouldn't be able to handle the incoming packets. </p> <blockquote> <p>In testing I've noticed that even though each VM Instance has a external IP address I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP.</p> </blockquote> <p>This is working as intended. Each service can use any port that is desires, meaning that multiple services can use ports 80 and 443. If a packet arrives on the host IP on port 80, the host has no way to know which of the (possibly many) services using port 80 the packet should be forwarded to. The iptables rules for services handle packets that are destined to the virtual internal cluster service IP and the external service IP, but not the host IP. </p> <blockquote> <p>This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually.</p> </blockquote> <p>If you want to set up a healthcheck to verify that a node is working properly, you can healthcheck the kubelet process that is running on port <code>10250</code> by installing a firewall rule:</p> <pre><code>$ gcloud compute firewall-rules create kubelet-healthchecks \ --source-ranges 130.211.0.0/22 \ --target-tags $TAG \ --allow tcp:10250 </code></pre> <p>(check out the <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="nofollow">Container Engine HTTP Load Balancer</a> documentation to help find what you should be using for <code>$TAG</code>).</p> <p>It would be better to health check the kube-proxy process directly, but it only <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-proxy/app/options/options.go#L62" rel="nofollow">binds to localhost</a>, whereas the kubelet process <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go#L134" rel="nofollow">binds to all interfaces</a> so it is reachable by the health checkers and it should serve as a good indicator that the node is healthy enough to serve requests to your service. </p>
<p>I installed Kubernetes on Linux using the steps <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">here</a>.</p>

<p>Everything worked fine until I exited the terminal and opened a new terminal session.</p>

<p>I got a permission denied error, and after restarting my machine I get the following error:</p>

<pre><code>&gt; kubectl get pod
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
</code></pre>

<p>I am just getting started with Kubernetes; any help would be appreciated. </p>
<p>Seems like a TCP problem. Try to isolate the problem by checking whether TCP/8080 is open by issuing: </p>

<blockquote>
  <p>telnet 127.0.0.1 8080</p>
</blockquote>

<p>If you get a 'connection refused', you should probably look at the firewall/security settings of your machine.</p>
<p>I have a Python service running in a Kubernetes container and writing logs to stdout. I can see the logs in the Cloud Logging console, but they are not structured, meaning:</p>

<ol>
<li>I can't filter log levels</li>
<li>A log record with multiple lines is interpreted as multiple log records</li>
<li>Dates are not parsed, etc.</li>
</ol>

<p>How can I address this problem? Can I configure the fluentd daemon somehow? Or should I write in a specific format?</p>

<p>Thanks</p>
<p>If you're running at least version 1.1.0 of Kubernetes (you most likely are), then if the logs you write are JSON formatted, they'll show up as structured logs in the Cloud Logging console.</p> <p>Then certain JSON keys are interpreted specially when imported into Cloud Logging, for example 'severity' will be used to set the log level in the console, or 'timestamp' can be used to set the time.</p>
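<p>For example, a single stdout line like the following would show up as one structured entry (treat the exact set of recognized keys beyond 'severity' and 'timestamp' as something to verify for your setup):</p>

<pre><code>{"severity": "ERROR", "timestamp": "2016-01-20T12:34:56Z", "message": "payment lookup failed"}
</code></pre>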
<p>I am trying to set up a single node kubernetes cluster for demo and testing purposes, and I want it to behave like a 'full blown' k8s cluster (like google container engine). My client has their own k8s installation, which for this discussion we can assume acts pretty much like google container engine's k8s installation.</p>

<p><strong>Getting the Ingress IP on Full Blown K8s</strong></p>

<p>I am creating a wordpress pod and exposing it as a service, as described in this tutorial: <a href="https://cloud.google.com/container-engine/docs/tutorials/hello-wordpress" rel="nofollow">https://cloud.google.com/container-engine/docs/tutorials/hello-wordpress</a></p>

<p>If you want to replicate the issue, you can just copy/paste the commands below, which I lifted from the tutorial. (This assumes you have a project called 'stellar-access-117903'; if not, please set it to the name of your Google Container Engine project.) </p>

<pre><code># set up the cluster (this will take a while to provision)
#
gcloud config set project stellar-access-117903
gcloud config set compute/zone us-central1-b
gcloud container clusters create hello-world \
    --num-nodes 1 \
    --machine-type g1-small

# Create the pod, and expose it as a service
#
kubectl run wordpress --image=tutum/wordpress --port=80
kubectl expose rc wordpress --type=LoadBalancer

# Describe the service
kubectl describe services wordpress
</code></pre>

<p>The output of the describe command contains a line 'LoadBalancer Ingress: {some-ip-address}' which is exactly what I'd expect. Now, when I do the same thing with the single node cluster setup <em>I don't</em> get that line. I am able to hit the wordpress service at the IP that appears in the output of the 'describe service' command. But in 'single node' mode, the IP that is printed out is the <em>cluster IP</em> of the service, which typically (as I understand it) is not publicly accessible. For some reason it is publicly accessible in single node mode.
We can replicate this with the following steps.</p> <p><strong>NOT Getting the Ingress IP on Single Node K8s</strong></p> <p>First setup single node k8s, as described in this tutorial: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md</a> </p> <p>For easy reproducibility, I have included all the commands below, so you can just copy/paste:</p> <pre><code>K8S_VERSION=1.1.1 sudo docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data sudo docker run \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/dev:/dev \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --pid=host \ --privileged=true \ -d \ gcr.io/google_containers/hyperkube:v${K8S_VERSION} \ /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v${K8S_VERSION} /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 # set your context to use the locally running k8s API server # kubectl config set-cluster dev --server=http://localhost:8080 kubectl config set-context dev --cluster=dev --namespace=$NS kubectl config use-context dev </code></pre> <p>Now, execute the very same commands that you performed against Google Container Engine's k8s</p> <pre><code># Create the pod, and expose it as a service # kubectl run wordpress --image=tutum/wordpress --port=80 kubectl expose rc wordpress --type=LoadBalancer # Describe the service kubectl describe services wordpress </code></pre> <p>The output of the last command (which you will see has no 'Ingress' information) is:</p> <pre><code>Name: wordpress Namespace: default Labels: run=wordpress Selector: run=wordpress Type: LoadBalancer IP: 10.0.0.61 Port: &lt;unnamed&gt; 80/TCP NodePort: &lt;unnamed&gt; 31795/TCP Endpoints: 172.17.0.30:80 Session Affinity: None No events. </code></pre> <p>In google container engine's k8s, I see events like ' Creating load balancer ', ' Load balancer created '. But nothing like that happens in the single node instance. </p> <p>I am wondering ... is there some configuration I need to do to get them to work identically ? It is very important that they work identically... differing only in their scalability, because we want to run tests against the single node version, and it will be very confusing if it behaves differently.</p> <p>Thanks in advance for your help -chris</p>
<p>Here is the solution we came up with. When we are running against single node Kubernetes we realized by trial and error that when you expose a service the external IP does not come back via IngressIP; rather, it comes back via the clusterIP, which as mentioned above is publicly viewable. So, we just modified our code to work with that. We use the clusterIP in the single node case. Here is the code we use to establish a watch on the service to figure out when k8s has allocated our externally visible IP:</p> <p>First we use the fabric8 API to create the service configuration:</p> <pre><code> case "Service" =&gt; val serviceConf = mapper.readValue(f, classOf[Service]) val service = kube.services().inNamespace(namespaceId).create(serviceConf) watchService(service) </code></pre> <p>The 'watchService' method is defined below:</p> <pre><code> private def watchService(service: Service) = { val namespace = service.getMetadata.getNamespace val name = service.getMetadata.getName logger.debug("start -&gt; watching service -&gt; namespace: " + namespace + " name: " + name) val kube = createClient() try { @volatile var complete = false val socket = kube.services().inNamespace(namespace).withName(name).watch(new Watcher[Service]() { def eventReceived(action: Action, resource: Service) { logger.info(action + ":" + resource) action match { case Action.MODIFIED =&gt; if (resource.getMetadata.getName == name) { complete = isServiceComplete(resource) } // case Action.DELETED =&gt; // complete = true case _ =&gt; } } }) while (!complete) { Thread.sleep(5000) complete = isServiceComplete(kube.services().inNamespace(namespace).withName(name).get) } logger.info("Closing socket connection") socket.close() } finally { logger.info("Closing client connection") kube.close() } logger.debug("complete -&gt; watching services , namespace: " + namespace + " name: " + name) } </code></pre> <p>The key hack we introduced was in the method 'isServiceComplete' .. when using single node k8s the value of 'isUsingMock' is true. so that makes us use clusterIP to determine if service configuration has completed or not.</p> <pre><code> private def isServiceComplete(service: Service) = { !service.getStatus.getLoadBalancer.getIngress.isEmpty || mockServiceComplete(service) } def mockServiceComplete(service: Service): Boolean = { val clusterIP = service.getSpec.getClusterIP logger.trace(s"mockServiceComplete: $isUsingMock / $clusterIP / $KUBE_SERVER" ) isUsingMock &amp;&amp; ! clusterIP.isEmpty } </code></pre> <p>Sorry if there is not a lot of extra context here. Eventually our project should be open source and we can post a complete solution.</p> <p>-chris</p>
<p>I start a kubernetes replication controller. When the corresponding container in the single pod in this replication controller has a <code>gcePersistentDisk</code> specified the pods will start very slow. After 5 minutes the pod is still in the <code>Pending</code> state.</p> <p><code>kubectl get po</code> will tell me:</p> <pre><code>NAME READY STATUS RESTARTS AGE app-1-a4ni7 0/1 Pending 0 5m </code></pre> <p>Without the <code>gcePersistentDisk</code> the pod is <code>Running</code> in max 30 seconds.</p> <p>(I am using a 10 GB Google Cloud Storage disk and I know that these disks have <a href="https://stackoverflow.com/q/23683271/454103">lower performance</a> for lower capacities, but I am not sure this is the issue.)</p> <p>What could be the cause of this?</p>
<p>We've seen the GCE PD attach calls take upwards of 10 minutes to complete, so this is more or less expected. For example see <a href="https://github.com/kubernetes/kubernetes/issues/15382#issuecomment-153268655">https://github.com/kubernetes/kubernetes/issues/15382#issuecomment-153268655</a>, where PD tests were timing out before GCE PD attach/detach calls could complete. We're working with the GCE team to improve performance and reduce latency.</p> <p>If the pod never gets out of pending state, then you might've hit a bug. In that case, grab your kubelet log and open an issue at <a href="https://github.com/kubernetes/kubernetes/">https://github.com/kubernetes/kubernetes/</a></p>
<p>In GKE every cluster has a single master endpoint, which is managed by Google Container Engine. Is this master node highly available?</p>

<p>I deploy a beautiful cluster of redundant nodes with Kubernetes, but what happens if the master node goes down? How can I test this situation?</p>
<p>In Google Container Engine the master is managed for you and kept running by Google. According to the <a href="https://cloud.google.com/container-engine/sla" rel="nofollow">SLA for Google Container Engine</a> the master should be available at least 99.5% of the time. </p>
<p>I have a project with an App Engine part and a Google Container Engine cluster. The App Engine app needs to make HTTP calls to a service deployed to the container cluster. </p>

<p>I know that I can assign an external IP to the service, hardcode it into my App Engine app, and then make UrlFetch requests against that IP. That works. But I don't want to use the public network for such communication.</p>

<p>I'm wondering whether I can also get access the way it's done between pods inside the Kubernetes cluster, by specifying a service hostname that resolves to an internal IP in the 10.x.x.x range. </p>

<p>Is this possible from App Engine? Is there a special naming scheme that resolves to GKE services?</p>
<p>There isn't currently a way to inject packets from an App Engine application into the private GCP network where your Kubernetes cluster is running (e.g. coming from a 10.0.0.0/8 address). So the only way to connect your application to your Container Engine service is to use the external IPs (as you are currently doing). </p>
<p>I have a health check with a 1-second check interval (<a href="http://s.drollette.com/0B2A3Z1w2X1G" rel="nofollow">http://s.drollette.com/0B2A3Z1w2X1G</a>). It was created by the GLBC Ingress controller in kubernetes. But looking at the logs it appears that it is generating 3 requests per second (<a href="http://s.drollette.com/2U432C2f1d2f" rel="nofollow">http://s.drollette.com/2U432C2f1d2f</a>). Is this expected behavior from a Google Compute Health Check? Nothing else is configured to be hitting that route.</p>
<p>Yes, this is the expected behavior. More than one health checker is used to check the service, and each health checker independently obeys the interval specified in your configuration.</p>
<p>tried to config one master node following the guide (<a href="http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#master-node" rel="nofollow">http://kubernetes.io/v1.1/docs/getting-started-guides/docker-multinode.html#master-node</a>) script master.sh ran successfully, but the api server failed to boot up. Software Version:</p> <pre><code>K8S_VERSION=1.1.3 ETCD_VERSION=2.2.1 FLANNEL_VERSION=0.5.5 </code></pre> <p>OS Version:</p> <pre><code>VERSION="2015.09" ID="amzn" ID_LIKE="rhel fedora" VERSION_ID="2015.09" PRETTY_NAME="Amazon Linux AMI 2015.09" ANSI_COLOR="0;33" CPE_NAME="cpe:/o:amazon:linux:2015.09:ga" HOME_URL="http://aws.amazon.com/amazon-linux-ami/" </code></pre> <p>Docker: 1.7.1</p> <p>Kernel Version:</p> <pre><code>Linux ip-172-0-11-22 4.1.10-17.31.amzn1.x86_64 #1 SMP Sat Oct 24 01:31:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <p>log message of kubelet:</p> <pre><code>I0113 15:44:42.517777 7987 server.go:770] Started kubelet E0113 15:44:42.517812 7987 kubelet.go:756] Image garbage collection failed: unable to find data for container / E0113 15:44:42.518437 7987 event.go:197] Unable to write event: 'Post http://localhost:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping) I0113 15:44:42.518460 7987 server.go:89] Starting to listen read-only on 0.0.0.0:10255 I0113 15:44:42.518885 7987 server.go:72] Starting to listen on 0.0.0.0:10250 I0113 15:44:42.524222 7987 kubelet.go:777] Running in container "/kubelet" I0113 15:44:42.696510 7987 factory.go:239] Registering Docker factory I0113 15:44:42.698516 7987 factory.go:93] Registering Raw factory I0113 15:44:42.698837 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:42.698862 7987 kubelet.go:2300] Recording NodeReady event message for node localhost I0113 15:44:42.698871 7987 kubelet.go:869] Attempting to register node localhost I0113 15:44:42.699523 7987 kubelet.go:872] Unable to register localhost with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.829361 7987 manager.go:1006] Started watching for new ooms in manager I0113 15:44:42.830001 7987 oomparser.go:183] oomparser using systemd I0113 15:44:42.842667 7987 manager.go:250] Starting recovery of all containers I0113 15:44:42.868829 7987 manager.go:255] Recovery completed I0113 15:44:42.880876 7987 container_manager_linux.go:215] Configure resource-only container /docker-daemon with memory limit: 2903034265 I0113 15:44:42.880910 7987 manager.go:104] Starting to sync pod status with apiserver I0113 15:44:42.880963 7987 kubelet.go:1960] Starting kubelet main sync loop. 
I0113 15:44:42.881004 7987 kubelet.go:2012] SyncLoop (ADD): "k8s-master-localhost_default" E0113 15:44:42.881457 7987 kubelet.go:1915] error getting node: node 'localhost' is not in cache E0113 15:44:42.884752 7987 kubelet.go:1356] Failed creating a mirror pod "k8s-master-localhost_default": Post http://localhost:8080/api/v1/namespaces/default/pods: dial tcp 127.0.0.1:8080: connection refused E0113 15:44:42.884780 7987 kubelet.go:1361] Mirror pod not available I0113 15:44:42.884839 7987 manager.go:1707] Need to restart pod infra container for "k8s-master-localhost_default" because it is not found W0113 15:44:42.885688 7987 manager.go:108] Failed to updated pod status: error updating status for pod "k8s-master-localhost_default": Get http://localhost:8080/api/v1/namespaces/default/pods/k8s-master-localhost: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.900665 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:42.900693 7987 kubelet.go:2300] Recording NodeReady event message for node localhost I0113 15:44:42.900751 7987 kubelet.go:869] Attempting to register node localhost I0113 15:44:42.901194 7987 kubelet.go:872] Unable to register localhost with the apiserver: Post http://localhost:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: connection refused I0113 15:44:42.977270 7987 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider I0113 15:44:42.977458 7987 docker.go:159] Pulling image gcr.io/google_containers/pause:0.8.0 without credentials I0113 15:44:43.302487 7987 kubelet.go:2300] Recording NodeHasSufficientDisk event message for node localhost I0113 15:44:43.302552 7987 kubelet.go:2300] Recording NodeReady event message for node localhost </code></pre>
<p>First, I would upgrade to a newer Docker version.</p>

<p>But I think the problem has something to do with your kubelet configuration.</p>

<p>The parameter --hostname-override allows you to override the hostname. I'm not 100% sure, but I think your node has to be accessible via that hostname from the kube-apiserver. If your apiserver is on another node, localhost won't work. </p>

<p>See <a href="http://kubernetes.io/v1.1/docs/admin/kubelet.html" rel="nofollow">http://kubernetes.io/v1.1/docs/admin/kubelet.html</a> for more information.</p>
<p>I recently had cause to restart a fluentd-elasticsearch pod on all my nodes. Of the 7 nodes where the pods were deleted, only 1 pod was actually deleted and came back as "Running". Is there a way to completely purge a pod in k8s?</p>
<p><code>fluentd-elasticsearch</code> pods are <a href="https://github.com/kubernetes/kubernetes/blob/4ca66d2aefa20c27b670b2fa890052daadc05294/docs/admin/static-pods.md" rel="nofollow">static pods</a> which are created via placing pod manifest files (<code>fluentd-es.yaml</code>) in a directory watched by Kubelet. The corresponding pod (a.k.a. the <em>mirror pod</em>) with the same name and namespace in the API server is created automatically for the purpose of introspection -- it reflects the status of the static pod.</p> <p>Kubernetes treats the static pod (the pod manifest file) in the directory as the source of the truth; operations (deletion/update, etc) on the mirror pod will <em>not</em> have any effect on the static pod.</p> <p>You are encouraged to move away from static pods and use <a href="https://github.com/kubernetes/kubernetes/blob/4ca66d2aefa20c27b670b2fa890052daadc05294/docs/admin/daemons.md" rel="nofollow">DaemonSet</a>, except for a few particular use cases (e.g., standalone Kubelets). The system add-on pods such as <code>fluentd-elasticsearch</code> will be converted to <code>DaemonSet</code> eventually. </p>
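<p>For example, on a node set up with the default add-on layout you can usually find the static pod's manifest on the node itself (the exact path depends on the Kubelet's configured manifest directory, so treat this as an illustration):</p>

<pre><code># on the node
ls /etc/kubernetes/manifests/
# removing or re-adding the fluentd-es.yaml file here is what actually
# deletes or recreates the static pod on that node
</code></pre>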
<p>Most of the articles online regarding setting up Docker containers seem to be written around the idea of breaking an application into microservices and allocating them into various containers and deploying them into a cluster.</p> <p>I would like to find out the best way to handle databases (e.g. MySQL) for multiple unrelated applications, written for different clients, and deployed into the same cluster.</p> <p>Say I have 10 unrelated small applications (like WordPress), all requiring access to MySQL database. I could:</p> <ol> <li><p>Deploy the applications as containers into the cluster, containing just the application code, and setting up a dedicated MySQL server or a Google Cloud SQL instance and asking each of the application containers to connect to the database as 3rd party services.</p> </li> <li><p>Deploy the applications as containers into the cluster. For each applications, also deploy a separate database container into the cluster and link the two.</p> </li> <li><p>Deploy a separate database container into the cluster and link this container to the various application containers in the cluster.</p> </li> </ol> <p>Which of these solutions is the best in terms of application architecture design and which of these is the best use of computer resources? I have the feeling that deploying multiple MySQL containers (one for each application) may be the best design but it might not be the most resource-efficient as we will have a bunch of MySQL containers running.</p>
<blockquote>
  <p>Containerising db for each app seems to be "the docker way" and provide better isolation and portability</p>
</blockquote>

<p>The Docker way isn't a DB per app but a service per container. MySQL is a service; as long as you don't run another service (app/ssh/monitoring...) in the mysql container, it's the way to go. </p>

<p><strong>So the decision between one DB per app or one DB for all is up to you.</strong> </p>

<p>My personal choice is the third: </p>

<blockquote>
  <ol start="3">
  <li>Deploy a separate database container into the cluster and link this container to the various application containers in the cluster.</li>
  </ol>
</blockquote>

<p>I'm using Kubernetes with a postgres container that is used as a DB server for all applications.</p>

<p>I prefer this choice because, from an ops point of view, it's easier to backup/replicate/apply maintenance than having 30 different DB servers + 30*slaves + 30*external pools + 30*monitoring tools, etc... Also, in my case, I get better hardware resource usage.</p>

<p>But I keep the option of moving a database to another dedicated db-server container in case an application uses too many resources or too many apps are already using the DB.</p>
<p>I'm composing a yaml file for scripts running in Docker and orchestrated by Kubernetes. Is there a way to evaluate the resource utilization of a specific command or docker container, or what's the best practice for setting the CPU and memory limits of pods?</p>

<p><strong>Edit</strong></p>

<p>Most of these scripts will run for a short time, so it's hard to get the resource info. I just want to find a tool that reports the maximum usage of CPU and memory; a tool that works like <code>time</code>, which prints out the execution time.</p>
<p>You can view statistics for container(s) using the <code>docker stats</code> command.</p> <p>For example;</p> <pre><code>docker stats containera containerb CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O containera 0.00% 24.15 MB / 1.041 GB 2.32% 1.8 MB / 79.37 kB 0 B / 81.92 kB containerb 0.00% 24.95 MB / 1.041 GB 2.40% 1.798 MB / 80.72 kB 0 B / 81.92 kB </code></pre> <p>Or, see processes running in a container using <code>docker top &lt;container&gt;</code></p> <pre><code>docker top containera UID PID PPID C STIME TTY TIME CMD root 4558 2850 0 21:13 ? 00:00:00 sh -c npm install http-server -g &amp;&amp; mkdir -p /public &amp;&amp; echo "welcome to containera" &gt; /public/index.html &amp;&amp; http-server -a 0.0.0.0 -p 4200 root 4647 4558 0 21:13 ? 00:00:00 node /usr/local/bin/http-server -a 0.0.0.0 -p 4200 </code></pre> <h3>Limiting resources</h3> <p>Docker compose (like docker itself) allows you to set limits on resources for a container, for example, limiting the maximum amount of memory used, cpu-shares, etc.</p> <p>Read this section in the <a href="https://docs.docker.com/compose/compose-file/#cpu-shares-cpuset-domainname-entrypoint-hostname-ipc-mac-address-mem-limit-memswap-limit-privileged-read-only-restart-stdin-open-tty-user-working-dir" rel="nofollow">docker-compose yaml reference</a>, and the docker run reference on <a href="https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources" rel="nofollow">"Runtime constraints on resources"</a></p>
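<p>Once you have an idea of the numbers, you can set the limits on the Kubernetes side as well. A minimal sketch of a pod spec with resource limits (the image name and values are placeholders -- adjust them to what you measured):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: short-script
spec:
  restartPolicy: Never
  containers:
  - name: script
    image: your-script-image:latest   # placeholder image
    resources:
      limits:
        cpu: 500m       # half a core
        memory: 256Mi
</code></pre>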
<p>I've introduced a bunch of readiness and liveness checks in our Kubernetes pods. Apart from currently being fairly CPU-heavy, they appear to work as expected.</p> <p>But then we started to run some load-testing on our solution, and almost immediately pods get killed and events like this show up:</p> <p><code>Liveness probe errored: read tcp 10.244.27.123:8080: use of closed network connection</code></p> <p>There appears to have been an issue with keep-alive and the HTTP probe (<a href="https://github.com/kubernetes/kubernetes/issues/15643" rel="nofollow">issue 15643</a>), but that also appears to have been fixed by disabling keep-alive in the probe in Kubernetes 1.1.1 (which is what we are running).</p> <p>So does anyone have any idea what could be going on?</p>
<p>I have seen this error when the liveness probe is timing out. Try lengthening the timeoutSeconds on your livenessProbe and see if the problem goes away.</p>
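<p>For example, a minimal sketch -- the probe path, port and values are placeholders rather than taken from your setup:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 5    # raised from the default of 1 second
</code></pre>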
<p>I have multiple volumes and one claim. How can I tell the claim which volume to bind to?</p> <p>How does a <code>PersistentVolumeClaim</code> know to which volume to bind? Can I control this using some other parameters or metadata?</p> <p>I have the following <code>PersistentVolumeClaim</code>:</p> <pre><code>{ "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "default-drive-claim" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } } } } { "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "default-drive-disk", "labels": { "name": "default-drive-disk" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "gcePersistentDisk": { "pdName": "a1-drive", "fsType": "ext4" } } } </code></pre> <p>If I create the claim and the volume using:</p> <pre><code>kubectl create -f pvc.json -f pv.json </code></pre> <p>I get the following listing of the volumes and claims:</p> <pre><code>NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE default-drive-disk name=default-drive-disk 10Gi RWO Bound default/default-drive-claim 2s NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE default-drive-claim &lt;none&gt; Bound default-drive-disk 10Gi RWO 2s </code></pre> <p>How does the claim know to which volume to bind?</p>
<p>The current implementation does not allow your PersistentVolumeClaim to target specific PersistentVolumes. Claims bind to volumes based on their capabilities (access modes) and capacity.</p> <p>In the works is the next iteration of PersistentVolumes, which includes a PersistentVolumeSelector on the claim. This would work exactly like a NodeSelector on a Pod works. The volume would have to match the label selector in order to bind. This is the targeting you are looking for.</p> <p>Please see <a href="https://github.com/kubernetes/kubernetes/pull/17056" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/17056</a> for the proposal containing PersistentVolumeSelector.</p>
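<p>Assuming the selector lands roughly as proposed, a claim that targets the volume by its label might look something like the sketch below. The exact field names are an assumption based on the proposal, so treat this as illustrative only:</p>
<pre><code>{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "default-drive-claim"
  },
  "spec": {
    "accessModes": ["ReadWriteOnce"],
    "selector": {
      "matchLabels": {
        "name": "default-drive-disk"
      }
    },
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
</code></pre>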
<p>I met a problem using requests.get() with the Kubernetes API</p> <pre><code>url = 'http://10.69.117.136:8080/api/v1/namespaces/"default"/pods/tas-core/' json = requests.get(url) print json.content </code></pre> <p>Error code 404 is returned: {"kind": "Status","apiVersion": "v1","metadata": {},"status": "Failure","message": "pods \"tas-core\" not found","reason": "NotFound","details": {"name": "tas-core","kind": "pods"},"code": 404}</p> <p>but if I use GET/curl, the response is returned successfully:</p> <pre><code>curl http://10.69.117.136:8080/api/v1/namespaces/"default"/pods/tas-core/ </code></pre> <p>{"kind": "Pod","apiVersion": "v1","metadata": {"name": "tas-core","namespace":"default","selfLink": "/api/v1/namespaces/default/pods/tas-core","uid": "a264ce8e-a956-11e5-8293-0050569761f2","resourceVersion": "158546","creationTimestamp": "2015-12-23T09:22:06Z","labels": {"app": "tas-core"},"annotations": {"ctrl": "dynamic","oam": "dynamic"}},"spec": {"volumes":[ ...</p> <p>Furthermore, a shorter URL works fine:</p> <pre><code>url = 'http://10.69.117.136:8080/api/v1/namespaces/' json = requests.get(url) print json.content </code></pre> <p>{"kind":"NamespaceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/","resourceVersion":"220452"},"items":[{"metadata":{"name":"default","selfLink":"/api/v1/namespaces/default","uid":"74f89440-a94a-11e5-9afd-0050569761f2","resourceVersion":"6","creationTimestamp":"2015-12-23T07:54:55Z"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}]}</p> <p>Where did I go wrong?</p>
<p>Making the request from <code>requests</code> and from the command line sends it to different URLs.</p> <p>The <code>requests</code> call from Python code really tries to use the URL including the quotes.</p> <p><code>curl</code> on the command line strips the quotes (in other cases it escapes them).</p> <p>I am unable to test your real URL with real requests, but I guess the following might work:</p> <pre><code>url = 'http://10.69.117.136:8080/api/v1/namespaces/default/pods/tas-core/' json = requests.get(url) print json.content </code></pre>
<p>I am currently experimenting with Kubernetes and have installed a small cluster on ESX infra I had running here locally. I installed two slave nodes with a master node using Project Atomic with Fedora. The cluster is all installed fine and seems to be running. However I first want to get a MySQL container up and running, but no matter what I try i cannot get it to run.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mysql labels: name: mysql spec: containers: - resources: limits : cpu: 0.5 image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD value: myPassw0rd ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage nfs: server: 10.0.0.2 path: "/export/mysql" </code></pre> <p>For the volume I already tried all kinds of solutions, I tried using persistent volume with and without claim. I tried using host volume and emptyDir, but I always end up with this error when the container starts:</p> <p>chown: changing ownership of '/var/lib/mysql/': Operation not permitted</p> <p>I must be doing something stupid, but no idea what to do here?</p>
<p>OK, it seems I can answer my own question: the problem was in the NFS share that was being used as the persistent volume. I had it set to 'all_squash' in the export, but it needs 'no_root_squash' so that root inside the Docker container is allowed to chown on the NFS-backed volume.</p>
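<p>For example, the export line on the NFS server could look roughly like this (the subnet is an assumption -- use whatever range covers your nodes):</p>
<pre><code># /etc/exports
/export/mysql 10.0.0.0/24(rw,sync,no_root_squash)
</code></pre>
<p>Run <code>exportfs -ra</code> afterwards to reload the exports.</p>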
<p>I have a kubernetes pod to which I attach a GCE persistent volume using a persistence volume claim. (For the even worse issue without a volume claim see: <a href="https://stackoverflow.com/q/34769946/454103">Mounting a gcePersistentDisk kubernetes volume is very slow</a>)</p> <p>When there is no volume attached, the pod starts in no time (max 2 seconds). But when the pod has a GCE persistent volume mount, the <code>Running</code> state is reached somewhere between 20 and 60 seconds. I was testing with different disk sizes (10, 200, 500 GiB) and multiple pod creations and the size does not seem to be correlated with the delay.</p> <p>And this delay is <strong>not only</strong> happening in the <strong>beginning</strong> but also when <strong>rolling updates</strong> are performed with the replication controllers or when the <strong>code crashes</strong> during runtime.</p> <p>Below I have the kubernetes specifications:</p> <p>The replication controller</p> <pre><code>{ "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "a1" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "app": "a1" } }, "spec": { "containers": [ { "name": "a1-setup", "image": "nginx", "ports": [ { "containerPort": 80 }, { "containerPort": 443 } ] } ] } } } } </code></pre> <p>The volume claim</p> <pre><code>{ "apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": { "name": "myclaim" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "10Gi" } } } } </code></pre> <p>And the volume</p> <pre><code>{ "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "name": "mydisk", "labels": { "name": "mydisk" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "gcePersistentDisk": { "pdName": "a1-drive", "fsType": "ext4" } } } </code></pre> <p>Also </p>
<p>GCE (along with AWS and OpenStack) must first attach a disk/volume to the node before it can be mounted and exposed to your pod. The time required for attachment is dependent on the cloud provider.</p> <p>In the case of pods created by a ReplicationController, there is an additional detach operation that has to happen. The same disk cannot be attached to more than one node (at least not in read/write mode). Detaching and pod cleanup happen in a different thread than attaching. To be specific, Kubelet running on a node has to reconcile the pods it currently has (and the sum of their volumes) with the volumes currently present on the node. Orphaned volumes are unmounted and detached. If your pod was scheduled on a different node, it must wait until the original node detaches the volume. </p> <p>The cluster eventually reaches the correct state, but it might take time for each component to get there. This is your wait time.</p>
<p>We are having issues with our OpenShift AWS deployment when trying to use persistent volumes.</p> <p>These are some of the errors when trying to deploy the mysql-persistent instance:</p> <ul> <li>Unable to mount volumes for pod "mysql-4-uizxn_persistent-test": Cloud provider does not support volumes</li> <li>Error syncing pod, skipping: Cloud provider does not support volumes</li> </ul> <p>We added the following to the node-config.yaml on each of our nodes</p> <pre><code>kubeletArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" </code></pre> <p>and also added the following to our master-config.yaml</p> <pre><code>kubernetesMasterConfig: apiServerArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" controllerArguments: cloud-provider: - "aws" cloud-config: - "/etc/aws/aws.conf" </code></pre> <p>Not sure if we are just missing something or if there is a known issue/workaround.</p> <p>Another question: how does OpenShift or Kubernetes know that the config files have been changed?</p> <p>Also, just to give you some context, we used <a href="https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md" rel="nofollow">openshift-ansible</a> to deploy our environment.</p>
<p>The way the documentation states to export environment variables is a bit inaccurate. They need to be added to the environment that is referenced by the systemd unit file or the node needs to be granted appropriate IAM permissions.</p> <p>For configuring the credentials in the environment for the node, add the following to /etc/sysconfig/origin-node (assuming Origin 1.1):</p> <pre><code>AWS_ACCESS_KEY_ID=&lt;key id&gt; AWS_SECRET_ACCESS_KEY=&lt;secret key&gt; </code></pre> <p>Alternatively, the nodes can be assigned an IAM role with the appropriate permissions. The following cloudformation resource snippet creates a role with the appropriate permissions for a node:</p> <pre><code>"NodeIAMRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] }, "Policies": [ { "PolicyName": "demo-node-1", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" } ] } }, { "PolicyName": "demo-node-2", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:AttachVolume", "Resource": "*" } ] } }, { "PolicyName": "demo-node-3", "PolicyDocument": { "Version" : "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:DetachVolume", "Resource": "*" } ] } } ] } } </code></pre>
<p>I'm trying to build a Kubernetes cluster with the following services inside:</p> <ul> <li>Docker-registry (which will contain my django Docker image)</li> <li>Nginx listening on both port 80 and 443</li> <li>PostgreSQL</li> <li>Several django applications served with gunicorn</li> <li><a href="http://blog.ployst.com/development/2015/12/22/letsencrypt-on-kubernetes.html" rel="nofollow">letsencrypt</a> container to generate and automatically renew signed SSL certificates</li> </ul> <p>My problem is a chicken and egg problem that occurs during the creation of the cluster:</p> <p>My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are the owner of the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of Nginx being able to serve a static file over port 80).</p> <p>Here is my first problem: To serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...</p> <p>So, a simple solution could be to have 2 Nginx containers: One listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.</p> <p>-> This kind of looks like a waste of resources in my opinion, but why not.</p> <p>Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.</p> <p>So in my nginx configuration, I'll have a docker-registry.conf file looking like:</p> <pre><code>upstream docker-registry { server registry:5000; } server { listen 443; server_name docker.thedivernetwork.net; # SSL ssl on; ssl_certificate /etc/nginx/conf.d/cacert.pem; ssl_certificate_key /etc/nginx/conf.d/privkey.pem; # disable any limits to avoid HTTP 413 for large image uploads client_max_body_size 0; # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486) chunked_transfer_encoding on; location /v2/ { # Do not allow connections from docker 1.5 and earlier # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) { return 404; } # To add basic authentication to v2 use auth_basic setting plus add_header auth_basic "registry.localhost"; auth_basic_user_file /etc/nginx/conf.d/registry.password; add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always; proxy_pass http://docker-registry; proxy_set_header Host $http_host; # required for docker client's sake proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_read_timeout 900; } } </code></pre> <p>The important part is the proxy_pass that redirects toward the registry container.</p> <p>The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder, django.conf:</p> <pre><code>upstream django { server django:5000; } server { listen 443 ssl; server_name example.com; charset utf-8; ssl on; ssl_certificate /etc/nginx/conf.d/cacert.pem; ssl_certificate_key /etc/nginx/conf.d/privkey.pem; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; client_max_body_size 20M; location / { # checks for static file, if not found proxy to app try_files $uri @proxy_to_django; } location 
@proxy_to_django { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; #proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 65; proxy_read_timeout 65; proxy_pass http://django; } } </code></pre> <p>So nginx will successfully start only under 3 conditions:</p> <ul> <li>the secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)</li> <li>the registry service is started</li> <li>the django service is started</li> </ul> <p>The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.</p> <p>I didn't mention it, but registry and django have different server names, so nginx is able to serve them both.</p> <p>The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:</p> <ul> <li>I start the docker registry service</li> <li>I start Nginx with only the registry.conf</li> <li>I create my django rc and service</li> <li>I reload nginx with both registry.conf and django.conf</li> </ul> <p>If there were a way to make nginx start while ignoring a failing configuration, that would probably solve my issues as well.</p> <p>How can I cleanly achieve this setup?</p> <p>Thanks for your help</p> <p>Thibault</p>
<p>Are you using Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/services.md" rel="nofollow">Services</a> for your applications?</p> <p>With a Service in front of each of your Pods, you have a proxy for the Pods. Even if the pod is not started yet, nginx will find the Service when looking it up, because the Service has an IP assigned as soon as it is created.</p> <p>So you start the Services, then start nginx and whatever Pods you want, in the order you want.</p>
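<p>A minimal sketch of such a Service for the registry (the label <code>app: registry</code> is an assumption -- use whatever selector matches your registry pod). nginx can then <code>proxy_pass</code> to <code>registry:5000</code> even before the registry pod exists:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry   # assumed pod label
  ports:
  - port: 5000
</code></pre>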
<p>I'm setting up a Kubernetes cluster and am testing a small container. This is my YAML file for the pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: example spec: restartPolicy: Never containers: - name: node image: 'node:5' command: ['node'] args: ['-e', 'console.log(1234)'] </code></pre> <p>I deploy it with <code>kubectl create -f example.yml</code> and sure enough it runs as expected:</p> <pre><code>$ kubectl logs example 1234 </code></pre> <p>However, the pod's status is "Error":</p> <pre><code>$ kubectl get po example NAME READY STATUS RESTARTS AGE example 0/1 Error 0 16m </code></pre> <p>How can I investigate why the status is "Error"?</p>
<p><code>kubectl describe pod example</code> will give you more info on what's going on.</p> <p>Also,</p> <p><code>kubectl get events</code> can give you more details too, although the events are not dedicated to the given pod.</p>
<p>When you create a Google Container Engine (GKE) cluster you specify what the number and what types of machines you want to use in the cluster. </p> <ol> <li>Is it possible to auto-scale the number of cluster machines based on (for example) CPU load?</li> <li>If this is not supported, is there a reason why or is Google working on something like this for the future?</li> </ol>
<p>Yes, it is. To attach an autoscaler to your existing GKE cluster:</p> <ol> <li><p>Find the name of your cluster's instance group:</p> <pre><code>$ gcloud compute instance-groups managed list NAME ZONE BASE_INSTANCE_NAME SIZE TARGET_SIZE INSTANCE_TEMPLATE AUTOSCALED gke-buildlets-69898e2d-group us-central1-f gke-buildlets-69898e2d-node 1 1 gke-buildlets-69898e2d-1-1-3 yes </code></pre> <p>Here I have a GKE cluster named <em>buildlets</em>, and its instance group is named <em>gke-buildlets-69898e2d-group</em>.</p></li> <li><p>Enable autoscaling. This particular example will scale on a target CPU utilization of 70%:</p> <pre><code>gcloud compute instance-groups managed set-autoscaling YOUR_INSTANCE_GROUP_NAME \ --zone=YOUR_INSTANCE_GROUP_ZONE \ --min-num-replicas=1 \ --max-num-replicas=8 \ --scale-based-on-cpu \ --target-cpu-utilization=.7 </code></pre></li> </ol> <p>You can also use <a href="https://cloud.google.com/deployment-manager/overview">Google Cloud Deployment Manager</a> to create your GKE cluster, and create/attach an autoscaler right along with it:</p> <pre><code>resources: - name: buildlets type: container.v1.cluster properties: zone: us-central1-f cluster: initial_node_count: 1 network: "default" logging_service: "logging.googleapis.com" monitoring_service: "monitoring.googleapis.com" node_config: machine_type: n1-standard-1 oauth_scopes: - "https://www.googleapis.com/auth/cloud-platform" master_auth: username: admin password: password123 - name: autoscaler type: compute.v1.autoscaler properties: zone: us-central1-f name: buildlets target: "$(ref.buildlets.instanceGroupUrls[0])" autoscalingPolicy: minNumReplicas: 2 maxNumReplicas: 8 coolDownPeriodSec: 600 cpuUtilization: utilizationTarget: .7 </code></pre>
<p>I used the instructions in the official getting started guide (<a href="http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html" rel="nofollow">http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html</a>) to get started with Kubernetes on Vagrant with the VMware Fusion provider on OS X.</p> <p>When running</p> <pre><code>export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash </code></pre> <p>everything seems to work fine, but in the end I get the following error:</p> <pre><code>Validating minion-1 ...... Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: net/http: TLS handshake timeout </code></pre> <p>I've found the following github issues:</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/issues/13382" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/13382</a></li> <li><a href="https://github.com/kubernetes/kubernetes/issues/17426" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/17426</a></li> </ul> <p>Because it seems that neither of them posted the question on SO as recommended, I decided to do so.</p> <p>My Environment:</p> <ul> <li>OS X 10.11.1</li> <li>Vagrant 1.7.4</li> <li>VMware Fusion 7.1.3</li> </ul> <p>I'm new to Kubernetes; if you need more information I will provide it.</p>
<p>I was also getting the same error - <br> "Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get <a href="https://10.245.1.2/api" rel="nofollow">https://10.245.1.2/api</a>: net/http: TLS handshake timeout". I then tried <br><code>./cluster/kube-push.sh</code> and this time the cluster was created and validated successfully. <br>Environment details: <br>Host machine - Ubuntu 14.04 <br>Vagrant - 1.8.1 <br>VirtualBox - 5.0.14 <br>Kubernetes - 1.1.4</p> <p>Just to add: after setting up the cluster with the default VM memory (1024MB) I was not able to run any pod (I tried NGINX); it was always in the Pending state. So I increased the memory and restarted, and now it runs fine.</p>
<p>We are running a Jetty service on Google Container Engine. This one service runs just fine in a pod with an rc. We can shut it down, rebuild it and do all manner of things to it and it will still work.</p> <p>Now we want to extend our infrastructure with a debian image that runs something else. Locally, the Docker image works fine and we can access the Debian command line. Once we try to run the pod in the cloud, we get issues.</p> <p>The Dockerfile we use contains only <code>FROM debian:latest</code>. Then we run the following commands:</p> <pre><code>docker build -t eu.gcr.io/project_id/debstable:stable . gcloud docker push eu.gcr.io/project_id/debstable:stable kubectl run debstable --image=eu.gcr.io/project_id/debstable:stable </code></pre> <p>The pod receives the CrashLoopBackOff STATUS and keeps on restarting. Part of the logs show this: </p> <pre><code>I0120 14:19:58.438979 3479 kubelet.go:2012] SyncLoop (ADD): "debstable-blvdi_default" I0120 14:19:58.478235 3479 manager.go:1707] Need to restart pod infra container for "debstable-blvdi_default" because it is not found I0120 14:20:00.025467 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (100.384µs) 404 [[Go 1.1 package http] 10.0.0.3:40650] I0120 14:20:05.017006 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (56.159µs) 404 [[Go 1.1 package http] 10.0.0.3:40694] I0120 14:20:10.015072 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (66.802µs) 404 [[Go 1.1 package http] 10.0.0.3:40708] I0120 14:20:15.017521 3479 server.go:944] GET /stats/default/debstable-blvdi/e2ab2ffc-bf80-11e5-a1d8-42010af001a5/debstable: (32.91µs) 404 [[Go 1.1 package http] 10.0.0.3:40566] I0120 14:20:18.530030 3479 manager.go:2022] Back-off 10s restarting failed container=debstable pod=debstable-blvdi_default </code></pre> <p>The docker info shows the versions, which may be relevant:</p> <pre><code>$ sudo docker info Containers: 24 Images: 68 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 116 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.16.0-0.bpo.4-amd64 Operating System: Debian GNU/Linux 7 (wheezy) CPUs: 1 Total Memory: 3.625 GiB WARNING: No swap limit support </code></pre> <p>Thanks and good day</p>
<p>Prashanth B was right! Having no command (nothing long-running in the foreground) causes the container to exit immediately, so the pod keeps restarting in the crash loop.</p> <p>Thanks and good day.</p>
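<p>For anyone hitting the same thing, a minimal sketch of giving the pod something to run so the container does not exit right away. The <code>sleep infinity</code> command is just a placeholder to keep the container alive for debugging; replace it with your real entrypoint:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: debstable
spec:
  containers:
  - name: debstable
    image: eu.gcr.io/project_id/debstable:stable
    # placeholder command so the container has a foreground process
    command: ["sleep", "infinity"]
</code></pre>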
<p>I spinned a <a href="https://github.com/kylemanna/docker-openvpn"><code>docker-openvpn</code></a> container in my (local) <strong>Kubernetes</strong> cluster to access my Services securely and debug <em>dependent services</em> locally.</p> <p>I can connect to the cluster via the <strong>openVPN</strong> server. However I can't resolve my <strong>Services</strong> via <strong>DNS</strong>.</p> <p>I managed to get to the point where after setting routes on the VPN server:</p> <ul> <li>I can ping a <strong>Pod</strong> <em>by IP</em> (<code>subnet 10.2.0.0/16</code>)</li> <li>I can ping a <strong>Service</strong> <em>by IP</em> (<code>subnet 10.3.0.0/16</code> like the DNS which is at <code>10.3.0.10</code>)</li> <li>I can <code>curl</code> to a <strong>Services</strong> <em>by IP</em> and get the data I need.</li> </ul> <p>but when i <code>nslookup kubernetes</code> or any <strong>Service</strong>, I get:</p> <pre><code>nslookup kubernetes ;; Got recursion not available from 10.3.0.10, trying next server ;; Got SERVFAIL reply from 10.3.0.10, trying next server </code></pre> <p>I am still missing something for the data to return from the DNS server, but can't figure what I need to do.</p> <p>How do I debug this <code>SERVFAIL</code> issue in <strong>Kubernetes DNS</strong>?</p> <p><strong>EDIT:</strong></p> <p>Things I have noticed and am looking to understand:</p> <ul> <li><code>nslookup</code> works to resolve Service name in any pod except the openvpn Pod</li> <li>while <code>nslookup</code> works in those other Pods, <code>ping</code> does not.</li> <li>similarly <code>traceroute</code> in those other Pods leads to the flannel layer <code>10.0.2.2</code> and then stops there.</li> </ul> <p>from this I guess ICMP must be blocked at the flannel layer, and that doesn't help me figure where DNS is blocked.</p> <p><strong>EDIT2:</strong></p> <p>I finally figured how to get nslookup to work: I had to push the DNS search domain to the client with </p> <pre><code>push "dhcp-option DOMAIN-SEARCH cluster.local" push "dhcp-option DOMAIN-SEARCH svc.cluster.local" push "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" </code></pre> <p>add with the <code>-p</code> option in the <code>docker-openvpn</code> image</p> <p>so i end up with</p> <pre><code>docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -d \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" </code></pre> <p>Now, <code>nslookup</code> works but <strong><code>curl</code> still does not</strong> </p>
<p>finally my config looks like this:</p> <pre><code>docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \ -u udp://192.168.10.152:1194 \ -n 10.3.0.10 \ -n 192.168.10.1 \ -n 8.8.8.8 \ -n 75.75.75.75 \ -n 75.75.75.76 \ -s 10.8.0.0/24 \ -N \ -p "route 10.2.0.0 255.255.0.0" \ -p "route 10.3.0.0 255.255.0.0" \ -p "dhcp-option DOMAIN-SEARCH cluster.local" \ -p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \ -p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local" </code></pre> <p><code>-u</code> for the VPN server address and port</p> <p><code>-n</code> for all the DNS servers to use</p> <p><code>-s</code> to define the VPN subnet (as it defaults to 10.2.0.0 which is used by Kubernetes already)</p> <p><code>-d</code> to disable NAT</p> <p><code>-p</code> to push options to the client</p> <p><code>-N</code> to enable NAT: it seems critical for this setup on Kubernetes</p> <p>the last part, pushing the search domains to the client, was the key to getting <code>nslookup</code> etc.. to work.</p> <p>note that curl didn't work at first, but seems to start working after a few seconds. So it does work but it takes a bit of time for curl to be able to resolve.</p>
<p>I'm trying to create 3 instances of Kafka and deploy them in a local Kubernetes setup. Because each instance needs some specific configuration, I'm creating one RC and one service for each - eagerly waiting for <a href="https://github.com/kubernetes/kubernetes/pull/18016" rel="noreferrer">#18016</a> ;)</p> <p>However, I'm having problems because Kafka can't establish a network connection to itself when it uses the service IP (a Kafka broker tries to do this when it is exchanging replication messages with other brokers). For example, let's say I have two worker hosts (172.17.8.201 and 172.17.8.202) and my pods are scheduled like this:</p> <ul> <li><p>Host 1 (172.17.8.201)</p> <ul> <li><code>kafka1</code> pod (10.2.16.1)</li> </ul></li> <li><p>Host 2 (172.17.8.202)</p> <ul> <li><code>kafka2</code> pod (10.2.68.1)</li> <li><code>kafka3</code> pod (10.2.68.2)</li> </ul></li> </ul> <p>In addition, let's say I have the following service IPs:</p> <ul> <li><code>kafka1</code> cluster IP: 11.1.2.96</li> <li><code>kafka2</code> cluster IP: 11.1.2.120</li> <li><code>kafka3</code> cluster IP: 11.1.2.123</li> </ul> <p>The problem happens when the <code>kafka1</code> pod (container) tries to send a message (to itself) using the <code>kafka1</code> cluster IP (11.1.2.96). For some reason, the connection cannot be established and the message is not sent.</p> <p>Some more information: If I manually connect to the <code>kafka1</code> pod, I can correctly telnet to the <code>kafka2</code> and <code>kafka3</code> pods using their respective cluster IPs (11.1.2.120 / 11.1.2.123). Also, if I'm in the <code>kafka2</code> pod, I can connect to both the <code>kafka1</code> and <code>kafka3</code> pods using 11.1.2.96 and 11.1.2.123. Finally, I can connect to all pods (from all pods) if I use the pod IPs.</p> <p>It is important to emphasize that I shouldn't tell the kafka brokers to use the pod IPs instead of the cluster IPs for replication. As it is right now, Kafka uses for replication whatever IP you configure to be "advertised" - which is the IP that your client uses to connect to the brokers. Even if I could, I believe this problem may appear with other software as well.</p> <p>This problem seems to happen only with the combination I am using, because the exact same files work correctly in GCE. Right now, I'm running:</p> <ul> <li>Kubernetes 1.1.2</li> <li>coreos 928.0.0</li> <li>network setup with flannel</li> <li>everything on vagrant + VirtualBox</li> </ul> <p>After some debugging, I'm not sure if the problem is in the workers' iptables rules, in kube-proxy, or in flannel. </p> <p>PS: I posted this question originally as an <a href="https://github.com/kubernetes/kubernetes/issues/19930" rel="noreferrer">Issue</a> on their github, but I have been redirected here by the Kubernetes team. I reworded the text a bit because it sounded like a "support request", but actually I believe it is some sort of bug. Anyway, sorry about that, Kubernetes team! </p> <hr> <p><strong>Edit:</strong> This problem has been confirmed as a bug <a href="https://github.com/kubernetes/kubernetes/issues/20391" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/20391</a></p>
<p>For what you want to do you should be using a <strong>Headless Service</strong>: <a href="http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services" rel="noreferrer">http://kubernetes.io/v1.0/docs/user-guide/services.html#headless-services</a></p> <p>This means setting</p> <p><code>clusterIP: None</code></p> <p>in your <strong>Service</strong>.</p> <p>That way there won't be a cluster IP associated with the service; instead, a DNS lookup of the service returns the IPs of all the Pods selected by the <code>selector</code>.</p>
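<p>A minimal sketch of what that could look like for one of the brokers (the label and port are assumptions -- 9092 is just the usual Kafka port, adjust to your setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka1
spec:
  clusterIP: None      # headless: DNS returns the pod IP(s) directly
  selector:
    app: kafka1        # assumed label on the kafka1 pod/RC
  ports:
  - port: 9092
</code></pre>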
<p>There's two kinds of status code of one-shot pods, running from API or the command:</p> <p><code>kubectl run --restart=Never --image test:v0.1 ...</code>.</p> <p>The pods produce output files to a NFS server, and I've got files successfully. </p> <p><code>kubectl get pods -ao wide</code>:</p> <pre><code>NAME READY STATUS RESTARTS AGE test-90 0/1 ExitCode:0 0 23m 192.168.1.43 test-91 0/1 ExitCode:0 0 23m 192.168.1.43 test-92 0/1 ExitCode:0 0 23m 192.168.1.43 test-93 0/1 ExitCode:0 0 23m 192.168.1.43 test-94 0/1 Error 0 23m 192.168.1.46 test-95 0/1 Error 0 23m 192.168.1.46 test-96 0/1 Error 0 23m 192.168.1.46 test-97 0/1 Error 0 23m 192.168.1.46 test-98 0/1 Error 0 23m 192.168.1.46 test-99 0/1 ExitCode:0 0 23m 192.168.1.43 </code></pre> <p>the description of <code>ExitCode:0</code> pod:</p> <pre><code>Name: test-99 Namespace: default Image(s): test:v0.1 Node: 192.168.1.43/192.168.1.43 Status: Succeeded Replication Controllers: &lt;none&gt; Containers: test: State: Terminated Exit Code: 0 Ready: False Restart Count: 0 </code></pre> <p>the description of <code>Error</code> pod:</p> <pre><code>Name: test-98 Namespace: default Image(s): test:v0.1 Node: 192.168.1.46/192.168.1.46 Status: Succeeded Replication Controllers: &lt;none&gt; Containers: test: State: Terminated Reason: Error Exit Code: 0 Ready: False Restart Count: 0 </code></pre> <p>Their NFS volumes:</p> <pre><code>Volumes: input: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.1.46 Path: /srv/nfs4/input ReadOnly: false output: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.1.46 Path: /srv/nfs4/output ReadOnly: false default-token-nmviv: Type: Secret (a secret that should populate this volume) SecretName: default-token-nmviv </code></pre> <p><code>kubectl logs</code> returns none, since the container just produces output files.</p> <p>Thanks in advance!</p>
<p><code>ExitCode 0</code> means the container terminated normally.</p> <p>Exit codes are useful when you chain processes together, so the next process (or a calling script) knows what to do: if the previous process failed, do this; otherwise, do something with the data it produced.</p>
<p>I am new to Kubernetes and have been looking at it as an option for a specific solution.</p> <ul> <li>We have a scenario where we have 100+ physical machines running RHEL distributed across different locations.</li> <li>There is a plan to deploy and manage Docker-based containers on each of these machines. Let's group these containers as Pod A.</li> <li>Each of these machines requires an instance of Pod A running on it, automatically synchronised if there are any changes.</li> <li>Over time new machines may be added and they will need to automatically get Pod A running on them as well.</li> </ul> <p>I understand the idea behind Kubernetes is to abstract the machine and OS layer, but in this case we can't do that. So I guess I have a few questions around this:</p> <ul> <li>Is Kubernetes the correct choice here? Are we breaking the fundamental concept behind it?</li> <li>Is it possible to tag each machine as an identifiable node?</li> <li>Can we target a specific Pod to a subset of nodes?</li> </ul> <p>Are there any similar examples available?</p>
<p>There is a resource named <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/daemon.md" rel="nofollow">DaemonSet</a> that spawns a pod on every node; when a new node is added, the pod is spawned on it automatically.</p> <p>For updates, change the image in the DaemonSet resource and all pods will be updated.</p>
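<p>A minimal sketch of what that could look like for "Pod A" (the API group/version and the image name are assumptions -- DaemonSet was still in the <code>extensions/v1beta1</code> group in this Kubernetes generation, and the image is a placeholder):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: pod-a
spec:
  template:
    metadata:
      labels:
        app: pod-a
    spec:
      containers:
      - name: pod-a
        image: your-registry/pod-a:latest   # placeholder image
</code></pre>
<p>You can also add a <code>nodeSelector</code> to the pod template to target only a labelled subset of nodes, which covers the last question above.</p>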
<p>Whenever DNS gets ran on a kubelet other than the one that resides on the master node then the Liveness and Readiness probes for skydns keep failing. I am deploying the add ons as a service similar to what is used in the salt cluster. I have configured my system to use tokens and have verified that a token gets generated for system:dns and gets configured correctly for the kubelet. Is there something additional I need to do inside the skydns rc/svc yamls as well because of this? </p> <p>Salt Cluster: <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase/salt/kube-addons" rel="nofollow">https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase/salt/kube-addons</a></p> <p>Ansible Deployment: <a href="https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons/files" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons/files</a></p> <p>I am using the standard skydns rc/svc yamls.</p> <p>Pod Description:</p> <pre><code>Name: kube-dns-v10-pgqig Namespace: kube-system Image(s): gcr.io/google_containers/etcd:2.0.9,gcr.io/google_containers/kube2sky:1.12,gcr.io/google_containers/skydns:2015-10-13-8c72f8c,gcr.io/google_containers/exechealthz:1.0 Node: minion-1/172.28.129.2 Start Time: Thu, 21 Jan 2016 08:54:50 -0800 Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v10 Status: Running Reason: Message: IP: 18.16.18.9 Replication Controllers: kube-dns-v10 (1/1 replicas created) Containers: etcd: Container ID: docker://49216f478c99fcd3c25763e99bb18861d31025a0cadd538f9590295e78846f69 Image: gcr.io/google_containers/etcd:2.0.9 Image ID: docker://b6b9a86dc06aa1361357ca1b105feba961f6a4145adca6c54e142c0be0fe87b0 Command: /usr/local/bin/etcd -data-dir /var/etcd/data -listen-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001 -advertise-client-urls http://127.0.0.1:2379,http://127.0.0.1:4001 -initial-cluster-token skydns-etcd QoS Tier: cpu: Guaranteed memory: Guaranteed Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: kube2sky: Container ID: docker://4cbdf45e1ba0a6a820120c934473e61bf74af49d1ff42a0da01abd593516f4ee Image: gcr.io/google_containers/kube2sky:1.12 Image ID: docker://b8f3273706d3fc51375779110828379bdbb663e556cca3925e87fbc614725bb1 Args: -domain=cluster.local -kube_master_url=http://master:8080 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: memory: 50Mi cpu: 100m Requests: memory: 50Mi cpu: 100m State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: skydns: Container ID: docker://bd3103f514dcc4e42ff2c126446d963d03ef1101833239926c84d5c0ba577929 Image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c Image ID: docker://763c92e53f311c40a922628a34daf0be4397463589a7d148cea8291f02c12a5d Args: -machines=http://127.0.0.1:4001 -addr=0.0.0.0:53 -ns-rotate=false -domain=cluster.local. 
QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Running Started: Thu, 21 Jan 2016 09:13:50 -0800 Last Termination State: Terminated Reason: Error Exit Code: 2 Started: Thu, 21 Jan 2016 09:13:14 -0800 Finished: Thu, 21 Jan 2016 09:13:50 -0800 Ready: False Restart Count: 28 Environment Variables: healthz: Container ID: docker://b46d2bb06a72cda25565b4f40ce956f252dce5df7f590217b3307126ec29e7c7 Image: gcr.io/google_containers/exechealthz:1.0 Image ID: docker://4f3d04b1d47b64834d494f9416d1f17a5f93a3e2035ad604fee47cfbba62be60 Args: -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 &gt;/dev/null -port=8080 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 10m memory: 20Mi Requests: cpu: 10m memory: 20Mi State: Running Started: Thu, 21 Jan 2016 08:54:51 -0800 Ready: True Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: etcd-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-62irv: Type: Secret (a secret that should populate this volume) SecretName: default-token-62irv Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Created Created container with docker id 49216f478c99 19m 19m 1 {scheduler } Normal Scheduled Successfully assigned kube-dns-v10-pgqig to minion-1 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Pulled Container image "gcr.io/google_containers/etcd:2.0.9" already present on machine 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Created Created container with docker id 4cbdf45e1ba0 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Started Started container with docker id 4cbdf45e1ba0 19m 19m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id fdb1278aaf93 19m 19m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id fdb1278aaf93 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz:1.0" already present on machine 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Created Created container with docker id b46d2bb06a72 19m 19m 1 {kubelet minion-1} spec.containers{healthz} Normal Started Started container with docker id b46d2bb06a72 19m 19m 1 {kubelet minion-1} spec.containers{etcd} Normal Started Started container with docker id 49216f478c99 19m 19m 1 {kubelet minion-1} spec.containers{kube2sky} Normal Pulled Container image "gcr.io/google_containers/kube2sky:1.12" already present on machine 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id fdb1278aaf93: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 70474f1ca315 18m 18m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 70474f1ca315 17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 70474f1ca315: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 
17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 8e18a0b404dd 17m 17m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 8e18a0b404dd 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 00b4e2a46779 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 8e18a0b404dd: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 00b4e2a46779 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 3df9a304e09a 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 00b4e2a46779: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 16m 16m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 3df9a304e09a 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 3df9a304e09a: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Created Created container with docker id 4b3ee7fccfd2 15m 15m 1 {kubelet minion-1} spec.containers{skydns} Normal Started Started container with docker id 4b3ee7fccfd2 14m 14m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 4b3ee7fccfd2: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 14m 14m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id d1100cb0a5be: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 13m 13m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id 19e2bbda4f80: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 12m 12m 1 {kubelet minion-1} spec.containers{skydns} Normal Killing Killing container with docker id c424c0ad713a: pod "kube-dns-v10-pgqig_kube-system(af674b6a-c05f-11e5-9e37-08002771c788)" container "skydns" is unhealthy, it will be killed and re-created. 
19m 1s 29 {kubelet minion-1} spec.containers{skydns} Normal Pulled Container image "gcr.io/google_containers/skydns:2015-10-13-8c72f8c" already present on machine 12m 1s 19 {kubelet minion-1} spec.containers{skydns} Normal Killing (events with common reason combined) 14m 1s 23 {kubelet minion-1} spec.containers{skydns} Normal Created (events with common reason combined) 14m 1s 23 {kubelet minion-1} spec.containers{skydns} Normal Started (events with common reason combined) 18m 1s 30 {kubelet minion-1} spec.containers{skydns} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 503 18m 1s 114 {kubelet minion-1} spec.containers{skydns} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>(etcd)</p> <pre><code>$ kubectl logs kube-dns-v10-0biid skydns --namespace=kube-system 2016/01/22 00:23:03 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [2] 2016/01/22 00:23:03 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/01/22 00:23:03 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/01/22 00:23:09 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:13 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:17 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:21 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:25 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:29 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:33 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:37 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:23:41 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" [vagrant@kubernetes-master ~]$ kubectl logs kube-dns-v10-0biid etcd --namespace=kube-system 2016/01/21 23:28:10 etcd: listening for peers on http://localhost:2380 2016/01/21 23:28:10 etcd: listening for peers on http://localhost:7001 2016/01/21 23:28:10 etcd: listening for client requests on http://127.0.0.1:2379 2016/01/21 23:28:10 etcd: listening for client requests on http://127.0.0.1:4001 2016/01/21 23:28:10 etcdserver: datadir is valid for the 2.0.1 format 2016/01/21 23:28:10 etcdserver: name = default 2016/01/21 23:28:10 etcdserver: data dir = /var/etcd/data 2016/01/21 23:28:10 etcdserver: member dir = /var/etcd/data/member 2016/01/21 23:28:10 etcdserver: heartbeat = 100ms 2016/01/21 23:28:10 etcdserver: election = 1000ms 2016/01/21 23:28:10 etcdserver: snapshot count = 10000 2016/01/21 23:28:10 etcdserver: advertise client URLs = http://127.0.0.1:2379,http://127.0.0.1:4001 2016/01/21 23:28:10 etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001 2016/01/21 23:28:10 etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001 2016/01/21 23:28:10 etcdserver: start member 6a5871dbdd12c17c in cluster f68652439e3f8f2a 2016/01/21 23:28:10 raft: 6a5871dbdd12c17c became follower at term 0 2016/01/21 23:28:10 raft: newRaft 6a5871dbdd12c17c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] 2016/01/21 23:28:10 raft: 6a5871dbdd12c17c became follower at term 1 2016/01/21 23:28:10 etcdserver: added local member 6a5871dbdd12c17c [http://localhost:2380 
http://localhost:7001] to cluster f68652439e3f8f2a 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c is starting a new election at term 1 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c became candidate at term 2 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c received vote from 6a5871dbdd12c17c at term 2 2016/01/21 23:28:12 raft: 6a5871dbdd12c17c became leader at term 2 2016/01/21 23:28:12 raft.node: 6a5871dbdd12c17c elected leader 6a5871dbdd12c17c at term 2 2016/01/21 23:28:12 etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379 http://127.0.0.1:4001]} to cluster f68652439e3f8f2a </code></pre> <p>(kube2sky)</p> <pre><code>I0121 23:28:19.352170 1 kube2sky.go:436] Etcd server found: http://127.0.0.1:4001 I0121 23:28:20.354200 1 kube2sky.go:503] Using https://10.254.0.1:443 for kubernetes master I0121 23:28:20.354248 1 kube2sky.go:504] Using kubernetes API &lt;nil&gt; </code></pre> <p>(skydns)</p> <pre><code> kubectl logs kube-dns-v10-0biid skydns --namespace=kube-system 2016/01/22 00:27:43 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [2] 2016/01/22 00:27:43 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/01/22 00:27:43 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/01/22 00:27:49 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:27:53 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:27:57 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:01 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:05 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:09 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:13 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" 2016/01/22 00:28:17 skydns: failure to forward request "read udp 10.0.2.3:53: i/o timeout" </code></pre> <p>The service endpoint IP does NOT seem to be getting set:</p> <pre><code>kubectl describe svc kube-dns --namespace=kube-system Name: kube-dns Namespace: kube-system Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS Selector: k8s-app=kube-dns Type: ClusterIP IP: 10.254.0.10 Port: dns 53/UDP Endpoints: Port: dns-tcp 53/TCP Endpoints: Session Affinity: None No events. </code></pre> <p>I have double checked the serviceaccounts and that all seems configured correctly:</p> <pre><code> kubectl get secrets --all-namespaces NAMESPACE NAME TYPE DATA AGE default default-token-z71xj kubernetes.io/service-account-token 2 1h kube-system default-token-wce74 kubernetes.io/service-account-token 2 1h kube-system token-system-controller-manager-master Opaque 1 1h kube-system token-system-dns Opaque 1 1h kube-system token-system-kubectl-master Opaque 1 1h kube-system token-system-kubelet-minion-1 Opaque 1 1h kube-system token-system-logging Opaque 1 1h kube-system token-system-monitoring Opaque 1 1h kube-system token-system-proxy-minion-1 Opaque 1 1h kube-system token-system-scheduler-master Opaque 1 1h </code></pre> <p>The default secret for kube-system namespaces which matches the one the POD is using. 
</p> <pre><code>kubectl describe secrets default-token-wce74 --namespace=kube-system Name: default-token-wce74 Namespace: kube-system Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name=default,kubernetes.io/service-account.uid=70da0a10-c096-11e5-aa7b-08002771c788 Type: kubernetes.io/service-account-token Data ==== token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXdjZTc0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MGRhMGExMC1jMDk2LTExZTUtYWE3Yi0wODAwMjc3MWM3ODgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.sykf8qmh9ekAEHnSPAMLPz04zebvDJhb72A2YC1Y8_BXoA57U7KRAVDVyyxQHrEUSlHsSfxzqHHOcLniPQbqWZxc0bK4taV6zdBKIgndEthz0HGJQJdfZJKxurP5dhI6TOIpeLYpUE6BN6ubsVQiJksVLK_Lfq_c1posqAUi8eXD-KsqRDA98JMUZyirRGRXzZfF7-KscIqys7AiHAURHHwDibjmXIdYKBpDwc6hOIATpS3r6rLj30R1hNYy4u2GkpNsIYo83zIt515rnfCH9Yq1syT6-qho0SaPnj3us-uT8ZXF0x_7SlChV9Wx5Mo6kW3EHg6-A6q6m3R0KlsHjQ ca.crt: 1387 bytes </code></pre> <p>I have also <code>kubectl exec</code> into the kube2sky container and the ca.crt matches the one on the server.</p>
<p>It seems that there were two problems I had:</p> <h3>Cert Creation</h3> <p>My implementation is based off the ansible deployment found here: <a href="https://github.com/kubernetes/contrib/tree/master/ansible" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/ansible</a></p> <p>This deployment seems to generate the certs for all network interfaces. It also adds <code>IP:</code> in front of them, and then the script that generates the certs (make-ca-cert.sh) prepends <code>IP:</code> again. Not 100% sure whether that is okay. However, I changed it to generate certs only for the one network interface and removed the additional <code>IP:</code>, and that seems to have resolved the issue.</p> <p>A very good thread explaining certs, how to create them, and how they work with Kubernetes: <a href="https://github.com/kubernetes/kubernetes/issues/11000" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/11000</a></p> <h3>APIServer Setting --advertise-address</h3> <p>Also, apparently I needed to set <code>--advertise-address</code> on the apiserver as well.</p> <p>Adjusting these two things seems to have resolved the issue.</p>
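<p>For reference, a sketch of where that flag could go in this kind of deployment -- the file path and variable name below are assumptions based on the sysconfig-style files these playbooks tend to use, so adapt them to wherever your apiserver flags live:</p>
<pre><code># /etc/kubernetes/apiserver  (assumed location)
KUBE_API_ARGS="--advertise-address=&lt;master-ip-reachable-from-nodes&gt;"
</code></pre>
<p>Restart the kube-apiserver service after changing it.</p>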