Columns:
input: string (1 distinct value)
output: string (lengths 30 to 28.3k)
text: string (lengths 151 to 31.8k)
instruction: string (lengths 49 to 30.3k)
If you used kube-up, your minions should be launched into an auto-scaling group, and you should just be able to change the size of that auto-scaling group. This question is a few months old, so I'm not sure whether this was the case back in July, but it should definitely be the case now. I'm also working on launching the master into an auto-scaling group, so that if the master dies it is automatically restarted, but that is a little more complicated because of the persistent disk. But hopefully I'll make 1.1!
###Instruction: I initially set up a cluster with the kube-up.sh utility on AWS. Now I've run out of capacity and I'm looking for a simple way to extend my cluster with more nodes. Is there a 'simple' way to add more nodes to a running cluster on AWS? I've been reading all the docs but cannot seem to find a way. Thanks! ### Response: If you used kube-up, your minions should be launched into an auto-scaling group, and you should just be able to change the size of that auto-scaling group. This question is a few months old, so I'm not sure whether this was the case back in July, but it should definitely be the case now. I'm also working on launching the master into an auto-scaling group, so that if the master dies it is automatically restarted, but that is a little more complicated because of the persistent disk. But hopefully I'll make 1.1!
I initially set up a cluster with the kube-up.sh utility on AWS. Now I've run out of capacity and I'm looking for a simple way to extend my cluster with more nodes. Is there a 'simple' way to add more nodes to a running cluster on AWS? I've been reading all the docs but cannot seem to find a way. Thanks!
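For the auto-scaling-group approach in the answer above, resizing usually comes down to one AWS CLI call. The group name and capacity below are placeholders (kube-up has used names along the lines of kubernetes-minion-group); list your groups first to find the real one.

    aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
    # group name and desired capacity are placeholders
    aws autoscaling set-desired-capacity --auto-scaling-group-name kubernetes-minion-group --desired-capacity 5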
So after a little experimentation and a hint from the k8s Slack channel, I have this example. Perhaps someone can update the example with a proper import path. package main import ( "fmt" "log" "github.com/kubernetes/kubernetes/pkg/api" client "github.com/kubernetes/kubernetes/pkg/client/unversioned" ) func main() { config := client.Config{ Host: "http://my-kube-api-server.me:8080", } c, err := client.New(&config) if err != nil { log.Fatalln("Can't connect to Kubernetes API:", err) } s, err := c.Services(api.NamespaceDefault).Get("some-service-name") if err != nil { log.Fatalln("Can't get service:", err) } fmt.Println("Name:", s.Name) for p, _ := range s.Spec.Ports { fmt.Println("Port:", s.Spec.Ports[p].Port) fmt.Println("NodePort:", s.Spec.Ports[p].NodePort) } }
###Instruction: I'm struggling with the Kubernetes Go library. The docs--at least the ones I found--appear out-of-date with the library itself. The example provided does not build because of issues with the imports. I'm just trying to do something simple: get a Service object by name and print some attributes (like nodePort). I just need a simple example of library usage to get me going. I could easily do this using the RESTful API but that feels like re-inventing the wheel. ### Response: So after a little experimentation and a hint from the k8s Slack channel, I have this example. Perhaps someone can update the example with a proper import path. package main import ( "fmt" "log" "github.com/kubernetes/kubernetes/pkg/api" client "github.com/kubernetes/kubernetes/pkg/client/unversioned" ) func main() { config := client.Config{ Host: "http://my-kube-api-server.me:8080", } c, err := client.New(&config) if err != nil { log.Fatalln("Can't connect to Kubernetes API:", err) } s, err := c.Services(api.NamespaceDefault).Get("some-service-name") if err != nil { log.Fatalln("Can't get service:", err) } fmt.Println("Name:", s.Name) for p, _ := range s.Spec.Ports { fmt.Println("Port:", s.Spec.Ports[p].Port) fmt.Println("NodePort:", s.Spec.Ports[p].NodePort) } }
I'm struggling with the Kubernetes Go library. The docs--at least the ones I found--appear out-of-date with the library itself. The example provided does not build because of issues with the imports. I'm just trying to do something simple: get a Service object by name and print some attributes (like nodePort). I just need a simple example of library usage to get me going. I could easily do this using the RESTful API but that feels like re-inventing the wheel.
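The import paths used in the answer above track the pre-1.2 Kubernetes monorepo layout and have since moved. As a rough, hedged sketch only, the same lookup with today's client-go library looks approximately like this (the kubeconfig path and service name are placeholders, not from the original question):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build client config from a kubeconfig file (path is a placeholder).
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatalln("Can't build config:", err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatalln("Can't connect to Kubernetes API:", err)
        }
        // Fetch the Service and print its ports, mirroring the original example.
        svc, err := clientset.CoreV1().Services("default").Get(context.TODO(), "some-service-name", metav1.GetOptions{})
        if err != nil {
            log.Fatalln("Can't get service:", err)
        }
        fmt.Println("Name:", svc.Name)
        for _, p := range svc.Spec.Ports {
            fmt.Println("Port:", p.Port, "NodePort:", p.NodePort)
        }
    }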
Update: Privileged mode is now enabled by default (both in the apiserver and in the kubelet) starting with the 1.1 release of Kubernetes. To enable privileged containers, you need to pass the --allow-privileged flag to the Kubernetes apiserver in addition to the Kubelet when it starts up. The manifest file that you use to launch the Kubernetes apiserver in the single node docker example is bundled into the image (from master.json), but you can make a local copy of that file, add the --allow-privileged=true flag to the apiserver command line, and then change the --config flag you pass to the Kubelet in Step Two to a directory containing your modified file.
###Instruction: I'm busy testing out kubernetes on my local pc using https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md which launches a dockerized single node k8s cluster. I need to run a privileged container inside k8s (it runs docker in order to build images from dockerfiles). What I've done so far is add a security context privileged=true to the pod config which returns forbidden when trying to create the pod. I know that you have to enable privileged on the node with --allow-privileged=true and I've done this by adding the parameter arg to step two (running the master and worker node) but it still returns forbidden when creating the pod. Anyone know how to enable privileged in this dockerized k8s for testing? Here is how I run the k8s master: docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests ### Response: Update: Privileged mode is now enabled by default (both in the apiserver and in the kubelet) starting with the 1.1 release of Kubernetes. To enable privileged containers, you need to pass the --allow-privileged flag to the Kubernetes apiserver in addition to the Kubelet when it starts up. The manifest file that you use to launch the Kubernetes apiserver in the single node docker example is bundled into the image (from master.json), but you can make a local copy of that file, add the --allow-privileged=true flag to the apiserver command line, and then change the --config flag you pass to the Kubelet in Step Two to a directory containing your modified file.
I'm busy testing out kubernetes on my local pc using https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md which launches a dockerized single node k8s cluster. I need to run a privileged container inside k8s (it runs docker in order to build images from dockerfiles). What I've done so far is add a security context privileged=true to the pod config which returns forbidden when trying to create the pod. I know that you have to enable privileged on the node with --allow-privileged=true and I've done this by adding the parameter arg to step two (running the master and worker node) but it still returns forbidden when creating the pod. Anyone know how to enable privileged in this dockerized k8s for testing? Here is how I run the k8s master: docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests
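Once both the kubelet and the apiserver run with --allow-privileged=true, the pod side is just the security context the asker already mentioned. A minimal sketch, with placeholder names and image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: docker-builder
    spec:
      containers:
      - name: builder
        image: docker:stable        # placeholder image that runs docker builds
        securityContext:
          privileged: true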
Take a look through the Creating Certificates section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster.
###Instruction: I reinstalled some nodes and a master. Now on the master I am getting: Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate Where $ip is one of the nodes. So I likely need to delete or recreate certificates. What would the location of those be? Any recommended commands to recreate or remove those or copy them from node to master or vice versa? Whatever gets me past this error message... ### Response: Take a look through the Creating Certificates section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster.
I reinstalled some nodes and a master. Now on the master I am getting: Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate Where $ip is one of the nodes. So I likely need to delete or recreate certificates. What would the location of those be? Any recommended commands to recreate or remove those or copy them from node to master or vice versa? Whatever gets me past this error message...
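As a hedged sketch of what the regeneration described in authentication.md boils down to with plain openssl (common names and lifetimes below are assumptions; include your master's actual DNS names and IPs as subject alternative names where appropriate):

    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
    openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365
    # hand the results to the apiserver via --client-ca-file, --tls-cert-file and --tls-private-key-file,
    # and distribute the new CA certificate to the nodes as described in authentication.md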
Using a persistent volume in a Replication Controller works great for shared storage. You include a persistentVolumeClaim in the RC's pod template. Each pod will use the same claim, which means it's shared storage. This also works for read-only access in gcloud if your Replica count > 1. If you wanted distinct volumes per pod, you currently have to create many RCs with Replicas=1 and with distinct persistentVolumeClaims. We're working out a design for scaling storage through an RC where each pod gets its own volume instead of sharing the same claim.
###Instruction: I see in the docs how to do this for pods, but I want to use a replication controller to manage my pods, ensuring that there is always one up at all times. How can I define a replication controller where the pod being run has a persistent volume? How is this related to Kubernetes persistentVolumes and persistentVolumeClaims? ### Response: Using a persistent volume in a Replication Controller works great for shared storage. You include a persistentVolumeClaim in the RC's pod template. Each pod will use the same claim, which means it's shared storage. This also works for read-only access in gcloud if your Replica count > 1. If you wanted distinct volumes per pod, you currently have to create many RCs with Replicas=1 and with distinct persistentVolumeClaims. We're working out a design for scaling storage through an RC where each pod gets its own volume instead of sharing the same claim.
I see in the docs how to do this for pods, but I want to use a replication controller to manage my pods, ensuring that there is always one up at all times. How can I define a replication controller where the pod being run has a persistent volume? How is this related to Kubernetes persistentVolumes and persistentVolumeClaims?
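A minimal sketch of an RC whose pod template mounts an existing claim, assuming a PVC named my-claim and placeholder names/image:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-app
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx              # placeholder image
            volumeMounts:
            - name: data
              mountPath: /data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-claim     # an already-created PVC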
broker.id=10200121 host.name=kafka-f8p06 <----- use IP here advertised.host.name=kafka-f8p06 <---- use IP here I think you should have IPs for host.name and advertised.host.name as K8s does not resolve Pods by hostname but it does by IP. Your kafka nodes probably can't talk to each other that way and can't find the leader.
###Instruction: I'm using a SimpleProducer in the python kafka-library. This script has worked flawlessly previously with other more hard-configured kafka-setups I've tried. kafka = KafkaClient(u'[masterNodeIp]:[servicePort]') producer = SimpleProducer(kafka) #make a simple message, while true run producer.send_messages(b'oneMoreTopic', sentence) After running this script once, I get this response in the python-console. kafka.common.LeaderNotAvailableError: TopicMetadata(topic='oneMoreTopic', error=5, partitions=[]) I can then go into my Node on my zookeeper.log and see: 2015-09-14 12:16:32,276 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:setData cxid:0x71 zxid:0x1000000d8 txntype:-1 reqpath:n/a Error Path:/config/topics/oneMoreTopic Error:KeeperErrorCode = NoNode for /config/topics/oneMoreTopic 2015-09-14 12:16:32,278 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x72 zxid:0x1000000d9 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics 2015-09-14 12:16:32,302 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x7b zxid:0x1000000dc txntype:-1 reqpath:n/a Error Path:/brokers/topics/oneMoreTopic/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/oneMoreTopic/partitions/0 2015-09-14 12:16:32,304 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x7c zxid:0x1000000dd txntype:-1 reqpath:n/a Error Path:/brokers/topics/oneMoreTopic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/oneMoreTopic/partitions This seems to just be the Zookeeper creating a new Znode for the topic since it is not there from before. And the Kafka server.log prints: [2015-09-14 12:16:32,282] INFO Topic creation {"version":1,"partitions":{"0":[10200119]}} (kafka.admin.AdminUtils$) [2015-09-14 12:16:32,287] INFO [KafkaApi-10200219] Auto creation of topic oneMoreTopic with 1 partitions and replication factor 1 is successful! (kafka.server.KafkaApis) [2015-09-14 12:16:51,579] INFO Closing socket connection to /10.240.1.94. (kafka.network.Processor) However, my message is never posted to the topic and the next time i run the python-script i always get: kafka.common.FailedPayloadsError In the cases where i made it work, the advertised.host.name was always the external IP of the node, but I can't seem to get that working through Kubernetes. Would it be possible to make the external IP callable from the container parhaps? my kafka/config/server.properties looks something like this for all brokers: broker.id=10200121 host.name=kafka-f8p06 advertised.host.name=kafka-f8p06 ++ ### Response: broker.id=10200121 host.name=kafka-f8p06 <----- use IP here advertised.host.name=kafka-f8p06 <---- use IP here I think you should have IPs for host.name and advertised.host.name as K8s does not resolve Pods by hostname but it does by IP. Your kafka nodes probably can't talk to each other that way and can't find the leader.
I'm using a SimpleProducer in the python kafka-library. This script has worked flawlessly previously with other more hard-configured kafka-setups I've tried. kafka = KafkaClient(u'[masterNodeIp]:[servicePort]') producer = SimpleProducer(kafka) #make a simple message, while true run producer.send_messages(b'oneMoreTopic', sentence) After running this script once, I get this response in the python-console. kafka.common.LeaderNotAvailableError: TopicMetadata(topic='oneMoreTopic', error=5, partitions=[]) I can then go into my Node on my zookeeper.log and see: 2015-09-14 12:16:32,276 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:setData cxid:0x71 zxid:0x1000000d8 txntype:-1 reqpath:n/a Error Path:/config/topics/oneMoreTopic Error:KeeperErrorCode = NoNode for /config/topics/oneMoreTopic 2015-09-14 12:16:32,278 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x72 zxid:0x1000000d9 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics 2015-09-14 12:16:32,302 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x7b zxid:0x1000000dc txntype:-1 reqpath:n/a Error Path:/brokers/topics/oneMoreTopic/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/oneMoreTopic/partitions/0 2015-09-14 12:16:32,304 - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@627] - Got user-level KeeperException when processing sessionid:0x34fcb982d030000 type:create cxid:0x7c zxid:0x1000000dd txntype:-1 reqpath:n/a Error Path:/brokers/topics/oneMoreTopic/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/oneMoreTopic/partitions This seems to just be the Zookeeper creating a new Znode for the topic since it is not there from before. And the Kafka server.log prints: [2015-09-14 12:16:32,282] INFO Topic creation {"version":1,"partitions":{"0":[10200119]}} (kafka.admin.AdminUtils$) [2015-09-14 12:16:32,287] INFO [KafkaApi-10200219] Auto creation of topic oneMoreTopic with 1 partitions and replication factor 1 is successful! (kafka.server.KafkaApis) [2015-09-14 12:16:51,579] INFO Closing socket connection to /10.240.1.94. (kafka.network.Processor) However, my message is never posted to the topic and the next time i run the python-script i always get: kafka.common.FailedPayloadsError In the cases where i made it work, the advertised.host.name was always the external IP of the node, but I can't seem to get that working through Kubernetes. Would it be possible to make the external IP callable from the container parhaps? my kafka/config/server.properties looks something like this for all brokers: broker.id=10200121 host.name=kafka-f8p06 advertised.host.name=kafka-f8p06 ++
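Concretely, the suggestion in the answer is to put each broker's routable IP (a made-up pod IP is used below) into server.properties rather than the pod hostname:

    broker.id=10200121
    # host.name / advertised.host.name set to this broker's pod IP (placeholder value)
    host.name=10.244.1.12
    advertised.host.name=10.244.1.12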
We opened an issue to discuss this: https://github.com/kubernetes/kubernetes/issues/13858 The recommended way to go here is to use IAM instance profiles. kube-up does configure this for you, and if you're not using kube-up I recommend looking at it to emulate what it does! Although we did recently merge in support for using a .aws credentials file, I don't believe it has been back-ported into any release, and it isn't really the way I (personally) recommend. It sounds like you're not using kube-up; you may find it easier if you can use that (and I'd love to know if there's some reason you can't or don't want to use kube-up, as I personally am working on an alternative that I hope will meet everyone's needs!) I'd also love to know if IAM instance profiles aren't suitable for you for some reason.
###Instruction: We are trying to configure a kubernetes RC in an AWS instance with AWS Elastic Block Store (EBS). Here is the key part of our controller yaml file - volumeMounts: - mountPath: "/opt/phabricator/repo" name: ebsvol volumes: - name: ebsvol awsElasticBlockStore: volumeID: aws://us-west-2a/vol-***** fsType: ext4 Our RC can start the pod and works fine without mounting it to an AWS EBS, but with volume mounting in an AWS EBS it gives us - Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedMount Unable to mount volumes for pod "phabricator-controller-zvg7z_default": error listing AWS instances: NoCredentialProviders: no valid providers in chain Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedSync Error syncing pod, skipping: error listing AWS instances: NoCredentialProviders: no valid providers in chain We have a credential file with appropriate credentials in the .aws directory. But it's not working. Are we missing something? Is it a configuration issue? Kubectl version: 1.0.4 and 1.0.5 (Tried with both) ### Response: We opened an issue to discuss this: https://github.com/kubernetes/kubernetes/issues/13858 The recommended way to go here is to use IAM instance profiles. kube-up does configure this for you, and if you're not using kube-up I recommend looking at it to emulate what it does! Although we did recently merge in support for using a .aws credentials file, I don't believe it has been back-ported into any release, and it isn't really the way I (personally) recommend. It sounds like you're not using kube-up; you may find it easier if you can use that (and I'd love to know if there's some reason you can't or don't want to use kube-up, as I personally am working on an alternative that I hope will meet everyone's needs!) I'd also love to know if IAM instance profiles aren't suitable for you for some reason.
We are trying to configure a kubernetes RC in an AWS instance with AWS Elastic Block Store (EBS). Here is the key part of our controller yaml file - volumeMounts: - mountPath: "/opt/phabricator/repo" name: ebsvol volumes: - name: ebsvol awsElasticBlockStore: volumeID: aws://us-west-2a/vol-***** fsType: ext4 Our RC can start the pod and works fine without mounting it to an AWS EBS, but with volume mounting in an AWS EBS it gives us - Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedMount Unable to mount volumes for pod "phabricator-controller-zvg7z_default": error listing AWS instances: NoCredentialProviders: no valid providers in chain Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedSync Error syncing pod, skipping: error listing AWS instances: NoCredentialProviders: no valid providers in chain We have a credential file with appropriate credentials in the .aws directory. But it's not working. Are we missing something? Is it a configuration issue? Kubectl version: 1.0.4 and 1.0.5 (Tried with both)
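If you do want to wire up an instance profile by hand rather than via kube-up, the rough shape with the AWS CLI is below. The profile/role names and instance id are placeholders, and the role's policy needs EC2 permissions such as ec2:DescribeInstances, ec2:AttachVolume and ec2:DetachVolume.

    aws iam create-instance-profile --instance-profile-name kubernetes-minion
    aws iam add-role-to-instance-profile --instance-profile-name kubernetes-minion --role-name kubernetes-minion
    aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
        --iam-instance-profile Name=kubernetes-minion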
Unfortunately I'm not familiar with the Java client library. My suggestion would be to try using the regular command-line client (kubectl). If that works, then you know the problem is in the Java client library or your usage of it. If using the command line client doesn't work, then there will be more people who can help you (since a lot more people are familiar with the command-line client than with the Java client library). In other words % kubectl delete pods ... # --cascade=true by default % kubectl delete services ... I'm curious why you need steps (4) and (5). Step (4) should happen automatically when you delete the pod, and step (5) should happen automatically in the background. If the two lines of "kubectl delete" work, then the problem is with the Java client library or your usage of it. As a starting point I would suggest removing the calls to deleteContainers() and removeImage() from your Java code and seeing if that helps. I think those steps are unnecessary.
###Instruction: I am working on a Java application which deploys web artifacts in Apache Tomcat Docker Containers with the use of Google Kubernetes. I am using https://github.com/spotify/docker-client in order to carry out Docker Image and Container handling activities and https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api for Kubernetes related functionalities. In this application, I have added a functionality which enables the user to remove a web artifact which the user deploys. When removing I, delete the Kubernetes replication controller which I use to generate the desired number of pod replicas separately delete off the replica pods (as pods are not deleted automatically when the replication controller is deleted in the corresponding method in the Java API) delete off the corresponding Service created delete off the Docker Containers corresponding to the pods deleted off finally, remove the Docker Image used for the deployment Following code shows the removal functionality implemented: public boolean remove(String tenant, String appName) throws WebArtifactHandlerException { String componentName = generateKubernetesComponentName(tenant, appName); final int singleImageIndex = 0; try { if (replicationControllerHandler.getReplicationController(componentName) != null) { String dockerImage = replicationControllerHandler.getReplicationController(componentName).getSpec() .getTemplate().getSpec().getContainers().get(singleImageIndex).getImage(); List<String> containerIds = containerHandler.getRunningContainerIdsByImage(dockerImage); replicationControllerHandler.deleteReplicationController(componentName); podHandler.deleteReplicaPods(tenant, appName); serviceHandler.deleteService(componentName); Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS); containerHandler.deleteContainers(containerIds); imageBuilder.removeImage(tenant, appName, getDockerImageVersion(dockerImage)); return true; } else { return false; } } catch (Exception exception) { String message = String.format("Failed to remove web artifact[artifact]: %s", generateKubernetesComponentName(tenant, appName)); LOG.error(message, exception); throw new WebArtifactHandlerException(message, exception); } } Implementation of the Docker Container deletion functionality is as follows: public void deleteContainers(List<String> containerIds) throws WebArtifactHandlerException { try { for (String containerId : containerIds) { dockerClient.removeContainer(containerId); Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS); } } catch (Exception exception) { String message = "Could not delete the Docker Containers."; LOG.error(message, exception); throw new WebArtifactHandlerException(message, exception); } } In the above case although the execution of the desired function takes place without any sort of issue, at certain instances I tend to get the following exception. Sep 11, 2015 3:57:28 PM org.apache.poc.webartifact.WebArtifactHandler remove SEVERE: Failed to remove web artifact[artifact]: app-wso2-com org.apache.poc.miscellaneous.exceptions.WebArtifactHandlerException: Could not delete the Docker Containers. 
at org.apache.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:80) at org.apache.poc.webartifact.WebArtifactHandler.remove(WebArtifactHandler.java:206) at org.apache.poc.Executor.process(Executor.java:222) at org.apache.poc.Executor.main(Executor.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) Caused by: com.spotify.docker.client.DockerRequestException: Request error: DELETE unix://localhost:80/v1.12/containers/af05916d2bddf73dcf8bf41c6ea7f5f3b859c90b97447a8248ffa7b5b3968691: 409 at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1061) at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1021) at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:544) at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:535) at org.wso2.carbon6.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:74) ... 8 more Caused by: com.spotify.docker.client.shaded.javax.ws.rs.ClientErrorException: HTTP 409 Conflict at org.glassfish.jersey.client.JerseyInvocation.createExceptionForFamily(JerseyInvocation.java:991) at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:975) at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:795) at org.glassfish.jersey.client.JerseyInvocation.access$500(JerseyInvocation.java:91) at org.glassfish.jersey.client.JerseyInvocation$5.completed(JerseyInvocation.java:756) at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:189) at org.glassfish.jersey.client.ClientRuntime.access$300(ClientRuntime.java:74) at org.glassfish.jersey.client.ClientRuntime$1.run(ClientRuntime.java:171) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:320) at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:201) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) I searched a large number of sources for any help for this but still I wasn't able to avoid it in all instances, I execute this functionality. At the beginning I tended to get this issue more often than now, but allowing the executing thread to sleep at the end of deleting each Docker Container and before deleting any Docker Containers, gradually reduced the number of instances I am getting this issue. Is sleeping the thread the ultimate solution for this issue or is there any other reason for this issue to pop and a solution that can help me to avoid this exception? 
Any help is greatly appreciated. ### Response: Unfortunately I'm not familiar with the Java client library. My suggestion would be to try using the regular command-line client (kubectl). If that works, then you know the problem is in the Java client library or your usage of it. If using the command line client doesn't work, then there will be more people who can help you (since a lot more people are familiar with the command-line client than with the Java client library). In other words % kubectl delete pods ... # --cascade=true by default % kubectl delete services ... I'm curious why you need steps (4) and (5). Step (4) should happen automatically when you delete the pod, and step (5) should happen automatically in the background. If the two lines of "kubectl delete" work, then the problem is with the Java client library or your usage of it. As a starting point I would suggest removing the calls to deleteContainers() and removeImage() from your Java code and seeing if that helps. I think those steps are unnecessary.
I am working on a Java application which deploys web artifacts in Apache Tomcat Docker Containers with the use of Google Kubernetes. I am using https://github.com/spotify/docker-client in order to carry out Docker Image and Container handling activities and https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api for Kubernetes related functionalities. In this application, I have added a functionality which enables the user to remove a web artifact which the user deploys. When removing I, delete the Kubernetes replication controller which I use to generate the desired number of pod replicas separately delete off the replica pods (as pods are not deleted automatically when the replication controller is deleted in the corresponding method in the Java API) delete off the corresponding Service created delete off the Docker Containers corresponding to the pods deleted off finally, remove the Docker Image used for the deployment Following code shows the removal functionality implemented: public boolean remove(String tenant, String appName) throws WebArtifactHandlerException { String componentName = generateKubernetesComponentName(tenant, appName); final int singleImageIndex = 0; try { if (replicationControllerHandler.getReplicationController(componentName) != null) { String dockerImage = replicationControllerHandler.getReplicationController(componentName).getSpec() .getTemplate().getSpec().getContainers().get(singleImageIndex).getImage(); List<String> containerIds = containerHandler.getRunningContainerIdsByImage(dockerImage); replicationControllerHandler.deleteReplicationController(componentName); podHandler.deleteReplicaPods(tenant, appName); serviceHandler.deleteService(componentName); Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS); containerHandler.deleteContainers(containerIds); imageBuilder.removeImage(tenant, appName, getDockerImageVersion(dockerImage)); return true; } else { return false; } } catch (Exception exception) { String message = String.format("Failed to remove web artifact[artifact]: %s", generateKubernetesComponentName(tenant, appName)); LOG.error(message, exception); throw new WebArtifactHandlerException(message, exception); } } Implementation of the Docker Container deletion functionality is as follows: public void deleteContainers(List<String> containerIds) throws WebArtifactHandlerException { try { for (String containerId : containerIds) { dockerClient.removeContainer(containerId); Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS); } } catch (Exception exception) { String message = "Could not delete the Docker Containers."; LOG.error(message, exception); throw new WebArtifactHandlerException(message, exception); } } In the above case although the execution of the desired function takes place without any sort of issue, at certain instances I tend to get the following exception. Sep 11, 2015 3:57:28 PM org.apache.poc.webartifact.WebArtifactHandler remove SEVERE: Failed to remove web artifact[artifact]: app-wso2-com org.apache.poc.miscellaneous.exceptions.WebArtifactHandlerException: Could not delete the Docker Containers. 
at org.apache.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:80) at org.apache.poc.webartifact.WebArtifactHandler.remove(WebArtifactHandler.java:206) at org.apache.poc.Executor.process(Executor.java:222) at org.apache.poc.Executor.main(Executor.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140) Caused by: com.spotify.docker.client.DockerRequestException: Request error: DELETE unix://localhost:80/v1.12/containers/af05916d2bddf73dcf8bf41c6ea7f5f3b859c90b97447a8248ffa7b5b3968691: 409 at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1061) at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1021) at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:544) at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:535) at org.wso2.carbon6.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:74) ... 8 more Caused by: com.spotify.docker.client.shaded.javax.ws.rs.ClientErrorException: HTTP 409 Conflict at org.glassfish.jersey.client.JerseyInvocation.createExceptionForFamily(JerseyInvocation.java:991) at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:975) at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:795) at org.glassfish.jersey.client.JerseyInvocation.access$500(JerseyInvocation.java:91) at org.glassfish.jersey.client.JerseyInvocation$5.completed(JerseyInvocation.java:756) at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:189) at org.glassfish.jersey.client.ClientRuntime.access$300(ClientRuntime.java:74) at org.glassfish.jersey.client.ClientRuntime$1.run(ClientRuntime.java:171) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:320) at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:201) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) I searched a large number of sources for any help for this but still I wasn't able to avoid it in all instances, I execute this functionality. At the beginning I tended to get this issue more often than now, but allowing the executing thread to sleep at the end of deleting each Docker Container and before deleting any Docker Containers, gradually reduced the number of instances I am getting this issue. Is sleeping the thread the ultimate solution for this issue or is there any other reason for this issue to pop and a solution that can help me to avoid this exception? 
Any help is greatly appreciated.
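The two kubectl commands suggested in the answer, with placeholder resource names:

    kubectl delete rc my-artifact-rc        # cascades to the RC's pods by default
    kubectl delete service my-artifact-svc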
The kubernetes service is a virtual IP and doesn't currently handle ICMP requests (see #2259). You should be able to verify connectivity to the kubernetes service using a TCP connection, e.g. curl https://kubernetes/.
###Instruction: I use DNS in kubernetes, and the test result looks like: core@core-1-86 ~ $ kubectl exec busybox -- nslookup kubernetes Server: 10.100.0.10 Address 1: 10.100.0.10 Name: kubernetes Address 1: 10.100.0.1 And then I entered the busybox container and pinged kubernetes, like: core@core-1-86 ~ $ kubectl exec -it busybox sh / # ping kubernetes PING kubernetes (10.100.0.1): 56 data bytes ^C --- kubernetes ping statistics --- 55 packets transmitted, 0 packets received, 100% packet loss / # If I ping another IP, it is OK! / # ping 10.12.1.85 PING 10.12.1.85 (10.12.1.85): 56 data bytes 64 bytes from 10.12.1.85: seq=0 ttl=63 time=0.262 ms 64 bytes from 10.12.1.85: seq=1 ttl=63 time=0.218 ms ^C --- 10.12.1.85 ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.218/0.240/0.262 ms / # Who can help me and tell me why? ### Response: The kubernetes service is a virtual IP and doesn't currently handle ICMP requests (see #2259). You should be able to verify connectivity to the kubernetes service using a TCP connection, e.g. curl https://kubernetes/.
I use DNS in kubernetes, and the test result looks like: core@core-1-86 ~ $ kubectl exec busybox -- nslookup kubernetes Server: 10.100.0.10 Address 1: 10.100.0.10 Name: kubernetes Address 1: 10.100.0.1 And then I entered the busybox container and pinged kubernetes, like: core@core-1-86 ~ $ kubectl exec -it busybox sh / # ping kubernetes PING kubernetes (10.100.0.1): 56 data bytes ^C --- kubernetes ping statistics --- 55 packets transmitted, 0 packets received, 100% packet loss / # If I ping another IP, it is OK! / # ping 10.12.1.85 PING 10.12.1.85 (10.12.1.85): 56 data bytes 64 bytes from 10.12.1.85: seq=0 ttl=63 time=0.262 ms 64 bytes from 10.12.1.85: seq=1 ttl=63 time=0.218 ms ^C --- 10.12.1.85 ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.218/0.240/0.262 ms / # Who can help me and tell me why?
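A quick TCP-level check of the service VIP, in the spirit of the answer above (run from a pod or node that has curl; -k just skips certificate verification for the reachability test):

    curl -k https://10.100.0.1/
    curl -k https://kubernetes.default.svc.cluster.local/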
IP is the address of your service, which remains constant over time. Endpoints is the collection of backend addresses across which requests to the service address are spread at a given point in time. That collection changes every time the set of pods comprising your service changes, as you've noticed when performing a rolling update on your replication controller (RC).
###Instruction: I have a kubernetes service called staging that selects all app=jupiter pods. It exposes an HTTP service on port 1337. Here's the describe output: $ kubectl describe service staging Name: staging Namespace: default Labels: run=staging Selector: app=jupiter Type: NodePort IP: 10.11.255.80 Port: <unnamed> 1337/TCP NodePort: <unnamed> 30421/TCP Endpoints: 10.8.0.21:1337 Session Affinity: None No events. But when I run a kubectl rolling-update on the RC, which removes the 1 pod running the application and adds another, and run describe again, I get: $ kubectl describe service staging Name: staging Namespace: default Labels: run=staging Selector: app=jupiter Type: NodePort IP: 10.11.255.80 Port: <unnamed> 1337/TCP NodePort: <unnamed> 30421/TCP Endpoints: 10.8.0.22:1337 Session Affinity: None No events. Everything is the same, except for the Endpoint IP address. In fact, it goes up by 1 every time I do this. This is the one thing I expected not to change, since services are an abstraction over pods, so they shouldn't change when the pods change. I know you can hardcode the endpoint address, so this is more of a curiosity. Also, can anyone tell me what the IP field in the describe output is for? ### Response: IP is the address of your service, which remains constant over time. Endpoints is the collection of backend addresses across which requests to the service address are spread at a given point in time. That collection changes every time the set of pods comprising your service changes, as you've noticed when performing a rolling update on your replication controller (RC).
I have a kubernetes service called staging that selects all app=jupiter pods. It exposes an HTTP service on port 1337. Here's the describe output: $ kubectl describe service staging Name: staging Namespace: default Labels: run=staging Selector: app=jupiter Type: NodePort IP: 10.11.255.80 Port: <unnamed> 1337/TCP NodePort: <unnamed> 30421/TCP Endpoints: 10.8.0.21:1337 Session Affinity: None No events. But when I run a kubectl rolling-update on the RC, which removes the 1 pod running the application and adds another, and run describe again, I get: $ kubectl describe service staging Name: staging Namespace: default Labels: run=staging Selector: app=jupiter Type: NodePort IP: 10.11.255.80 Port: <unnamed> 1337/TCP NodePort: <unnamed> 30421/TCP Endpoints: 10.8.0.22:1337 Session Affinity: None No events. Everything is the same, except for the Endpoint IP address. In fact, it goes up by 1 every time I do this. This is the one thing I expected not to change, since services are an abstraction over pods, so they shouldn't change when the pods change. I know you can hardcode the endpoint address, so this is more of a curiosity. Also, can anyone tell me what the IP field in the describe output is for?
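You can watch that backend set change during a rolling update by querying the Endpoints object directly:

    kubectl get endpoints staging -o yaml     # lists the pod IP:port pairs currently behind the service
    kubectl describe endpoints staging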
There was a bug that was fixed here (https://github.com/kubernetes/kubernetes/pull/13345) that has been shown to cause this problem in Kubernetes clusters 1.0.5 and older. The problem is fixed in the 1.0.6 release.
###Instruction: I have a Kubernetes cluster that was initialized using the kube-up.sh script inside AWS, and occasionally there's a very slow DNS lookup when finding one service from inside another pod. Here's the basic picture: (browser) | V (ELB) | V (front-end service) | V (front-end pod) | V (back-end service) | V (back-end pod) | V (database) I have timing logging installed at the front-end and back-end levels, and their numbers are wildly divergent for some requests. Occasionally we'll see a request that the FE nginx logging says takes 8.3 seconds, but the back-end gunicorn process says takes 30ms. I can exec into the FE pod and do a curl to the backend endpoint to get timing data according to the example in this blog post, and it looks like this: time_namelookup: 3.513 time_connect: 3.513 time_appconnect: 0.000 time_pretransfer: 3.513 time_redirect: 0.000 time_starttransfer: 3.520 ---------- time_total: 3.520 So the slowness seems to be coming from DNS. We have a separate cluster set up for staging, and this sort of thing doesn't seem to be happening there, so I'm not sure what to make of it. Most requests happen in a reasonable amount of time, less than 50ms, but every tenth one or so takes multiple seconds to resolve. I found this thread that made it sound like SkyDNS's use of etcd might be the problem, but I'm not sure how to verify that or fix it. And this is happening way too often to be periodic missing configuration values (our traffic isn't that high). ### Response: There was a bug that was fixed here (https://github.com/kubernetes/kubernetes/pull/13345) that has been shown to cause this problem in Kubernetes clusters 1.0.5 and older. The problem is fixed in the 1.0.6 release.
I have a Kubernetes cluster that was initialized using the kube-up.sh script inside AWS, and occasionally there's a very slow DNS lookup when finding one service from inside another pod. Here's the basic picture: (browser) | V (ELB) | V (front-end service) | V (front-end pod) | V (back-end service) | V (back-end pod) | V (database) I have timing logging installed at the front-end and back-end levels, and their numbers are wildly divergent for some requests. Occasionally we'll see a request that the FE nginx logging says takes 8.3 seconds, but the back-end gunicorn process says takes 30ms. I can exec into the FE pod and do a curl to the backend endpoint to get timing data according to the example in this blog post, and it looks like this: time_namelookup: 3.513 time_connect: 3.513 time_appconnect: 0.000 time_pretransfer: 3.513 time_redirect: 0.000 time_starttransfer: 3.520 ---------- time_total: 3.520 So the slowness seems to be coming from DNS. We have a separate cluster set up for staging, and this sort of thing doesn't seem to be happening there, so I'm not sure what to make of it. Most requests happen in a reasonable amount of time, less than 50ms, but every tenth one or so takes multiple seconds to resolve. I found this thread that made it sound like SkyDNS's use of etcd might be the problem, but I'm not sure how to verify that or fix it. And this is happening way too often to be periodic missing configuration values (our traffic isn't that high).
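For reference, the per-phase timing quoted in the question can be reproduced with curl's -w format string (the URL and port are placeholders):

    curl -o /dev/null -s -w 'namelookup=%{time_namelookup} connect=%{time_connect} total=%{time_total}\n' http://backend-service:8000/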
Service IPs are virtual IPs managed by kube-proxy. So, in order for that IP to be meaningful, the client must also be a part of the kube-proxy "overlay" network (have kube-proxy running, pointing at the same apiserver). Pod IPs on GCE/GKE are managed by GCE Routes, which is more like an "underlay" of all VMs in the network. There are a couple of ways to access non-public services from outside the cluster. Here they are in more detail, but in short: Create a bastion GCE route for your cluster's services. Install your cluster's kube-proxy anywhere you want to access the cluster's services.
###Instruction: I have a Kubernetes service on GKE as follows: $ kubectl describe service staging Name: staging Namespace: default Labels: <none> Selector: app=jupiter Type: NodePort IP: 10.11.246.27 Port: <unnamed> 80/TCP NodePort: <unnamed> 31683/TCP Endpoints: 10.8.0.33:1337 Session Affinity: None No events. I can access the service from a VM directly via one of its endpoints (10.8.0.21:1337) or via the node port (10.240.251.174:31683 in my case). However, if I try to access 10.11.246.27:80, I get nothing. I've also tried ports 1337 and 31683. Why can't I access the service via its IP? Do I need a firewall rule or something? ### Response: Service IPs are virtual IPs managed by kube-proxy. So, in order for that IP to be meaningful, the client must also be a part of the kube-proxy "overlay" network (have kube-proxy running, pointing at the same apiserver). Pod IPs on GCE/GKE are managed by GCE Routes, which is more like an "underlay" of all VMs in the network. There are a couple of ways to access non-public services from outside the cluster. Here they are in more detail, but in short: Create a bastion GCE route for your cluster's services. Install your cluster's kube-proxy anywhere you want to access the cluster's services.
I have a Kubernetes service on GKE as follows: $ kubectl describe service staging Name: staging Namespace: default Labels: <none> Selector: app=jupiter Type: NodePort IP: 10.11.246.27 Port: <unnamed> 80/TCP NodePort: <unnamed> 31683/TCP Endpoints: 10.8.0.33:1337 Session Affinity: None No events. I can access the service from a VM directly via one of its endpoints (10.8.0.21:1337) or via the node port (10.240.251.174:31683 in my case). However, if I try to access 10.11.246.27:80, I get nothing. I've also tried ports 1337 and 31683. Why can't I access the service via its IP? Do I need a firewall rule or something?
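A hedged sketch of the bastion-route option mentioned in the answer (the service CIDR, node name and zone below are placeholders; use your cluster's actual service range):

    gcloud compute routes create gke-services-bastion \
        --destination-range 10.11.240.0/20 \
        --next-hop-instance some-cluster-node \
        --next-hop-instance-zone us-central1-b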
Name and namespace are immutable on objects. When you try to change the namespace, replace looks for the service in the new namespace in order to overwrite it. You should be able to run kubectl create -f ... to create the service in the new namespace.
###Instruction: The kubernetes service is in the default namespace. I want to move it to the kube-system namespace. So I did it as follows: kubectl get svc kubernetes -o yaml > temp.yaml This generates temp.yaml using the current kubernetes service information. Then I changed the value of namespace to kube-system in temp.yaml. Lastly, I ran the following command: kubectl replace -f temp.yaml But I got the error: Error from server: error when replacing "temp.yaml": service "kubernetes" not found I think there is no service named kubernetes in the kube-system namespace. Who can tell me how to do this? ### Response: Name and namespace are immutable on objects. When you try to change the namespace, replace looks for the service in the new namespace in order to overwrite it. You should be able to run kubectl create -f ... to create the service in the new namespace.
The kubernetes service is in the default namespace. I want to move it to the kube-system namespace. So I did it as follows: kubectl get svc kubernetes -o yaml > temp.yaml This generates temp.yaml using the current kubernetes service information. Then I changed the value of namespace to kube-system in temp.yaml. Lastly, I ran the following command: kubectl replace -f temp.yaml But I got the error: Error from server: error when replacing "temp.yaml": service "kubernetes" not found I think there is no service named kubernetes in the kube-system namespace. Who can tell me how to do this?
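So the working sequence is roughly the following, shown here for a generic service (don't try this with the built-in kubernetes service itself, which the apiserver manages):

    kubectl get svc my-service -o yaml > temp.yaml
    # edit temp.yaml: set metadata.namespace to kube-system and drop metadata.resourceVersion,
    # metadata.uid, metadata.creationTimestamp, spec.clusterIP and the status block
    kubectl create -f temp.yaml
    kubectl delete svc my-service --namespace=default   # once the new copy is confirmed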
You need to set the type of your Service. http://docs.k8s.io/v1.0/user-guide/services.html#external-services If you are on bare metal, you don't have a LoadBalancer integrated. You can use NodePort to get a port on each VM, and then set up whatever you use for load-balancing to aim at that port on any node.
###Instruction: We have a private kubernetes cluster running on a baremetal CoreOS cluster (with Flannel for network overlay) with private addresses. On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses). The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network... How can we achieve this? The IP addresses of the nodes are: node 1 - 192.168.77.102 node 2 - 192.168.77.103 . and this is how the Service, RC and Pod appear with kubectl: NAME LABELS SELECTOR IP(S) PORT(S) elasticsearch <none> app=elasticsearch 10.99.44.10 9200/TCP CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS elasticsearch elasticsearch elasticsearch app=elasticsearch 1 NAME READY STATUS RESTARTS AGE elasticsearch-swpy1 1/1 Running 0 26m ### Response: You need to set the type of your Service. http://docs.k8s.io/v1.0/user-guide/services.html#external-services If you are on bare metal, you don't have a LoadBalancer integrated. You can use NodePort to get a port on each VM, and then set up whatever you use for load-balancing to aim at that port on any node.
We have a private kubernetes cluster running on a baremetal CoreOS cluster (with Flannel for network overlay) with private addresses. On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses). The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network... How can we achieve this? The IP addresses of the nodes are: node 1 - 192.168.77.102 node 2 - 192.168.77.103 . and this is how the Service, RC and Pod appear with kubectl: NAME LABELS SELECTOR IP(S) PORT(S) elasticsearch <none> app=elasticsearch 10.99.44.10 9200/TCP CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS elasticsearch elasticsearch elasticsearch app=elasticsearch 1 NAME READY STATUS RESTARTS AGE elasticsearch-swpy1 1/1 Running 0 26m
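A NodePort version of the elasticsearch service would look roughly like this (the explicit nodePort is optional and must sit in the cluster's node-port range, 30000-32767 by default); it then becomes reachable on 192.168.77.102:30920 or 192.168.77.103:30920:

    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch
    spec:
      type: NodePort
      selector:
        app: elasticsearch
      ports:
      - port: 9200
        targetPort: 9200
        nodePort: 30920     # optional; omit to let Kubernetes pick one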
My reading of the Google docs (https://cloud.google.com/compute/docs/disks/customer-supplied-encryption) is that no key is required to mount the disk. The keys are only provided at disk creation time. So, the following should work without changes to kubernetes: create an encrypted disk "myencrypteddisk" per https://cloud.google.com/compute/docs/disks/customer-supplied-encryption create a pod which mounts a GCEPD called "myencrypteddisk". kubelet will mount the disk on the VM. Its compute scope should be enough to perform the mount, IIUC.
###Instruction: I need to encrypt the data on a block device and allow the Pod to access it as a volume. I noticed it's now possible on Google cloud to encrypt a new disk using Customer-Supplied Encryption Keys. Can I use a self-encrypted disk with Kubernetes and attach it to the Pod as a volume? If not, is there any other way to encrypt a block device (for example LUKS) and use it with Pods? ### Response: My reading of the Google docs (https://cloud.google.com/compute/docs/disks/customer-supplied-encryption) is that no key is required to mount the disk. The keys are only provided at disk creation time. So, the following should work without changes to kubernetes: create an encrypted disk "myencrypteddisk" per https://cloud.google.com/compute/docs/disks/customer-supplied-encryption create a pod which mounts a GCEPD called "myencrypteddisk". kubelet will mount the disk on the VM. Its compute scope should be enough to perform the mount, IIUC.
I need to encrypt the data on a block device and allow the Pod to access it as a volume. I noticed it's now possible on Google cloud to encrypt a new disk using Customer-Supplied Encryption Keys. Can I use a self-encrypted disk with Kubernetes and attach it to the Pod as a volume? If not, is there any other way to encrypt a block device (for example LUKS) and use it with Pods?
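Assuming the disk was created with a customer-supplied key as described in the linked Google doc, the pod side is an ordinary gcePersistentDisk volume; a minimal sketch of the volume stanza:

    volumes:
    - name: encrypted-data
      gcePersistentDisk:
        pdName: myencrypteddisk
        fsType: ext4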
That guide probably needs to be updated, given that the kubernetes v1beta3 api was deprecated in July. I suspect you're running a recent build of the apiserver (which supports only the v1 api), but older builds of kube-proxy/kubelet. I'd recommend following one of the getting started guides from kubernetes.io/v1.0/docs/getting-started-guides, as those are pretty stable and have dedicated maintainers. e.g. the flannel on fedora guide sounds pretty close to what you're setting up and having trouble with.
###Instruction: So I am hesitant to ask as a newbie but I have hit a wall. I am following: http://www.projectatomic.io/docs/gettingstarted/ Using fedora atomic host 22 latest. I had trouble getting the system up with some of the port settings and with the api string. I was able to get all my services running on the master and my three minions. Kubelet and kube-proxy are failing to connect to the apiserver. I am able to reach the server from curl but the api paths return: http://cas-vm-atomic-m:8080/api/v1beta3 { "kind": "Status", "apiVersion": "v1beta3", "metadata": {}, "status": "Failure", "message": "the server could not find the requested resource", "reason": "NotFound", "details": {}, "code": 404 } KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota" I have turned up the logging. I have tried a variety of setting for KUBE_ADMISSION_CONTROL. I think my problem is on the master and with the apiserver being up but not serving working correctly. kubectl does return my three nodes and services and endpoints. But the nodes stay in NotReady status. The node are attempting to move out of NotReady but can't reach the apiserver to do so. I am kinda of bummed that the newbie getting started howto has been so difficult. Though I guess educational. I have the logging set to 3 but now I mostly see the kube-proxy requests failing with 404 errors. Any ideas? If this is the wrong place for this please let me know. ### Response: That guide probably needs to be updated, given that the kubernetes v1beta3 api was deprecated in July. I suspect you're running a recent build of the apiserver (which supports only the v1 api), but older builds of kube-proxy/kubelet. I'd recommend following one of the getting started guides from kubernetes.io/v1.0/docs/getting-started-guides, as those are pretty stable and have dedicated maintainers. e.g. the flannel on fedora guide sounds pretty close to what you're setting up and having trouble with.
So I am hesitant to ask as a newbie but I have hit a wall. I am following: http://www.projectatomic.io/docs/gettingstarted/ Using fedora atomic host 22 latest. I had trouble getting the system up with some of the port settings and with the api string. I was able to get all my services running on the master and my three minions. Kubelet and kube-proxy are failing to connect to the apiserver. I am able to reach the server from curl but the api paths return: http://cas-vm-atomic-m:8080/api/v1beta3 { "kind": "Status", "apiVersion": "v1beta3", "metadata": {}, "status": "Failure", "message": "the server could not find the requested resource", "reason": "NotFound", "details": {}, "code": 404 } KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota" I have turned up the logging. I have tried a variety of setting for KUBE_ADMISSION_CONTROL. I think my problem is on the master and with the apiserver being up but not serving working correctly. kubectl does return my three nodes and services and endpoints. But the nodes stay in NotReady status. The node are attempting to move out of NotReady but can't reach the apiserver to do so. I am kinda of bummed that the newbie getting started howto has been so difficult. Though I guess educational. I have the logging set to 3 but now I mostly see the kube-proxy requests failing with 404 errors. Any ideas? If this is the wrong place for this please let me know.
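A quick way to confirm the version mismatch suggested in the answer above. The hostname is taken from the question; the assumption is that the apiserver only serves v1 while the node binaries still try to speak v1beta3:

# List the API versions the apiserver actually serves (expect "v1", not "v1beta3").
curl http://cas-vm-atomic-m:8080/api
# The v1 path should return a real object instead of a 404.
curl http://cas-vm-atomic-m:8080/api/v1
# Compare binary versions on the master and on each minion.
kube-apiserver --version
kubelet --version
kube-proxy --version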
The biggest benefit is streamlined setup for most people. Running a few docker run commands is way easier than downloading binaries, unpacking, fine-tuning init scripts (which are different on every distro), running a supervisor, etc. We have a pretty good process manager - relying on that is powerful. We also don't recommend sharing etcd, so if you're doing that you are already off the beaten path. Overall, containerized components are vastly simpler than the alternatives for most people.
###Instruction: Both the Kubernetes HA guide and the From Scratch guide recommend running Etcd, kube-apiserver, kube-controller-manager, and kube-scheduler in containers. The idea of self-hosting Kubernetes on Kubernetes goes back quite a while (see PR 167 on K8s github and issues/PRs linked there), but I haven't found a discussion about why this approach is so beneficial that it should be the 'recommended' way. Here are the benefits and drawbacks as I see them currently: Benefits: Potentially easy upgrade path to just update manifests and have kubelet pull new images. "Container advantages": binary environment and the host environment separate, leverage others' existing images, etc. Follows the whole Kubernetes pattern, so 'fits the brain' once you are using that pattern extensively. Drawbacks: Increased installation/configuration complexity in some cases. For example, if your Etcd cluster is separate from your Kubernetes nodes, you now have to install Docker (with possible storage changes depending on Linux distro), kubelet, and Etcd. Without using containerized Etcd, you just have that one binary to install. Increased complexity at run time: With more moving parts, any bug in Docker or kubelet may be able to render critical components non-functional. I'm new to Kubernetes (and containers) and feel like I might be missing advantages (or underestimating their value) when compared to the extra complexity it introduces. But I also have to choose once way to try. Why are containerized master components the recommended way to run Kubernetes despite the extra complexity? ### Response: The biggest benefit is streamlined setup for most people. Running a few docker run commands is way easier than downloading binaries, unpacking, fine-tuning init scripts (which are different on every distro), running a supervisor, etc. We have a pretty good process manager - relying on that is powerful. We also don't recommend sharing etcd, so if you're doing that you are already off the beaten path. Overall, containerized components are vastly simpler than the alternatives for most people.
Both the Kubernetes HA guide and the From Scratch guide recommend running Etcd, kube-apiserver, kube-controller-manager, and kube-scheduler in containers. The idea of self-hosting Kubernetes on Kubernetes goes back quite a while (see PR 167 on K8s github and issues/PRs linked there), but I haven't found a discussion about why this approach is so beneficial that it should be the 'recommended' way. Here are the benefits and drawbacks as I see them currently: Benefits: Potentially easy upgrade path to just update manifests and have kubelet pull new images. "Container advantages": binary environment and the host environment separate, leverage others' existing images, etc. Follows the whole Kubernetes pattern, so 'fits the brain' once you are using that pattern extensively. Drawbacks: Increased installation/configuration complexity in some cases. For example, if your Etcd cluster is separate from your Kubernetes nodes, you now have to install Docker (with possible storage changes depending on Linux distro), kubelet, and Etcd. Without using containerized Etcd, you just have that one binary to install. Increased complexity at run time: With more moving parts, any bug in Docker or kubelet may be able to render critical components non-functional. I'm new to Kubernetes (and containers) and feel like I might be missing advantages (or underestimating their value) when compared to the extra complexity it introduces. But I also have to choose once way to try. Why are containerized master components the recommended way to run Kubernetes despite the extra complexity?
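As a rough illustration of the "few docker run commands" the answer above refers to. The image tags and flags below are assumptions based on the getting-started guides of that era, so treat this as a sketch and check the current docs before copying:

# etcd in a container
docker run -d --net=host gcr.io/google_containers/etcd:2.0.12 \
    /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data

# apiserver via the hyperkube image (scheduler and controller-manager follow the same pattern)
docker run -d --net=host gcr.io/google_containers/hyperkube:v1.0.6 \
    /hyperkube apiserver --etcd_servers=http://127.0.0.1:4001 \
    --service-cluster-ip-range=10.0.0.0/24 --address=0.0.0.0 --allow_privileged=false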
We do not support dependencies at the moment. In the future will support a "daemon" scheduler which can run a pod on every node, but not only if some other pod is being scheduled to a node. In the future we might support existence dependencies, but that's more for creation: create Pod P iff Service S exists. We also do not support refcounted local storage, which seems to be what you're proposing. There are a lot of very unclear semantics in your quick sketch. In order to consider something as complex as this, we would have to really think through the corner cases. If this is something you want us to consider, you're welcome to file a proposal on GitHub with use cases and details. In the mean time, it sounds like you want a workflow manager and hostPath volumes.
###Instruction: I would like to achieve the following functionality: when a given pod (let's call it application pod) is deployed on a node, another pod providing an ephemeral volume is deployed before that, if such "volume pod" has not existed on the target node yet the number of application pods can be scaled up and down, and all application pods on the same node share the single volume pod The first requirement assumes a kind of dependency definition among pods (just like it can be done among Marathon apps in case of Marathon). The second requirement assumes that an ephemeral volume created in a container in a pod can be attached to other container(s) in other pod(s). It is important that the volume is ephemeral (i.e. there is no host directory or attached storage, that could be mapped to the application). Also, it is important that it is not on GCE. Please advise how such a setup can be achieved with Kubernetes. I think such dynamic, dependency-based deployment would be welcomed by everyone. Also, sharing ephemeral volumes (e.g. files stored on a tmpfs volume, once such volume is supported by Docker) may be interesting for others, too. ### Response: We do not support dependencies at the moment. In the future will support a "daemon" scheduler which can run a pod on every node, but not only if some other pod is being scheduled to a node. In the future we might support existence dependencies, but that's more for creation: create Pod P iff Service S exists. We also do not support refcounted local storage, which seems to be what you're proposing. There are a lot of very unclear semantics in your quick sketch. In order to consider something as complex as this, we would have to really think through the corner cases. If this is something you want us to consider, you're welcome to file a proposal on GitHub with use cases and details. In the mean time, it sounds like you want a workflow manager and hostPath volumes.
I would like to achieve the following functionality: when a given pod (let's call it application pod) is deployed on a node, another pod providing an ephemeral volume is deployed before that, if such "volume pod" has not existed on the target node yet the number of application pods can be scaled up and down, and all application pods on the same node share the single volume pod The first requirement assumes a kind of dependency definition among pods (just like it can be done among Marathon apps in case of Marathon). The second requirement assumes that an ephemeral volume created in a container in a pod can be attached to other container(s) in other pod(s). It is important that the volume is ephemeral (i.e. there is no host directory or attached storage, that could be mapped to the application). Also, it is important that it is not on GCE. Please advise how such a setup can be achieved with Kubernetes. I think such dynamic, dependency-based deployment would be welcomed by everyone. Also, sharing ephemeral volumes (e.g. files stored on a tmpfs volume, once such volume is supported by Docker) may be interesting for others, too.
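A sketch of the hostPath workaround mentioned at the end of the answer above. The path, image and pod name are made up, and hostPath data is per-node and not refcounted or cleaned up by Kubernetes:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: application-pod
spec:
  containers:
  - name: app
    image: myorg/app:latest
    volumeMounts:
    - name: shared-cache
      mountPath: /cache
  volumes:
  - name: shared-cache
    hostPath:
      path: /var/shared-cache    # every app pod scheduled onto this node sees the same directory
EOF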
This sort of thing is hard to debug remotely. Things I would check: 1) on the sender: iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c > /dev/null; ncat -v -t 10.244.32.2 6379; dmesg; This will give you some insight into what the kernel is doing. 2) on the sender: tcpdump -i any host 10.244.32.2 & ncat -v -t 10.244.32.2 6379; This will give a bit more insight. 3) on the receiver: iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c > /dev/null; ncat -v -t 10.244.32.2 6379; dmesg; This will tell you if the packet came through the encapsulation. You need to basically prove the plumbing through the whole connection.
###Instruction: I'm trying to setup Kubernetes in Openstack + CoreOS. I have master 10.240.63.84 and 2 minions .63 and .83. I also created 3 redis pods: redis-gopher-gziey 10.244.32.2 10.240.63.66/10.240.63.66 redis-managed-oh43e 10.244.32.3 10.240.63.66/10.240.63.66 redis-primary-fplln 10.244.54.2 10.240.63.83/10.240.63.83 master's routing table looks like: 10.240.63.0 * 255.255.255.0 U 0 0 0 eth0 10.240.63.1 * 255.255.255.255 UH 1024 0 0 eth0 10.244.0.0 * 255.255.0.0 U 0 0 0 flannel.1 10.244.50.0 * 255.255.255.0 U 0 0 0 docker0 and output of ifconfig -a is : docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 10.244.50.1 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::542f:6fff:fe4a:adf3 prefixlen 64 scopeid 0x20<link> ether 56:84:7a:fe:97:99 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1 bytes 90 (90.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.240.63.84 netmask 255.255.255.0 broadcast 10.240.63.255 inet6 fe80::f816:3eff:fe89:e9a0 prefixlen 64 scopeid 0x20<link> ether fa:16:3e:89:e9:a0 txqueuelen 1000 (Ethernet) RX packets 430706 bytes 559764129 (533.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 238519 bytes 116083693 (110.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet 10.244.50.0 netmask 255.255.0.0 broadcast 0.0.0.0 inet6 fe80::601f:62ff:feed:1556 prefixlen 64 scopeid 0x20<link> ether 62:1f:62:ed:15:56 txqueuelen 0 (Ethernet) RX packets 20 bytes 1504 (1.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 79 bytes 7686 (7.5 KiB) TX errors 0 dropped 19 overruns 0 carrier 0 collisions 0 Flanneld config used for initialization is: Master: - name: flanneld.service command: start drop-ins: - name: 50-network-config.conf content: | [Service] ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' ExecStart= ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \ /usr/bin/docker run --net=host --privileged=true --rm \ --volume=/run/flannel:/run/flannel \ --env=NOTIFY_SOCKET=/run/flannel/sd.sock \ --env-file=/run/flannel/options.env \ --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \ quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld --ip-masq=true --iface=eth0 Minion: - name: flanneld.service command: start drop-ins: - name: 50-network-config.conf content: | [Service] ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' ExecStart= ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \ /usr/bin/docker run --net=host --privileged=true --rm \ --volume=/run/flannel:/run/flannel \ --env=NOTIFY_SOCKET=/run/flannel/sd.sock \ --env-file=/run/flannel/options.env \ --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \ quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld -etcd-endpoints http://10.240.63.84:4001 --ip-masq=true --iface=eth0 So the issue is that i can't ping any of the pods from master, as well as connect to any port, error is: ncat -v -t 10.244.32.2 6379 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: No route to host. ### Response: This sort of thing is hard to debug remotely. Things I would check: 1) on the sender: iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c > /dev/null; ncat -v -t 10.244.32.2 6379; dmesg; This will give you some insight into what the kernel is doing. 
2) on the sender: tcpdump -i any host 10.244.32.2 & ncat -v -t 10.244.32.2 6379; This will give a bit more insight. 3) on the receiver: iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c > /dev/null; ncat -v -t 10.244.32.2 6379; dmesg; This will tell you if the packet came through the encapsulation. You need to basically prove the plumbing through the whole connection.
I'm trying to setup Kubernetes in Openstack + CoreOS. I have master 10.240.63.84 and 2 minions .63 and .83. I also created 3 redis pods: redis-gopher-gziey 10.244.32.2 10.240.63.66/10.240.63.66 redis-managed-oh43e 10.244.32.3 10.240.63.66/10.240.63.66 redis-primary-fplln 10.244.54.2 10.240.63.83/10.240.63.83 master's routing table looks like: 10.240.63.0 * 255.255.255.0 U 0 0 0 eth0 10.240.63.1 * 255.255.255.255 UH 1024 0 0 eth0 10.244.0.0 * 255.255.0.0 U 0 0 0 flannel.1 10.244.50.0 * 255.255.255.0 U 0 0 0 docker0 and output of ifconfig -a is : docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 10.244.50.1 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::542f:6fff:fe4a:adf3 prefixlen 64 scopeid 0x20<link> ether 56:84:7a:fe:97:99 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1 bytes 90 (90.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.240.63.84 netmask 255.255.255.0 broadcast 10.240.63.255 inet6 fe80::f816:3eff:fe89:e9a0 prefixlen 64 scopeid 0x20<link> ether fa:16:3e:89:e9:a0 txqueuelen 1000 (Ethernet) RX packets 430706 bytes 559764129 (533.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 238519 bytes 116083693 (110.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet 10.244.50.0 netmask 255.255.0.0 broadcast 0.0.0.0 inet6 fe80::601f:62ff:feed:1556 prefixlen 64 scopeid 0x20<link> ether 62:1f:62:ed:15:56 txqueuelen 0 (Ethernet) RX packets 20 bytes 1504 (1.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 79 bytes 7686 (7.5 KiB) TX errors 0 dropped 19 overruns 0 carrier 0 collisions 0 Flanneld config used for initialization is: Master: - name: flanneld.service command: start drop-ins: - name: 50-network-config.conf content: | [Service] ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' ExecStart= ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \ /usr/bin/docker run --net=host --privileged=true --rm \ --volume=/run/flannel:/run/flannel \ --env=NOTIFY_SOCKET=/run/flannel/sd.sock \ --env-file=/run/flannel/options.env \ --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \ quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld --ip-masq=true --iface=eth0 Minion: - name: flanneld.service command: start drop-ins: - name: 50-network-config.conf content: | [Service] ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' ExecStart= ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \ /usr/bin/docker run --net=host --privileged=true --rm \ --volume=/run/flannel:/run/flannel \ --env=NOTIFY_SOCKET=/run/flannel/sd.sock \ --env-file=/run/flannel/options.env \ --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \ quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld -etcd-endpoints http://10.240.63.84:4001 --ip-masq=true --iface=eth0 So the issue is that i can't ping any of the pods from master, as well as connect to any port, error is: ncat -v -t 10.244.32.2 6379 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: No route to host.
I can't speak to juju specifically, but Kubernetes supports Amazon ELB - turning up a load-balancer should work.
###Instruction: Kubernetes create a load balancer, for each service; automatically in GCE. How can I manage something similar on AWS with juju? Kubernetes service basically use the kubeproxy to handle the internal traffic. But that kubeproxy ip its do not have access to the external network. There its a way to accomplish this deploying kubernetes cluster with juju? ### Response: I can't speak to juju specifically, but Kubernetes supports Amazon ELB - turning up a load-balancer should work.
Kubernetes creates a load balancer for each service automatically on GCE. How can I manage something similar on AWS with juju? Kubernetes services basically use the kube-proxy to handle internal traffic, but that kube-proxy IP does not have access to the external network. Is there a way to accomplish this when deploying a Kubernetes cluster with juju?
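A sketch of asking Kubernetes for a cloud load balancer, which is what the answer above suggests. The service name, selector and ports are placeholders, and the cluster must have been started with the AWS cloud provider for an ELB to actually be provisioned:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
spec:
  type: LoadBalancer          # asks the cloud provider (an ELB on AWS) for an external load balancer
  selector:
    app: my-frontend
  ports:
  - port: 80
    targetPort: 8080
EOF

# The ELB address shows up here once provisioning completes.
kubectl describe service my-frontend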
Some traffic is masqueraded, but it is done at L3, rather than L7, so there's no way to add a header. :( This will get better soon for in-cluster traffic, but we have to wait for cloud load-balancers to catch up before we can handle out-of-cluster traffic properly.
###Instruction: I can't find out customer real IP address when apache-php enviroment runs in google container. Without modifying anything, I get ip address from container address range, when using mod_remoteip, I can add RemoteIPHeader X-Client-IP RemoteIPInternalProxyList ournet/proxy-list and add rows into "proxy-list"-file: 10.240.0.0/16 # google internal network 10.244.0.0/14 # Cluster aadress range Only the row 10.244.0.0/14 gives any result. In this case I get the cluster node's ip as REMOTE_ADDR value from the 10.240.0.0/16 network. It seems, that the node itself acts as forwarder, without adding needed headers to the request or I am looking it from totally wrong perspective? ### Response: Some traffic is masqueraded, but it is done at L3, rather than L7, so there's no way to add a header. :( This will get better soon for in-cluster traffic, but we have to wait for cloud load-balancers to catch up before we can properly handle out-of-cluster traffic properly.
I can't find out customer real IP address when apache-php enviroment runs in google container. Without modifying anything, I get ip address from container address range, when using mod_remoteip, I can add RemoteIPHeader X-Client-IP RemoteIPInternalProxyList ournet/proxy-list and add rows into "proxy-list"-file: 10.240.0.0/16 # google internal network 10.244.0.0/14 # Cluster aadress range Only the row 10.244.0.0/14 gives any result. In this case I get the cluster node's ip as REMOTE_ADDR value from the 10.240.0.0/16 network. It seems, that the node itself acts as forwarder, without adding needed headers to the request or I am looking it from totally wrong perspective?
Services are accessible from all namespaces as long as you address them using both the name and the namespace. For example, if you have a service named db in namespace dev, you can access it using the DNS name db. While this won't work from qa, you can access it from both qa and dev if you instead use the DNS name db.dev (<service>.<namespace>), in order to clarify which namespace should be searched for the service.
###Instruction: I'm searching for the answer but i didn't find it anywhere. Is it possible to share a service between multiple namespaces ? For instance, if i have 2 namespaces (let's say 'qa' and 'dev'), is it possible to use the same database server ? The database server would be preferably managed by kubernetes too. I've read this issue : https://github.com/openshift/origin/issues/1244 But it's not directly related to kubernetes. Regards, Smana ### Response: Services are accessible from all namespaces as long as you address them using both the name and the namespace. For example, if you have a service named db in namespace dev, you can access it using the DNS name db. While this won't work from qa, you can access it from both qa and dev if you instead use the DNS name db.dev (<service>.<namespace>), in order to clarify which namespace should be searched for the service.
I'm searching for the answer but I didn't find it anywhere. Is it possible to share a service between multiple namespaces? For instance, if I have 2 namespaces (let's say 'qa' and 'dev'), is it possible to use the same database server? The database server would preferably be managed by Kubernetes too. I've read this issue: https://github.com/openshift/origin/issues/1244 But it's not directly related to Kubernetes. Regards, Smana
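A small illustration of the naming scheme from the answer above, assuming a pod called busybox is already running in the qa namespace and the db service lives in dev:

# The short name only resolves inside the service's own namespace (dev)...
kubectl exec busybox --namespace=qa -- nslookup db        # fails from qa
# ...but qualifying it with the namespace works from anywhere in the cluster.
kubectl exec busybox --namespace=qa -- nslookup db.dev    # resolves to the dev/db service IP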
If your kubectl configuration is incorrect after creating a cluster, you can always run gcloud container clusters get-credentials NAME (see configuring kubectl) to restore a working kubeconfig file.
###Instruction: I created cluster in gcloud with three nodes. So far so good.Thereafter i tried to run the pod.. it is giving error.. I found out the kubectl is not configured correct.. Getting following error when I try to run the pod.. Appreciate any help in this regard. error: could not read an encoded object from nodejs.yaml: unable to connect to a server to handle "pods": couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused thx ### Response: If your kubectl configuration is incorrect after creating a cluster, you can always run gcloud container clusters get-credentials NAME (see configuring kubectl) to restore a working kubeconfig file.
I created a cluster in gcloud with three nodes. So far so good. Thereafter I tried to run a pod, but it is giving an error; I found out that kubectl is not configured correctly. I get the following error when I try to run the pod. Appreciate any help in this regard. error: could not read an encoded object from nodejs.yaml: unable to connect to a server to handle "pods": couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused thx
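Spelled out with a placeholder cluster name and zone, a sketch of restoring the kubeconfig as the answer above suggests, then verifying that kubectl no longer points at localhost:8080:

gcloud container clusters get-credentials my-cluster --zone us-central1-b
kubectl cluster-info
kubectl get nodes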
The error messages are not so clear in the openstack cloud provider plugin. It won't register if there is any unexpected field in the config file; it will throw an error, which unfortunately is not properly bubbled up. Here are the fields (and their identifiers) that are recognized by the plugin:
[Global]
AuthUrl string `gcfg:"auth-url"`
Username string
UserId string `gcfg:"user-id"`
Password string
ApiKey string `gcfg:"api-key"`
TenantId string `gcfg:"tenant-id"`
TenantName string `gcfg:"tenant-name"`
DomainId string `gcfg:"domain-id"`
DomainName string `gcfg:"domain-name"`
Region string
Of course not all of them are required. I usually have
[Global]
auth-url
username
password
region
tenant-id
###Instruction: To use cinder volumes I added the options --cloud-provider and --cloud-config to my kubelet configuration: $ cat /etc/kubernetes/kubelet ### # kubernetes kubelet (node) config # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on # KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname_override=192.168.100.76" # location of the api-server KUBELET_API_SERVER="--api_servers=https://localhost:6443" # Add your own! KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config" I've created a file with the necessary credentials: $ sudo cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*********.com:5000/v2.0 user-id=kubecindertest username=kubecindertest password=***** region=RegionOne tenant-name=kubecindertest tenant-id=6568768756a7886767e676f7efe76fe7 project-name=kubecindertest When starting kubelet (manually), the process only logs unknown cloud provider "openstack" and exists: source /etc/kubernetes/kubelet; sudo /usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_ARGS unknown cloud provider "openstack" The openstack.go, defining the openstack provider in the kubernetes repository has the exact same name in lowercase: const ProviderName = "openstack" Update It turned out the error was an expection that occured while parsing the config file. I removed all the optional or unwanted keys and now I use this as my config file: $ sudo cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*********.com:5000/v2.0 username=kubecindertest password=***** region=RegionOne tenant-id=6568768756a7886767e676f7efe76fe7 hower, starting the kublet only leads to another error: I0923 07:14:33.315311 23743 manager.go:127] cAdvisor running in container: "/user.slice" I0923 07:14:33.316263 23743 fs.go:93] Filesystem partitions: map[/dev/vda1:{mountpoint:/ major:253 minor:1}] I0923 07:14:33.358848 23743 manager.go:158] Machine: {NumCores:2 CpuFrequency:2099998 MemoryCapacity:4144640000 MachineID:dae72fe0cc064eb0b7797f25bfaf69df SystemUUID:BEDAF943-624D-C04A-B92C-4EB07258246C BootID:e2d988e2-9aba-49bf-a344-fd62607a6754 Filesystems:[{Device:/dev/vda1 Capacity:21456445440}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:107374182400 Scheduler:none} 252:1:{Name:dm-1 Major:252 Minor:1 Size:10737418240 Scheduler:none} 252:2:{Name:dm-2 Major:252 Minor:2 Size:10737418240 Scheduler:none} 253:0:{Name:vda Major:253 Minor:0 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:fa:16:3e:64:fa:9a Speed:0 Mtu:1500} {Name:eth1 MacAddress:fa:16:3e:01:00:79 Speed:0 Mtu:1500} {Name:flannel.1 MacAddress:d2:9c:ad:29:df:c5 Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:4294434816 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown} I0923 07:14:33.363915 23743 manager.go:164] Version: {KernelVersion:3.10.0-229.11.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 
(Core) DockerVersion:1.8.1.el7 CadvisorVersion:0.16.0} panic: runtime error: invalid memory address or nil pointer dereference [signal 0xb code=0x1 addr=0x0 pc=0x8559cd] goroutine 1 [running]: k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).Instances(0x0, 0x0, 0x0, 0xe) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack.go:167 +0x8ed k8s.io/kubernetes/cmd/kubelet/app.RunKubelet(0xc820144900, 0x0, 0x0, 0x0) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:628 +0x13c k8s.io/kubernetes/cmd/kubelet/app.(*KubeletServer).Run(0xc8202c2000, 0xc820144900, 0x0, 0x0) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:420 +0x84b main.main() /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:46 +0xab goroutine 17 [syscall, locked to thread]: runtime.goexit() /usr/lib/golang/src/runtime/asm_amd64.s:1696 +0x1 goroutine 5 [chan receive]: github.com/golang/glog.(*loggingT).flushDaemon(0x1dc7000) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x67 created by github.com/golang/glog.init.1 /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x297 goroutine 37 [runnable]: syscall.Syscall6(0x36, 0x4, 0x29, 0x1a, 0xc820035a7c, 0x4, 0x0, 0x0, 0x1a, 0x0) /usr/lib/golang/src/syscall/asm_linux_amd64.s:44 +0x5 syscall.setsockopt(0x4, 0x29, 0x1a, 0xc820035a7c, 0x4, 0x0, 0x0) /usr/lib/golang/src/syscall/zsyscall_linux_amd64.go:1655 +0x73 syscall.SetsockoptInt(0x4, 0x29, 0x1a, 0x0, 0x0, 0x0) /usr/lib/golang/src/syscall/syscall_unix.go:267 +0x61 net.setDefaultSockopts(0x4, 0xa, 0x1, 0x0, 0x0, 0x0) /usr/lib/golang/src/net/sockopt_linux.go:17 +0x7f net.socket(0x135f188, 0x3, 0xa, 0x1, 0x0, 0x0, 0x7fe031c3cd50, 0xc8205e43f0, 0x0, 0x0, ...) /usr/lib/golang/src/net/sock_posix.go:42 +0xcb net.internetSocket(0x135f188, 0x3, 0x7fe031c3cd50, 0xc8205e43f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1, ...) 
/usr/lib/golang/src/net/ipsock_posix.go:160 +0x141 net.ListenTCP(0x135f188, 0x3, 0xc8205e43f0, 0x7fe02fbf6bc0, 0x0, 0x0) /usr/lib/golang/src/net/tcpsock_posix.go:324 +0x19b net.Listen(0x135f188, 0x3, 0xc8205ee100, 0x5, 0x0, 0x0, 0x0, 0x0) /usr/lib/golang/src/net/dial.go:393 +0x462 net/http.(*Server).ListenAndServe(0xc82007c660, 0x0, 0x0) /usr/lib/golang/src/net/http/server.go:1827 +0x8e k8s.io/kubernetes/pkg/kubelet/cadvisor.(*cadvisorClient).exportHTTP.func1(0xc82007c660, 0x1062) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cadvisor/cadvisor_linux.go:100 +0x7b created by k8s.io/kubernetes/pkg/kubelet/cadvisor.(*cadvisorClient).exportHTTP /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cadvisor/cadvisor_linux.go:106 +0x3b6 goroutine 6 [syscall]: os/signal.loop() /usr/lib/golang/src/os/signal/signal_unix.go:22 +0x18 created by os/signal.init.1 /usr/lib/golang/src/os/signal/signal_unix.go:28 +0x37 goroutine 27 [sleep]: time.Sleep(0x12a05f200) /usr/lib/golang/src/runtime/time.go:59 +0xf9 k8s.io/kubernetes/pkg/util.Until(0x164e780, 0x12a05f200, 0xc82007cc00) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/util/util.go:117 +0x61 created by k8s.io/kubernetes/pkg/util.InitLogs /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/util/logs.go:49 +0xba Update 2 The second problem was a certificate issue. ### Response: The error messages are not so clear in the openstack cloud provider plugin. It wont register if the there is any unexpected field in the config file, it will throw an error, which unfortunately is not properly bubbled. Here are the fields (and their identifiers) that are recognized by the plugin [Global] AuthUrl stringgcfg:"auth-url" Username string UserId stringgcfg:"user-id" Password string ApiKey stringgcfg:"api-key" TenantId stringgcfg:"tenant-id" TenantName stringgcfg:"tenant-name" DomainId stringgcfg:"domain-id" DomainName stringgcfg:"domain-name" Region string Offcourse not all of them are required. I usually have [Global] auth-url username password region tenant-id
To use cinder volumes I added the options --cloud-provider and --cloud-config to my kubelet configuration: $ cat /etc/kubernetes/kubelet ### # kubernetes kubelet (node) config # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on # KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname_override=192.168.100.76" # location of the api-server KUBELET_API_SERVER="--api_servers=https://localhost:6443" # Add your own! KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config" I've created a file with the necessary credentials: $ sudo cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*********.com:5000/v2.0 user-id=kubecindertest username=kubecindertest password=***** region=RegionOne tenant-name=kubecindertest tenant-id=6568768756a7886767e676f7efe76fe7 project-name=kubecindertest When starting kubelet (manually), the process only logs unknown cloud provider "openstack" and exists: source /etc/kubernetes/kubelet; sudo /usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_ARGS unknown cloud provider "openstack" The openstack.go, defining the openstack provider in the kubernetes repository has the exact same name in lowercase: const ProviderName = "openstack" Update It turned out the error was an expection that occured while parsing the config file. I removed all the optional or unwanted keys and now I use this as my config file: $ sudo cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*********.com:5000/v2.0 username=kubecindertest password=***** region=RegionOne tenant-id=6568768756a7886767e676f7efe76fe7 hower, starting the kublet only leads to another error: I0923 07:14:33.315311 23743 manager.go:127] cAdvisor running in container: "/user.slice" I0923 07:14:33.316263 23743 fs.go:93] Filesystem partitions: map[/dev/vda1:{mountpoint:/ major:253 minor:1}] I0923 07:14:33.358848 23743 manager.go:158] Machine: {NumCores:2 CpuFrequency:2099998 MemoryCapacity:4144640000 MachineID:dae72fe0cc064eb0b7797f25bfaf69df SystemUUID:BEDAF943-624D-C04A-B92C-4EB07258246C BootID:e2d988e2-9aba-49bf-a344-fd62607a6754 Filesystems:[{Device:/dev/vda1 Capacity:21456445440}] DiskMap:map[252:0:{Name:dm-0 Major:252 Minor:0 Size:107374182400 Scheduler:none} 252:1:{Name:dm-1 Major:252 Minor:1 Size:10737418240 Scheduler:none} 252:2:{Name:dm-2 Major:252 Minor:2 Size:10737418240 Scheduler:none} 253:0:{Name:vda Major:253 Minor:0 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:fa:16:3e:64:fa:9a Speed:0 Mtu:1500} {Name:eth1 MacAddress:fa:16:3e:01:00:79 Speed:0 Mtu:1500} {Name:flannel.1 MacAddress:d2:9c:ad:29:df:c5 Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:4294434816 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown} I0923 07:14:33.363915 23743 manager.go:164] Version: {KernelVersion:3.10.0-229.11.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) 
DockerVersion:1.8.1.el7 CadvisorVersion:0.16.0} panic: runtime error: invalid memory address or nil pointer dereference [signal 0xb code=0x1 addr=0x0 pc=0x8559cd] goroutine 1 [running]: k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).Instances(0x0, 0x0, 0x0, 0xe) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack.go:167 +0x8ed k8s.io/kubernetes/cmd/kubelet/app.RunKubelet(0xc820144900, 0x0, 0x0, 0x0) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:628 +0x13c k8s.io/kubernetes/cmd/kubelet/app.(*KubeletServer).Run(0xc8202c2000, 0xc820144900, 0x0, 0x0) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:420 +0x84b main.main() /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:46 +0xab goroutine 17 [syscall, locked to thread]: runtime.goexit() /usr/lib/golang/src/runtime/asm_amd64.s:1696 +0x1 goroutine 5 [chan receive]: github.com/golang/glog.(*loggingT).flushDaemon(0x1dc7000) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x67 created by github.com/golang/glog.init.1 /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x297 goroutine 37 [runnable]: syscall.Syscall6(0x36, 0x4, 0x29, 0x1a, 0xc820035a7c, 0x4, 0x0, 0x0, 0x1a, 0x0) /usr/lib/golang/src/syscall/asm_linux_amd64.s:44 +0x5 syscall.setsockopt(0x4, 0x29, 0x1a, 0xc820035a7c, 0x4, 0x0, 0x0) /usr/lib/golang/src/syscall/zsyscall_linux_amd64.go:1655 +0x73 syscall.SetsockoptInt(0x4, 0x29, 0x1a, 0x0, 0x0, 0x0) /usr/lib/golang/src/syscall/syscall_unix.go:267 +0x61 net.setDefaultSockopts(0x4, 0xa, 0x1, 0x0, 0x0, 0x0) /usr/lib/golang/src/net/sockopt_linux.go:17 +0x7f net.socket(0x135f188, 0x3, 0xa, 0x1, 0x0, 0x0, 0x7fe031c3cd50, 0xc8205e43f0, 0x0, 0x0, ...) /usr/lib/golang/src/net/sock_posix.go:42 +0xcb net.internetSocket(0x135f188, 0x3, 0x7fe031c3cd50, 0xc8205e43f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1, ...) 
/usr/lib/golang/src/net/ipsock_posix.go:160 +0x141 net.ListenTCP(0x135f188, 0x3, 0xc8205e43f0, 0x7fe02fbf6bc0, 0x0, 0x0) /usr/lib/golang/src/net/tcpsock_posix.go:324 +0x19b net.Listen(0x135f188, 0x3, 0xc8205ee100, 0x5, 0x0, 0x0, 0x0, 0x0) /usr/lib/golang/src/net/dial.go:393 +0x462 net/http.(*Server).ListenAndServe(0xc82007c660, 0x0, 0x0) /usr/lib/golang/src/net/http/server.go:1827 +0x8e k8s.io/kubernetes/pkg/kubelet/cadvisor.(*cadvisorClient).exportHTTP.func1(0xc82007c660, 0x1062) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cadvisor/cadvisor_linux.go:100 +0x7b created by k8s.io/kubernetes/pkg/kubelet/cadvisor.(*cadvisorClient).exportHTTP /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/cadvisor/cadvisor_linux.go:106 +0x3b6 goroutine 6 [syscall]: os/signal.loop() /usr/lib/golang/src/os/signal/signal_unix.go:22 +0x18 created by os/signal.init.1 /usr/lib/golang/src/os/signal/signal_unix.go:28 +0x37 goroutine 27 [sleep]: time.Sleep(0x12a05f200) /usr/lib/golang/src/runtime/time.go:59 +0xf9 k8s.io/kubernetes/pkg/util.Until(0x164e780, 0x12a05f200, 0xc82007cc00) /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/util/util.go:117 +0x61 created by k8s.io/kubernetes/pkg/util.InitLogs /builddir/build/BUILD/kubernetes-196f58b9cb25a2222c7f9aacd624737910b03acb/_output/local/go/src/k8s.io/kubernetes/pkg/util/logs.go:49 +0xba Update 2 The second problem was a certificate issue.
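Putting the recognized fields from the answer above into a minimal config, combined with the kubelet flags already used in the question (the URL and credential values are placeholders):

cat > /etc/kubernetes/cloud_config <<EOF
[Global]
auth-url=https://api.example.com:5000/v2.0
username=kubecindertest
password=secret
region=RegionOne
tenant-id=6568768756a7886767e676f7efe76fe7
EOF

# Any key outside the recognized set makes the provider fail to register,
# so keep the file down to exactly these fields.
kubelet --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config \
        --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests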
Perhaps you are looking for David Rensin's small book on Kubernetes? It is listed with other Google research papers, although it is more of a brief introduction/overview than a whitepaper (like the Borg one). It still seems to be available as a promotion from OpenShift. It is certainly worth reading and won't take long, but for getting a Kubernetes cluster running, you'll still want to dive deep into the end-user docs at some point. Good luck!
###Instruction: Did Google publish a whitepaper on Kubernetes, of a similar style as Borg? I am aware of end-user documentation and it would likely explain a lot what I am looking for, but I find whitepapers easier to read than end-user docs. It's also easier to convert to dead-tree format and read in a single afternoon. ### Response: Perhaps you are looking for David Rensin's small book on Kubernetes? It is listed with other Google research papers, although it is more of an brief introduction/overview and less a whitepaper (like the Borg one). It seems to be available still as a promotion from OpenShift. It is certainly worth reading and won't take long, but for getting a Kubernetes cluster running, you'll still want to dive deep into the end-user docs at some point. Good luck!
Did Google publish a whitepaper on Kubernetes, of a similar style as Borg? I am aware of end-user documentation and it would likely explain a lot what I am looking for, but I find whitepapers easier to read than end-user docs. It's also easier to convert to dead-tree format and read in a single afternoon.
Don't set variables in the command field; take a look at the env field instead.
###Instruction: I need to set an environment variable in a Kubernetes slave which is a CoreOS system. I have tried using export and declare but it keeps reading each argument as a separate command. ### Response: Don't set variables in the command field; take a look at the env field instead.
I need to set an environment variable in a Kubernetes slave which is a CoreOS system. I have tried using export and declare but it keeps reading each argument as a separate command.
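A sketch of what the answer above means: declaring the variable in the container spec's env field instead of trying to export it in the command. The pod name, variable and value are placeholders:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo MY_SETTING is $MY_SETTING && sleep 3600"]
    env:
    - name: MY_SETTING          # shows up as an ordinary environment variable in the container
      value: "some-value"
EOF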
For DNS to work, the kubelet needs to be passed the flags --cluster_dns=<dns-service-ip> and --cluster_domain=cluster.local at startup. These flags aren't included in the set of flags passed to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. To fix this, you can modify the script to add these two flags to the kubelet, and then when you create the DNS service you need to make sure that you set the same IP address that you passed to the --cluster_dns flag as the portalIP (clusterIP) field of the service spec. For any other information, you can look it up in the cluster DNS addon documentation.
###Instruction: core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes Server: 10.100.0.10 Address 1: 10.100.0.10 nslookup: can't resolve 'kubernetes' core@core-1-94 ~ $ kubectl get svc --namespace=kube-system NAME LABELS SELECTOR IP(S) PORT(S) kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.100.0.10 53/UDP 53/TCP kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui 10.100.110.236 80/TCP core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.100.0.10:53 Server: 10.100.0.10 Address 1: 10.100.0.10 nslookup: can't resolve 'kubernetes' core@core-1-94 ~ $ kubectl get endpoints --namespace=kube-system NAME ENDPOINTS kube-dns 10.244.31.16:53,10.244.31.16:53 kube-ui 10.244.3.2:8080 core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.244.31.16:53 Server: 10.244.31.16 Address 1: 10.244.31.16 Name: kubernetes Address 1: 10.100.0.1 I think the service of kube-dns is Not available. the skydns-svc.yaml : apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.100.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP Who can help ? ### Response: For DNS to work, the kubelet needs to be passed the flags --cluster_dns= and --cluster_domain=cluster.local at startup. This flag isn't included in the set of flags passed to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. To fix this, you can modify the script to add these two flags to the kubelet and then when you create a DNS service, you need to make sure that you set the same ip address that you passed to the --cluster_dns flag as the portalIP field of the service spec like this. For any other information, you can look it.
core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes Server: 10.100.0.10 Address 1: 10.100.0.10 nslookup: can't resolve 'kubernetes' core@core-1-94 ~ $ kubectl get svc --namespace=kube-system NAME LABELS SELECTOR IP(S) PORT(S) kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.100.0.10 53/UDP 53/TCP kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui 10.100.110.236 80/TCP core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.100.0.10:53 Server: 10.100.0.10 Address 1: 10.100.0.10 nslookup: can't resolve 'kubernetes' core@core-1-94 ~ $ kubectl get endpoints --namespace=kube-system NAME ENDPOINTS kube-dns 10.244.31.16:53,10.244.31.16:53 kube-ui 10.244.3.2:8080 core@core-1-94 ~ $ kubectl exec -it busybox -- nslookup kubernetes 10.244.31.16:53 Server: 10.244.31.16 Address 1: 10.244.31.16 Name: kubernetes Address 1: 10.100.0.1 I think the service of kube-dns is Not available. the skydns-svc.yaml : apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.100.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP Who can help ?
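Concretely, using the clusterIP from the skydns-svc.yaml in the question, the kubelet flags the answer above refers to would be added on every node roughly like this (the rest of the kubelet command line stays as in your existing unit file):

kubelet --cluster_dns=10.100.0.10 --cluster_domain=cluster.local   # plus your existing flags

# Once the kubelets are restarted, name resolution from a pod should work:
kubectl exec -it busybox -- nslookup kubernetes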
The API server is the only component that speaks directly to etcd. When starting the API server, you can pass a --etcd-config=/path/to/client/config parameter instead of just pointing to an unsecured etcd server with --etcd-server In that config file, you would specify the etcd servers, along with the client credentials (cert/key) to use to connect. The format is that expected by the go-etcd client NewClientFromFile function, which expects a JSON serialization of the Client struct, specifically the config and cluster keys
###Instruction: Running CoreOS, etcd is not secured by default. To secure it I can use TLS, which adds a level of complexity I'm willing to work on. Now, is Kubernetes able to use a TLS secured etcd cluster? In the config for the kubelet and various pods, Kubernetes passes the etcd endpoints as parameters, so they require etcd and will need the certificates to talk to it if it is secured. If Kubernetes supports TLS connection to etcd, how does it get configured? Thanks ### Response: The API server is the only component that speaks directly to etcd. When starting the API server, you can pass a --etcd-config=/path/to/client/config parameter instead of just pointing to an unsecured etcd server with --etcd-server In that config file, you would specify the etcd servers, along with the client credentials (cert/key) to use to connect. The format is that expected by the go-etcd client NewClientFromFile function, which expects a JSON serialization of the Client struct, specifically the config and cluster keys
Running CoreOS, etcd is not secured by default. To secure it I can use TLS, which adds a level of complexity I'm willing to work on. Now, is Kubernetes able to use a TLS secured etcd cluster? In the config for the kubelet and various pods, Kubernetes passes the etcd endpoints as parameters, so they require etcd and will need the certificates to talk to it if it is secured. If Kubernetes supports TLS connection to etcd, how does it get configured? Thanks
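A sketch of what such a client config file might look like. The JSON field names are an assumption based on the go-etcd Client/Config/Cluster structs the answer mentions, so verify them against the client source vendored in your Kubernetes build; the endpoints and paths are placeholders:

cat > /etc/kubernetes/etcd.config <<EOF
{
  "cluster": {
    "machines": ["https://etcd-1.example.com:2379", "https://etcd-2.example.com:2379"]
  },
  "config": {
    "certFile": "/etc/ssl/etcd/client.crt",
    "keyFile": "/etc/ssl/etcd/client.key",
    "caCertFiles": ["/etc/ssl/etcd/ca.crt"]
  }
}
EOF

# Point the apiserver at the file instead of passing plain --etcd-servers endpoints.
kube-apiserver --etcd-config=/etc/kubernetes/etcd.config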
So, apparently it was the wrong server.key used by the controller manager. According to the Kubernetes documentation, the token is generated by the controller manager. While I was copying all my configuration, I had to change the IP address and had to change the certificate due to this as well. But the controller-manager started with the "old" certificate and, after the change, created wrong keys because of the stale server.key.
###Instruction: I have one successfully working cluster, with out any problems, I've tried to make a copy of it. It's working basically, except one issue - token generated by apiserver is not valid with error message: 6 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error I have api server started up with following parameters: kube-apiserver --address=0.0.0.0 --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range=10.116.0.0/23 --client_ca_file=/srv/kubernetes/ca.crt --basic_auth_file=/srv/kubernetes/basic_auth.csv --authorization-mode=AlwaysAllow --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2 --cors_allowed_origins=.* --etcd-config=/etc/kubernetes/etcd.config --allow_privileged=False I think I'm missing something but can't find what exactly, any help will be appreciated! ### Response: So, apparently it was wrong server.key used by controller manager. According to kubernetes documentation token is generated by controller manager. While I was doing copy of the all my configuration, I had to change ipaddress and had to change certificate due to this as well. But controller-manager started with "old" certificate and after the change created wrong keys because server.key.
I have one successfully working cluster, with out any problems, I've tried to make a copy of it. It's working basically, except one issue - token generated by apiserver is not valid with error message: 6 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error I have api server started up with following parameters: kube-apiserver --address=0.0.0.0 --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range=10.116.0.0/23 --client_ca_file=/srv/kubernetes/ca.crt --basic_auth_file=/srv/kubernetes/basic_auth.csv --authorization-mode=AlwaysAllow --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2 --cors_allowed_origins=.* --etcd-config=/etc/kubernetes/etcd.config --allow_privileged=False I think I'm missing something but can't find what exactly, any help will be appreciated!
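A sketch of keeping the keys consistent, which is what resolved it here. The flag names are the 1.0-era ones and the key path is taken from the question; double-check both against your binaries:

# The apiserver verifies service-account tokens with this key...
kube-apiserver --service-account-key-file=/srv/kubernetes/server.key   # plus the flags shown above

# ...so the controller-manager must sign them with the same key.
kube-controller-manager --service-account-private-key-file=/srv/kubernetes/server.key

# Tokens minted with the old key stay invalid; delete their secrets so they are regenerated.
kubectl get secrets --all-namespaces
kubectl delete secret default-token-abc12 --namespace=default   # hypothetical stale token secret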
I recently gave a presentation at the London Mesos User Group, addressing exactly this question: Can I have Mesos and Kubernetes? with a demo available here. Bottom-line: it is all about hybrid workloads. If you can and want to go all in concerning Docker, that is, you containerize all your workloads, then nothing (or little) speaks against Kubernetes standalone.
###Instruction: So Google Kubernetes is a Docker container cluster management solution that helps deploy, scale, schedule and maintenance sets of containers simple. Apache Mesos is a work scheduler that figures out where jobs (e.g. deploying a Docker container, some batch processing job, etc.) should run. Kubernetes can be "raw" and run stadalone, or there is the Kubernetes on Mesos option where Kubernetes is deployed as a Mesos framework and runs on top of your Mesos cluster. I'm wondering what benefits, if any, there are to running Kubernetes on Mesos, as opposed to "raw" Kubernetes? Mesos can do non-Docker work: it can run Java/Python/Ruby/etc. apps on normal Linux VMs. It can run scripts. It's just a general work engine. So at first glance, it seems that the only advantage Kubernetes on Mesos offers is that it places your Kubernetes cluster onto Mesos, where you can run other jobs that aren't Dockerized. I'm sure I'm not seeing the "forest through the trees" here, but if I already know I'm going to use Docker and Kubernetes, why might I consider running Kubernetes on Mesos? ### Response: I recently gave a presentation at the London Mesos User Group, addressing exactly this question: Can I have Mesos and Kubernetes? with a demo available here. Bottom-line: it is all about hybrid workloads. If you can and want to go all in concerning Docker, that is, you containerize all your workloads, then nothing (or little) speaks against Kubernetes standalone.
So Google Kubernetes is a Docker container cluster management solution that makes deploying, scaling, scheduling and maintaining sets of containers simple. Apache Mesos is a work scheduler that figures out where jobs (e.g. deploying a Docker container, some batch processing job, etc.) should run. Kubernetes can be "raw" and run standalone, or there is the Kubernetes on Mesos option where Kubernetes is deployed as a Mesos framework and runs on top of your Mesos cluster. I'm wondering what benefits, if any, there are to running Kubernetes on Mesos, as opposed to "raw" Kubernetes? Mesos can do non-Docker work: it can run Java/Python/Ruby/etc. apps on normal Linux VMs. It can run scripts. It's just a general work engine. So at first glance, it seems that the only advantage Kubernetes on Mesos offers is that it places your Kubernetes cluster onto Mesos, where you can run other jobs that aren't Dockerized. I'm sure I'm not seeing the "forest through the trees" here, but if I already know I'm going to use Docker and Kubernetes, why might I consider running Kubernetes on Mesos?
Please see: https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/cassandra/README.md for a tutorial of how to run Cassandra on Kubernetes. You will also need to add in best practices like snapshotting the databases to persistent storage and other such things. (and why do you say that Kubernetes is expensive? Google Container Engine only charges the cost of the VMs for small clusters, and you can deploy open source Kubernetes yourself for free)
###Instruction: I am using containers to run both app servers & Cassandra nodes. When starting the app server container, I need to specify which Cassandra node(1..n) to connect to. How would you divide the workload? One app container to one or more Cassandra nodes(How many). One or more app container to one Cassandra node(How many). Many to many(How many). This is for a production setup, 100 % uptime. Each data load from cassandra is small but many. I should be scalable so I can put in more app containers - like in Kubernetes they have pods. Pods is a set of nodes that make up granules of the application. Therefore I am looking for the best possible group of containers(Cassandra and App server) that will scale Info: Kubernetes is a to expensive setup in the beginning. And while waiting for Docker Swarm to be in release state I will do this manually. Any insight is welcome? Regards ### Response: Please see: https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/cassandra/README.md for a tutorial of how to run Cassandra on Kubernetes. You will also need to add in best practices like snapshotting the databases to persistent storage and other such things. (and why do you say that Kubernetes is expensive? Google Container Engine only charges the cost of the VMs for small clusters, and you can deploy open source Kubernetes yourself for free)
I am using containers to run both app servers & Cassandra nodes. When starting the app server container, I need to specify which Cassandra node (1..n) to connect to. How would you divide the workload? One app container to one or more Cassandra nodes (how many)? One or more app containers to one Cassandra node (how many)? Many to many (how many)? This is for a production setup with 100% uptime. Each data load from Cassandra is small, but there are many of them. It should be scalable so I can put in more app containers - like in Kubernetes they have pods; a pod is a set of containers that makes up a granule of the application. Therefore I am looking for the best possible group of containers (Cassandra and app server) that will scale. Info: Kubernetes is a too expensive setup in the beginning, and while waiting for Docker Swarm to be in release state I will do this manually. Any insight is welcome? Regards
This Pod is crash-looping. You can tell because the number of restarts is greater than zero. kubectl describe pods <pod-name> Should give further details to help debug. As will kubectl logs <pod-name>
###Instruction: I am trying to configure php phabricator example from kubernetes but after creating the replication controller. POD is not showing in ready state ever. It shows in below state: NAME READY STATUS RESTARTS AGE phabricator-controller-z0nk3 0/1 CrashLoopBackOff 5 2m Below is the controller yaml: { "kind": "ReplicationController", "apiVersion": "v1", "metadata": { "name": "phabricator-controller", "labels": { "name": "phabricator" } }, "spec": { "replicas": 1, "selector": { "name": "phabricator" }, "template": { "metadata": { "labels": { "name": "phabricator" } }, "spec": { "containers": [ { "name": "phabricator", "image": "fgrzadkowski/example-php-phabricator", "ports": [ { "name": "http-server", "containerPort": 80 } ] } ] } } } } Can someone please suggest me how to fix this? ### Response: This Pod is crash-looping. You can tell because the number of restarts is greater than zero. kubectl describe pods <pod-name> Should give further details to help debug. As will kubectl logs <pod-name>
I am trying to configure php phabricator example from kubernetes but after creating the replication controller. POD is not showing in ready state ever. It shows in below state: NAME READY STATUS RESTARTS AGE phabricator-controller-z0nk3 0/1 CrashLoopBackOff 5 2m Below is the controller yaml: { "kind": "ReplicationController", "apiVersion": "v1", "metadata": { "name": "phabricator-controller", "labels": { "name": "phabricator" } }, "spec": { "replicas": 1, "selector": { "name": "phabricator" }, "template": { "metadata": { "labels": { "name": "phabricator" } }, "spec": { "containers": [ { "name": "phabricator", "image": "fgrzadkowski/example-php-phabricator", "ports": [ { "name": "http-server", "containerPort": 80 } ] } ] } } } } Can someone please suggest me how to fix this?
Docker's standard networking configuration picks a container subnet for you out of its chosen defaults. As long as it doesn't conflict with any interfaces on your host, Docker is okay with it. Then, Docker inserts an iptables MASQUERADE rule that allows containers to talk to the external world using the host's default interface. Kubernetes' 3 requirements are violated by the fact that subnets are chosen only based on addresses in use on the host, which forces the requirement to NAT all container traffic using the MASQUERADE rule. Consider the following 3-host Docker setup (a little contrived to highlight things): Host 1: eth0: 10.1.2.3 docker0: 172.17.42.1/16 container-A: 172.17.42.2 Host 2: eth0: 10.1.2.4 docker0: 172.17.42.1/16 container-B: 172.17.42.2 Host 3: eth0: 172.17.42.2 docker0: 172.18.42.1 Let's say container-B wants to access an HTTP service on port 80 of container-A. You can get docker to expose container-A's port 80 somewhere on Host 1. Then container-B might make a request to 10.1.2.3:43210. This will be received on container-A's port 80, but will look like it came from some random port on 10.1.2.4 because of the NAT on the way out of Host 2. This violates the all containers communicate without NAT and the container sees same IP as others requirements. Try to access container-A's service directly from Host 2 and you get your nodes can communicate with containers without NAT violation. Now if either of those containers want to talk to Host 3, they're SOL (just a general argument for being careful with the auto-assigned docker0 subnets). Kubernetes approach on GCE/AWS/Flannel/... is to assign each host VM a subnet carved out of a flat private network. No subnets overlap with VM addresses or with each other. This lets containers and VMs communicate NATlessly.
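If you want to see where that source-address rewriting comes from on a stock Docker host, it is visible in the NAT table. A rough illustration — the exact rule text and subnet depend on your Docker version and configuration:

sudo iptables -t nat -L POSTROUTING -n
# a default install typically contains a rule along the lines of:
#   MASQUERADE  all  --  172.17.0.0/16  !172.17.0.0/16
# i.e. anything leaving the docker0 subnet is rewritten to the host's own address,
# which is why container-B's request appears to come from Host 2's IP.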
###Instruction: I'm reading the Kubernetes "Getting Started from Scratch" Guide and have arrived at the dreaded Network Section, where they state: Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): * all containers can communicate with all other containers without NAT * all nodes can communicate with all containers (and vice-versa) without NAT * the IP that a container sees itself as is the same IP that others see it as My first source of confusion is: How is this different than the "standard" Docker model? How is Docker different w.r.t. those 3 Kubernetes requirements? The article then goes on to summarize how GCE achieves these requirements: For the Google Compute Engine cluster configuration scripts, we use advanced routing to assign each VM a subnet (default is /24 - 254 IPs). Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric. This is in addition to the "main" IP address assigned to the VM, which is NAT'ed for outbound internet access. A linux bridge (called cbr0) is configured to exist on that subnet, and is passed to docker's --bridge flag. My question here is: Which requirement(s) from the 3 above does this paragraph address? More importantly, how does it achieve the requirement(s)? I guess I just don't understand how 1-subnet-per-VM accomplishes: container-container communication, node-container communication, and static IP. And, as a bonus/stretch concern: Why doesn't Marathon suffer from the same networking concerns as what Kubernetes is addressing here? ### Response: Docker's standard networking configuration picks a container subnet for you out of its chosen defaults. As long as it doesn't conflict with any interfaces on your host, Docker is okay with it. Then, Docker inserts an iptables MASQUERADE rule that allows containers to talk to the external world using the host's default interface. Kubernetes' 3 requirements are violated by the fact that subnets are chosen only based on addresses in use on the host, which forces the requirement to NAT all container traffic using the MASQUERADE rule. Consider the following 3-host Docker setup (a little contrived to highlight things): Host 1: eth0: 10.1.2.3 docker0: 172.17.42.1/16 container-A: 172.17.42.2 Host 2: eth0: 10.1.2.4 docker0: 172.17.42.1/16 container-B: 172.17.42.2 Host 3: eth0: 172.17.42.2 docker0: 172.18.42.1 Let's say container-B wants to access an HTTP service on port 80 of container-A. You can get docker to expose container-A's port 80 somewhere on Host 1. Then container-B might make a request to 10.1.2.3:43210. This will be received on container-A's port 80, but will look like it came from some random port on 10.1.2.4 because of the NAT on the way out of Host 2. This violates the all containers communicate without NAT and the container sees same IP as others requirements. Try to access container-A's service directly from Host 2 and you get your nodes can communicate with containers without NAT violation. Now if either of those containers want to talk to Host 3, they're SOL (just a general argument for being careful with the auto-assigned docker0 subnets). Kubernetes approach on GCE/AWS/Flannel/... is to assign each host VM a subnet carved out of a flat private network. No subnets overlap with VM addresses or with each other. This lets containers and VMs communicate NATlessly.
I'm reading the Kubernetes "Getting Started from Scratch" Guide and have arrived at the dreaded Network Section, where they state: Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): * all containers can communicate with all other containers without NAT * all nodes can communicate with all containers (and vice-versa) without NAT * the IP that a container sees itself as is the same IP that others see it as My first source of confusion is: How is this different than the "standard" Docker model? How is Docker different w.r.t. those 3 Kubernetes requirements? The article then goes on to summarize how GCE achieves these requirements: For the Google Compute Engine cluster configuration scripts, we use advanced routing to assign each VM a subnet (default is /24 - 254 IPs). Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric. This is in addition to the "main" IP address assigned to the VM, which is NAT'ed for outbound internet access. A linux bridge (called cbr0) is configured to exist on that subnet, and is passed to docker's --bridge flag. My question here is: Which requirement(s) from the 3 above does this paragraph address? More importantly, how does it achieve the requirement(s)? I guess I just don't understand how 1-subnet-per-VM accomplishes: container-container communication, node-container communication, and static IP. And, as a bonus/stretch concern: Why doesn't Marathon suffer from the same networking concerns as what Kubernetes is addressing here?
This is the same question as GCE + K8S - Accessing referral IP address and How to read client IP addresses from HTTP requests behind Kubernetes services?. The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes. Services go through kube_proxy, which answers the client connection and proxies through to the backend (your web server). The address that you'd see would be the IP of whichever kube-proxy the connection went through. Work is being actively done on a solution that uses iptables as the proxy, which will cause your server to see the real client IP.
###Instruction: I have to get the real ip from the request in my business.actually I got the 10.2.100.1 every time at my test environment. any way to do this ? ### Response: This is the same question as GCE + K8S - Accessing referral IP address and How to read client IP addresses from HTTP requests behind Kubernetes services?. The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes. Services go through kube_proxy, which answers the client connection and proxies through to the backend (your web server). The address that you'd see would be the IP of whichever kube-proxy the connection went through. Work is being actively done on a solution that uses iptables as the proxy, which will cause your server to see the real client IP.
I have to get the real ip from the request in my business.actually I got the 10.2.100.1 every time at my test environment. any way to do this ?
If you just want to cancel the rolling update, remove the failed pods and try again later, I have found that it is best to stop the update loop with CTRL+c and then delete the replication controller corresponding to the new app that is failing. ^C kubectl delete replicationcontrollers your-app-v1.2.3
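One caveat: kubectl rolling-update scales the old controller down while it scales the new one up, so after aborting you may also need to scale the old controller back to its original size. A sketch, with the controller name, label and replica count as placeholders (assuming your kubectl has the scale subcommand):

kubectl scale rc your-app-v1.2.2 --replicas=3
kubectl get pods -l name=your-app   # check that the old pods are back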
###Instruction: We are using Heat + Kubernetes (V0.19) to manage our apps. When do rolling update, sometimes container staring will always fail on a node but kubelet on the node will always retry but always fail. So the updating will hang there which is not the behavior we expected. I found that using "kubectl delete node" to remove the node can avoid pods scheduled to that node. But in our env, the node to be deleted may have running pods on it. So my question is: After using "kubectl delete node" to remove the node, will the pods on that node still worked correctly ? ### Response: If you just want to cancel the rolling update, remove the failed pods and try again later, I have found that it is best to stop the update loop with CTRL+c and then delete the replication controller corresponding to the new app that is failing. ^C kubectl delete replicationcontrollers your-app-v1.2.3
We are using Heat + Kubernetes (V0.19) to manage our apps. When do rolling update, sometimes container staring will always fail on a node but kubelet on the node will always retry but always fail. So the updating will hang there which is not the behavior we expected. I found that using "kubectl delete node" to remove the node can avoid pods scheduled to that node. But in our env, the node to be deleted may have running pods on it. So my question is: After using "kubectl delete node" to remove the node, will the pods on that node still worked correctly ?
digging further and asking on the github project, I was directed towards this post that I hope answers the question: https://groups.google.com/forum/#!topic/google-containers/bTfEcRQ3N28/discussion In short the config file should look like: { "cluster": { "machines": [ "https://kube-master.internal:2379", "https://kube-minion1.internal:2379", "https://kube-minion2.internal:2379" ] }, "config": { "certFile": "/etc/etcd/kube-master.internal.pem", "keyFile": "/etc/etcd/kube-master.internal.key", "caCertFiles": [ "/etc/etcd/kubecluster-ca.pem" ], "consistency": "STRONG_CONSISTENCY" } } Haven't tried yet but will asap.
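For completeness, the file then gets handed to the API server instead of the usual --etcd_servers list. A sketch of the wiring, assuming the --etcd_config flag spelling kube-apiserver accepted at the time (the path is a placeholder for wherever you put the file above):

kube-apiserver \
  --etcd_config=/etc/kubernetes/etcd.json \   # placeholder path to the config file shown above
  ...                                         # remaining apiserver flags unchanged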
###Instruction: Running CoreOS, etcd is not secured by default. To secure it I can use TLS, which adds a level of complexity I'm willing to work on. Now, is Kubernetes able to use a TLS secured etcd cluster? In the config for the kubelet and various pods, Kubernetes passes the etcd endpoints as parameters, so they require etcd and will need the certificates to talk to it if it is secured. If Kubernetes supports TLS connection to etcd, how does it get configured? Thanks ### Response: digging further and asking on the github project, I was directed towards this post that I hope answers the question: https://groups.google.com/forum/#!topic/google-containers/bTfEcRQ3N28/discussion In short the config file should look like: { "cluster": { "machines": [ "https://kube-master.internal:2379", "https://kube-minion1.internal:2379", "https://kube-minion2.internal:2379" ] }, "config": { "certFile": "/etc/etcd/kube-master.internal.pem", "keyFile": "/etc/etcd/kube-master.internal.key", "caCertFiles": [ "/etc/etcd/kubecluster-ca.pem" ], "consistency": "STRONG_CONSISTENCY" } } Haven't tried yet but will asap.
Running CoreOS, etcd is not secured by default. To secure it I can use TLS, which adds a level of complexity I'm willing to work on. Now, is Kubernetes able to use a TLS secured etcd cluster? In the config for the kubelet and various pods, Kubernetes passes the etcd endpoints as parameters, so they require etcd and will need the certificates to talk to it if it is secured. If Kubernetes supports TLS connection to etcd, how does it get configured? Thanks
This is well described in the motivation section of Kubernetes's networking design doc. Essentially, relying on port binding requires dynamic port mapping to avoid conflicts between different containers wanting the same port (e.g. a lot of applications will want port 80). While dynamic port mapping can be made to work, it also causes a lot of problems, as outlined in the doc.
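A quick way to see the conflict being described, on any single Docker host (the image name is just an example):

docker run -d -p 80:80 nginx     # first container binds host port 80
docker run -d -p 80:80 nginx     # second one fails: port is already allocated
docker run -d -P nginx           # dynamic mapping avoids the clash...
docker port <container-id> 80    # ...but now the host port is random and has to be discovered and advertised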
###Instruction: Kubernetes has a pretty complicated networking model that appears to be predicated upon circumventing a critical flaw with Docker's default networking: By default Docker containers cannot be contacted directly from the outside world, because their IP addresses are local/private to the subnet they're on. To circumvent this, Kubernetes has a very complex network model that, amongst other things, requires you to carve out your own flat IP space that is then shared by all hosts and containers (pods), thus giving each pod its own public IP. But I ask: isn't this already addressed by Docker port binding? If not, then what about port binding is still lacking, that requires Kubernetes to use the networking solution that they use? ### Response: This is well described in the motivation section of Kubernetes's networking design doc. Essentially, relying on port binding requires dynamic port mapping to avoid conflicts between different containers wanting the same port (e.g. a lot of applications will want port 80). While dynamic port mapping can be made to work, it also causes a lot of problems, as outlined in the doc.
Kubernetes has a pretty complicated networking model that appears to be predicated upon circumventing a critical flaw with Docker's default networking: By default Docker containers cannot be contacted directly from the outside world, because their IP addresses are local/private to the subnet they're on. To circumvent this, Kubernetes has a very complex network model that, amongst other things, requires you to carve out your own flat IP space that is then shared by all hosts and containers (pods), thus giving each pod its own public IP. But I ask: isn't this already addressed by Docker port binding? If not, then what about port binding is still lacking, that requires Kubernetes to use the networking solution that they use?
To log into a container, use the following command: kubectl exec -it POD bash Replace POD with the name of the pod in which the container is running. This works for pods with a single container. For pods with multiple containers, use the -c option to specify the container.
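For the multi-container case, a quick sketch — pod and container names are placeholders, and sh is the fallback when the image has no bash:

kubectl describe pod <pod-name>                    # lists the container names in the pod
kubectl exec -it <pod-name> -c <container-name> bash
kubectl exec -it <pod-name> -c <container-name> sh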
###Instruction: This is my first time using google cloud platform and feeling a bit lost. I pushed my custom ubuntu image (created from a Dockerfile) to GCE and then created a container cluster from the console. What I got is: a container cluster and a vm instances cluster. I can ssh into these instances and see them blank, I mean it does not have what I built from the Dockerfile on my system. I copied some source code into it. I did docker images in the whatever instance I sshed but could not see the image I pushed. I am totally lost now. My basic question is How do I see contents of my container or how do I ssh into it. I tried following some tutorials but no success. For ex: I tried this command gcloud compute instances list and got some big string (gke-cluster-1-53f024ac-node-s5zc) as instance name. When I tried this command gcloud compute ssh gke-cluster-1-53f024ac-node-s5zc I got error that instance not found. Please help. Thanks ### Response: To log into a container, use the following command: kubectl exec -it POD bash Replace POD with the name of the pod in which the container is running. This works for pods with a single container. For pods with multiple containers, use the -c option to specify the container.
This is my first time using google cloud platform and feeling a bit lost. I pushed my custom ubuntu image (created from a Dockerfile) to GCE and then created a container cluster from the console. What I got is: a container cluster and a vm instances cluster. I can ssh into these instances and see them blank, I mean it does not have what I built from the Dockerfile on my system. I copied some source code into it. I did docker images in the whatever instance I sshed but could not see the image I pushed. I am totally lost now. My basic question is How do I see contents of my container or how do I ssh into it. I tried following some tutorials but no success. For ex: I tried this command gcloud compute instances list and got some big string (gke-cluster-1-53f024ac-node-s5zc) as instance name. When I tried this command gcloud compute ssh gke-cluster-1-53f024ac-node-s5zc I got error that instance not found. Please help. Thanks
apiVersion: v1 kind: ReplicationController metadata: labels: name: api name: api spec: replicas: 1 selector: name: api template: metadata: labels: name: api spec: containers: - env: - name: VARIABLE # declare an env variable name value: "value-of-variable" # here is the value - name: ANOTHER_VARIABLE value: "another-value" image: myregistry/api imagePullPolicy: Always name: api
###Instruction: I need to set environment variable in kubernetes slave which is a coreos system. I have tried using exportand declare but it keeps reading each argument as a separate command ### Response: apiVersion: v1 kind: ReplicationController metadata: labels: name: api name: api spec: replicas: 1 selector: name: api template: metadata: labels: name: api spec: containers: - env: - name: VARIABLE <---- declare an env variable NAME value: "value-of-variable" <--- here is the value - name: ANOTHER_VARIABLE value: "another-value" image: myregistry/api imagePullPolicy: Always name: api
I need to set environment variable in kubernetes slave which is a coreos system. I have tried using exportand declare but it keeps reading each argument as a separate command
Yeah, you can create another nginx container with an nginx.conf that is configured similarly to this: upstream api { # Assuming this nginx container can access 127.0.0.1:3000 server 127.0.0.1:3000; server server2.local.io:3000; } upstream booking { # Assuming this nginx container can access 127.0.0.1:5000 server 127.0.0.1:5000; server server2.local.io:5000; } server { server_name api.local.io; location / { proxy_pass http://api; } } server { server_name booking.local.io; location / { proxy_pass http://booking; } }
###Instruction: I try to serve 2 web applications that should be powered by hhvm. It is easy to build up one docker image that includes nginx and the default.conf. But now where I will get n apps as microservices I want to test them and share the nginx container as I proceed with others like DB e.g. So when nginx is externally accessed with hhvm do I have to provide hhvm on this image too? Or can I refer it to the debian where hhvm is already provided? Then, I could store the nginx.conf with something like this: upstream api.local.io { server 127.0.0.1:3000; } upstream booking.local.io { server 127.0.0.1:5000; } How can I set up a proper nginx container for this? ### Response: Yeah, you can create another nginx container with an nginx.conf that is configured similarly to this: upstream api { # Assuming this nginx container can access 127.0.0.1:5000 server 127.0.0.1:3000; server server2.local.io:3000; } upstream booking { # Assuming this nginx container can access 127.0.0.1:5000 server 127.0.0.1:5000; server server2.local.io:5000; } server { name api.local.io; location / { proxy_pass http://api; } } server { name booking.local.io; location / { proxy_pass http://booking; } }
I try to serve 2 web applications that should be powered by hhvm. It is easy to build up one docker image that includes nginx and the default.conf. But now where I will get n apps as microservices I want to test them and share the nginx container as I proceed with others like DB e.g. So when nginx is externally accessed with hhvm do I have to provide hhvm on this image too? Or can I refer it to the debian where hhvm is already provided? Then, I could store the nginx.conf with something like this: upstream api.local.io { server 127.0.0.1:3000; } upstream booking.local.io { server 127.0.0.1:5000; } How can I set up a proper nginx container for this?
I think events support a fieldSelector for the involved object kind and name. You can also turn the verbosity level on kubectl up to 8 to see network traces and find out exactly which API calls it is making
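A hedged example of both approaches — the namespace and pod name are placeholders, and the field selector values normally need URL-encoding:

# through a local "kubectl proxy" (listening on 8001 by default)
curl "http://localhost:8001/api/v1/namespaces/default/events?fieldSelector=involvedObject.name%3D<pod-name>,involvedObject.kind%3DPod"

# or let kubectl show you the exact query it issues for describe
kubectl --v=8 describe pod <pod-name>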
###Instruction: When I do a kubectl describe <pod>, the bottom section has an "Events" section, displaying Events related to that pod. For example, an event with Reason "failedScheduling", with the message "Failed for reason PodFitsResources and possibly others" How can I query the API to return that list of events? If I call /api/v1/namespaces/<ns>/pods/<pod_name>, it doesn't return any Events. If I try the /api/v1/events endpoint, I can specify a labelSelector parameter, but the name of the pod isn't a label of the Event, though it is in the object.involvedObject.name field. I could request the entire Event stream and filter out the few Events that interest me client-side, but that seems like overkill. kubectl is able to do it, so I figure there must be some way that I'm missing. Thanks. ### Response: I think events support a fieldSelector for the involved object kind and name You can also turn the verbosity level on kubectl up to 8 to see network traces to see what it is doing
When I do a kubectl describe <pod>, the bottom section has an "Events" section, displaying Events related to that pod. For example, an event with Reason "failedScheduling", with the message "Failed for reason PodFitsResources and possibly others" How can I query the API to return that list of events? If I call /api/v1/namespaces/<ns>/pods/<pod_name>, it doesn't return any Events. If I try the /api/v1/events endpoint, I can specify a labelSelector parameter, but the name of the pod isn't a label of the Event, though it is in the object.involvedObject.name field. I could request the entire Event stream and filter out the few Events that interest me client-side, but that seems like overkill. kubectl is able to do it, so I figure there must be some way that I'm missing. Thanks.
A Pod is the basic scheduling unit in Kubernetes. It is the common case that a pod will only have a single container running in it, as most containers can be scheduled independently (i.e. they do not need to be co-located on the same machine). With regards to your example, you could put most containers in individual pods, and use a Replication Controller to horizontally scale the number of replicas of each Pod (and therefore container) as needed. Along with your replication controller, you'll also want a Service to load balance between the replicas. Vertical tiers could be organized using labels on the pods/replication controllers/services, such as tier=message_broker. Edit: The reason it's not a good idea to put your entire stack in a single pod is it limits your flexibility: It forces your entire stack to be scheduled on a single machine, which could make scheduling more difficult if machines lack some of the necessary resources. Individual components cannot be scaled independently (e.g. if you need more frontend replicas to handle traffic, but your DB is only used for a small number of queries) All the containers would need to agree on which ports to use. Each pod has a unique IP, so containers running in separate pods can use the same ports.
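A rough sketch of what one horizontal tier could look like — the names, image and replica count are invented purely for illustration:

apiVersion: v1
kind: ReplicationController
metadata:
  name: message-broker
  labels:
    tier: message_broker
spec:
  replicas: 2                  # scale this tier independently of the others
  selector:
    tier: message_broker
  template:
    metadata:
      labels:
        tier: message_broker
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq        # example image
        ports:
        - containerPort: 5672
---
apiVersion: v1
kind: Service
metadata:
  name: message-broker
spec:
  selector:
    tier: message_broker
  ports:
  - port: 5672

The frontend tier would then reach the brokers through the message-broker service name rather than individual pod IPs.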
###Instruction: So I'm trying to wrap my head around what exactly a typical Kubernetes pod looks like. According to their docs, a pod: "A pod (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context." Later in that same article: "Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located..." OK, so you can organize a single pod as your entire vertical stack (from DB to web app). But apparently that's not typically how it's organized, so I assume that typically a "horizontal" organization is preferred (why??). But to me, horizontal layering/stratification implies that you'll only have one container in a pod, because typically in each tier of service (web, app, cache, db, etc.) you'll have one type of component. Let's take a concrete example. Say we have the following vertical stack of tiers: Web frontend containers; Grails or Spring MVC web/app server Microservices containers; RESTful web services where core business logic lives Message broker (say RabbitMQ) containers Microservice cache (some services have distributed Hazelcast cache clusters sitting between them and their DB/backing store) containers MySQL DB cluster containers MongoDB cluster containers 3rd party RESTful cloud API (say SalesForce or Stripe or something similar) These are fairly typical components in an app stack. If we went against Kubernetes' own advice, and created "vertically-aligned" pods, each pod would consist of 1 type of container for each tier (the web/app server, each microservice, each DB, etc.). But how would a horizontally-aligned pod be organized? What containers would go in which pods? ### Response: A Pod is the basic scheduling unit in Kubernetes. It is the common case that a pod will only have a single container running in it, as most containers can be scheduled independently (i.e. they do not need to be co-located on the same machine). With regards to your example, you could put most containers in individual pods, and use a Replication Controller to horizontally scale the number of replicas of each Pod (and therefore container) as needed. Along with your replication controller, you'll also want a Service to load balance between the replicas. Vertical tiers could be organized using labels on the pods/replication controllers/services, such as tier=message_broker. Edit: The reason it's not a good idea to put your entire stack in a single pod is it limits your flexibility: It forces your entire stack to be scheduled on a single machine, which could make scheduling more difficult if machines lack some of the necessary resources. Individual components cannot be scaled independently (e.g. if you need more frontend replicas to handle traffic, but your DB is only used for a small number of queries) All the containers would need to agree on which ports to use. Each pod has a unique IP, so containers running in separate pods can use the same ports.
So I'm trying to wrap my head around what exactly a typical Kubernetes pod looks like. According to their docs, a pod: "A pod (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context." Later in that same article: "Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located..." OK, so you can organize a single pod as your entire vertical stack (from DB to web app). But apparently that's not typically how it's organized, so I assume that typically a "horizontal" organization is preferred (why??). But to me, horizontal layering/stratification implies that you'll only have one container in a pod, because typically in each tier of service (web, app, cache, db, etc.) you'll have one type of component. Let's take a concrete example. Say we have the following vertical stack of tiers: Web frontend containers; Grails or Spring MVC web/app server Microservices containers; RESTful web services where core business logic lives Message broker (say RabbitMQ) containers Microservice cache (some services have distributed Hazelcast cache clusters sitting between them and their DB/backing store) containers MySQL DB cluster containers MongoDB cluster containers 3rd party RESTful cloud API (say SalesForce or Stripe or something similar) These are fairly typical components in an app stack. If we went against Kubernetes' own advice, and created "vertically-aligned" pods, each pod would consist of 1 type of container for each tier (the web/app server, each microservice, each DB, etc.). But how would a horizontally-aligned pod be organized? What containers would go in which pods?
There are a couple options here, as noted in the doc link you provided. The preferred method is using Service Accounts to access the API: The short description is that your service would read the service-account secrets (token / CA-cert) that are mounted into the pod, then inject the token into the http header and validate the apiserver cert using the CA-cert. This somewhat simplifies the description of service accounts, but the above link can provide more detail. Example using curl and service-account data inside pod: curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes/api/v1/namespaces Another option, mentioned in the link you provided, is to run a side-car container running a "kubectl proxy" in the same pod as your application. A note of clarification: the "kube-proxy" and "kubectl proxy" are not referring to the same thing. The kube-proxy is responsible for routing "service" requests, while kubectl proxy is a cli cmd which opens a local proxy to the Kubernetes API. What is happening under the covers when running kubectl proxy is that the kubectl command already knows how to use the service-account data, so it will extract the token/CA-cert and establish a connection to the API server for you, then expose an interface locally in the pod (which you can use without any auth/TLS). This might be an easier approach as it likely requires no changes to your existing application, short of pointing it to the local kubectl proxy container running in the same pod. One other side-note: I'm not sure of your exact use-case, but generally it would be preferable to use the Service IP / Service DNS name and allow Kubernetes to handle service discovery, rather than extracting the pod IP itself (the pod IP will change if the pod gets scheduled to a different machine).
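For the side-car option, the pod spec would look roughly like this — the proxy image name is purely a placeholder for "any image that ships kubectl", and 8001 is kubectl proxy's default port:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: <your-app-image>               # placeholder; the app can now hit the API at http://localhost:8001/
  - name: kubectl-proxy
    image: <an-image-containing-kubectl>  # placeholder
    command: ["kubectl", "proxy", "--port=8001"]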
###Instruction: As I understand it, kube-proxy runs on every Kubernetes node (it is started on Master and on the Worker nodes) If I understand correctly, it is also the 'recommended' way to access the API (see: https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod) So, since kube-proxy is already running on every node, is the 'recommended' way to start each pod with a new kube-proxy container in it, or is it possible to 'link' somehow to the running kube-proxy container? Originally I was using the URL with $KUBERNETES_SERVICE_HOST and the credentials passed as a Secret, on GKE, calling curl https://$USER:$PASSWORD@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SELECTOR} and parsing the results, but on K8s deployed on a CoreOS cluster I only seem to be able to authenticate through TLS and certs and the linked proxy seems like a better way. So, I'm looking for the most efficient / easiest way to connect to the API from a pod to look up the IP of another pod referred to by a Service. Any suggestion/input? ### Response: There are a couple options here, as noted in the doc link you provided. The preferred method is using Service Accounts to access the API: The short description is that your service would read the service-account secrets (token / CA-cert) that are mounted into the pod, then inject the token into the http header and validate the apiserver cert using the CA-cert. This somewhat simplifies the description of service accounts, but the above link can provide more detail. Example using curl and service-account data inside pod: curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes/api/v1/namespaces Another option, mentioned in the link you provided, is to run a side-car container running a "kubectl proxy" in the same pod as your application. A note of clarification: the "kube-proxy" and "kubectl proxy" are not referring to the same thing. The kube-proxy is responsible for routing "service" requests, kubectl proxy is a cli cmd which opens a local proxy to the Kubernetes API. What is happening under the covers when running kubectl proxy is that the kubectl command already knows how to use the service-account data, so it will extract the token/CA-cert and establish a connection to the API server for you, then expose an interface locally in the pod (which you can use without any auth/TLS). This is might be an easier approach as it likely requires no changes to your existing application, short of pointing it to the local kubectl proxy container running in the same pod. One other side-note: I'm not sure of your exact use-case, but generally it would be preferable to use the Service IP / Service DNS name and allow Kubernetes to handle service discovery, rather than extracting the pod IP itself (the pod IP will change if the pod gets scheduled to a different machine).
As I understand it, kube-proxy runs on every Kubernetes node (it is started on Master and on the Worker nodes) If I understand correctly, it is also the 'recommended' way to access the API (see: https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod) So, since kube-proxy is already running on every node, is the 'recommended' way to start each pod with a new kube-proxy container in it, or is it possible to 'link' somehow to the running kube-proxy container? Originally I was using the URL with $KUBERNETES_SERVICE_HOST and the credentials passed as a Secret, on GKE, calling curl https://$USER:$PASSWORD@${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SELECTOR} and parsing the results, but on K8s deployed on a CoreOS cluster I only seem to be able to authenticate through TLS and certs and the linked proxy seems like a better way. So, I'm looking for the most efficient / easiest way to connect to the API from a pod to look up the IP of another pod referred to by a Service. Any suggestion/input?
There isn't an easy way to customize the fluentd configuration in Google Container Engine (and if you try to customize it, your changes will be lost if a node gets replaced by the instance group manager or during a node upgrade). If you want to run a custom fluentd configuration, you should disable cloud logging on your cluster and then run your own fluentd container on each node with the configuration that you need for your application. Until Daemon Sets are available, the easiest way to run one pod per host is to assign the pod a host port and then create a replication controller with more replicas than you have hosts.
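A sketch of that host-port trick — the image, port and replica count are placeholders, and the hostPort collision is what keeps the scheduler from putting two copies on the same node:

apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-custom
spec:
  replicas: 4                    # at least the number of nodes in the cluster
  selector:
    app: fluentd-custom
  template:
    metadata:
      labels:
        app: fluentd-custom
    spec:
      containers:
      - name: fluentd
        image: <your-fluentd-image-with-custom-config>   # placeholder
        ports:
        - containerPort: 24224
          hostPort: 24224        # two pods with the same hostPort can't share a node
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log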
###Instruction: We are using kubernetes provided by "Google Container Engine" with enabled "Cloud Logging" feature. But we need to configure fluentd for our application (add more information about application what runs in container). I can't find any information how I can add my configs to logging agent provided by google and any way to replace it with my owned container. Does exists any way how I can do this? Thanks! ### Response: There isn't an easy way to customize the fluentd configuration in Google Container Engine (and if you try to customize it, your changes will be lost if a node gets replaced by the instance group manager or during a node upgrade). If you want to run a custom fluentd configuration, you should disable cloud logging on your cluster and then run your own fluentd container on each node with the configuration that you need for your application. Until Daemon Sets are available, the easiest way to run one pod per host is to assign the pod a host port and then create a replication controller with more replicas than you have hosts.
We are using kubernetes provided by "Google Container Engine" with enabled "Cloud Logging" feature. But we need to configure fluentd for our application (add more information about application what runs in container). I can't find any information how I can add my configs to logging agent provided by google and any way to replace it with my owned container. Does exists any way how I can do this? Thanks!
For a general introduction into kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful. On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model . From my experience: What is the Problem with the Docker NAT type of approach? Sometimes you need to configure e.g. into the software all the endpoints of all nodes (172.168.10.1:8080, 172.168.10.2:8080, etc). in kubernetes you can simply configure the IP's of the pods into each others pod, Docker complicates it using NAT indirection. See also Setting up the network for Kubernetes for a nice answer. Comments on your other points: 1. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range. The "internal network" of kubernetes normally uses private IP's, see also slides above, which uses 10.x.x.x as example. I guess confusion comes from some kubernetes texts that refer to "public" as "visible outside of the node", but they do not mean "Internet Public IP Address Range".
###Instruction: So Kubernetes has a pretty novel network model, that I believe is based on what it perceives to be a shortcoming with default Docker networking. While I'm still struggling to understand: (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution and perhaps that will clue me in a little better. Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place. I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel. My basic understanding is: Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range. Create a cbr0 bridge somewhere and make sure its persistent (but on what machine?) Remove/disable the MASQUERADE rule installed by Docker. Some how configure iptables routes (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs. Some other setup is required to make use of load balanced Services and dynamic DNS. Provision 5 VMs: 1 master, 4 minions Install/configure Docker on all 5 VMs Install/configure kubectl, controller-manager, apiserver and etcd to the master, and run them as services/daemons Install/configure kubelet and kube-proxy on each minion and run them as services/daemons This is the best I can collect from 2 full days of research, and they are likely wrong (or misdirected), out of order, and utterly incomplete. I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved. How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured? I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup. ### Response: For a general introduction into kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful. On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model . From my experience: What is the Problem with the Docker NAT type of approach? Sometimes you need to configure e.g. into the software all the endpoints of all nodes (172.168.10.1:8080, 172.168.10.2:8080, etc). in kubernetes you can simply configure the IP's of the pods into each others pod, Docker complicates it using NAT indirection. See also Setting up the network for Kubernetes for a nice answer. Comments on your other points: 1. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range. The "internal network" of kubernetes normally uses private IP's, see also slides above, which uses 10.x.x.x as example. 
I guess confusion comes from some kubernetes texts that refer to "public" as "visible outside of the node", but they do not mean "Internet Public IP Address Range".
So Kubernetes has a pretty novel network model, that I believe is based on what it perceives to be a shortcoming with default Docker networking. While I'm still struggling to understand: (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution and perhaps that will clue me in a little better. Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place. I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel. My basic understanding is: Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range. Create a cbr0 bridge somewhere and make sure its persistent (but on what machine?) Remove/disable the MASQUERADE rule installed by Docker. Some how configure iptables routes (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs. Some other setup is required to make use of load balanced Services and dynamic DNS. Provision 5 VMs: 1 master, 4 minions Install/configure Docker on all 5 VMs Install/configure kubectl, controller-manager, apiserver and etcd to the master, and run them as services/daemons Install/configure kubelet and kube-proxy on each minion and run them as services/daemons This is the best I can collect from 2 full days of research, and they are likely wrong (or misdirected), out of order, and utterly incomplete. I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLAN/Switches/etc. I can get infrastructure involved. How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured? I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.
You can either create a Dockerfile and run: docker build . from the same directory where your Dockerfile is located, or you can run: docker run -i -t <docker-image> bash or (if your container is already running) docker exec -i -t <container-id> bash Once you are in the shell, make all the changes you please. Then run: docker commit <container-id> myimage:0.1 You will have a new docker image locally, myimage:0.1. If you want to push it to a docker repository (dockerhub or your private docker repo) you can run: docker push myimage:0.1
###Instruction: I have successfully shelled into a RUNNING docker container using docker exec -i -t 7be21f1544a5 bash I have made some changes to some json files and wanted to apply these changes to reflect online. I am a beginner and have tried to restart, mount in vain. What strings I have to replace when I mount using docker run? Is there any online sample? CONTAINER ID: 7be21f1544a5 IMAGE: gater/web COMMAND: "/bin/sh -c 'nginx'" CREATED: 4 weeks ago STATUS: Up 44 minutes PORTS: 443/tcp, 172.16.0.1:10010->80/tcp NAMES: web ### Response: You can run either create a Dockefile and run: docker build . from the same directory where your Dockerfile is located. or you can run: docker run -i -t <docker-image> bash or (if your container is already running) docker exec -i -t <container-id> bash once you are in the shell make all the changes you please. Then run: docker commit <container-id> myimage:0.1 You will have a new docker image locally myimage:0.1. If you want to push to a docker repository (dockerhub or your private docker repo) you can run: docker push myimage:0.1
I have successfully shelled into a RUNNING docker container using docker exec -i -t 7be21f1544a5 bash I have made some changes to some json files and wanted to apply these changes to reflect online. I am a beginner and have tried to restart, mount in vain. What strings I have to replace when I mount using docker run? Is there any online sample? CONTAINER ID: 7be21f1544a5 IMAGE: gater/web COMMAND: "/bin/sh -c 'nginx'" CREATED: 4 weeks ago STATUS: Up 44 minutes PORTS: 443/tcp, 172.16.0.1:10010->80/tcp NAMES: web
Ok, after digging around in the Docker code base, I think I have found some similar reports of what you are seeing. The way this error is displayed changed in 1.7, but this thread seems related: https://github.com/docker/docker/issues/14792 This turned me onto this fix, which landed in 1.8: https://github.com/docker/docker/pull/15040 In particular, see this comment: https://github.com/docker/docker/pull/15040#issuecomment-125661037 The comment seems to indicate that this is only a problem for v1 layers, so our Beta support for v2 may work around this issue. You can push to our v2 beta via: gcloud docker --server=beta.gcr.io push beta.gcr.io/project-id-123456/... You can then simply change the reference in your Pod to "beta.gcr.io/..." and it will pull via v2.
###Instruction: I've successfully pushed my Docker container image to gcr.io with the following command: $ gcloud docker push gcr.io/project-id-123456/my-image But when I try to create a new pod I get the following error: $ kubectl run my-image --image=gcr.io/project-id-123456/my-image CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS my-image my-image gcr.io/project-id-123456/my-image run=my-image 1 $ kubectl get pods NAME READY STATUS RESTARTS AGE my-image-of9x7 0/1 Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar exit status 1 unexpected EOF 0 5m It doesn't pull on my local as well: $ docker rmi -f $(docker images -q) # Clear local image cache $ gcloud docker pull gcr.io/project-id-123456/my-image:latest … Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar re-exec error: exit status 1: output: unexpected EOF Can someone please suggest me how to fix this? ### Response: Ok, after digging around in the Docker code base, I think I have found some similar reports of what you are seeing. The way this error is displayed changed in 1.7, but this thread seems related: https://github.com/docker/docker/issues/14792 This turned me onto this fix, which landed in 1.8: https://github.com/docker/docker/pull/15040 In particular, see this comment: https://github.com/docker/docker/pull/15040#issuecomment-125661037 The comment seems to indicate that this is only a problem for v1 layers, so our Beta support for v2 may work around this issue. You can push to our v2 beta via: gcloud docker --server=beta.gcr.io push beta.gcr.io/project-id-123456/... You can then simply change the reference in your Pod to "beta.gcr.io/..." and it will pull via v2.
I've successfully pushed my Docker container image to gcr.io with the following command: $ gcloud docker push gcr.io/project-id-123456/my-image But when I try to create a new pod I get the following error: $ kubectl run my-image --image=gcr.io/project-id-123456/my-image CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS my-image my-image gcr.io/project-id-123456/my-image run=my-image 1 $ kubectl get pods NAME READY STATUS RESTARTS AGE my-image-of9x7 0/1 Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar exit status 1 unexpected EOF 0 5m It doesn't pull on my local as well: $ docker rmi -f $(docker images -q) # Clear local image cache $ gcloud docker pull gcr.io/project-id-123456/my-image:latest … Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar re-exec error: exit status 1: output: unexpected EOF Can someone please suggest me how to fix this?
You can do this with a combination of running kubectl proxy on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me). First, run kubectl proxy. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging). Next, point a client (web browser, curl, etc) at http://localhost:8001/api/v1/proxy/namespaces/<ns>/services/<svc>/, replacing <ns> with the namespace in which your service is configured and <svc> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called data.json you would append that to the end of the request path. This is how the update-demo tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the javascript does (it isn't too complicated).
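Putting the two pieces together, a concrete example — the namespace, service name and data.json path are placeholders for your own service:

kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/my-internal-service/data.json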
###Instruction: Is there any way to access the 'internal' services (those not exposed outside) of the cluster in a secure way from the outside. The goal is simple: I need to debug clients of those services and need to access them, but don't want to expose them outside. On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost; I tried using a SSHD container but that didn't get me very far: the services are not directly on that container so I'm not sure how to get to the next hop on the network since the services are dynamically managing IPs. Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPN for road warrior situation. Is there any solution for this use-case? Thanks for your input. EDIT: I see here: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect that the only way to connect supported right now is HTTP/HTTPS meaning I can proxy HTTP calls but not to any port ### Response: You can do this with a combination of running kubectl proxy on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me). First, run kubectl proxy. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging). Next, point a client (web browser, curl, etc) at http://localhost:8001/api/v1/proxy/namespaces/<ns>/services/<svc>/, replacing <ns> with the namespace in which your service is configured and <svc> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called data.json you would append that to the end of the request path. This is how the update-demo tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the javascript does (it isn't too complicated).
Is there any way to access the 'internal' services (those not exposed outside) of the cluster in a secure way from the outside. The goal is simple: I need to debug clients of those services and need to access them, but don't want to expose them outside. On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost; I tried using a SSHD container but that didn't get me very far: the services are not directly on that container so I'm not sure how to get to the next hop on the network since the services are dynamically managing IPs. Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPN for road warrior situation. Is there any solution for this use-case? Thanks for your input. EDIT: I see here: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect that the only way to connect supported right now is HTTP/HTTPS meaning I can proxy HTTP calls but not to any port
Finally I found out what was causing this problem. The timeout was not defined correctly because go-etcd unmarshals the json timeout value into a time.Duration, which uses nanoseconds as its base unit. So for a value of 1s, 1000000000 should be written. Following the example above: { "cluster": { "machines": [ "https://my-public-hostname:2379" ] }, "config": { "certFile": "/etc/ssl/etcd/client.pem", "keyFile": "/etc/ssl/etcd/client.key.pem", "caCertFiles": [ "/etc/ssl/etcd/ca.pem" ], "timeout": 5000000000, "consistency": "WEAK" } }
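A quick Go check of the unit conversion, independent of etcd itself — time.Duration's underlying unit is the nanosecond:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(time.Duration(5000000000)) // prints "5s"
	fmt.Println(5 * time.Second)           // the same duration written with the constant
}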
###Instruction: I'm trying to bring kubernetes api server up using etcd config (kubernetes uses go-etcd which has a method to read all parameters from a configuration file): { "cluster": { "machines": [ "https://my-public-hostname:2379" ] }, "config": { "certFile": "/etc/ssl/etcd/client.pem", "keyFile": "/etc/ssl/etcd/client.key.pem", "caCertFiles": [ "/etc/ssl/etcd/ca.pem" ], "timeout": 5, "consistency": "WEAK" } } But fails in kube-apiserver because it cannot reach etcd successfully. I think this is because it tries to sync the cluster... but I don't know. I have created a (etcd) cluster using internal ips for advertise and client addresses except for the listen-client-urls which is set to 0.0.0.0/0. Also, the whole cluster is behind a load balancer which is accessible through my-public-hostname. Inside the container (because i'm using hyperkube), etcdctl won't work unless I set the '--no-sync' parameter. If i use etcdctl without that parameter it suspiciously fails like kube-apiserver does. But I wasn't able to check the piece of code in kubernetes which does the cluster syncrhonization... Any ideas? Thanks in advance. EDIT: It seems to be an error related to the current etcd client in kubernetes (https://github.com/coreos/go-etcd), which is not the newest one (https://github.com/coreos/etcd/client). I tested this empirically and "etcd/client" works but "go-etcd" doesn't, you can check this test here: https://github.com/glerchundi/etcd-go-clients-test. It's worth noting that there is an ongoing work to migrate go-etcd to etcd/client in kubernetes: https://github.com/kubernetes/kubernetes/issues/11962. Can anyone from the Kubernetes team confirm this? APPENDIX 1 I'm trying to run kubernetes in CoreOS and: flannel works, locksmithd works, fleet works (they access to etcd using the very same etcd client credentials) so it's probably something related to how kubernetes accesses to the etcd endpoint. 
APPENDIX 2 (these commands are executed inside the hyperkube container, concretely this one: gcr.io/google_containers/hyperkube:v1.0.6) etcdctl without --no-sync fails outputting this: root@98b2524464f1:/# etcdctl --cert-file="/etc/ssl/etcd/client.pem" --key-file="/etc/ssl/etcd/client.key.pem" --ca-file="/etc/ssl/etcd/ca.pem" --peers="http//my-public-hostname:2379" ls / Error: 501: All the given peers are not reachable (failed to propose on members [https://10.1.0.1:2379 https://10.1.0.0:2379 https://10.1.0.2:2379] twice [last error: Get https://10.1.0.0:2379/v2/keys/?quorum=false&recursive=false&sorted=false: dial tcp 10.1.0.0:2379: i/o timeout]) [0] And kube-apiserver with this: root@98b2524464f1:/# /hyperkube \ apiserver \ --bind-address=0.0.0.0 \ --etcd_config=/etc/kubernetes/ssl/etcd.json \ --allow-privileged=true \ --service-cluster-ip-range=10.3.0.0/24 \ --secure_port=443 \ --advertise-address=10.0.0.2 \ --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \ --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key.pem \ --client-ca-file=/etc/kubernetes/ssl/ca.pem \ --service-account-key-file=/etc/kubernetes/ssl/apiserver.key.pem F1002 09:47:29.348527 384 controller.go:80] Unable to perform initial IP allocation check: unable to refresh the service IP block: 501: All the given peers are not reachable (failed to propose on members [https://my-public-hostname:2379] twice [last error: Get https://my-public-hostname:2379/v2/keys/registry/ranges/serviceips?quorum=false&recursive=false&sorted=false: dial tcp: i/o timeout]) [0] APPENDIX 3 etcd #0: etcd2: name: etcd0 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.0:2379 initial-advertise-peer-urls: http://10.1.0.0:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.0:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem etcd #1: etcd2: name: etcd1 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.1:2379 initial-advertise-peer-urls: http://10.1.0.1:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.1:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem etcd #2: etcd2: name: etcd2 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.2:2379 initial-advertise-peer-urls: http://10.1.0.2:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.2:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem ### Response: Finally I find out what was causing this problem. Timeout was not defined correctly because go-etcd unmarshalls json timeout value into time.Duration which uses nanoseconds as a base unit. So that for a value of 1s, 1000000000 should be written. 
Following the example above: { "cluster": { "machines": [ "https://my-public-hostname:2379" ] }, "config": { "certFile": "/etc/ssl/etcd/client.pem", "keyFile": "/etc/ssl/etcd/client.key.pem", "caCertFiles": [ "/etc/ssl/etcd/ca.pem" ], "timeout": 5000000000, "consistency": "WEAK" } }
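For anyone wondering why a bare 5 misbehaves here: Go's time.Duration is an int64 counted in nanoseconds, and encoding/json writes a plain JSON number straight into such a field. The sketch below is only an illustration (the struct and field name are mine, not go-etcd's actual types), but it shows why "timeout": 5 yields a 5-nanosecond timeout while 5000000000 yields 5 seconds.

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// illustrative only; mimics a config file with a "timeout" key like the one above
type conf struct {
	Timeout time.Duration `json:"timeout"`
}

func main() {
	var tooShort, ok conf
	// a bare JSON number lands in a time.Duration field as nanoseconds
	json.Unmarshal([]byte(`{"timeout": 5}`), &tooShort)
	json.Unmarshal([]byte(`{"timeout": 5000000000}`), &ok)
	fmt.Println(tooShort.Timeout)            // 5ns
	fmt.Println(ok.Timeout)                  // 5s
	fmt.Println(ok.Timeout == 5*time.Second) // true
}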
I'm trying to bring kubernetes api server up using etcd config (kubernetes uses go-etcd which has a method to read all parameters from a configuration file): { "cluster": { "machines": [ "https://my-public-hostname:2379" ] }, "config": { "certFile": "/etc/ssl/etcd/client.pem", "keyFile": "/etc/ssl/etcd/client.key.pem", "caCertFiles": [ "/etc/ssl/etcd/ca.pem" ], "timeout": 5, "consistency": "WEAK" } } But fails in kube-apiserver because it cannot reach etcd successfully. I think this is because it tries to sync the cluster... but I don't know. I have created a (etcd) cluster using internal ips for advertise and client addresses except for the listen-client-urls which is set to 0.0.0.0/0. Also, the whole cluster is behind a load balancer which is accessible through my-public-hostname. Inside the container (because i'm using hyperkube), etcdctl won't work unless I set the '--no-sync' parameter. If i use etcdctl without that parameter it suspiciously fails like kube-apiserver does. But I wasn't able to check the piece of code in kubernetes which does the cluster syncrhonization... Any ideas? Thanks in advance. EDIT: It seems to be an error related to the current etcd client in kubernetes (https://github.com/coreos/go-etcd), which is not the newest one (https://github.com/coreos/etcd/client). I tested this empirically and "etcd/client" works but "go-etcd" doesn't, you can check this test here: https://github.com/glerchundi/etcd-go-clients-test. It's worth noting that there is an ongoing work to migrate go-etcd to etcd/client in kubernetes: https://github.com/kubernetes/kubernetes/issues/11962. Can anyone from the Kubernetes team confirm this? APPENDIX 1 I'm trying to run kubernetes in CoreOS and: flannel works, locksmithd works, fleet works (they access to etcd using the very same etcd client credentials) so it's probably something related to how kubernetes accesses to the etcd endpoint. 
APPENDIX 2 (these commands are executed inside the hyperkube container, concretely this one: gcr.io/google_containers/hyperkube:v1.0.6) etcdctl without --no-sync fails outputting this: root@98b2524464f1:/# etcdctl --cert-file="/etc/ssl/etcd/client.pem" --key-file="/etc/ssl/etcd/client.key.pem" --ca-file="/etc/ssl/etcd/ca.pem" --peers="http//my-public-hostname:2379" ls / Error: 501: All the given peers are not reachable (failed to propose on members [https://10.1.0.1:2379 https://10.1.0.0:2379 https://10.1.0.2:2379] twice [last error: Get https://10.1.0.0:2379/v2/keys/?quorum=false&recursive=false&sorted=false: dial tcp 10.1.0.0:2379: i/o timeout]) [0] And kube-apiserver with this: root@98b2524464f1:/# /hyperkube \ apiserver \ --bind-address=0.0.0.0 \ --etcd_config=/etc/kubernetes/ssl/etcd.json \ --allow-privileged=true \ --service-cluster-ip-range=10.3.0.0/24 \ --secure_port=443 \ --advertise-address=10.0.0.2 \ --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \ --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key.pem \ --client-ca-file=/etc/kubernetes/ssl/ca.pem \ --service-account-key-file=/etc/kubernetes/ssl/apiserver.key.pem F1002 09:47:29.348527 384 controller.go:80] Unable to perform initial IP allocation check: unable to refresh the service IP block: 501: All the given peers are not reachable (failed to propose on members [https://my-public-hostname:2379] twice [last error: Get https://my-public-hostname:2379/v2/keys/registry/ranges/serviceips?quorum=false&recursive=false&sorted=false: dial tcp: i/o timeout]) [0] APPENDIX 3 etcd #0: etcd2: name: etcd0 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.0:2379 initial-advertise-peer-urls: http://10.1.0.0:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.0:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem etcd #1: etcd2: name: etcd1 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.1:2379 initial-advertise-peer-urls: http://10.1.0.1:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.1:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem etcd #2: etcd2: name: etcd2 initial-cluster-state: new initial-cluster: etcd0=http://10.1.0.0:2380,etcd1=http://10.1.0.1:2380,etcd2=http://10.1.0.2:2380 data-dir: /var/lib/etcd2 advertise-client-urls: https://10.1.0.2:2379 initial-advertise-peer-urls: http://10.1.0.2:2380 listen-client-urls: https://0.0.0.0:2379 listen-peer-urls: http://10.1.0.2:2380 client-cert-auth: true trusted-ca-file: /etc/ssl/etcd/certs/ca-chain.cert.pem cert-file: /etc/ssl/etcd/certs/etcd-server.cert.pem key-file: /etc/ssl/etcd/private/etcd-server.key.pem
Use a service with type NodePort; it will bind your service to a fixed port on all your nodes (http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport). Then you have to use a load balancer (e.g. HAProxy) to forward calls to this service. The load balancer configuration can be generated by a script that uses the Kubernetes /services API.
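A rough sketch of what that looks like in practice (all names, labels and addresses below are made-up placeholders): the NodePort service opens the same port on every node, and an external HAProxy simply forwards to the nodes on that port.

apiVersion: v1
kind: Service
metadata:
  name: my-frontend            # placeholder name
spec:
  type: NodePort
  selector:
    name: my-frontend          # must match your pod labels
  ports:
  - port: 80                   # cluster-internal service port
    targetPort: 8080           # container port
    nodePort: 30080            # fixed port opened on every node (default range 30000-32767)

And a matching HAProxy backend, again purely illustrative:

backend kube_my_frontend
    balance roundrobin
    server node1 192.168.0.11:30080 check
    server node2 192.168.0.12:30080 check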
###Instruction: Is there a way to set up automatic external IP allocation for a service, the way Google does with its load balancer? I'm running Kubernetes on bare metal. Thank you. ### Response: Use a service with type NodePort; it will bind your service to a fixed port on all your nodes (http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport). Then you have to use a load balancer (e.g. HAProxy) to forward calls to this service. The load balancer configuration can be generated by a script that uses the Kubernetes /services API.
Is there a way to set up automatic external IP allocation for a service, the way Google does with its load balancer? I'm running Kubernetes on bare metal. Thank you.
The problem was specific to the us-east-1 region. I had to edit the DHCP options set that was created as part of kube-up.sh and add the following: domain-name = ec2.internal. Then it worked like a charm. More information: https://github.com/kubernetes/kubernetes/issues/7962#issuecomment-145324441
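If you prefer the CLI over the console, the fix can be applied roughly as below. As far as I know, DHCP option sets cannot be edited in place, so in practice you create a new set and associate it with the kube-up VPC; the IDs are placeholders and the shorthand syntax may differ slightly between aws-cli versions.

aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name,Values=ec2.internal" \
                          "Key=domain-name-servers,Values=AmazonProvidedDNS"
# note the DhcpOptionsId returned above, then attach it to the VPC
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123abcd --vpc-id vpc-0123abcd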
###Instruction: I experience very strange behavior when I'm trying to set new Kubernetes cluster in AWS. Whenever I try to run kube-up.sh with its default config it works perfectly, The cluster and all its relative components are setting up in less than 10 minutes. The problem occur when I set the "kube-aws-zone" to be us-east-1e (the same as my current VPC) instead of us-west-2a (default). The installation process stuck in a loop with the following message- Waiting 3 minutes for cluster to settle ..................Re-running salt highstate sudo: unable to resolve host ip-172-20-0-9 Waiting for cluster initialization. This will continually check to see if the API for kubernetes is reachable. This might loop forever if there was some uncaught error during start up. I tried to dig a bit in the minions and find this error in /var/log/salt/minion 2015-10-01 14:52:54,912 [salt.loaded.int.module.cmdmod][ERROR ] Command 'runlevel /run/utmp' failed with return code: 1 2015-10-01 14:52:54,913 [salt.loaded.int.module.cmdmod][ERROR ] output: Too many arguments. 2015-10-01 14:53:00,902 [salt.state ][ERROR ] The named service kubelet is not available 2015-10-01 14:53:03,078 [salt.state ][ERROR ] The named service kube-proxy is not available 2015-10-01 14:53:16,677 [salt.state ][ERROR ] An exception occurred in this state: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1533, in call **cdata['kwargs']) File "/usr/lib/python2.7/dist-packages/salt/states/sysctl.py", line 56, in present configured = salt'sysctl.show' File "/usr/lib/python2.7/dist-packages/salt/modules/linux_sysctl.py", line 86, in show for line in salt.utils.fopen(config_file_path): File "/usr/lib/python2.7/dist-packages/salt/utils/init.py", line 1065, in fopen fhandle = open(*args, **kwargs) IOError: [Errno 2] No such file or directory: '/etc/sysctl.d/99-salt.conf' 2015-10-01 14:53:16,707 [salt.loaded.int.module.cmdmod][ERROR ] Command 'runlevel /run/utmp' failed with return code: 1 2015-10-01 14:53:16,708 [salt.loaded.int.module.cmdmod][ERROR ] output: Too many arguments. 2015-10-01 14:53:16,719 [salt.loaded.int.module.cmdmod][ERROR ] Command 'service docker status' failed with return code: 3 2015-10-01 14:53:16,719 [salt.loaded.int.module.cmdmod][ERROR ] output: * docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2015-10-01 14:53:16 UTC; 262ms ago Docs: http://docs.docker.com Process: 15285 ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE) Main PID: 15285 (code=exited, status=1/FAILURE) Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: Unit docker.service entered failed state. Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: docker.service failed. 2015-10-01 14:53:20,259 [salt.state ][ERROR ] The named service kubelet is not available 2015-10-01 14:53:20,687 [salt.state ][ERROR ] The named service kube-proxy is not available I've tried to remove and re-set the IAM roles as suggested to similar issue, but ended up with no luck. Will appreciate any assistance. Thanks, ### Response: The problem was specific for us-east-1 region. I had to edit the dhcp-set that was created as part of kube-up.sh and add the following - domain-name = ec2.internal Then it worked like charm. 
More information - https://github.com/kubernetes/kubernetes/issues/7962#issuecomment-145324441
I experience very strange behavior when I'm trying to set new Kubernetes cluster in AWS. Whenever I try to run kube-up.sh with its default config it works perfectly, The cluster and all its relative components are setting up in less than 10 minutes. The problem occur when I set the "kube-aws-zone" to be us-east-1e (the same as my current VPC) instead of us-west-2a (default). The installation process stuck in a loop with the following message- Waiting 3 minutes for cluster to settle ..................Re-running salt highstate sudo: unable to resolve host ip-172-20-0-9 Waiting for cluster initialization. This will continually check to see if the API for kubernetes is reachable. This might loop forever if there was some uncaught error during start up. I tried to dig a bit in the minions and find this error in /var/log/salt/minion 2015-10-01 14:52:54,912 [salt.loaded.int.module.cmdmod][ERROR ] Command 'runlevel /run/utmp' failed with return code: 1 2015-10-01 14:52:54,913 [salt.loaded.int.module.cmdmod][ERROR ] output: Too many arguments. 2015-10-01 14:53:00,902 [salt.state ][ERROR ] The named service kubelet is not available 2015-10-01 14:53:03,078 [salt.state ][ERROR ] The named service kube-proxy is not available 2015-10-01 14:53:16,677 [salt.state ][ERROR ] An exception occurred in this state: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1533, in call **cdata['kwargs']) File "/usr/lib/python2.7/dist-packages/salt/states/sysctl.py", line 56, in present configured = salt'sysctl.show' File "/usr/lib/python2.7/dist-packages/salt/modules/linux_sysctl.py", line 86, in show for line in salt.utils.fopen(config_file_path): File "/usr/lib/python2.7/dist-packages/salt/utils/init.py", line 1065, in fopen fhandle = open(*args, **kwargs) IOError: [Errno 2] No such file or directory: '/etc/sysctl.d/99-salt.conf' 2015-10-01 14:53:16,707 [salt.loaded.int.module.cmdmod][ERROR ] Command 'runlevel /run/utmp' failed with return code: 1 2015-10-01 14:53:16,708 [salt.loaded.int.module.cmdmod][ERROR ] output: Too many arguments. 2015-10-01 14:53:16,719 [salt.loaded.int.module.cmdmod][ERROR ] Command 'service docker status' failed with return code: 3 2015-10-01 14:53:16,719 [salt.loaded.int.module.cmdmod][ERROR ] output: * docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2015-10-01 14:53:16 UTC; 262ms ago Docs: http://docs.docker.com Process: 15285 ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE) Main PID: 15285 (code=exited, status=1/FAILURE) Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: Unit docker.service entered failed state. Oct 01 14:53:16 ip-172-20-0-90 systemd[1]: docker.service failed. 2015-10-01 14:53:20,259 [salt.state ][ERROR ] The named service kubelet is not available 2015-10-01 14:53:20,687 [salt.state ][ERROR ] The named service kube-proxy is not available I've tried to remove and re-set the IAM roles as suggested to similar issue, but ended up with no luck. Will appreciate any assistance. Thanks,
EDIT: I was wrong. It does seem to work. I had a typo in my script. My script is here - I literally just ran this and it worked properly. https://gist.github.com/thockin/36fea15cc0deb08a768a Original response for posterity: I'm not an expert in the GCE L7 API yet, but I have made it work in Kubernetes. I think there's a bug in the --port-name logic. If you specify --port directly it seems to work for me. I'm filing an issue internally.
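For reference, the workaround mentioned above (passing --port directly instead of relying on the named port) would look roughly like this, re-using the names from the question; this assumes your gcloud version exposes the --port flag on backend-services create, as the answer suggests.

gcloud compute backend-services create "dropwizard-example-external-service" \
    --description "Dropwizard Example Service via NodePort" \
    --http-health-check "dropwizard-example-service" \
    --port 31280 \
    --timeout "30"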
###Instruction: I have an app running on port 31280 exposed via nodePorts from a Kubernetes cluster. Same port is exposed through named-port on the instance group used by cluster for load balancing. While creating a backend-service with HTTP protocol, the service is created at default http port(80) even if I specify custom named-port. Exposed named-port for instance group is: gcloud preview instance-groups --zone='asia-east1-a' list-services gke-dropwizard-service-31ccc162-group [ { "endpoints": [ { "name": "dropwizard-example-service-http", "port": 31280 } ], "fingerprint": "XXXXXXXXXXXXXXXX" } ] Health check is: gcloud compute http-health-checks describe dropwizard-example-service checkIntervalSec: 5 creationTimestamp: '2015-08-11T12:08:16.245-07:00' description: Dropwizard Example Sevice health check ping healthyThreshold: 2 host: '' id: 'XXXXXXX' kind: compute#httpHealthCheck name: dropwizard-example-service port: 31318 requestPath: /ping selfLink: https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service timeoutSec: 3 unhealthyThreshold: 2 Health port(31318) is also exposed via named port in instance group. Commands used to create backend-service is: gcloud compute backend-services create "dropwizard-example-external-service" --description "Dropwizard Example Service via Nodeports from Kubernetes cluster" --http-health-check "dropwizard-example-service" --port-name "dropwizard-example-service-http" --timeout "30" Command used to add instance group to backend-service is: gcloud compute backend-services add-backend "dropwizard-example-external-service" --group "gke-dropwizard-service-31ccc162-group" --zone "asia-east1-a" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "0.8" Finally created backend-service is described as: gcloud compute backend-services describe dropwizard-example-external-service backends: - balancingMode: UTILIZATION capacityScaler: 1.0 description: '' group: https://www.googleapis.com/resourceviews/v1beta2/projects/XXX/zones/asia-east1-a/resourceViews/gke-dropwizard-service-31ccc162-group maxUtilization: 0.8 creationTimestamp: '2015-08-11T13:10:46.608-07:00' description: Dropwizard Example Service via Nodeport from Kubernetes cluster fingerprint: XXXXXXXXXXXX healthChecks: - https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service id: 'XXXX' kind: compute#backendService name: dropwizard-example-external-service port: 80 portName: dropwizard-example-service-http protocol: HTTP selfLink: https://www.googleapis.com/compute/v1/projects/XXXX/global/backendServices/dropwizard-example-external-service timeoutSec: 30 I don't understand which part is wrong. Why backend-service is using port 80? ### Response: EDIT: I was wrong. It does seem to work. I had a typo in my script. My script is here - I literally just ran this and it worked properly. https://gist.github.com/thockin/36fea15cc0deb08a768a Original response for posterity: I'm not an expert in the GCE L7 API yet, but I have made it work in Kubernetes. I think there's a bug in the --port-name logic. If you specify --port directly it seems to work for me. I'm filing an issue internally.
I have an app running on port 31280 exposed via nodePorts from a Kubernetes cluster. Same port is exposed through named-port on the instance group used by cluster for load balancing. While creating a backend-service with HTTP protocol, the service is created at default http port(80) even if I specify custom named-port. Exposed named-port for instance group is: gcloud preview instance-groups --zone='asia-east1-a' list-services gke-dropwizard-service-31ccc162-group [ { "endpoints": [ { "name": "dropwizard-example-service-http", "port": 31280 } ], "fingerprint": "XXXXXXXXXXXXXXXX" } ] Health check is: gcloud compute http-health-checks describe dropwizard-example-service checkIntervalSec: 5 creationTimestamp: '2015-08-11T12:08:16.245-07:00' description: Dropwizard Example Sevice health check ping healthyThreshold: 2 host: '' id: 'XXXXXXX' kind: compute#httpHealthCheck name: dropwizard-example-service port: 31318 requestPath: /ping selfLink: https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service timeoutSec: 3 unhealthyThreshold: 2 Health port(31318) is also exposed via named port in instance group. Commands used to create backend-service is: gcloud compute backend-services create "dropwizard-example-external-service" --description "Dropwizard Example Service via Nodeports from Kubernetes cluster" --http-health-check "dropwizard-example-service" --port-name "dropwizard-example-service-http" --timeout "30" Command used to add instance group to backend-service is: gcloud compute backend-services add-backend "dropwizard-example-external-service" --group "gke-dropwizard-service-31ccc162-group" --zone "asia-east1-a" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "0.8" Finally created backend-service is described as: gcloud compute backend-services describe dropwizard-example-external-service backends: - balancingMode: UTILIZATION capacityScaler: 1.0 description: '' group: https://www.googleapis.com/resourceviews/v1beta2/projects/XXX/zones/asia-east1-a/resourceViews/gke-dropwizard-service-31ccc162-group maxUtilization: 0.8 creationTimestamp: '2015-08-11T13:10:46.608-07:00' description: Dropwizard Example Service via Nodeport from Kubernetes cluster fingerprint: XXXXXXXXXXXX healthChecks: - https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service id: 'XXXX' kind: compute#backendService name: dropwizard-example-external-service port: 80 portName: dropwizard-example-service-http protocol: HTTP selfLink: https://www.googleapis.com/compute/v1/projects/XXXX/global/backendServices/dropwizard-example-external-service timeoutSec: 30 I don't understand which part is wrong. Why backend-service is using port 80?
You can see the pod spec for a replication controller using kubectl get with the --output=json or --output=yaml argument. Something like this should work: kubectl get rc <rc_name> --output=yaml
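For example (the rc and image names are placeholders and the output is trimmed), the PodSpec you are after is the spec.template.spec block of the returned object:

kubectl get rc my-rc --output=yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
spec:
  replicas: 2
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:                      # <-- this is the PodSpec
      containers:
      - name: my-app
        image: myregistry/my-app:v1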
###Instruction: How can I output the PodSpec of a replication controller using kubectl? I tried kubectl describe rc <name> but the output doesn't include the PodSpec. ### Response: You can see the pod spec for a replication controller using kubectl get with the --output=json or --output=yaml argument. Something like this should work: kubectl get rc <rc_name> --output=yaml
How can I output the PodSpec of a replication controller using kubectl? I tried kubectl describe rc <name> but the output doesn't include the PodSpec.
You want to use the Volumes construct, which is quite well explained in the docs and used in a number of examples. If you want to use a host directory, use the hostPath volume type. There's an example of using it here.
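As a minimal sketch (paths, names and image are illustrative), the docker run -v /srv/data:/data style of host mount translates into a hostPath volume plus a volumeMount:

apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: myregistry/api:latest    # placeholder image
    volumeMounts:
    - name: host-data
      mountPath: /data              # path inside the container
  volumes:
  - name: host-data
    hostPath:
      path: /srv/data               # path on the node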
###Instruction: I'm starting a container using kubernetes and I have to send a -v parameter to docker. I'm searching for it about 3 hours and no success. Here you can see my run command: kubectl run api --image=${API_IMAGE} --port=${PORT_SERVICE} --overrides='{"apiVersion": "v1","spec": {"template": {"spec": {"containers": [{"name": "api","image": "'${API_IMAGE}'","env": [{"name": "listen","value": "0.0.0.0"},{"name": "etcdAddr","value": "'${ETCD_ADDR}'"}]}]}}}}' ### Response: You want to use the Volumes construct, which is quite well explained in the docs and used in a number of examples. If you want to use a host directory, use the hostPath volume type. There's an example of using it here.
I'm starting a container using kubernetes and I have to send a -v parameter to docker. I'm searching for it about 3 hours and no success. Here you can see my run command: kubectl run api --image=${API_IMAGE} --port=${PORT_SERVICE} --overrides='{"apiVersion": "v1","spec": {"template": {"spec": {"containers": [{"name": "api","image": "'${API_IMAGE}'","env": [{"name": "listen","value": "0.0.0.0"},{"name": "etcdAddr","value": "'${ETCD_ADDR}'"}]}]}}}}'
This is likely caused by a bug in the Kubernetes EBS management code, and should be fixed by PR #14493. To summarize, not validating the device block cache was causing the kubelet to think the disk was still attached after it had actually been detached.
###Instruction: If I create this pod: apiVersion: v1 kind: Pod metadata: name: dsm-manager spec: containers: - name: dsm-manager image: ****** imagePullPolicy: Always command: - /sbin/init volumeMounts: - mountPath: /srv/project/DSMManager/snapshots name: dsm-snapshot-storage volumes: - name: dsm-snapshot-storage awsElasticBlockStore: volumeID: aws://us-west-2b/vol-43e44482 fsType: ext4 imagePullSecrets: - name: dockerregistrykey It always works, but if I delete it and re-create it it gets stuck with status 'CreatingContainer'. Looking in the events yields: -Unable to mount volumes for pod "dsm-manager_default": Timeout waiting for volume state -Error syncing pod, skipping: Timeout waiting for volume state If I delete the pod and re-create it the same thing happens no matter what I do. However if I attach the volume to some instance and then detach it through the aws cli, then create the pod it works find. I'm wondering if the volume isn't being detached properly. For now I just have this odd work flow of attaching the volume to a random instance then detaching it while updating the container image ### Response: This is likely caused by a bug in the Kubernetes EBS management code, and should be fixed by PR #14493. To summarize, not validating the device block cache was causing the kubelet to think the disk was still attached after it had actually been detached.
If I create this pod: apiVersion: v1 kind: Pod metadata: name: dsm-manager spec: containers: - name: dsm-manager image: ****** imagePullPolicy: Always command: - /sbin/init volumeMounts: - mountPath: /srv/project/DSMManager/snapshots name: dsm-snapshot-storage volumes: - name: dsm-snapshot-storage awsElasticBlockStore: volumeID: aws://us-west-2b/vol-43e44482 fsType: ext4 imagePullSecrets: - name: dockerregistrykey It always works, but if I delete it and re-create it it gets stuck with status 'CreatingContainer'. Looking in the events yields: -Unable to mount volumes for pod "dsm-manager_default": Timeout waiting for volume state -Error syncing pod, skipping: Timeout waiting for volume state If I delete the pod and re-create it the same thing happens no matter what I do. However if I attach the volume to some instance and then detach it through the aws cli, then create the pod it works find. I'm wondering if the volume isn't being detached properly. For now I just have this odd work flow of attaching the volume to a random instance then detaching it while updating the container image
To add to what @rob said, as of docker 1.7, the use of .dockercfg has been deprecated and they now use a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml: First, base64 encode your ~/.docker/config.json: cat ~/.docker/config.json | base64 -w0 Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping. Next, create a yaml file: my-secret.yaml apiVersion: v1 kind: Secret metadata: name: registrypullsecret data: .dockerconfigjson: <base-64-encoded-json-here> type: kubernetes.io/dockerconfigjson - $ kubectl create -f my-secret.yaml && kubectl get secrets NAME TYPE DATA default-token-olob7 kubernetes.io/service-account-token 2 registrypullsecret kubernetes.io/dockerconfigjson 1 Then, in your pod's yaml you need to reference registrypullsecret or create a replication controller: apiVersion: v1 kind: Pod metadata: name: my-private-pod spec: containers: - name: private image: yourusername/privateimage:version imagePullSecrets: - name: registrypullsecret
###Instruction: I have built a 4 node kubernetes cluster running multi-container pods all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like be able to pull them automatically. I have tried running docker login on each server and putting the .dockercfg file in /root and /core I have also done the above with the .docker/config.json I have added secret to the kube master and added imagePullSecrets: name: docker.io to the Pod configuration file. When I create the pod i get the error message Error: image <user/image>:latest not found If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io. ### Response: To add to what @rob said, as of docker 1.7, the use of .dockercfg has been deprecated and they now use a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml: First, base64 encode your ~/.docker/config.json: cat ~/.docker/config.json | base64 -w0 Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping. Next, create a yaml file: my-secret.yaml apiVersion: v1 kind: Secret metadata: name: registrypullsecret data: .dockerconfigjson: <base-64-encoded-json-here> type: kubernetes.io/dockerconfigjson - $ kubectl create -f my-secret.yaml && kubectl get secrets NAME TYPE DATA default-token-olob7 kubernetes.io/service-account-token 2 registrypullsecret kubernetes.io/dockerconfigjson 1 Then, in your pod's yaml you need to reference registrypullsecret or create a replication controller: apiVersion: v1 kind: Pod metadata: name: my-private-pod spec: containers: - name: private image: yourusername/privateimage:version imagePullSecrets: - name: registrypullsecret
I have built a 4-node Kubernetes cluster running multi-container pods, all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically. I have tried running docker login on each server and putting the .dockercfg file in /root and /core. I have also done the above with .docker/config.json. I have added the secret to the kube master and added imagePullSecrets: name: docker.io to the Pod configuration file. When I create the pod I get the error message Error: image <user/image>:latest not found. If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
Service accounts inject authentication credentials into the pod so it can talk to the Kubernetes service (i.e. the apiserver). This is important if you are building an application that needs to inspect the pods/services/controllers running in the cluster in order to behave correctly. For example, the kube2sky container watches services and endpoints to provide DNS within the cluster by connecting to the Kubernetes service.
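Concretely, the injected credentials show up as a volume inside every container of the pod. A quick way to see and use them from a shell inside a container (the curl call is only a sketch; the KUBERNETES_SERVICE_* environment variables are injected into every pod):

# from a shell inside any container of the pod
ls /var/run/secrets/kubernetes.io/serviceaccount/
# typically: token (and usually ca.crt)

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -H "Authorization: Bearer $TOKEN" \
    https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods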
###Instruction: I've read documentation, I've seen exemples, but I don't know why would I add a serviceAccount in my pods ? The 'elasticsearch' exemple from Kubernetes (https://github.com/kubernetes/kubernetes/tree/master/examples/elasticsearch) has a service account 'elasticsearch', what does it grant ? Thank you. ### Response: The service accounts inject authentication credentials into the pod to talk to the Kubernetes service (e.g. the apiserver). This is important if you are building an application that needs to inspect the pods/services/controllers that are running in the cluster to have correct behavior. For example, the kube2sky container watches services and endpoints to provide DNS within the cluster by connecting to the Kubernetes service.
I've read documentation, I've seen exemples, but I don't know why would I add a serviceAccount in my pods ? The 'elasticsearch' exemple from Kubernetes (https://github.com/kubernetes/kubernetes/tree/master/examples/elasticsearch) has a service account 'elasticsearch', what does it grant ? Thank you.
Client Go has offered a function GetLogs for this, which has been answered in How to get logs from kubernetes using Go? Looking at how kubectl implements its commands can be helpful when getting a feel for how to use the client library. In this case, kubectl's implementation of the logs command looks like this: req := client.RESTClient.Get(). Namespace(namespace). Name(podID). Resource("pods"). SubResource("log"). Param("follow", strconv.FormatBool(logOptions.Follow)). Param("container", logOptions.Container). Param("previous", strconv.FormatBool(logOptions.Previous)). Param("timestamps", strconv.FormatBool(logOptions.Timestamps)) if logOptions.SinceSeconds != nil { req.Param("sinceSeconds", strconv.FormatInt(*logOptions.SinceSeconds, 10)) } if logOptions.SinceTime != nil { req.Param("sinceTime", logOptions.SinceTime.Format(time.RFC3339)) } if logOptions.LimitBytes != nil { req.Param("limitBytes", strconv.FormatInt(*logOptions.LimitBytes, 10)) } if logOptions.TailLines != nil { req.Param("tailLines", strconv.FormatInt(*logOptions.TailLines, 10)) } readCloser, err := req.Stream() if err != nil { return err } defer readCloser.Close() _, err = io.Copy(out, readCloser) return err
###Instruction: I am using kube go client with kube api to access kube data. I am currently not finding any api call for logs of a particular pod. kubectl logs pod-name returns the logs for a particular pod. How do I do this using go client? I am using v1.0.6 of kubernetes. I can get the pod by using client.Pods("namespace").Get("pod-name") ### Response: Client Go has offered a function GetLogs for this, which has been answered in How to get logs from kubernetes using Go? Looking at how kubectl implements its commands can be helpful when getting a feel for how to use the client library. In this case, kubectl's implementation of the logs command looks like this: req := client.RESTClient.Get(). Namespace(namespace). Name(podID). Resource("pods"). SubResource("log"). Param("follow", strconv.FormatBool(logOptions.Follow)). Param("container", logOptions.Container). Param("previous", strconv.FormatBool(logOptions.Previous)). Param("timestamps", strconv.FormatBool(logOptions.Timestamps)) if logOptions.SinceSeconds != nil { req.Param("sinceSeconds", strconv.FormatInt(*logOptions.SinceSeconds, 10)) } if logOptions.SinceTime != nil { req.Param("sinceTime", logOptions.SinceTime.Format(time.RFC3339)) } if logOptions.LimitBytes != nil { req.Param("limitBytes", strconv.FormatInt(*logOptions.LimitBytes, 10)) } if logOptions.TailLines != nil { req.Param("tailLines", strconv.FormatInt(*logOptions.TailLines, 10)) } readCloser, err := req.Stream() if err != nil { return err } defer readCloser.Close() _, err = io.Copy(out, readCloser) return err
I am using kube go client with kube api to access kube data. I am currently not finding any api call for logs of a particular pod. kubectl logs pod-name returns the logs for a particular pod. How do I do this using go client? I am using v1.0.6 of kubernetes. I can get the pod by using client.Pods("namespace").Get("pod-name")
We're making watch a query param and removing it from the path (legacy form). You should call /api/v1/events?watch=true. See more discussions here if you're interested.
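A quick way to try it (the port and namespace below are just the kubectl proxy defaults): each line of the streamed response is one JSON watch event of the form {"type": "ADDED", "object": {...}}.

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/events?watch=true"
# or scoped to a namespace:
curl "http://127.0.0.1:8001/api/v1/namespaces/default/events?watch=true"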
###Instruction: I'm interested in watching a stream of Events from Kubernetes, to determine whether a deployment was successful, or if any of the Pods were unable to be scheduled. I could call the endpoint /api/v1/watch/events, or I could call /api/v1/events?watch=true. Is there a difference between those two? I'm confused about the purpose of them. Thanks. ### Response: We're making watch a query param and removing it from the path (legacy form). You should call /api/v1/events?watch=true. See more discussions here if you're interested.
I'm interested in watching a stream of Events from Kubernetes, to determine whether a deployment was successful, or if any of the Pods were unable to be scheduled. I could call the endpoint /api/v1/watch/events, or I could call /api/v1/events?watch=true. Is there a difference between those two? I'm confused about the purpose of them. Thanks.
The error message is from cadvisor, which gathers container stats. You can try curl http://127.0.0.1:4194/validate/ on your node to see if there is any problem preventing cadvisor from functioning correctly. Unlike image garbage collection (GC), container GC does not rely on cadvisor. You may want to check your GC policy settings to see if the parameters are set correctly.
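The GC policy referred to above is configured through kubelet flags; a sketch of the relevant knobs is below (the "..." stands for your existing kubelet arguments, and the values shown are examples, not necessarily your version's defaults).

kubelet ... \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100 \
  --image-gc-high-threshold=90 \
  --image-gc-low-threshold=80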
###Instruction: I have a problem with container garbage collection. When I start the kubelet I get an error: E1006 08:04:08.856100 25155 kubelet.go:682] Image garbage collection failed: unable to find data for container / And garbage collection doesn't work. Kubernetes v1.0.6. How do I fix it? ### Response: The error message is from cadvisor, which gathers container stats. You can try curl http://127.0.0.1:4194/validate/ on your node to see if there is any problem preventing cadvisor from functioning correctly. Unlike image garbage collection (GC), container GC does not rely on cadvisor. You may want to check your GC policy settings to see if the parameters are set correctly.
I have a problem with container garbage collection. When I start the kubelet I get an error: E1006 08:04:08.856100 25155 kubelet.go:682] Image garbage collection failed: unable to find data for container / And garbage collection doesn't work. Kubernetes v1.0.6. How do I fix it?
Both your and jpapejr's solutions seem like they'd work, but using a nodeSelector to force scheduling to a single node has the downside of requiring multiple RCs for a single application and making that application less resilient to a node failure. The idea of a custom scheduler is nice but has the downside of the amount of work to write and maintain that code. I think another possible solution would be to set runtime constraints in your pod spec that might get you near to what you want. Based on this newly merged doc with examples of runtime constraints, I think you could set resources.requests.cpu in the pod spec part of the RC and get close to a CPU-per-pod: apiVersion: v1 kind: Pod metadata: name: myapp spec: containers: - name: myapp image: myregistry/myapp:v1 resources: requests: cpu: "1000m" That doc has other good examples of how requests and limits differ and interact. There may be a combination that gives you what you want and also keeps your application at proper capacity when an individual node fails.
###Instruction: If I create 3 nodes in a cluster, how do I distribute the docker containers evenly across the containers? For example, if I create a cluster of 3 nodes with 8 cpus on each node, I've determined through performance profiling that I get the best performance when I run one container per cpu. gcloud container clusters create mycluster --num-nodes 3 --machine-type n1-standard-8 kubectl run myapp --image=gcr.io/myproject/myapp -r 24 When I ran kubectl above, it put 11 containers on the first node, 10 on the second, and 3 on the third. How to I make it so that it is 8 each? ### Response: Both your and jpapejr's solutions seem like they'd work, but using a nodeSelector to force scheduling to a single node has the downside of requiring multiple RCs for a single application and making that application less resilient to a node failure. The idea of a custom scheduler is nice but has the downside of the amount of work to write and maintain that code. I think another possible solution would be to set runtime constraints in your pod spec that might get you near to what you want. Based on this newly merged doc with examples of runtime contraints, I think you could set resources.requests.cpu in the pod spec part of the RC and get close to a CPU-per-pod: apiVersion: v1 kind: Pod metadata: name: myapp spec: containers: - name: myapp image: myregistry/myapp:v1 resources: requests: cpu: "1000m" That docs has other good examples of how requests and limits differ and interact. There may be a combination that gives you what you want and also keeps your application at proper capacity when an individual node fails.
If I create 3 nodes in a cluster, how do I distribute the Docker containers evenly across the nodes? For example, if I create a cluster of 3 nodes with 8 CPUs on each node, I've determined through performance profiling that I get the best performance when I run one container per CPU. gcloud container clusters create mycluster --num-nodes 3 --machine-type n1-standard-8 kubectl run myapp --image=gcr.io/myproject/myapp -r 24 When I ran kubectl above, it put 11 containers on the first node, 10 on the second, and 3 on the third. How do I make it so that it is 8 on each?
You can see all the API calls kubectl is making by passing --v=8 to any kubectl command
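For example (output trimmed and paraphrased; the exact log lines vary by version), and combined with kubectl proxy you can replay the same calls with plain curl:

kubectl get pods --v=8
# the verbose output includes the raw requests, e.g.
#   GET https://<master>/api/v1/namespaces/default/pods

kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods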
###Instruction: Is Kubernetes accessible via a REST API? I was looking over the Kubernetes API page and it all looks very cryptic / incomplete. They talk about new versions but have not disclosed the API usage or docs anywhere. I just wanted to know if there is a way to access the cluster information in any way other than using the kubectl command. Example usage: What I do now: kubectl get pod --context='my-prod-cluster' What I'd like to do: curl GET /some/parameters/to/get/info ### Response: You can see all the API calls kubectl is making by passing --v=8 to any kubectl command.
Is Kubernetes accessible via a REST API? I was looking over the Kubernetes API page and it all looks very cryptic / incomplete. They talk about new versions but have not disclosed the API usage or docs anywhere. I just wanted to know if there is a way to access the cluster information in any way other than using the kubectl command. Example usage: What I do now: kubectl get pod --context='my-prod-cluster' What I'd like to do: curl GET /some/parameters/to/get/info
Never mind, I got it: the output of gcloud container clusters describe CLUSTER-NAME contains the username and password!
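For example (cluster name, zone and the output excerpt are illustrative, and the field names are from memory of the gcloud output):

gcloud container clusters describe my-cluster --zone us-central1-a
# the relevant part of the output looks roughly like:
#   endpoint: 146.148.x.x
#   masterAuth:
#     password: <generated-password>
#     username: admin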
###Instruction: I follow this docs https://cloud.google.com/container-engine/docs/tutorials/guestbook to created a Guestbook on GCE. And everything works fine. But when I try to access kube-ui. I am totally confused. first ≥ kubectl get svc 14:29 NAME LABELS SELECTOR IP(S) PORT(S) frontend name=frontend name=frontend 10.191.254.236 80/TCP 146.148.x.x kubernetes component=apiserver,provider=kubernetes <none> 10.191.240.1 443/TCP redis-master name=redis-master name=redis-master 10.191.253.125 6379/TCP redis-slave name=redis-slave name=redis-slave 10.191.254.248 6379/TCP I can access my guestbook by 146.148.x.x; But I can't access web-ui through this ip. So I think this is not the master IP of my GCE. Then I execute this: ≥ kubectl get endpoints 14:33 NAME ENDPOINTS frontend 10.188.0.6:80,10.188.0.7:80,10.188.2.4:80 + 2 more... kubernetes 104.197.x.x:443 redis-master 10.188.2.7:6379 redis-slave 10.188.0.8:6379,10.188.2.3:6379 Now, I got another IP, and I try to access kube-ui through this IP. I can get response from the server. But It will need Authtication. How can I get access to the kube-ui? ### Response: Never mind, I got it gcloud container clusters describe CLUSTER-NAME contains username and password!
I follow this docs https://cloud.google.com/container-engine/docs/tutorials/guestbook to created a Guestbook on GCE. And everything works fine. But when I try to access kube-ui. I am totally confused. first ≥ kubectl get svc 14:29 NAME LABELS SELECTOR IP(S) PORT(S) frontend name=frontend name=frontend 10.191.254.236 80/TCP 146.148.x.x kubernetes component=apiserver,provider=kubernetes <none> 10.191.240.1 443/TCP redis-master name=redis-master name=redis-master 10.191.253.125 6379/TCP redis-slave name=redis-slave name=redis-slave 10.191.254.248 6379/TCP I can access my guestbook by 146.148.x.x; But I can't access web-ui through this ip. So I think this is not the master IP of my GCE. Then I execute this: ≥ kubectl get endpoints 14:33 NAME ENDPOINTS frontend 10.188.0.6:80,10.188.0.7:80,10.188.2.4:80 + 2 more... kubernetes 104.197.x.x:443 redis-master 10.188.2.7:6379 redis-slave 10.188.0.8:6379,10.188.2.3:6379 Now, I got another IP, and I try to access kube-ui through this IP. I can get response from the server. But It will need Authtication. How can I get access to the kube-ui?
TL;DR: you can comment out the "certificate-authority-data:" key to get it working. More info: there is an open issue (https://github.com/kubernetes/kubernetes/issues/13830) with the behavior of that flag when a client cert/key is provided. When a client certificate is provided, the insecure flag is ignored.
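In kubeconfig terms the workaround amounts to a cluster entry like the sketch below (based on the config from the question). Alternatively, pointing certificate-authority at the CA that signed the apiserver certificate lets verification succeed without relying on the insecure flag at all.

apiVersion: v1
kind: Config
clusters:
- name: ubuntu
  cluster:
    insecure-skip-tls-verify: true
    # certificate-authority-data: <commented out / removed>
    server: https://master:6443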
###Instruction: I run the kube-apiserver with my self-signed certificate: /opt/bin/kube-apiserver \ --etcd_servers=http://master:2379,http://slave1:2379,http://slave2:2379 \ --logtostderr=false \ --v=4 \ --client-ca-file=/home/kubernetes/ssl/ca.crt \ --service-cluster-ip-range=192.168.3.0/24 \ --tls-cert-file=/home/kubernetes/ssl/server.crt \ --tls-private-key-file=/home/kubernetes/ssl/server.key Then I run the kubelet with the kubeconfig: /opt/bin/kubelet \ --address=0.0.0.0 \ --port=10250 \ --api_servers=https://master:6443 \ --kubeconfig=/home/kubernetes/ssl/config.yaml \ --logtostderr=false \ --v=4 The content of the config.yaml is below: apiVersion: v1 kind: Config clusters: - name: ubuntu cluster: insecure-skip-tls-verify: true server: https://master:6443 contexts: - context: cluster: "ubuntu" user: "ubuntu" name: development current-context: development users: - name: ubuntu user: client-certificate: /home/kubernetes/ssl/ca.crt client-key: /home/kubernetes/ssl/ca.key So, I thought the kubelet will not verify the self-signed certificate of apiserver, but the logs showed: E1009 16:48:51.919749 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.919876 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.923153 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority E1009 16:48:52.821556 100724 event.go:194] Unable to write event: 'Post https://master:6443/api/v1/namespaces/default/events: x509: certificate signed by unknown authority' (may retry after sleeping) E1009 16:48:52.922414 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.922433 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.924432 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority So I am confused with the meaning of the insecure-skip-tls-verify... ### Response: TL;DR. You can comment out "certificate-authority-data:" key to get it working. More info There is an open issue (https://github.com/kubernetes/kubernetes/issues/13830) with the behavior of that flag when a client cert/key is provided. When a client certificate is provided, the insecure flag is ignored.
I run the kube-apiserver with my self-signed certificate: /opt/bin/kube-apiserver \ --etcd_servers=http://master:2379,http://slave1:2379,http://slave2:2379 \ --logtostderr=false \ --v=4 \ --client-ca-file=/home/kubernetes/ssl/ca.crt \ --service-cluster-ip-range=192.168.3.0/24 \ --tls-cert-file=/home/kubernetes/ssl/server.crt \ --tls-private-key-file=/home/kubernetes/ssl/server.key Then I run the kubelet with the kubeconfig: /opt/bin/kubelet \ --address=0.0.0.0 \ --port=10250 \ --api_servers=https://master:6443 \ --kubeconfig=/home/kubernetes/ssl/config.yaml \ --logtostderr=false \ --v=4 The content of the config.yaml is below: apiVersion: v1 kind: Config clusters: - name: ubuntu cluster: insecure-skip-tls-verify: true server: https://master:6443 contexts: - context: cluster: "ubuntu" user: "ubuntu" name: development current-context: development users: - name: ubuntu user: client-certificate: /home/kubernetes/ssl/ca.crt client-key: /home/kubernetes/ssl/ca.key So, I thought the kubelet will not verify the self-signed certificate of apiserver, but the logs showed: E1009 16:48:51.919749 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.919876 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:51.923153 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority E1009 16:48:52.821556 100724 event.go:194] Unable to write event: 'Post https://master:6443/api/v1/namespaces/default/events: x509: certificate signed by unknown authority' (may retry after sleeping) E1009 16:48:52.922414 100724 reflector.go:136] Failed to list *api.Node: Get https://master:6443/api/v1/nodes?fieldSelector=metadata.name%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.922433 100724 reflector.go:136] Failed to list *api.Pod: Get https://master:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dslave1: x509: certificate signed by unknown authority E1009 16:48:52.924432 100724 reflector.go:136] Failed to list *api.Service: Get https://master:6443/api/v1/services: x509: certificate signed by unknown authority So I am confused with the meaning of the insecure-skip-tls-verify...
The kubelet is able to reach OpenStack; however, it is failing to find this node in the list of servers in this tenant and region. Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object The node's hostname is used to identify it in the list of servers provided by the cloud provider. However, it can be overridden using the --hostname_override flag. In your config, I see that you have overridden it with an IP; if this does not match the name of the server as reported by Nova, you are likely to get this error.
###Instruction: Trying to use Cinder volumens on OpenStack as persistent volumes for my pods. As soon as I configure the cloudprovider and restart the kubelet, the kubelet fails to get its external ID from the cloud provider. The OpenStack API is reachable via https using a comodo certificate. the comodo-ca-bundle is installed as trusted ca on the node. Using curl against the API works without --insecure and --cacert options. Using kubernetes 1.1.0-alpha on centos 7 $ sudo journalctl -u kubelet Oct 01 07:40:26 [4196]: I1001 07:40:26.303887 4196 debugging.go:129] Content-Length: 1159 Oct 01 07:40:26 [4196]: I1001 07:40:26.303895 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.303950 4196 request.go:755] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/nodes","resourceVersion":"172921"},"items":[{"metadata":{"name":"192.168.100.80","selfLink":"/api/v1/nodes/192.168.100.80","uid":"b48b4cb9-676f-11e5-8521-fa163ef34ff1","resourceVersion":"172900","creationTimestamp":"2015-09-30T12:35:17Z","labels":{"kubernetes.io/hostname":"192.168.100.80"}},"spec":{"externalID":"192.168.100.80"},"status":{"capacity":{"cpu":"2","memory":"4047500Ki","pods":"40"},"conditions":[{"type":"Ready","status":"Unknown","lastHeartbeatTime":"2015-10-01T07:31:55Z","lastTransitionTime":"2015-10-01T07:32:36Z","reason":"Kubelet stopped posting node status."}],"addresses":[{"type":"LegacyHostIP","address":"192.168.100.80"},{"type":"InternalIP","address":"192.168.100.80"}],"nodeInfo":{"machineID":"dae72fe0cc064eb0b7797f25bfaf69df","systemUUID":"384A8E40-1296-9A42-AD77-445D83BB5888","bootID":"5c7eb3ff-d86f-41f2-b3eb-a39adf313a4f","kernelVersion":"3.10.0-229.14.1.el7.x86_64","osImage":"CentOS Linux 7 (Core)","containerRuntimeVersion":"docker://1.7.1","kubeletVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2","kubeProxyVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2"}}}]} Oct 01 07:40:26 [4196]: I1001 07:40:26.475016 4196 request.go:457] Request Body: {"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":0} Oct 01 07:40:26 [4196]: I1001 07:40:26.475148 4196 debugging.go:101] curl -k -v -XDELETE -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 Oct 01 07:40:26 [4196]: I1001 07:40:26.526794 4196 debugging.go:120] DELETE https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 200 OK in 51 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.526865 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.526897 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.526927 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:26 [4196]: I1001 07:40:26.526957 4196 debugging.go:129] Content-Length: 1977 Oct 01 07:40:26 [4196]: I1001 07:40:26.527056 4196 request.go:755] Response Body: 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"fluentd-elasticsearch-192.168.100.80","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80","uid":"a90941f6-680f-11e5-988c-fa163e94cde4","resourceVersion":"172926","creationTimestamp":"2015-10-01T07:40:17Z","deletionTimestamp":"2015-10-01T07:40:26Z","deletionGracePeriodSeconds":0,"annotations":{"kubernetes.io/config.mirror":"mirror","kubernetes.io/config.seen":"2015-10-01T07:39:43.986114806Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"varlog","hostPath":{"path":"/var/log"}},{"name":"varlibdockercontainers","hostPath":{"path":"/var/lib/docker/containers"}}],"containers":[{"name":"fluentd-elasticsearch","image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","args":["-q"],"resources":{"limits":{"cpu":"100m"},"requests":{"cpu":"100m"}},"volumeMounts":[{"name":"varlog","mountPath":"/var/log"},{"name":"varlibdockercontainers","readOnly":true,"mountPath":"/var/lib/docker/containers"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","nodeName":"192.168.100.80"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}],"hostIP":"192.168.100.80","podIP":"172.16.58.24","startTime":"2015-10-01T07:40:17Z","containerStatuses":[{"name":"fluentd-elasticsearch","state":{"running":{"startedAt":"2015-10-01T07:37:23Z"}},"lastState":{"terminated":{"exitCode":137,"startedAt":"2015-10-01T07:23:00Z","finishedAt":"2015-10-01T07:33:17Z","containerID":"docker://1398736fd9b274132721206ccaf89030af5e8e304118d29286aec6b2529395ee"}},"ready":true,"restartCount":1,"image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","imageID":"docker://03ba3d224c2a80600a0b44a9894ac0de5526d36b810b13924e33ada76f1e7406","containerID":"docker://d9ac24c8a0fbceea7c494bce73d56d6ea5f003f1d1b7b8ad3975fc7e3c7679b4"}]}} Oct 01 07:40:26 [4196]: I1001 07:40:26.528210 4196 status_manager.go:209] Pod "fluentd-elasticsearch-192.168.100.80" fully terminated and removed from etcd Oct 01 07:40:26 [4196]: I1001 07:40:26.675178 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/services Oct 01 07:40:26 [4196]: I1001 07:40:26.710214 4196 debugging.go:120] GET https://localhost:6443/api/v1/services 200 OK in 34 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.710249 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.710260 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.710270 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:26 [4196]: I1001 07:40:26.710436 4196 request.go:755] Response Body: 
{"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"172927"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"28717019-676b-11e5-afb9-fa163e94cde4","resourceVersion":"18","creationTimestamp":"2015-09-30T12:02:44Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"protocol":"TCP","port":443,"targetPort":443}],"clusterIP":"10.100.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"elasticsearch-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/elasticsearch-logging","uid":"833c8df5-676b-11e5-958e-fa163e94cde4","resourceVersion":"153","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"elasticsearch-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Elasticsearch"}},"spec":{"ports":[{"protocol":"TCP","port":9200,"targetPort":"db"}],"selector":{"k8s-app":"elasticsearch-logging"},"clusterIP":"10.100.3.159","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kibana-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kibana-logging","uid":"833043fa-676b-11e5-958e-fa163e94cde4","resourceVersion":"149","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kibana-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Kibana"}},"spec":{"ports":[{"protocol":"TCP","port":5601,"targetPort":"ui"}],"selector":{"k8s-app":"kibana-logging"},"clusterIP":"10.100.136.111","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-dns","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-dns","uid":"8319ba13-676b-11e5-958e-fa163e94cde4","resourceVersion":"146","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-dns Oct 01 07:40:26 [4196]: 
","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeDNS"}},"spec":{"ports":[{"name":"dns","protocol":"UDP","port":53,"targetPort":53},{"name":"dns-tcp","protocol":"TCP","port":53,"targetPort":53}],"selector":{"k8s-app":"kube-dns"},"clusterIP":"10.100.0.10","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-ui","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-ui","uid":"83473271-676b-11e5-958e-fa163e94cde4","resourceVersion":"155","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-ui","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeUI"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"kube-ui"},"clusterIP":"10.100.246.61","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-grafana","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-grafana","uid":"835da09c-676b-11e5-958e-fa163e94cde4","resourceVersion":"157","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Grafana"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.207.92","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-heapster","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-heapster","uid":"83367b90-676b-11e5-958e-fa163e94cde4","resourceVersion":"151","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Heapster"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8082}],"selector":{"k8s-app":"heapster"},"clusterIP":"10.100.119.4","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-influxdb","namespace":"kube-system","selfLink":"/api/v1/names Oct 01 07:40:26 [4196]: paces/kube-system/services/monitoring-influxdb","uid":"836c95b8-676b-11e5-958e-fa163e94cde4","resourceVersion":"159","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"InfluxDB"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8083,"targetPort":8083},{"name":"api","protocol":"TCP","port":8086,"targetPort":8086}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.101.182","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"reverseproxy","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/reverseproxy","uid":"15e65b7d-6776-11e5-a5d0-fa163e94cde4","resourceVersion":"10994","creationTimestamp":"2015-09-30T13:20:57Z","labels":{"k8s-app":"reverseproxy","kubernetes.io/cluster-service":"true","kubernetes.io/name":"reverseproxy"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":80},{"name":"https","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":443}],"selector":{"k8s-app":"reverseproxy"},"clusterIP":"10.100.168.84","type":"NodePort","sessionAffinity":"None"},"status":{"loadBalancer":{}}}]} Oct 01 07:40:26 [4196]: I1001 07:40:26.875150 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" 
https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&resourceVersion=172921 Oct 01 07:40:26 [4196]: I1001 07:40:26.900981 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&resourceVersion=172921 200 OK in 25 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.901009 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.901018 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.001744 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF Oct 01 07:40:27 [4196]: I1001 07:40:27.002685 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received Oct 01 07:40:27 [4196]: W1001 07:40:27.002716 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Node ended with: very short watch Oct 01 07:40:27 [4196]: I1001 07:40:27.075065 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/watch/services?resourceVersion=172927 Oct 01 07:40:27 [4196]: I1001 07:40:27.101642 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/services?resourceVersion=172927 200 OK in 26 milliseconds Oct 01 07:40:27 [4196]: I1001 07:40:27.101689 4196 debugging.go:126] Response Headers: Oct 01 07:40:27 [4196]: I1001 07:40:27.101705 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:27 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.104168 4196 openstack.go:164] openstack.Instances() called Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors Oct 01 07:40:27 [4196]: I1001 07:40:27.133519 4196 openstack.go:202] Claiming to support Instances Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object Oct 01 07:40:27 [4196]: I1001 07:40:27.202978 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF Oct 01 07:40:27 [4196]: I1001 07:40:27.203110 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received Oct 01 07:40:27 [4196]: W1001 07:40:27.203136 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Service ended with: very short watch Oct 01 07:40:27 [4196]: I1001 07:40:27.275208 4196 debugging.go:101] curl -k -v -XGET -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 Oct 01 07:40:27 [4196]: I1001 07:40:27.308434 4196 debugging.go:120] GET https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 200 OK in 33 milliseconds Oct 01 07:40:27 [4196]: I1001 07:40:27.308464 4196 debugging.go:126] Response Headers: Oct 01 07:40:27 [4196]: I1001 07:40:27.308475 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:27 [4196]: I1001 07:40:27.308484 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:27 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.308491 4196 debugging.go:129] Content-Length: 113 Oct 01 07:40:27 [4196]: I1001 07:40:27.308524 4196 request.go:755] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/pods","resourceVersion":"172941"},"items":[]} Oct 01 07:40:27 [4196]: I1001 07:40:27.308719 4196 config.go:252] Setting pods for source api Oct 01 07:40:27 [4196]: I1001 07:40:27.308753 4196 kubelet.go:1921] SyncLoop (REMOVE): "fluentd-elasticsearch-192.168.100.80_kube-system" Oct 01 07:40:27 [4196]: I1001 07:40:27.308931 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlog Oct 01 07:40:27 [4196]: I1001 07:40:27.308960 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlibdockercontainers Oct 01 07:40:27 [4196]: I1001 07:40:27.308977 4196 kubelet.go:2531] Generating status for "fluentd-elasticsearch-192.168.100.80_kube-system" $ kubectl version Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", GitCommit:"196f58b9cb25a2222c7f9aacd624737910b03acb", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", GitCommit: "196f58b9cb25a2222c7f9aacd624737910b03acb", GitTreeState:"clean"} $ cat /etc/os-release NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7" $ cat /etc/kubernetes/kubelet ### # kubernetes kubelet (node) config # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on # KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname_override=192.168.100.80" # location of the api-server KUBELET_API_SERVER="--api_servers=https://localhost:6443" # Add your own! KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --v=9 --cloud-config=/etc/kubernetes/cloud_config --cloud-provider=openstack --machine-id-file=/etc/machine-id" $ cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*******.de:5000/v2.0 username=username password=password region=RegionOne tenant-id=4ee7b21351d94f2b96d363efe131b833 ### Response: Kubelet is able to reach Openstack, however it is failing to find this node in the list of servers, in this tenant, and in this region. Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object Node's hostname is used to identify it from the list of servers provided by cloud provider. However, it can be overridden using --hostname_overide flag. In your config, I see that you have overridden it with an ip, if this does not match the name of the server as reported by Nova, you are likely to get this error.
Trying to use Cinder volumens on OpenStack as persistent volumes for my pods. As soon as I configure the cloudprovider and restart the kubelet, the kubelet fails to get its external ID from the cloud provider. The OpenStack API is reachable via https using a comodo certificate. the comodo-ca-bundle is installed as trusted ca on the node. Using curl against the API works without --insecure and --cacert options. Using kubernetes 1.1.0-alpha on centos 7 $ sudo journalctl -u kubelet Oct 01 07:40:26 [4196]: I1001 07:40:26.303887 4196 debugging.go:129] Content-Length: 1159 Oct 01 07:40:26 [4196]: I1001 07:40:26.303895 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.303950 4196 request.go:755] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/nodes","resourceVersion":"172921"},"items":[{"metadata":{"name":"192.168.100.80","selfLink":"/api/v1/nodes/192.168.100.80","uid":"b48b4cb9-676f-11e5-8521-fa163ef34ff1","resourceVersion":"172900","creationTimestamp":"2015-09-30T12:35:17Z","labels":{"kubernetes.io/hostname":"192.168.100.80"}},"spec":{"externalID":"192.168.100.80"},"status":{"capacity":{"cpu":"2","memory":"4047500Ki","pods":"40"},"conditions":[{"type":"Ready","status":"Unknown","lastHeartbeatTime":"2015-10-01T07:31:55Z","lastTransitionTime":"2015-10-01T07:32:36Z","reason":"Kubelet stopped posting node status."}],"addresses":[{"type":"LegacyHostIP","address":"192.168.100.80"},{"type":"InternalIP","address":"192.168.100.80"}],"nodeInfo":{"machineID":"dae72fe0cc064eb0b7797f25bfaf69df","systemUUID":"384A8E40-1296-9A42-AD77-445D83BB5888","bootID":"5c7eb3ff-d86f-41f2-b3eb-a39adf313a4f","kernelVersion":"3.10.0-229.14.1.el7.x86_64","osImage":"CentOS Linux 7 (Core)","containerRuntimeVersion":"docker://1.7.1","kubeletVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2","kubeProxyVersion":"v1.1.0-alpha.1.390+196f58b9cb25a2"}}}]} Oct 01 07:40:26 [4196]: I1001 07:40:26.475016 4196 request.go:457] Request Body: {"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":0} Oct 01 07:40:26 [4196]: I1001 07:40:26.475148 4196 debugging.go:101] curl -k -v -XDELETE -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 Oct 01 07:40:26 [4196]: I1001 07:40:26.526794 4196 debugging.go:120] DELETE https://localhost:6443/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80 200 OK in 51 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.526865 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.526897 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.526927 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:26 [4196]: I1001 07:40:26.526957 4196 debugging.go:129] Content-Length: 1977 Oct 01 07:40:26 [4196]: I1001 07:40:26.527056 4196 request.go:755] Response Body: 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"fluentd-elasticsearch-192.168.100.80","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/fluentd-elasticsearch-192.168.100.80","uid":"a90941f6-680f-11e5-988c-fa163e94cde4","resourceVersion":"172926","creationTimestamp":"2015-10-01T07:40:17Z","deletionTimestamp":"2015-10-01T07:40:26Z","deletionGracePeriodSeconds":0,"annotations":{"kubernetes.io/config.mirror":"mirror","kubernetes.io/config.seen":"2015-10-01T07:39:43.986114806Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"varlog","hostPath":{"path":"/var/log"}},{"name":"varlibdockercontainers","hostPath":{"path":"/var/lib/docker/containers"}}],"containers":[{"name":"fluentd-elasticsearch","image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","args":["-q"],"resources":{"limits":{"cpu":"100m"},"requests":{"cpu":"100m"}},"volumeMounts":[{"name":"varlog","mountPath":"/var/log"},{"name":"varlibdockercontainers","readOnly":true,"mountPath":"/var/lib/docker/containers"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","nodeName":"192.168.100.80"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}],"hostIP":"192.168.100.80","podIP":"172.16.58.24","startTime":"2015-10-01T07:40:17Z","containerStatuses":[{"name":"fluentd-elasticsearch","state":{"running":{"startedAt":"2015-10-01T07:37:23Z"}},"lastState":{"terminated":{"exitCode":137,"startedAt":"2015-10-01T07:23:00Z","finishedAt":"2015-10-01T07:33:17Z","containerID":"docker://1398736fd9b274132721206ccaf89030af5e8e304118d29286aec6b2529395ee"}},"ready":true,"restartCount":1,"image":"gcr.io/google_containers/fluentd-elasticsearch:1.11","imageID":"docker://03ba3d224c2a80600a0b44a9894ac0de5526d36b810b13924e33ada76f1e7406","containerID":"docker://d9ac24c8a0fbceea7c494bce73d56d6ea5f003f1d1b7b8ad3975fc7e3c7679b4"}]}} Oct 01 07:40:26 [4196]: I1001 07:40:26.528210 4196 status_manager.go:209] Pod "fluentd-elasticsearch-192.168.100.80" fully terminated and removed from etcd Oct 01 07:40:26 [4196]: I1001 07:40:26.675178 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/services Oct 01 07:40:26 [4196]: I1001 07:40:26.710214 4196 debugging.go:120] GET https://localhost:6443/api/v1/services 200 OK in 34 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.710249 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.710260 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:26 [4196]: I1001 07:40:26.710270 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:26 [4196]: I1001 07:40:26.710436 4196 request.go:755] Response Body: 
{"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"172927"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"28717019-676b-11e5-afb9-fa163e94cde4","resourceVersion":"18","creationTimestamp":"2015-09-30T12:02:44Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"protocol":"TCP","port":443,"targetPort":443}],"clusterIP":"10.100.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"elasticsearch-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/elasticsearch-logging","uid":"833c8df5-676b-11e5-958e-fa163e94cde4","resourceVersion":"153","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"elasticsearch-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Elasticsearch"}},"spec":{"ports":[{"protocol":"TCP","port":9200,"targetPort":"db"}],"selector":{"k8s-app":"elasticsearch-logging"},"clusterIP":"10.100.3.159","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kibana-logging","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kibana-logging","uid":"833043fa-676b-11e5-958e-fa163e94cde4","resourceVersion":"149","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kibana-logging","kubernetes.io/cluster-service":"true","kubernetes.io/name":"Kibana"}},"spec":{"ports":[{"protocol":"TCP","port":5601,"targetPort":"ui"}],"selector":{"k8s-app":"kibana-logging"},"clusterIP":"10.100.136.111","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-dns","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-dns","uid":"8319ba13-676b-11e5-958e-fa163e94cde4","resourceVersion":"146","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-dns Oct 01 07:40:26 [4196]: 
","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeDNS"}},"spec":{"ports":[{"name":"dns","protocol":"UDP","port":53,"targetPort":53},{"name":"dns-tcp","protocol":"TCP","port":53,"targetPort":53}],"selector":{"k8s-app":"kube-dns"},"clusterIP":"10.100.0.10","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"kube-ui","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/kube-ui","uid":"83473271-676b-11e5-958e-fa163e94cde4","resourceVersion":"155","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"k8s-app":"kube-ui","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeUI"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"kube-ui"},"clusterIP":"10.100.246.61","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-grafana","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-grafana","uid":"835da09c-676b-11e5-958e-fa163e94cde4","resourceVersion":"157","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Grafana"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8080}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.207.92","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-heapster","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/monitoring-heapster","uid":"83367b90-676b-11e5-958e-fa163e94cde4","resourceVersion":"151","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Heapster"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":8082}],"selector":{"k8s-app":"heapster"},"clusterIP":"10.100.119.4","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"monitoring-influxdb","namespace":"kube-system","selfLink":"/api/v1/names Oct 01 07:40:26 [4196]: paces/kube-system/services/monitoring-influxdb","uid":"836c95b8-676b-11e5-958e-fa163e94cde4","resourceVersion":"159","creationTimestamp":"2015-09-30T12:05:16Z","labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"InfluxDB"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8083,"targetPort":8083},{"name":"api","protocol":"TCP","port":8086,"targetPort":8086}],"selector":{"k8s-app":"influxGrafana"},"clusterIP":"10.100.101.182","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"reverseproxy","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/services/reverseproxy","uid":"15e65b7d-6776-11e5-a5d0-fa163e94cde4","resourceVersion":"10994","creationTimestamp":"2015-09-30T13:20:57Z","labels":{"k8s-app":"reverseproxy","kubernetes.io/cluster-service":"true","kubernetes.io/name":"reverseproxy"}},"spec":{"ports":[{"name":"http","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":80},{"name":"https","protocol":"TCP","port":8181,"targetPort":8181,"nodePort":443}],"selector":{"k8s-app":"reverseproxy"},"clusterIP":"10.100.168.84","type":"NodePort","sessionAffinity":"None"},"status":{"loadBalancer":{}}}]} Oct 01 07:40:26 [4196]: I1001 07:40:26.875150 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" 
https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&resourceVersion=172921 Oct 01 07:40:26 [4196]: I1001 07:40:26.900981 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/nodes?fieldSelector=metadata.name%3D192.168.100.80&resourceVersion=172921 200 OK in 25 milliseconds Oct 01 07:40:26 [4196]: I1001 07:40:26.901009 4196 debugging.go:126] Response Headers: Oct 01 07:40:26 [4196]: I1001 07:40:26.901018 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:26 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.001744 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF Oct 01 07:40:27 [4196]: I1001 07:40:27.002685 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received Oct 01 07:40:27 [4196]: W1001 07:40:27.002716 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Node ended with: very short watch Oct 01 07:40:27 [4196]: I1001 07:40:27.075065 4196 debugging.go:101] curl -k -v -XGET -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" https://localhost:6443/api/v1/watch/services?resourceVersion=172927 Oct 01 07:40:27 [4196]: I1001 07:40:27.101642 4196 debugging.go:120] GET https://localhost:6443/api/v1/watch/services?resourceVersion=172927 200 OK in 26 milliseconds Oct 01 07:40:27 [4196]: I1001 07:40:27.101689 4196 debugging.go:126] Response Headers: Oct 01 07:40:27 [4196]: I1001 07:40:27.101705 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:27 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.104168 4196 openstack.go:164] openstack.Instances() called Oct 01 07:40:27 [4196]: I1001 07:40:27.133478 4196 openstack.go:201] Found 8 compute flavors Oct 01 07:40:27 [4196]: I1001 07:40:27.133519 4196 openstack.go:202] Claiming to support Instances Oct 01 07:40:27 [4196]: E1001 07:40:27.158908 4196 kubelet.go:846] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object Oct 01 07:40:27 [4196]: I1001 07:40:27.202978 4196 iowatcher.go:102] Unexpected EOF during watch stream event decoding: unexpected EOF Oct 01 07:40:27 [4196]: I1001 07:40:27.203110 4196 reflector.go:294] pkg/client/unversioned/cache/reflector.go:87: Unexpected watch close - watch lasted less than a second and no items received Oct 01 07:40:27 [4196]: W1001 07:40:27.203136 4196 reflector.go:224] pkg/client/unversioned/cache/reflector.go:87: watch of *api.Service ended with: very short watch Oct 01 07:40:27 [4196]: I1001 07:40:27.275208 4196 debugging.go:101] curl -k -v -XGET -H "Authorization: Bearer rhARkbozkWcrJyvdLQqF9TNO86KHjOsq" -H "User-Agent: kubelet/v1.1.0 (linux/amd64) kubernetes/196f58b" https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 Oct 01 07:40:27 [4196]: I1001 07:40:27.308434 4196 debugging.go:120] GET https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.100.80 200 OK in 33 milliseconds Oct 01 07:40:27 [4196]: I1001 07:40:27.308464 4196 debugging.go:126] Response Headers: Oct 01 07:40:27 [4196]: I1001 07:40:27.308475 4196 debugging.go:129] Content-Type: application/json Oct 01 07:40:27 [4196]: I1001 07:40:27.308484 4196 debugging.go:129] Date: Thu, 01 Oct 2015 07:40:27 GMT Oct 01 07:40:27 [4196]: I1001 07:40:27.308491 4196 debugging.go:129] Content-Length: 113 Oct 01 07:40:27 [4196]: I1001 07:40:27.308524 4196 request.go:755] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/pods","resourceVersion":"172941"},"items":[]} Oct 01 07:40:27 [4196]: I1001 07:40:27.308719 4196 config.go:252] Setting pods for source api Oct 01 07:40:27 [4196]: I1001 07:40:27.308753 4196 kubelet.go:1921] SyncLoop (REMOVE): "fluentd-elasticsearch-192.168.100.80_kube-system" Oct 01 07:40:27 [4196]: I1001 07:40:27.308931 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlog Oct 01 07:40:27 [4196]: I1001 07:40:27.308960 4196 volumes.go:100] Used volume plugin "kubernetes.io/host-path" for varlibdockercontainers Oct 01 07:40:27 [4196]: I1001 07:40:27.308977 4196 kubelet.go:2531] Generating status for "fluentd-elasticsearch-192.168.100.80_kube-system" $ kubectl version Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", GitCommit:"196f58b9cb25a2222c7f9aacd624737910b03acb", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.1.390+196f58b9cb25a2", GitCommit: "196f58b9cb25a2222c7f9aacd624737910b03acb", GitTreeState:"clean"} $ cat /etc/os-release NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7" $ cat /etc/kubernetes/kubelet ### # kubernetes kubelet (node) config # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on # KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname_override=192.168.100.80" # location of the api-server KUBELET_API_SERVER="--api_servers=https://localhost:6443" # Add your own! KUBELET_ARGS="--cluster_dns=10.100.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --v=9 --cloud-config=/etc/kubernetes/cloud_config --cloud-provider=openstack --machine-id-file=/etc/machine-id" $ cat /etc/kubernetes/cloud_config [Global] auth-url=https://api.*******.de:5000/v2.0 username=username password=password region=RegionOne tenant-id=4ee7b21351d94f2b96d363efe131b833
Is there anyway to combine all of these heterogeneous pod ports into a single "Service" with a corresponding "LoadBalancer". Services are intended to represent a homogeneous set of pods. Requests to the service are load balanced across the pods that back the service. If you want to reduce the number of load balancers you should add a proxying layer above the subsystem services that can redirect incoming requests to the right subsystem. HAProxy or nginx should be pretty easy to configure to do this.
###Instruction: Currently services support multiple ports from a single selector, that maps onto pods of single type. This service/ports in turn can be exposed externally through 'type:LoadBalancer". For a given subsystem like Spark - there are 3 pod types - master, worker, driver with each exposing a set of management ui ports that need to be accessed externally. Current service definition requires to create 3-LoadBalancers, one for each type - master, worker, driver). These access ports are for low use management ui access. Is there anyway to combine all of these heterogeneous pod ports into a single "Service" with a corresponding "LoadBalancer". This is to avoid proliferation of LoadBalancers for external access of services ### Response: Is there anyway to combine all of these heterogeneous pod ports into a single "Service" with a corresponding "LoadBalancer". Services are intended to represent a homogeneous set of pods. Requests to the service are load balanced across the pods that back the service. If you want to reduce the number of load balancers you should add a proxying layer above the subsystem services that can redirect incoming requests to the right subsystem. HAProxy or nginx should be pretty easy to configure to do this.
Currently services support multiple ports from a single selector that maps onto pods of a single type. These service ports can in turn be exposed externally through 'type: LoadBalancer'. For a given subsystem like Spark there are 3 pod types - master, worker, driver - each exposing a set of management UI ports that need to be accessed externally. The current service definition requires creating 3 LoadBalancers, one for each type (master, worker, driver). These ports are only for low-volume management UI access. Is there any way to combine all of these heterogeneous pod ports into a single "Service" with a corresponding "LoadBalancer"? This is to avoid a proliferation of LoadBalancers for external access to services.
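To make the proxying idea in the response above concrete, here is a rough nginx server-block fragment that fronts the three Spark services behind one external endpoint. The service names and UI ports are assumptions (spark-master/spark-worker/spark-driver services in the same namespace, Spark's usual 8080/8081/4040 UI ports); adjust them to whatever your services are actually called.
# nginx.conf fragment (sketch) running in a single externally exposed pod
server {
    listen 8181;

    # short service names resolve via cluster DNS from inside the namespace
    location /master/ {
        proxy_pass http://spark-master:8080/;
    }
    location /worker/ {
        proxy_pass http://spark-worker:8081/;
    }
    location /driver/ {
        proxy_pass http://spark-driver:4040/;
    }
}
Only the nginx pod then needs a LoadBalancer (or NodePort) service; the Spark services themselves can stay as plain ClusterIP services.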
Each run of kube-up.sh is intended to generate a new cluster. As such, it will create new credentials for the cluster and any existing nodes are not expected to continue to work with the new master components.
###Instruction: I am using WmWare Workstation running a Linux vm which runs the vagrant and kubernetes environment. I have a problem using kubernetes with vagrant. Every time if I shutdown the kubernetes cluster with the kube-down.sh tool and restart with kube-up.sh, I cannot connect to the minions anymore! I think it has something to do with the IP binding. Does somebody know what to do? The other problem is if I try to install the guestbook example I cannot download the redis image. The pods are always stays in PENDING state. Is there a way to download the image manually and add it as a file? Thank you in advance. Regards :) ### Response: Each run of kube-up.sh is intended to generate a new cluster. As such, it will create new credentials for the cluster and any existing nodes are not expected to continue to work with the new master components.
I am using VMware Workstation running a Linux VM which runs the Vagrant and Kubernetes environment. I have a problem using Kubernetes with Vagrant. Every time I shut down the Kubernetes cluster with the kube-down.sh tool and restart it with kube-up.sh, I cannot connect to the minions anymore! I think it has something to do with the IP binding. Does somebody know what to do? The other problem is that if I try to install the guestbook example I cannot download the redis image. The pods always stay in PENDING state. Is there a way to download the image manually and add it as a file? Thank you in advance. Regards :)
This is a known issue. See https://github.com/kubernetes/kubernetes/issues/14642 We're working on a fix: See https://github.com/kubernetes/kubernetes/issues/15524 Sorry for the inconvenience.
###Instruction: Using GKE. I declared a disk I use as persistent volume. When the pod which uses the volume crashed and was re-started on a different node, the disk was still attached / mounted to the node it was mounted on before. How comes the volume is not unmounted by the RC? Then the pod fails and there is no recovery. i didn't find a way to unmount the disk, and the only way I could start the pod again was by restarting until it was started on the node that the disk was mounted on. That definitely doesn't sound right. Am I missing something here? ### Response: This is a known issue. See https://github.com/kubernetes/kubernetes/issues/14642 We're working on a fix: See https://github.com/kubernetes/kubernetes/issues/15524 Sorry for the inconvenience.
Using GKE. I declared a disk I use as a persistent volume. When the pod which uses the volume crashed and was restarted on a different node, the disk was still attached/mounted to the node it was mounted on before. How come the volume is not unmounted by the RC? The pod then fails and there is no recovery. I didn't find a way to unmount the disk, and the only way I could start the pod again was by restarting it until it landed on the node that the disk was mounted on. That definitely doesn't sound right. Am I missing something here?
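Until the fix referenced in the response above lands, a manual workaround (a sketch, not an official procedure) is to force-detach the persistent disk from the old node with gcloud so the rescheduled pod can mount it elsewhere. The instance, disk and zone names below are placeholders.
# Find which instance still holds the disk, then detach it
gcloud compute disks describe my-pd --zone us-central1-a
gcloud compute instances detach-disk old-node-name --disk my-pd --zone us-central1-a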
Re-iterating Tim's comment above: As far as I know nobody has implemented a fully PCI-compliant kubernetes install yet (they might have done and not told us). I don't know of anything specific to Docker or Kubernetes that would prevent you from getting your deployment certified.
###Instruction: We are creating a new version of payment gateway processor and We want to use docker container with kubernetes but we are worried if Kubernetes and docker container follow the PCI DSS requirements. We don't find anything clear in PCI DSS specifications. ### Response: Re-iterating Tim's comment above: As far as I know nobody has implemented a fully PCI-compliant kubernetes install yet (they might have done and not told us). I don't know of anything specific to Docker or Kubernetes that would prevent you from getting your deployment certified.
We are creating a new version of a payment gateway processor and we want to use Docker containers with Kubernetes, but we are worried about whether Kubernetes and Docker containers can meet the PCI DSS requirements. We haven't found anything clear in the PCI DSS specifications.
[update-2016-8] In latest Kubernetes release, you can use a very nice feature named init-container to replace the postStart part in my answer below, which will make sure the container order. apiVersion: v1 kind: Pod metadata: name: javaweb-2 spec: initContainers: - name: war image: resouer/sample:v2 command: ["cp", "/sample.war", "/app"] volumeMounts: - mountPath: /app name: app-volume containers: - name: tomcat image: resouer/mytomcat:7.0 command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"] volumeMounts: - mountPath: /root/apache-tomcat-7.0.42-v2/webapps name: app-volume ports: - containerPort: 8080 hostPort: 8001 volumes: - name: app-volume emptyDir: {} NOTE: initContainer is still a beta feature so the work version of this yaml is actually like: http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization, please notice the pod.beta.kubernetes.io/init-containers part. ---original answer begin--- Actually, you can. You need to use container life cycle handler to control what files/dirs you want to share with other containers. Like: --- apiVersion: v1 kind: Pod metadata: name: server spec: restartPolicy: OnFailure containers: - image: resouer/sample:v2 name: war lifecycle: postStart: exec: command: - "cp" - "/sample.war" - "/app" volumeMounts: - mountPath: /app name: hostv1 - name: peer image: busybox command: ["tail", "-f", "/dev/null"] volumeMounts: - name: hostv2 mountPath: /app/sample.war volumes: - name: hostv1 hostPath: path: /tmp - name: hostv2 hostPath: path: /tmp/sample.war Please check my gist for more details: https://gist.github.com/resouer/378bcdaef1d9601ed6aa And of course you can use emptyDir. Thus, war container can share its /sample.war to peer container without mess peer's /app directory. If we can tolerate /app been overridden, it will be much simpler: --- apiVersion: v1 kind: Pod metadata: name: javaweb-2 spec: restartPolicy: OnFailure containers: - image: resouer/sample:v2 name: war lifecycle: postStart: exec: command: - "cp" - "/sample.war" - "/app" volumeMounts: - mountPath: /app name: app-volume - image: resouer/mytomcat:7.0 name: tomcat command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"] volumeMounts: - mountPath: /root/apache-tomcat-7.0.42-v2/webapps name: app-volume ports: - containerPort: 8080 hostPort: 8001 volumes: - name: app-volume emptyDir: {}
###Instruction: I'm looking for a pattern that allows to share volumes between two containers running on the same pod in Kubernetes. My use case is: I have a Ruby on Rails application running inside a docker container. The docker image contains static assets in /app/<app-name>/public directory, and I need to access those assets from the nginx container running alongside in the same pod. In 'vanilla' docker I would have used --volumes-from flag to share this directory: docker run --name app -v /app/<app-dir>/public <app-image> docker run --volumes-from app nginx After reading this doc: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md I tried this (only relevant entries presented): spec: containers: - image: <app-image> name: <app-name> volumeMounts: - mountPath: /app/<app-name>/public name: assets - image: nginx name: nginx volumeMounts: - mountPath: /var/www/html name: assets readOnly: true volumes: - name: assets hostPath: path: /tmp/assets But: Even though /tmp/assets on the node exists, it's empty /app/<app-name>/public inside the app container is also empty As a workaround I'm gonna try to populate the shared directory when the application container is up (simply cp /app/<app-name>/public/* to shared directory), but I really dislike this idea. Question: how to mimic --volumes-from in Kubernetes, or if there is no direct counterpart, how can I share files from one container to other running in the same pod ? apiVersion: v1beta3 Client Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"} Server Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"} ### Response: [update-2016-8] In latest Kubernetes release, you can use a very nice feature named init-container to replace the postStart part in my answer below, which will make sure the container order. apiVersion: v1 kind: Pod metadata: name: javaweb-2 spec: initContainers: - name: war image: resouer/sample:v2 command: ["cp", "/sample.war", "/app"] volumeMounts: - mountPath: /app name: app-volume containers: - name: tomcat image: resouer/mytomcat:7.0 command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"] volumeMounts: - mountPath: /root/apache-tomcat-7.0.42-v2/webapps name: app-volume ports: - containerPort: 8080 hostPort: 8001 volumes: - name: app-volume emptyDir: {} NOTE: initContainer is still a beta feature so the work version of this yaml is actually like: http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization, please notice the pod.beta.kubernetes.io/init-containers part. ---original answer begin--- Actually, you can. You need to use container life cycle handler to control what files/dirs you want to share with other containers. Like: --- apiVersion: v1 kind: Pod metadata: name: server spec: restartPolicy: OnFailure containers: - image: resouer/sample:v2 name: war lifecycle: postStart: exec: command: - "cp" - "/sample.war" - "/app" volumeMounts: - mountPath: /app name: hostv1 - name: peer image: busybox command: ["tail", "-f", "/dev/null"] volumeMounts: - name: hostv2 mountPath: /app/sample.war volumes: - name: hostv1 hostPath: path: /tmp - name: hostv2 hostPath: path: /tmp/sample.war Please check my gist for more details: https://gist.github.com/resouer/378bcdaef1d9601ed6aa And of course you can use emptyDir. 
Thus, war container can share its /sample.war to peer container without mess peer's /app directory. If we can tolerate /app been overridden, it will be much simpler: --- apiVersion: v1 kind: Pod metadata: name: javaweb-2 spec: restartPolicy: OnFailure containers: - image: resouer/sample:v2 name: war lifecycle: postStart: exec: command: - "cp" - "/sample.war" - "/app" volumeMounts: - mountPath: /app name: app-volume - image: resouer/mytomcat:7.0 name: tomcat command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"] volumeMounts: - mountPath: /root/apache-tomcat-7.0.42-v2/webapps name: app-volume ports: - containerPort: 8080 hostPort: 8001 volumes: - name: app-volume emptyDir: {}
I'm looking for a pattern that allows to share volumes between two containers running on the same pod in Kubernetes. My use case is: I have a Ruby on Rails application running inside a docker container. The docker image contains static assets in /app/<app-name>/public directory, and I need to access those assets from the nginx container running alongside in the same pod. In 'vanilla' docker I would have used --volumes-from flag to share this directory: docker run --name app -v /app/<app-dir>/public <app-image> docker run --volumes-from app nginx After reading this doc: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md I tried this (only relevant entries presented): spec: containers: - image: <app-image> name: <app-name> volumeMounts: - mountPath: /app/<app-name>/public name: assets - image: nginx name: nginx volumeMounts: - mountPath: /var/www/html name: assets readOnly: true volumes: - name: assets hostPath: path: /tmp/assets But: Even though /tmp/assets on the node exists, it's empty /app/<app-name>/public inside the app container is also empty As a workaround I'm gonna try to populate the shared directory when the application container is up (simply cp /app/<app-name>/public/* to shared directory), but I really dislike this idea. Question: how to mimic --volumes-from in Kubernetes, or if there is no direct counterpart, how can I share files from one container to other running in the same pod ? apiVersion: v1beta3 Client Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"} Server Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"}
kubectl describe ${POD_NAME} or kubectl logs ${POD_NAME} ${CONTAINER_NAME} should give you more information to debug. Please also see https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/application-troubleshooting.md#debugging-pods for general troubleshooting instructions. EDIT: After discussing in the comments, I think the problem with your node is that the node was unresponsive for >5 minutes (potentially due to high memory usage of influxdb). Node controller then deemed the node not ready and evicted all pods on the node. Note that pods managed by replication controllers would be re-created (with a different name), but pods created manually would not be recreated. If you suspect influxdb memory usage is the root cause, you can try not running this pod to see if the problem resolves itself. Alternatively, you can change the memory limit of influxdb container to a smaller value. EDIT2: Some tips for finding out what happened to the node: Check /var/log/kubelet.log. This is the easiest approach. kubectl describe nodes or kubectl get events | grep <node_name> (for older version of kubernetes) This command would give you the events associated with the node status. However, the events are flushed every two hours, so you would need to run this command within the window of time after your node encounters the problem. kubectl get node <node_name> -o yaml --watch lets you monitor the node object, including its status in yaml. This would be updated periodically.
###Instruction: We are trying a test setup with Kubernetes version 1.0.6 on AWS. This setup involves pods for Cassandra (2-nodes), Spark (master, 2-workers, driver), RabbitMQ(1-node). Some the pods this setup die after a day or so Is there way to get logs from Kubernetes on how/why they died? When you try to restart died pods manually, you get some pods status as ''category/spark-worker is ready, container is creating' and the pod start never completes. Only option in the scenario is to "kube-down.sh and then kube-up.sh" and go through entire setup from scratch. ### Response: kubectl describe ${POD_NAME} or kubectl logs ${POD_NAME} ${CONTAINER_NAME} should give you more information to debug. Please also see https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/application-troubleshooting.md#debugging-pods for general troubleshooting instructions. EDIT: After discussing in the comments, I think the problem with your node is that the node was unresponsive for >5 minutes (potentially due to high memory usage of influxdb). Node controller then deemed the node not ready and evicted all pods on the node. Note that pods managed by replication controllers would be re-created (with a different name), but pods created manually would not be recreated. If you suspect influxdb memory usage is the root cause, you can try not running this pod to see if the problem resolves itself. Alternatively, you can change the memory limit of influxdb container to a smaller value. EDIT2: Some tips for finding out what happened to the node: Check /var/log/kubelet.log. This is the easiest approach. kubectl describe nodes or kubectl get events | grep <node_name> (for older version of kubernetes) This command would give you the events associated with the node status. However, the events are flushed every two hours, so you would need to run this command within the window of time after your node encounters the problem. kubectl get node <node_name> -o yaml --watch lets you monitor the node object, including its status in yaml. This would be updated periodically.
We are trying a test setup with Kubernetes version 1.0.6 on AWS. This setup involves pods for Cassandra (2 nodes), Spark (master, 2 workers, driver), and RabbitMQ (1 node). Some of the pods in this setup die after a day or so. Is there a way to get logs from Kubernetes on how/why they died? When you try to restart the dead pods manually, you get a pod status like 'category/spark-worker is ready, container is creating' and the pod start never completes. The only option in this scenario is to run "kube-down.sh and then kube-up.sh" and go through the entire setup from scratch.
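If the influxdb memory usage does turn out to be the trigger, lowering its limit is just a tweak to the container's resources block. The snippet below is an illustrative fragment of the monitoring controller spec, not your exact manifest, and the values are placeholders to adjust.
# fragment of the influxdb container definition -- only "resources" is the point here
containers:
- name: influxdb
  # image/ports/volumes unchanged from your existing addon manifest
  resources:
    limits:
      cpu: 100m
      memory: 200Mi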
As far as I've seen, that's the way it should work. The claim is for the entire volume. The part that confused me at first as well was that the resources.requests.storage value is only a minimum that the claim requires. I use this with Ceph, and when pods bind to the block device, they take the whole volume.
###Instruction: I have a 3 node coros kubernetes cluster up and running. I want to use persitentvolumes(pv) from a standalone NFS Server. nfs.yaml apiVersion: v1 kind: PersistentVolume metadata: name: kube1 spec: capacity: storage: 9.5G accessModes: - ReadWriteMany nfs: path: /mnt/nfs/kube1 server: 10.3.0.3 claim.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc2-1 spec: accessModes: - ReadWriteMany resources: requests: storage: 1G kubecfg get pv kube1 <none> 9500M RWX Released default/pvc2-1 kubecfg get pvc pvc2-1 <none> Bound kube1 9500M RWX So why is the pvc created with the full capacity of pv? As I assumed that pvc is just a part of pv, otherwise it's pretty useless. Regards cdpb ### Response: As far as I've seen, that's the way it should work. The claim is for the entire volume. The part that confused me at first as well, was the resources.requests.storage value is only a minimum value that claim requires. I use this with Ceph, and when Pods bind to the block device, they take the whole volume.
I have a 3 node coros kubernetes cluster up and running. I want to use persitentvolumes(pv) from a standalone NFS Server. nfs.yaml apiVersion: v1 kind: PersistentVolume metadata: name: kube1 spec: capacity: storage: 9.5G accessModes: - ReadWriteMany nfs: path: /mnt/nfs/kube1 server: 10.3.0.3 claim.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc2-1 spec: accessModes: - ReadWriteMany resources: requests: storage: 1G kubecfg get pv kube1 <none> 9500M RWX Released default/pvc2-1 kubecfg get pvc pvc2-1 <none> Bound kube1 9500M RWX So why is the pvc created with the full capacity of pv? As I assumed that pvc is just a part of pv, otherwise it's pretty useless. Regards cdpb
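If the goal is to hand out slices of the NFS export rather than the whole 9.5G volume, the usual pattern (sketched below) is to pre-create several smaller PersistentVolumes, one per subdirectory of the export, so each claim binds to a volume of roughly the size it asked for. The subdirectory layout is an assumption.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube1-slice1
spec:
  capacity:
    storage: 1G
  accessModes:
  - ReadWriteMany
  nfs:
    path: /mnt/nfs/kube1/slice1
    server: 10.3.0.3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube1-slice2
spec:
  capacity:
    storage: 1G
  accessModes:
  - ReadWriteMany
  nfs:
    path: /mnt/nfs/kube1/slice2
    server: 10.3.0.3
A 1G claim then binds to one of the 1G volumes instead of grabbing the whole 9.5G one.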
Kubernetes does support several volume types, including rbd, as you mention in your example. When you create a pod, you can specify the volumes you want and their types in the yaml file. Documentation on volumes is here: http://kubernetes.io/v1.0/docs/user-guide/volumes.html#rbd Kubernetes uses its own volume system that is different from Docker's: Kubernetes supports some types of volumes that Docker doesn't and vice versa.
###Instruction: Is it possible to pass the "--volume-driver" within a Kubernetes yml file? E.g. using Docker I can perform the following: docker run --volume-driver rbd -v image:/mountpoint ubuntu Thanks ### Response: Kubernetes does support several volume types, including rbd, as you mention in your example. When you create a pod, you can specify the volumes you want and their types in the yaml file. Documentation on volumes is here: http://kubernetes.io/v1.0/docs/user-guide/volumes.html#rbd Kubernetes uses its own volume system that is different from Docker's: Kubernetes supports some types of volumes that Docker doesn't and vice versa.
Is it possible to pass the "--volume-driver" within a Kubernetes yml file? E.g. using Docker I can perform the following: docker run --volume-driver rbd -v image:/mountpoint ubuntu Thanks
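For illustration, a minimal pod spec using the rbd volume type might look like the sketch below; the monitor address, pool, image name and keyring path are placeholders for your particular Ceph setup.
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    volumeMounts:
    - name: rbdpd
      mountPath: /mountpoint
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      - 192.168.0.10:6789
      pool: rbd
      image: myimage
      user: admin
      keyring: /etc/ceph/keyring
      fsType: ext4
      readOnly: false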
The Kubernetes API supports multiple forms of authentication: http basic auth, bearer token, client certificates. When launching the apiserver, you can enable / disable each of these authentication methods with command line flags. You should also be running the apiserver where the insecure port is only accessible to localhost, so that all connections coming across the network use https. By having your api clients verify the TLS certificate presented by the apiserver, they can verify that the connection is both encrypted and not susceptible to man-in-the-middle attacks. By default, anyone who has access credentials to the apiserver has full access to the cluster. You can also configure more fine grained authorization policies which will become more flexible and configurable in future Kubernetes releases.
###Instruction: I'm a bit disturbed on how to secure the kubernetes API for call and access, also Kube-ui is available to everybody. How can I set credential to secure all the services ? Thank you ### Response: The Kubernetes API supports multiple forms of authentication: http basic auth, bearer token, client certificates. When launching the apiserver, you can enable / disable each of these authentication methods with command line flags. You should also be running the apiserver where the insecure port is only accessible to localhost, so that all connections coming across the network use https. By having your api clients verify the TLS certificate presented by the apiserver, they can verify that the connection is both encrypted and not susceptible to man-in-the-middle attacks. By default, anyone who has access credentials to the apiserver has full access to the cluster. You can also configure more fine grained authorization policies which will become more flexible and configurable in future Kubernetes releases.
I'm a bit confused about how to secure the Kubernetes API for calls and access; also, kube-ui is available to everybody. How can I set credentials to secure all the services? Thank you
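For reference, here is a hedged sketch of the kube-apiserver flags the response above is talking about. The file paths are placeholders, and the exact set depends on how your cluster was brought up.
# Serve the unauthenticated port only on localhost; everything over the network uses TLS.
# Enable whichever authenticators you need (client certs, bearer tokens, basic auth)
# and an authorization policy.
kube-apiserver \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --secure-port=6443 \
  --tls-cert-file=/srv/kubernetes/server.cert \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --token-auth-file=/srv/kubernetes/known_tokens.csv \
  --basic-auth-file=/srv/kubernetes/basic_auth.csv \
  --authorization-mode=ABAC \
  --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
# (plus your usual --etcd-servers, --service-cluster-ip-range, ... flags)
Clients should then talk only to the https port and verify the server certificate, so connections are encrypted and authenticated end to end.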
Did you open up port 80 for meteor? gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
###Instruction: Stuck on how to make it run. I followed everything on the http://kubernetes.io/v1.0/examples/meteor/README.html I was able to build the image and push it to the gcloud. Now the problem is how to run it. I accessed the ip that it gave me when executing the command kubectl get service meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}" but nothing's showing up. Web page not available.. ### Response: Did you open up port 80 for meteor? gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
Stuck on how to make it run. I followed everything on the http://kubernetes.io/v1.0/examples/meteor/README.html I was able to build the image and push it to the gcloud. Now the problem is how to run it. I accessed the ip that it gave me when executing the command kubectl get service meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}" but nothing's showing up. Web page not available..
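If the page still doesn't load after opening the firewall, two quick checks (a sketch; replace the IP with the one the service printed):
# confirm the rule exists and is applied to the right target tag
gcloud compute firewall-rules list
# then hit the load balancer IP directly and look at the HTTP status
curl -i http://<EXTERNAL_IP>/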
The Jenkins Kubernetes plugin is (so far) only for running slaves dynamically in a Kubernetes cluster. There's not a lot about deploying from Jenkins to Kubernetes; maybe the post "Continuous Delivery Pipelines with Fabric8 and Jenkins on OpenShift" helps.
###Instruction: I tried looking for a Jenkins plugin (like AWS codeDeploy) so that I could deploy my application to a Kubernetes cluster. So far, I have been successful at pushing it to a Docker registry and adding some command line build steps to deploy to Kubernetes. Looking at the CloudBees announcement this seems possible Installing the Kubernetes plugin gave me errors...I can attach a screenshot if that helps ... Also it seems like this plugin allows you to run slaves in Docker containers not deploy your own app. After looking at this video , it seemed I could accomplish this using the withKubernetes workflow stage... However adding that line to my workflow script gives me the following error java.lang.NoSuchMethodError: No such DSL method withKubernetes found among [archive, bat, build, catchError, checkout, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, error, fileExists, git, input, load, mail, node, parallel, publishHTML, pwd, readFile, retry, sh, sleep, stage, stash, step, svn, timeout, tool, unarchive, unstash, waitUntil, withDockerContainer, withDockerRegistry, withDockerServer, withEnv, wrap, writeFile, ws] at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:107) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:112) at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:75) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15) at WorkflowScript.run(WorkflowScript:17) at Unknown.Unknown(Unknown) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:69) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79) at sun.reflect.GeneratedMethodAccessor290.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:40) at com.cloudbees.groovy.cps.Next.step(Next.java:58) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:145) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:106) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30) at 
org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:271) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:71) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:180) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:178) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) ### Response: The Jenkins Kubernetes plugin is (so far) only to run slaves dynamically in a Kubernetes cluster There's not a lot about deploying from Jenkins to Kubernetes, maybe this post Continuous Delivery Pipelines with Fabric8 and Jenkins on OpenShift helps
I tried looking for a Jenkins plugin (like AWS codeDeploy) so that I could deploy my application to a Kubernetes cluster. So far, I have been successful at pushing it to a Docker registry and adding some command line build steps to deploy to Kubernetes. Looking at the CloudBees announcement this seems possible Installing the Kubernetes plugin gave me errors...I can attach a screenshot if that helps ... Also it seems like this plugin allows you to run slaves in Docker containers not deploy your own app. After looking at this video , it seemed I could accomplish this using the withKubernetes workflow stage... However adding that line to my workflow script gives me the following error java.lang.NoSuchMethodError: No such DSL method withKubernetes found among [archive, bat, build, catchError, checkout, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, error, fileExists, git, input, load, mail, node, parallel, publishHTML, pwd, readFile, retry, sh, sleep, stage, stash, step, svn, timeout, tool, unarchive, unstash, waitUntil, withDockerContainer, withDockerRegistry, withDockerServer, withEnv, wrap, writeFile, ws] at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:107) at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:112) at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151) at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:75) at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146) at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123) at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15) at WorkflowScript.run(WorkflowScript:17) at Unknown.Unknown(Unknown) at ___cps.transform___(Native Method) at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:69) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106) at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79) at sun.reflect.GeneratedMethodAccessor290.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72) at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:40) at com.cloudbees.groovy.cps.Next.step(Next.java:58) at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:145) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30) at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:106) at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30) at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164) 
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:271) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:71) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:180) at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:178) at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112) at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745)
I guess your pods are exposing port 8080? then you are missing targetPort "ports": [ { "port":80, "targetPort": "8080", "name":"check-server" }
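For illustration, a complete manifest along those lines might look like the sketch below; it assumes the pods carry the label app: check and actually listen on 8080, and note that targetPort is written as a number (or as the name of a named container port) rather than a quoted string:

apiVersion: v1
kind: Service
metadata:
  name: check
  labels:
    app: check
spec:
  type: LoadBalancer
  selector:
    app: check            # must match the labels on the backing pods
  ports:
  - name: check-server
    port: 80              # port exposed on the service / load balancer
    targetPort: 8080      # port the container process listens on

With this in place, traffic hitting the load balancer on 80 is forwarded to 8080 inside the pods.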
###Instruction: I was able to expose port 80 before, just last month, using kubernetes and google containers. But now simple service example like this doesn't work anymore: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"check", "labels":{ "app":"check" } }, "spec":{ "type": "LoadBalancer", "ports": [ { "port":80, "name":"check-server" } ], "selector":{ "app":"check" } } } and this works: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"check", "labels":{ "app":"check" } }, "spec":{ "type": "LoadBalancer", "ports": [ { "port":8080, "name":"check-server" } ], "selector":{ "app":"check" } } } does anyone know what changed in google cloud? ### Response: I guess your pods are exposing port 8080? then you are missing targetPort "ports": [ { "port":80, "targetPort": "8080", "name":"check-server" }
I was able to expose port 80 before, just last month, using kubernetes and google containers. But now simple service example like this doesn't work anymore: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"check", "labels":{ "app":"check" } }, "spec":{ "type": "LoadBalancer", "ports": [ { "port":80, "name":"check-server" } ], "selector":{ "app":"check" } } } and this works: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"check", "labels":{ "app":"check" } }, "spec":{ "type": "LoadBalancer", "ports": [ { "port":8080, "name":"check-server" } ], "selector":{ "app":"check" } } } does anyone know what changed in google cloud?
This is precisely what the readinessProbe option is for :) It's documented more here and here, and is part of the container definition in a pod specification. For example, you might use a pod specification like the one below to ensure that your nginx pod won't be marked as ready (and thus won't have traffic sent to it) until it responds to an HTTP request for /index.html: apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 readinessProbe: httpGet: path: /index.html port: 80 initialDelaySeconds: 10 timeoutSeconds: 5
###Instruction: We are running a workload against a cluster hosting 2 instances of a small (3 container) pod. Accessing the pod using a service w/nodeport. If we stop a pod and rc starts a new one, our constant (low volume) workload has numerous failures (Rational Perf Tester, http test hitting the service on the master ... but likely same if it were hitting either minion ... master also has a minion). Anyway, if we just add a pod with kubectl scale, we also get errors. If we then take down this a pod (rc doesn't start a new one because we had one more than needed due to scale) ... no errors. Seems that service starts sending work to new pod because kubelet has done his thing, even though containers are not up. Thus, any time a pod is started ... it starts receiving work a little too soon (after kubelet did his work, but before all containers are ready). Is there a way to guarantee that the service will not route to this pod until all containers are up? Barring that is there some way to say wait 'n' seconds before sending to this pod? I may be wrong, but behavior seems to suggest this scenario. ### Response: This is precisely what the readinessProbe option is for :) It's documented more here and here, and is part of the container definition in a pod specification. For example, you might use a pod specification like the one below to ensure that your nginx pod won't be marked as ready (and thus won't have traffic sent to it) until it responds to an HTTP request for /index.html: apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 readinessProbe: httpGet: path: /index.html port: 80 initialDelaySeconds: 10 timeoutSeconds: 5
We are running a workload against a cluster hosting 2 instances of a small (3 container) pod. Accessing the pod using a service w/nodeport. If we stop a pod and rc starts a new one, our constant (low volume) workload has numerous failures (Rational Perf Tester, http test hitting the service on the master ... but likely same if it were hitting either minion ... master also has a minion). Anyway, if we just add a pod with kubectl scale, we also get errors. If we then take down this a pod (rc doesn't start a new one because we had one more than needed due to scale) ... no errors. Seems that service starts sending work to new pod because kubelet has done his thing, even though containers are not up. Thus, any time a pod is started ... it starts receiving work a little too soon (after kubelet did his work, but before all containers are ready). Is there a way to guarantee that the service will not route to this pod until all containers are up? Barring that is there some way to say wait 'n' seconds before sending to this pod? I may be wrong, but behavior seems to suggest this scenario.
What config file are you using to launch the DNS add-on? You should be able to just edit it (or make a copy of this one) and point it at a different registry.
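As a rough sketch of that edit (assuming the stock skydns-rc.yaml from the add-on and a hypothetical internal mirror at registry.example.com that your proxy can reach), only the image lines change:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
    spec:
      containers:
      # etcd, skydns and healthz containers omitted here; their image lines
      # are rewritten the same way, everything else stays as in the stock file
      - name: kube2sky
        # was: gcr.io/google_containers/kube2sky:1.11
        image: registry.example.com/google_containers/kube2sky:1.11

You would first pull, tag and push those images to the mirror from a machine that can reach gcr.io.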
###Instruction: I'm trying to enable the skyDNS addon for my kubernetes cluster. I'm behind a corporate proxy, and it seems to be unable to talk to gcr.io. The following errors show up in the logs: Oct 20 13:55:46 atomic01.localdomain kubelet[112453]: W1020 13:55:46.143403 112453 manager.go:1569] Failed to pull image "gcr.io/google_containers/kube2sky:1.11" from pod "kube-dns-v9-w492r_kube-system" and container "kube2sky": image pull failed for gcr.io/google_containers/kube2sky:1.11, this may be because there are no credentials on this request. details: (invalid character '<' looking for beginning of value) Anything I try to pull from gcr.io fails, even manually: # docker pull gcr.io/google_containers/etcd:2.0.9 Trying to pull repository gcr.io/google_containers/etcd ... failed invalid character '<' looking for beginning of value I've got HTTP_PROXY and HTTPS_PROXY variables configured in /etc/sysconfig/docker. As well I have INSECURE_REGISTRY='--insecure-registry gcr.io' enabled in the same config file. I had to apply a workaround of manually pulling the pause container from docker.io and specifying it with --pod_infra_container_image=docker.io/kubernetes/pause:latest Is there such a workaround for the DNS containers? I can pull images from other docker registries just fine. What is different with the gcr.io registry and how can I use it with kubernetes? ### Response: What config file are you using to launch the DNS add-on? You should be able to just edit it (or make a copy of this one) and point it at a different registry.
I'm trying to enable the skyDNS addon for my kubernetes cluster. I'm behind a corporate proxy, and it seems to be unable to talk to gcr.io. The following errors show up in the logs: Oct 20 13:55:46 atomic01.localdomain kubelet[112453]: W1020 13:55:46.143403 112453 manager.go:1569] Failed to pull image "gcr.io/google_containers/kube2sky:1.11" from pod "kube-dns-v9-w492r_kube-system" and container "kube2sky": image pull failed for gcr.io/google_containers/kube2sky:1.11, this may be because there are no credentials on this request. details: (invalid character '<' looking for beginning of value) Anything I try to pull from gcr.io fails, even manually: # docker pull gcr.io/google_containers/etcd:2.0.9 Trying to pull repository gcr.io/google_containers/etcd ... failed invalid character '<' looking for beginning of value I've got HTTP_PROXY and HTTPS_PROXY variables configured in /etc/sysconfig/docker. As well I have INSECURE_REGISTRY='--insecure-registry gcr.io' enabled in the same config file. I had to apply a workaround of manually pulling the pause container from docker.io and specifying it with --pod_infra_container_image=docker.io/kubernetes/pause:latest Is there such a workaround for the DNS containers? I can pull images from other docker registries just fine. What is different with the gcr.io registry and how can I use it with kubernetes?
Somehow I got it working by switching the Dockerfile around Dockerfile FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron COPY . /src WORKDIR /src RUN cp run.sh /run.sh \ && chmod a+x /run.sh \ && touch /var/log/cron.log CMD ["/run.sh"] run.sh #!/bin/sh cat << EOF > /tmp/setup-env.sh export ENV=dev export RECIPIENT=email@example.com EOF crontab /src/crons.conf exec cron -f -L 15 crons.conf PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 0-59/2 * * * 0-4 . /tmp/setup-env.sh ; echo "$(date) ${ENV} ${RECIPIENT}" >> /var/log/cron.log 2>&1 My guess is that because crontab /src/crons.conf was running at build time and build file-system is different from running file-system i.e building with native docker (rootfs) and running it on kubernetes (overlayfs).
###Instruction: I'm having an issue whereby cronjob in kubernetes doesnt seem to work. Below is the test Dockerfile used FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron RUN echo 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' | crontab RUN echo '0-59/2 * * * 0-4 export ENV=dev RECIPIENT=email@example.com; echo "$(date) ${ENV} ${RECIPIENT}" >> /var/log/cron.log' | crontab CMD ["cron", "-f", "-L", "15"] Using native docker to run the above dockerfile, I could see the output in the logfile but not in kubernetes. Checked the events but didn't notice anything unusual. Below is the replicationcontroller yaml file used apiVersion: v1 kind: ReplicationController metadata: labels: name: cron-test name: cron-test spec: replicas: 1 selector: name: cron-test template: metadata: labels: name: cron-test spec: containers: - name: cron-test image: example/cron-test:latest resources: limits: cpu: 100m memory: 512Mi imagePullPolicy: Always Thanks ### Response: Somehow I got it working by switching the Dockerfile around Dockerfile FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron COPY . /src WORKDIR /src RUN cp run.sh /run.sh \ && chmod a+x /run.sh \ && touch /var/log/cron.log CMD ["/run.sh"] run.sh #!/bin/sh cat << EOF > /tmp/setup-env.sh export ENV=dev export RECIPIENT=email@example.com EOF crontab /src/crons.conf exec cron -f -L 15 crons.conf PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 0-59/2 * * * 0-4 . /tmp/setup-env.sh ; echo "$(date) ${ENV} ${RECIPIENT}" >> /var/log/cron.log 2>&1 My guess is that because crontab /src/crons.conf was running at build time and build file-system is different from running file-system i.e building with native docker (rootfs) and running it on kubernetes (overlayfs).
I'm having an issue whereby cronjob in kubernetes doesnt seem to work. Below is the test Dockerfile used FROM debian:jessie RUN apt-get update RUN apt-get -y install --no-install-recommends cron RUN echo 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' | crontab RUN echo '0-59/2 * * * 0-4 export ENV=dev RECIPIENT=email@example.com; echo "$(date) ${ENV} ${RECIPIENT}" >> /var/log/cron.log' | crontab CMD ["cron", "-f", "-L", "15"] Using native docker to run the above dockerfile, I could see the output in the logfile but not in kubernetes. Checked the events but didn't notice anything unusual. Below is the replicationcontroller yaml file used apiVersion: v1 kind: ReplicationController metadata: labels: name: cron-test name: cron-test spec: replicas: 1 selector: name: cron-test template: metadata: labels: name: cron-test spec: containers: - name: cron-test image: example/cron-test:latest resources: limits: cpu: 100m memory: 512Mi imagePullPolicy: Always Thanks
This is not supported in kubernetes v1.0.x but in v1.1.x it will be available as service.spec.loadBalancerIP. As long as you actually own that IP we will use it.
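For reference, the field would sit on the Service spec roughly as in the sketch below (the IP is a placeholder for an address you have reserved, and the selector and ports are assumptions about your gRPC backend):

apiVersion: v1
kind: Service
metadata:
  name: grpc-frontend
spec:
  type: LoadBalancer
  loadBalancerIP: 130.211.0.100   # placeholder - a static IP you already own/reserved
  selector:
    app: grpc-server              # assumed label on your Go pods
  ports:
  - port: 443                     # raw TCP, so HTTP/2 passes straight through
    targetPort: 8443              # assumed container port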
###Instruction: A little bit of background: I have a Go service that uses gRPC to communicate with client apps. gRPC uses HTTP2, so I can't use Google App Engine or the Google Cloud HTTP Load Balancer. I need raw TCP load balancing from the internet to my Go application. I went through the GKE tutorials and read the various docs and I can't find any way to give my application a static IP address. So how do you get a static IP attached to something running in GKE? ### Response: This is not supported in kubernetes v1.0.x but in v1.1.x it will be available as service.spec.loadBalancerIP. As long as you actually own that IP we will use it.
A little bit of background: I have a Go service that uses gRPC to communicate with client apps. gRPC uses HTTP2, so I can't use Google App Engine or the Google Cloud HTTP Load Balancer. I need raw TCP load balancing from the internet to my Go application. I went through the GKE tutorials and read the various docs and I can't find any way to give my application a static IP address. So how do you get a static IP attached to something running in GKE?
Attach to the container docker exec -it 9f13966b1201 /bin/sh Export the DISCOVERY_SERVICE export DISCOVERY_SERVICE=${DISCOVERY_SERVICE:-elasticsearch-discovery} Install the Plugin /elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ keep in mind this will not be persistent
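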
###Instruction: I have a working kubernetes cluster with the elasticsearch up and running returning a 200. I am trying to install the HQ plugin. So i log into the node running the es-client, and attach to the docker container. When i execute the command / # /elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ i am getting Exception in thread "main" java.lang.IllegalArgumentException: Could not resolve placeholder 'DISCOVERY_SERVICE' I am using the yaml files from here https://github.com/pires/kubernetes-elasticsearch-cluster with these modifications https://github.com/UKHomeOffice/docker-elasticsearch/blob/master/examples/kubernetes.md Am i missing something? Thanks in advance ### Response: Attach to the container docker exec -it 9f13966b1201 /bin/sh Export the DISCOVERY_SERVICE export DISCOVERY_SERVICE=${DISCOVERY_SERVICE:-elasticsearch-discovery} Install the Plugin /elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ keep in mind this will not be persistent
I have a working kubernetes cluster with the elasticsearch up and running returning a 200. I am trying to install the HQ plugin. So i log into the node running the es-client, and attach to the docker container. When i execute the command / # /elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ i am getting Exception in thread "main" java.lang.IllegalArgumentException: Could not resolve placeholder 'DISCOVERY_SERVICE' I am using the yaml files from here https://github.com/pires/kubernetes-elasticsearch-cluster with these modifications https://github.com/UKHomeOffice/docker-elasticsearch/blob/master/examples/kubernetes.md Am i missing something? Thanks in advance
You can check the spec schema to see what you can add in a pod or replication controller or whatever: https://cloud.google.com/container-engine/docs/spec-schema You have runAsUser for what you want: apiVersion: v1 kind: Pod metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 securityContext: runAsUser: 41
###Instruction: Docker allows execution of commands as other user with docker exec -u, when USER something in used in Dockerfile. It is helpful to enter into superuser mode to debug issues, when you are running you CMD as system user in Dockerfile. How to execute commands on Kubernetes as other user? My kubectl version output is Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"} ### Response: You can check the spec schema to see what you can add in a pod or replication controller or whatever: https://cloud.google.com/container-engine/docs/spec-schema You have runAsUser for what you want: apiVersion: v1 kind: Pod metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 securityContext: runAsUser: 41
Docker allows execution of commands as other user with docker exec -u, when USER something in used in Dockerfile. It is helpful to enter into superuser mode to debug issues, when you are running you CMD as system user in Dockerfile. How to execute commands on Kubernetes as other user? My kubectl version output is Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
That's how Kubernetes works, and other solutions probably work the same way. When a machine dies, the containers on it will be rescheduled to run on another machine. That other machine has no state of the container. Even when it is the same machine, the container on it is created as a new one instead of restarting the exited container (with the data inside it). To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into this https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md This GitHub issue also has a lot of information: https://github.com/kubernetes/kubernetes/issues/6893 And indeed, that's the way to achieve HA in my opinion. Containers are all stateless; they don't hold anything inside them. Any configuration they need should be stored outside, using things like Consul or etcd. Separating things like this makes it easier to restart a container.
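As a rough sketch of the external-storage idea (assuming the cluster runs on AWS so EBS is available; the volume ID, image tag and data path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: rethinkdb-0
  labels:
    db: rethinkdb
spec:
  containers:
  - name: rethinkdb
    image: rethinkdb:2.1            # assumed tag
    ports:
    - containerPort: 28015          # client driver port
    - containerPort: 29015          # intracluster port
    volumeMounts:
    - name: rethinkdb-data
      mountPath: /data              # where the official image keeps its data
  volumes:
  - name: rethinkdb-data
    awsElasticBlockStore:
      volumeID: vol-1a2b3c4d        # placeholder EBS volume created beforehand
      fsType: ext4

If that pod is rescheduled, the replacement mounts the same EBS volume and comes back with its old data instead of joining the cluster as a brand-new instance.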
###Instruction: I'm setting up rethinkdb cluster inside kubernetes, but it doesn't work as expected for high availability requirement. Because when a pod is down, kubernetes will creates another pod, which runs another container of the same image, old mounted data (which is already persisted on host disk) will be erased and the new pod will join the cluster as a brand new instance. I'm running k8s in CoreOS v773.1.0 stable. Please correct me if i'm wrong, but that way it seems impossible to setup a database cluster inside k8s. Update: As documented here http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy, if RestartPolicy: Always it will restart the container if exits failure. It means by "restart" that it brings up the same container, or create another one? Or maybe because I stop the pod via command kubectl stop po so it doesn't restart the same container? ### Response: That's how Kubernetes works, and other solutions probably work the same way. When a machine dies, the containers on it will be rescheduled to run on another machine. That other machine has no state of the container. Even when it is the same machine, the container on it is created as a new one instead of restarting the exited container (with the data inside it). To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into this https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md This GitHub issue also has a lot of information: https://github.com/kubernetes/kubernetes/issues/6893 And indeed, that's the way to achieve HA in my opinion. Containers are all stateless; they don't hold anything inside them. Any configuration they need should be stored outside, using things like Consul or etcd. Separating things like this makes it easier to restart a container.
I'm setting up rethinkdb cluster inside kubernetes, but it doesn't work as expected for high availability requirement. Because when a pod is down, kubernetes will creates another pod, which runs another container of the same image, old mounted data (which is already persisted on host disk) will be erased and the new pod will join the cluster as a brand new instance. I'm running k8s in CoreOS v773.1.0 stable. Please correct me if i'm wrong, but that way it seems impossible to setup a database cluster inside k8s. Update: As documented here http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy, if RestartPolicy: Always it will restart the container if exits failure. It means by "restart" that it brings up the same container, or create another one? Or maybe because I stop the pod via command kubectl stop po so it doesn't restart the same container?
Yes, you can add multiple containers with the same image. The containers object must contain: name: Name of the container. It must be a DNS_LABEL and be unique within the pod. Cannot be updated. image: Docker image name. You have to make the container names unique. You can do the following: - name: app image: php-app-db-url-env --- - name: app2 |> same image image: php-app-db-url-env --- But not this one: - name: app image: php-app-db-url-env - name: app image: <any image> Also, each container spec should use a unique port number within the Pod.
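Putting that together, a corrected version of the pod from the question might look like the sketch below (the extra containerPort values are assumptions; the point is simply that names and ports don't collide):

apiVersion: v1
kind: Pod
metadata:
  name: lampapp
  labels:
    app: app
spec:
  containers:
  - name: lampdb
    image: mysql_test
    ports:
    - containerPort: 3306          # MySQL's usual port
  - name: app                      # first container from the shared image
    image: php-app-db-url-env
    ports:
    - containerPort: 8080          # assumed; must differ from app2's port
    env:
    - name: DB_URL
      value: 127.0.0.1:3306
  - name: app2                     # second container, same image, unique name and port
    image: php-app-db-url-env
    ports:
    - containerPort: 8081          # assumed
    env:
    - name: DB_URL
      value: 127.0.0.1:3306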
###Instruction: For instance can I have following yaml to produce a pod with multiple containers: apiVersion: v1 kind: Pod metadata: name: lampapp labels: app: app spec: containers: - name: lampdb image: mysql_test - name: app image: php-app-db-url-env env: - name: DB_URL value: 127.0.0.1:3306 - name: app2 image: php-app-db-url-env env: - name: DB_URL value: 127.0.0.1:3306 ### Response: Yes, you can add multiple containers with the same image. The containers object must contain: name: Name of the container. It must be a DNS_LABEL and be unique within the pod. Cannot be updated. image: Docker image name. You have to make the container names unique. You can do the following: - name: app image: php-app-db-url-env --- - name: app2 |> same image image: php-app-db-url-env --- But not this one: - name: app image: php-app-db-url-env - name: app image: <any image> Also, each container spec should use a unique port number within the Pod.
For instance can I have following yaml to produce a pod with multiple containers: apiVersion: v1 kind: Pod metadata: name: lampapp labels: app: app spec: containers: - name: lampdb image: mysql_test - name: app image: php-app-db-url-env env: - name: DB_URL value: 127.0.0.1:3306 - name: app2 image: php-app-db-url-env env: - name: DB_URL value: 127.0.0.1:3306
For PATCH to work you need to send one of the accepted content-type header values. Your example uses a merge patch, so you should send: Content-Type: application/merge-patch+json
###Instruction: Kubernetes verison: 1.02 PATCH /api/v1/namespaces/default/replicationcontrollers/test body {"spec": {"replicas": 3} } response '{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "the server responded with the status code 415 but did not return more information", "details": {}, "code": 415 }' Is this a bug for API? ### Response: For PATCH to work you need to send one of the accepted content-type header values. Your example uses a merge patch, so you should send: Content-Type: application/merge-patch+json
Kubernetes verison: 1.02 PATCH /api/v1/namespaces/default/replicationcontrollers/test body {"spec": {"replicas": 3} } response '{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "the server responded with the status code 415 but did not return more information", "details": {}, "code": 415 }' Is this a bug for API?
NodeStatus is a subresource that is periodically updated by the node (kubelet) itself, and the capacity is calculated based on available resources (cpu, mem, etc) on the node. Updating the Node object does not update the status. If you want to set a maximum capacity of a node, you can pass a flag to the kubelet during startup. This would require you to restart kubelet though.
###Instruction: I am trying to figure out how to update a node's pod capacity. I have a simple cluster setup using the Vagrant/VM environment outlined in the documentation. I have attempted to patch the node's pod capacity using kubectl doing the following: Sending just JSON needed for patch via: kubectl patch node 10.245.1.3 -p '{"status": {"capacity": {"pods": "4"}}}' and kubectl patch node 10.245.1.3 -p "`cat node.json`" Where node.json is the nodes JSON from a GET request except with pods change to 4 and the resourceVersion attribute removed. The command seems to be accepted because the node's resourceVersion number changes. However, the capacity of pods does not. Any ideas? I am using Kubernetes 1.0.6 ### Response: NodeStatus is a subresource that is periodically updated by the node (kubelet) itself, and the capacity is calculated based on available resources (cpu, mem, etc) on the node. Updating the Node object does not update the status. If you want to set a maximum capacity of a node, you can pass a flag to the kubelet during startup. This would require you to restart kubelet though.
I am trying to figure out how to update a node's pod capacity. I have a simple cluster setup using the Vagrant/VM environment outlined in the documentation. I have attempted to patch the node's pod capacity using kubectl doing the following: Sending just JSON needed for patch via: kubectl patch node 10.245.1.3 -p '{"status": {"capacity": {"pods": "4"}}}' and kubectl patch node 10.245.1.3 -p "`cat node.json`" Where node.json is the nodes JSON from a GET request except with pods change to 4 and the resourceVersion attribute removed. The command seems to be accepted because the node's resourceVersion number changes. However, the capacity of pods does not. Any ideas? I am using Kubernetes 1.0.6
You only have one node in your cluster and its status in NotReady. So you won't be able to schedule any pods. You can try to determine why your node isn't ready by looking in /var/log/kubelet.log. You can also add new nodes to your cluster (scale the cluster size up to 2) or delete the node (it will be automatically replaced by the instance group manager) to see if either of those options get you a working node.
###Instruction: I'm having an issue where a container I'd like to run doesn't appear to be getting started on my cluster. I've tried searching around for possible solutions, but there's a surprising lack of information out there to assist with this issue or anything of it's nature. Here's the most I could gather: $ kubectl describe pods/elasticsearch Name: elasticsearch Namespace: default Image(s): my.image.host/my-project/elasticsearch Node: / Labels: <none> Status: Pending Reason: Message: IP: Replication Controllers: <none> Containers: elasticsearch: Image: my.image.host/my-project/elasticsearch Limits: cpu: 100m State: Waiting Ready: False Restart Count: 0 Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Mon, 19 Oct 2015 10:28:44 -0500 Mon, 19 Oct 2015 10:34:09 -0500 12 {scheduler } failedScheduling no nodes available to schedule pods I also see this: $ kubectl get pod elasticsearch -o wide NAME READY STATUS RESTARTS AGE NODE elasticsearch 0/1 Pending 0 5s I guess I'd like to know: What prerequisites exist so that I can be confident that my container is going to run in container engine? What do I need to do in this scenario to get it running? Here's my yml file: apiVersion: v1 kind: Pod metadata: name: elasticsearch spec: containers: - name: elasticsearch image: my.image.host/my-project/elasticsearch ports: - containerPort: 9200 resources: volumeMounts: - name: elasticsearch-data mountPath: /usr/share/elasticsearch volumes: - name: elasticsearch-data gcePersistentDisk: pdName: elasticsearch-staging fsType: ext4 Here's some more output about my node: $ kubectl get nodes NAME LABELS STATUS gke-elasticsearch-staging-00000000-node-yma3 kubernetes.io/hostname=gke-elasticsearch-staging-00000000-node-yma3 NotReady ### Response: You only have one node in your cluster and its status in NotReady. So you won't be able to schedule any pods. You can try to determine why your node isn't ready by looking in /var/log/kubelet.log. You can also add new nodes to your cluster (scale the cluster size up to 2) or delete the node (it will be automatically replaced by the instance group manager) to see if either of those options get you a working node.
I'm having an issue where a container I'd like to run doesn't appear to be getting started on my cluster. I've tried searching around for possible solutions, but there's a surprising lack of information out there to assist with this issue or anything of it's nature. Here's the most I could gather: $ kubectl describe pods/elasticsearch Name: elasticsearch Namespace: default Image(s): my.image.host/my-project/elasticsearch Node: / Labels: <none> Status: Pending Reason: Message: IP: Replication Controllers: <none> Containers: elasticsearch: Image: my.image.host/my-project/elasticsearch Limits: cpu: 100m State: Waiting Ready: False Restart Count: 0 Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Mon, 19 Oct 2015 10:28:44 -0500 Mon, 19 Oct 2015 10:34:09 -0500 12 {scheduler } failedScheduling no nodes available to schedule pods I also see this: $ kubectl get pod elasticsearch -o wide NAME READY STATUS RESTARTS AGE NODE elasticsearch 0/1 Pending 0 5s I guess I'd like to know: What prerequisites exist so that I can be confident that my container is going to run in container engine? What do I need to do in this scenario to get it running? Here's my yml file: apiVersion: v1 kind: Pod metadata: name: elasticsearch spec: containers: - name: elasticsearch image: my.image.host/my-project/elasticsearch ports: - containerPort: 9200 resources: volumeMounts: - name: elasticsearch-data mountPath: /usr/share/elasticsearch volumes: - name: elasticsearch-data gcePersistentDisk: pdName: elasticsearch-staging fsType: ext4 Here's some more output about my node: $ kubectl get nodes NAME LABELS STATUS gke-elasticsearch-staging-00000000-node-yma3 kubernetes.io/hostname=gke-elasticsearch-staging-00000000-node-yma3 NotReady
The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
###Instruction: I'm using Kubernetes v1.0.6 on AWS that has been deployed using kube-up.sh. Cluster is using kube-dns. $ kubectl get svc kube-dns --namespace=kube-system NAME LABELS SELECTOR IP(S) PORT(S) kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP Which works fine. $ kubectl exec busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal Name: kubernetes.default Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal This is the resolv.conf of a pod. $ kubectl exec busybox -- cat /etc/resolv.conf nameserver 10.0.0.10 nameserver 172.20.0.2 search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal Is it possible to have the containers use an additional nameserver? I have a secondary DNS based service discovery Oon let's say 192.168.0.1) that I would like my kubernetes containers to be able to use for dns resolution. ps. A kubernetes 1.1 solution would also be acceptable :) Thank you very much in advance, George ### Response: The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
I'm using Kubernetes v1.0.6 on AWS that has been deployed using kube-up.sh. Cluster is using kube-dns. $ kubectl get svc kube-dns --namespace=kube-system NAME LABELS SELECTOR IP(S) PORT(S) kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP Which works fine. $ kubectl exec busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal Name: kubernetes.default Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal This is the resolv.conf of a pod. $ kubectl exec busybox -- cat /etc/resolv.conf nameserver 10.0.0.10 nameserver 172.20.0.2 search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal Is it possible to have the containers use an additional nameserver? I have a secondary DNS based service discovery Oon let's say 192.168.0.1) that I would like my kubernetes containers to be able to use for dns resolution. ps. A kubernetes 1.1 solution would also be acceptable :) Thank you very much in advance, George
kube-ui seems to be showing the total memory usage, as opposed to the memory working set. The former includes inactive pages which are not in use, so the memory would appear higher. To see the memory working set, you can try reaching other monitoring services such as monitoring-grafana (backed by heapster) or simply reach the cadvisor port on the node. To reach cadvisor: Run kubectl proxy and then open http://localhost:8001/api/v1/proxy/nodes/NODENAME:4194/ Alternatively, you can deploy kubedash as your UI.
###Instruction: i´ve created a cluster on google cloud platform consisting of 3 g1-small instances and have not yet added any pod / service / etc. Still, when I log on to the Kubernetes UI, all three instances show a very high memory consumption of ~ 1.3 GB. What is this memory used for? Or is it a problem with the kubernetes ui? Thanks, Fabian ### Response: kube-ui seems to be showing the total memory usage, as opposed to the memory working set. The former includes inactive pages which are not in use, so the memory would appear higher. To see the memory working set, you can try reaching other monitoring services such as monitoring-grafana (backed by heapster) or simply reach the cadvisor port on the node. To reach cadvisor: Run kubectl proxy and then open http://localhost:8001/api/v1/proxy/nodes/NODENAME:4194/ Alternatively, you can deploy kubedash as your UI.
i´ve created a cluster on google cloud platform consisting of 3 g1-small instances and have not yet added any pod / service / etc. Still, when I log on to the Kubernetes UI, all three instances show a very high memory consumption of ~ 1.3 GB. What is this memory used for? Or is it a problem with the kubernetes ui? Thanks, Fabian
Prior to DaemonSet being available, you can also specify that your pod uses a host port and set the number of replicas in your replication controller to something greater than your number of nodes. The host port constraint will allow only one pod per host.
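A hedged sketch of that workaround for the four role=nginx nodes above; the replica count is set higher than the node count, and the hostPort collision is what limits scheduling to one pod per node:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5                      # anything greater than the number of matching nodes
  selector:
    role: nginx
  template:
    metadata:
      labels:
        role: nginx
    spec:
      nodeSelector:
        role: nginx                # only land on the labelled nodes
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80             # at most one pod can claim this port per host

The surplus replica simply sits in Pending until another labelled node shows up.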
###Instruction: I have 4 nodes (kubelets) configured with a label role=nginx master ~ # kubectl get node NAME LABELS STATUS 10.1.141.34 kubernetes.io/hostname=10.1.141.34,role=nginx Ready 10.1.141.40 kubernetes.io/hostname=10.1.141.40,role=nginx Ready 10.1.141.42 kubernetes.io/hostname=10.1.141.42,role=nginx Ready 10.1.141.43 kubernetes.io/hostname=10.1.141.43,role=nginx Ready I modified the replication controller and added these lines spec: replicas: 4 selector: role: nginx But when I fire it up I get 2 pods on one host. What I want is 1 pod on each host. What am I missing? ### Response: Prior to DaemonSet being available, you can also specify that your pod uses a host port and set the number of replicas in your replication controller to something greater than your number of nodes. The host port constraint will allow only one pod per host.
I have 4 nodes (kubelets) configured with a label role=nginx master ~ # kubectl get node NAME LABELS STATUS 10.1.141.34 kubernetes.io/hostname=10.1.141.34,role=nginx Ready 10.1.141.40 kubernetes.io/hostname=10.1.141.40,role=nginx Ready 10.1.141.42 kubernetes.io/hostname=10.1.141.42,role=nginx Ready 10.1.141.43 kubernetes.io/hostname=10.1.141.43,role=nginx Ready I modified the replication controller and added these lines spec: replicas: 4 selector: role: nginx But when I fire it up I get 2 pods on one host. What I want is 1 pod on each host. What am I missing?
Unfortunately, I am not aware of any method to customize the config. You can either create your own image, open a feature request at issues.k8s.io, or even submit a PR to enhance fluentd.
###Instruction: We'd like to customise the fluentd config that comes out of the box with the kubernetes fluentd-elasticsearch addon. It seems however that there is no easy way of doing this with the current supplied Docker images. The following file: td-agent.conf is copied to the fluentd-es Docker image with no (apparent) way of us being able to customise it. We need to customise this config file so that we can handle multi-line log entries as one event. Most likely this would invovle making use of the multiline format (as detailed here fluentd in_tail) which would obviously mean a change from the default config file. Currently a multi line Java stack trace appears in Kibana as multiple entires which is not ideal. ### Response: Unfortunately, I am not aware of any method to customize the config. You can either create your own image, open a feature request at issues.k8s.io, or even submit a PR to enhance fluentd.
We'd like to customise the fluentd config that comes out of the box with the kubernetes fluentd-elasticsearch addon. It seems however that there is no easy way of doing this with the current supplied Docker images. The following file: td-agent.conf is copied to the fluentd-es Docker image with no (apparent) way of us being able to customise it. We need to customise this config file so that we can handle multi-line log entries as one event. Most likely this would invovle making use of the multiline format (as detailed here fluentd in_tail) which would obviously mean a change from the default config file. Currently a multi line Java stack trace appears in Kibana as multiple entires which is not ideal.
I just ran through this (slightly out of date) doc. What I found is that it works if you replace the hyperkube:v0.21.2 with hyperkube:v1.0.7 in the 2 "docker run" lines, and replace 0.18.2 with 1.0.7 in the kubectl download URL. I have offered a pull-request to update this doc. Sorry for the trouble.
###Instruction: Followed this guide to starting a local-machine kubernetes cluster: http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html I've created various pods with .yaml files and everything works, I can access nginx and mysql using container IPs (in the 172.17.x.x range, with docker0), however when I create services, service IPs are in the 10.0.0.x range, unreachable from other containers. Isn't kube-proxy supposed to create iptables rules automatically, providing access to containers behind the service IP? No iptables changes are happening, and other containers can't reach services. Thanks! ### Response: I just ran through this (slightly out of date) doc. What I found is that it works if you replace the hyperkube:v0.21.2 with hyperkube:v1.0.7 in the 2 "docker run" lines, and replace 0.18.2 with 1.0.7 in the kubectl download URL. I have offered a pull-request to update this doc. Sorry for the trouble.
Followed this guide to starting a local-machine kubernetes cluster: http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html I've created various pods with .yaml files and everything works, I can access nginx and mysql using container IPs (in the 172.17.x.x range, with docker0), however when I create services, service IPs are in the 10.0.0.x range, unreachable from other containers. Isn't kube-proxy supposed to create iptables rules automatically, providing access to containers behind the service IP? No iptables changes are happening, and other containers can't reach services. Thanks!
I'm not an expert with ELB, so I don't know if it is possible, but I'll talk about GCE and then assert that AWS should operate similarly. In Kubernetes v1.0.x there is an unfortunate bug that releases your external load-balancer and recreates it when you update a Service. In Kubernetes v1.1 we have gone to great lengths to NOT release the load-balancer (more precisely the external IP), so that a PUT or a PATCH (kubectl replace or kubectl patch) on the Service is safe. If AWS releases the external load-balancer (I know it's not an IP for ELB) then we should try to find a way to fix that.
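For what it's worth, the end state you are patching towards is simply a multi-port Service spec; a hedged sketch follows (the names, selector and numbers are assumptions, and note that ports must be named once there is more than one):

apiVersion: v1
kind: Service
metadata:
  name: my-service                 # assumed Service name
spec:
  type: LoadBalancer
  selector:
    app: my-app                    # assumed pod label
  ports:
  - name: http                     # a name is required once a second port exists
    port: 80
    targetPort: 8080               # assumed container port
  - name: admin                    # the newly added port
    port: 9090
    targetPort: 9090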
###Instruction: I have a service in kubernetes that is exposed on port 80 via load balancer on AWS. I also have a DNS configured to point on the load balancer host name. I want to add another port to the service without replacing it, which also replaces the load balancer and its domain. The only option I saw is to apply "patch" operation via kubectl. Is there a more convenient way I'm missing? Thanks ### Response: I'm not an expert with ELB, so I don't know if it is possible, but I'll talk about GCE and then assert that AWS should operate similarly. In Kubernetes v1.0.x there is an unfortunate bug that releases your external load-balancer and recreates it when you update a Service. In Kubernetes v1.1 we have gone to great lengths to NOT release the load-balancer (more precisely the external IP), so that a PUT or a PATCH (kubectl replace or kubectl patch) on the Service is safe. If AWS releases the external load-balancer (I know it's not an IP for ELB) then we should try to find a way to fix that.
I have a service in kubernetes that is exposed on port 80 via load balancer on AWS. I also have a DNS configured to point on the load balancer host name. I want to add another port to the service without replacing it, which also replaces the load balancer and its domain. The only option I saw is to apply "patch" operation via kubectl. Is there a more convenient way I'm missing? Thanks
Thanks for the very detailed setup, but I still have some questions. 1) When you say "Container" {A,B,C} do you mean Pod? Or are A, B, C containers in the same Pod? 2) "Container B utilises incoming port 8081 to talk to Container A and C" - What do you mean that it uses an INcoming port to talk to other containers? Who opens the connection, to whom, and on what destination port? 3) "needs to access the external database on port 3306" but later "needs to be able to be reached from an external system on port 3306" - Does B access an external database or is it serving a database on 3306? I'm confused on where traffic is coming in and where it is going out in this explanation. In general, you should avoid thinking in terms of nodes and you should avoid thinking about pods talking to pods (or containers to containers). You have some number of Services, each of which is backed by some number of Pods. Client pods (usually) talk to Services. Services receive traffic on a port and send that traffic to the corresponding targetPort on Pods. Pods receive traffic on a containerPort. None of that requires hostPorts or nodePorts. The last question is which of these Services need to be accessed from outside the cluster, and what is your environment capable of wrt load-balancing. If you answer this far, then I can come back for round 2 :)
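To make that vocabulary concrete, here is a minimal hedged sketch for "Service C" from the question (the label app: c and the nodePort value are assumptions; nodePort only applies because C has to be reachable from outside the cluster, and it must fall inside the cluster's node-port range):

apiVersion: v1
kind: Service
metadata:
  name: service-c
spec:
  type: NodePort
  selector:
    app: c                         # assumed label carried by C's pods
  ports:
  - port: 9082                     # what in-cluster clients dial on the service IP
    targetPort: 8082               # forwarded to this port on the selected pods
    nodePort: 30082                # opened on every node (default range 30000-32767)
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-c
spec:
  replicas: 3
  selector:
    app: c
  template:
    metadata:
      labels:
        app: c
    spec:
      containers:
      - name: container-c
        image: example/app-c       # placeholder image
        ports:
        - containerPort: 8082      # the port the process actually listens on

Your external software load balancer would then forward its public port 10082 to that nodePort on every node.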
###Instruction: I'm trying to wrap my head around how kubernetes (k8s) utilises ports. Having read the API documentation as well as the available docs, I'm not sure how the port mapping and port flow works. Let's say I have three containers with an externally hosted database, my k8s cluster is three on-prem CoreOS nodes, and there is a software-defined load balancer in front of all three nodes to forward traffic to all three nodes on ports 3306 and 10082. Container A utilises incoming port 8080, needs to talk to Container B and C, but does not need external access. It is defined with Replication Controller A that has 1 replica. Container B utilises incoming port 8081 to talk to Container A and C, but needs to access the external database on port 3306. It is defined with Replication Controller B that has 2 replicas. Container C utilises incoming port 8082, needs to talk to Container A and B, but also needs external access on port 10082 for end users. It is defined with Replication Controller C that has 3 replicas. I have three services to abstract the replication controllers. Service A selects Replication Controller A and needs to forward incoming traffic on port 9080 to port 8080. Service B selects Replication Controller B and needs to forward incoming traffic on ports 9081 and 3306 to ports 8081 and 3306. Service C selects Replication Controller C and needs to forward incoming traffic on port 9082 to port 8082. I have one endpoint for the external database, configured to on port 3306 with an IPv4 address. Goals: Services need to abstract Replication Controller ports. Service B needs to be able to be reached from an external system on port 3306 on all nodes. Service C needs to be able to be reached from an external system on port 10082 on all nodes. With that: When would I use each of the types of ports; i.e. port, targetPort, nodePort, etc.? ### Response: Thanks for the very detailed setup, but I still have some questions. 1) When you say "Container" {A,B,C} do you mean Pod? Or are A, B, C containers in the same Pod? 2) "Container B utilises incoming port 8081 to talk to Container A and C" - What do you mean that it uses an INcoming port to talk to other containers? Who opens the connection, to whom, and on what destination port? 3) "needs to access the external database on port 3306" but later "needs to be able to be reached from an external system on port 3306" - Does B access an external database or is it serving a database on 3306? I'm confused on where traffic is coming in and where it is going out in this explanation. In general, you should avoid thinking in terms of nodes and you should avoid thinking about pods talking to pods (or containers to containers). You have some number of Services, each of which is backed by some number of Pods. Client pods (usually) talk to Services. Services receive traffic on a port and send that traffic to the corresponding targetPort on Pods. Pods receive traffic on a containerPort. None of that requires hostPorts or nodePorts. The last question is which of these Services need to be accessed from outside the cluster, and what is your environment capable of wrt load-balancing. If you answer this far, then I can come back for round 2 :)
I'm trying to wrap my head around how kubernetes (k8s) utilises ports. Having read the API documentation as well as the available docs, I'm not sure how the port mapping and port flow works. Let's say I have three containers with an externally hosted database, my k8s cluster is three on-prem CoreOS nodes, and there is a software-defined load balancer in front of all three nodes to forward traffic to all three nodes on ports 3306 and 10082. Container A utilises incoming port 8080, needs to talk to Container B and C, but does not need external access. It is defined with Replication Controller A that has 1 replica. Container B utilises incoming port 8081 to talk to Container A and C, but needs to access the external database on port 3306. It is defined with Replication Controller B that has 2 replicas. Container C utilises incoming port 8082, needs to talk to Container A and B, but also needs external access on port 10082 for end users. It is defined with Replication Controller C that has 3 replicas. I have three services to abstract the replication controllers. Service A selects Replication Controller A and needs to forward incoming traffic on port 9080 to port 8080. Service B selects Replication Controller B and needs to forward incoming traffic on ports 9081 and 3306 to ports 8081 and 3306. Service C selects Replication Controller C and needs to forward incoming traffic on port 9082 to port 8082. I have one endpoint for the external database, configured to on port 3306 with an IPv4 address. Goals: Services need to abstract Replication Controller ports. Service B needs to be able to be reached from an external system on port 3306 on all nodes. Service C needs to be able to be reached from an external system on port 10082 on all nodes. With that: When would I use each of the types of ports; i.e. port, targetPort, nodePort, etc.?
That error means that the master can't communicate with the nodes in your cluster. Have you deleted the instances from your cluster, or messed with the firewalls? There should be a firewall rule allowing SSH access to the nodes in the cluster from the master's IP address, and an entry in your project-wide metadata with the master's public SSH key.
###Instruction: Is there any way to access the UI on the GKE service? I tried following the information on https://github.com/kubernetes/kubernetes/blob/v1.0.6/docs/user-guide/ui.md And got this Error: 'empty tunnel list.' Trying to reach: 'http://10.64.xx.xx:8080/' Is this feature turned on ? ### Response: That error means that the master can't communicate with the nodes in your cluster. Have you deleted the instances from your cluster, or messed with the firewalls? There should be a firewall rule allowing SSH access to the nodes in the cluster from the master's IP address, and an entry in your project-wide metadata with the master's public SSH key.
Is there any way to access the UI on the GKE service? I tried following the information on https://github.com/kubernetes/kubernetes/blob/v1.0.6/docs/user-guide/ui.md And got this Error: 'empty tunnel list.' Trying to reach: 'http://10.64.xx.xx:8080/' Is this feature turned on ?
Google Container Engine is a zonal service, which means that each cluster runs wholly within a single zone. If you want to do this manually you can follow these instructions to create a new instance template in a different zone and spin up nodes that connect to your existing Kubernetes master. Just be aware that you will be paying for the bandwidth between the nodes and the master between different zones even if they are in the same region (see the GCE Network Pricing).
###Instruction: If possible I'd like setup a multi-zone Kubernetes cluster on GCE but when creating a new cluster from the web console I can only select a single zone: Is it possible to have a multi-zone or even multi-region Kubernetes cluster on GCE? ### Response: Google Container Engine is a zonal service, which means that each cluster runs wholly within a single zone. If you want to do this manually you can follow these instructions to create a new instance template in a different zone and spin up nodes that connect to your existing Kubernetes master. Just be aware that you will be paying for the bandwidth between the nodes and the master between different zones even if they are in the same region (see the GCE Network Pricing).
If possible I'd like setup a multi-zone Kubernetes cluster on GCE but when creating a new cluster from the web console I can only select a single zone: Is it possible to have a multi-zone or even multi-region Kubernetes cluster on GCE?
You might be interested in our new Ingress API and load-balancer controllers (beta in kubernetes v1.1) We have HAProxy support, but not nginx (yet). https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
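For a feel of what the Ingress API (beta in v1.1) looks like, a minimal hedged sketch; the host, paths and Service names/ports below are placeholders for your three backends:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # placeholder hostname
    http:
      paths:
      - path: /svc1
        backend:
          serviceName: service-1   # assumed existing Services and ports
          servicePort: 80
      - path: /svc2
        backend:
          serviceName: service-2
          servicePort: 80
      - path: /svc3
        backend:
          serviceName: service-3
          servicePort: 80

A controller such as the HAProxy-based service-loadbalancer linked above watches these objects and configures the proxy for you, instead of a hand-maintained upstream block.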
###Instruction: I was hoping to follow this architecture: -> service 1 -> nginx -> service 2 -> service 3 The reason I want to use nginx (instead of the default kube services using kube-proxy) is that it has x-accel-redirects which is great for authentication. Normally, nginx and all three services would be on separate boxes. With Kubernetes, I'm trying to find the best way of architecting this. I know there's a way to do it by simulating the service proxying, but this seems like a jenky approach. I was thinking to make nginx and all three services a kube service, so the flow would be -> kube-proxy -> nginx -> kube-proxy -> service 1, 2, 3 and the nginx config would look like upstream backend { server backend1.example.com; # with skydns name, or the service ip which is static } server { location / { proxy_pass http://backend; } } I was wondering if there are any downsides to this approach, and if there are any better approaches? Seems like a request will go through quite a bit of hops and processing. ### Response: You might be interested in our new Ingress API and load-balancer controllers (beta in kubernetes v1.1) We have HAProxy support, but not nginx (yet). https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
I was hoping to follow this architecture: -> service 1 -> nginx -> service 2 -> service 3 The reason I want to use nginx (instead of the default kube services using kube-proxy) is that it has x-accel-redirects which is great for authentication. Normally, nginx and all three services would be on separate boxes. With Kubernetes, I'm trying to find the best way of architecting this. I know there's a way to do it by simulating the service proxying, but this seems like a jenky approach. I was thinking to make nginx and all three services a kube service, so the flow would be -> kube-proxy -> nginx -> kube-proxy -> service 1, 2, 3 and the nginx config would look like upstream backend { server backend1.example.com; # with skydns name, or the service ip which is static } server { location / { proxy_pass http://backend; } } I was wondering if there are any downsides to this approach, and if there are any better approaches? Seems like a request will go through quite a bit of hops and processing.
kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver. When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
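A trimmed, hedged example of what ~/.kube/config typically contains for the Vagrant setup (the server address, names and credential data below are placeholders):

apiVersion: v1
kind: Config
current-context: vagrant
clusters:
- name: vagrant
  cluster:
    server: https://10.245.1.2                  # the master's apiserver, reachable from the host
    certificate-authority-data: <base64 CA certificate>
contexts:
- name: vagrant
  context:
    cluster: vagrant
    user: vagrant
users:
- name: vagrant
  user:
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>

Every kubectl invocation reads this file, resolves the current context, and issues authenticated HTTPS calls against that server; kubectl config view prints it with the embedded secrets elided.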
###Instruction: I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem. How is this implemented? ### Response: kubectl has a configuration file that specifies the location of the Kubernetes apiserver and the client credentials to authenticate to the master. All of the commands issued by kubectl are over the HTTPS connection to the apiserver. When you run the scripts to bring up a cluster, they typically generate this local configuration file with the parameters necessary to access the cluster you just created. By default, the file is located at ~/.kube/config.
I've installed Kubernetes via Vagrant on OS X and everything seems to be working fine, but I'm unsure how kubectl is able to communicate with the master node despite being local to the workstation filesystem. How is this implemented?
In the not-too-distant future we will introduce controls for network policy in Kubernetes. As of today that is not integrated, but several vendors (e.g. Weave, Calico) have policy engines that can work with Kubernetes.
###Instruction: TL;DR Kubernetes allows all containers to access all other containers on the entire cluster, this seems to greatly increase the security risks. How to mitigate? Unlike Docker, where one would usually only allow network connection between containers that need to communicate (via --link), each Pod on Kubernetes can access all other Pods on that cluster. That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL, running on Kubernetes, a compromised Nginx would be able to access the database. People used to run all those on a single machine, but that machine would have serious periodic updates (more than containers), and SELinux/AppArmor for serious people. One can mitigate a bit the risks by having each project (if you have various independent websites for example) run each on their own cluster, but that seems wasteful. The current Kubernetes security seems to be very incomplete. Is there already a way to have a decent security for production? ### Response: In the not-too-distant future we will introduce controls for network policy in Kubernetes. As of today that is not integrated, but several vendors (e.g. Weave, Calico) have policy engines that can work with Kubernetes.
TL;DR Kubernetes allows all containers to access all other containers on the entire cluster, this seems to greatly increase the security risks. How to mitigate? Unlike Docker, where one would usually only allow network connection between containers that need to communicate (via --link), each Pod on Kubernetes can access all other Pods on that cluster. That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL, running on Kubernetes, a compromised Nginx would be able to access the database. People used to run all those on a single machine, but that machine would have serious periodic updates (more than containers), and SELinux/AppArmor for serious people. One can mitigate a bit the risks by having each project (if you have various independent websites for example) run each on their own cluster, but that seems wasteful. The current Kubernetes security seems to be very incomplete. Is there already a way to have a decent security for production?