###Instruction: I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node from outside Amazon EC2, such as from my local dev setup or VPS instances provided by other vendors. Currently, when trying from my local setup: redis-cli -h my-node-endpoint -p 6379 I only get a timeout after some time. ### Response: SSH port forwarding should do the trick. Try running this from your client: ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis> Then, from your client: redis-cli -h 127.0.0.1 -p 6379 It works for me. Please note that the default port for Redis is 6379, not 6739. Also make sure the security group of the EC2 node you use to connect to Redis is allowed inbound access in your cache security group. AWS now also supports accessing your cluster directly; more info here
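A minimal sketch of the tunnel described above, with placeholder hostnames standing in for your own cache endpoint and EC2 bastion (both are assumptions to replace with your values):
# open a background tunnel from local port 6379 to the cache node, via an EC2 host that can reach it
ssh -f -N -L 6379:my-cache.abc123.0001.use1.cache.amazonaws.com:6379 ec2-user@my-ec2-bastion.example.com
# then talk to the local end of the tunnel as if it were the cache itself
redis-cli -h 127.0.0.1 -p 6379 ping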
###Instruction: I've got the following ReplicationController JSON defined: { "id": "PHPController", "kind": "ReplicationController", "apiVersion": "v1beta1", "desiredState": { "replicas": 2, "replicaSelector": {"name": "php"}, "podTemplate": { "desiredState": { "manifest": { "version": "v1beta1", "id": "PHPController", "volumes": [{ "name": "wordpress", "path": "/mnt/nfs/wordpress_a", "hostDir": "/mnt/nfs/wordpress_a"}], "containers": [{ "name": "php", "image": "internaluser/php53", "ports": [{"containerPort": 80, "hostPort": 9021}], "volumeMounts": [{"name": "wordpress", "mountPath": "/mnt/nfs/wordpress_a"}] }] } }, "labels": {"name": "php"} }}, "labels": {"name": "php"} } The container starts correctly when run with "docker run -t -i -p 0.0.0.0:9021:80 -v /mnt/nfs/wordpress_a:/mnt/nfs/wordpress_a:rw internaluser/php53". /mnt/nfs/wordpress_a is an NFS share, mounted on all of the minions. Each minion has full RW access and I have verified that the share is present. After creating the pod containers with the Replication Controller, I can see that the volume was never actually bound, and/or incorrectly mounted: "Volumes": { "/mnt/nfs/wordpress_a": "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7" }, "VolumesRW": { "/mnt/nfs/wordpress_a": true } "HostConfig": { "Binds": null, "ContainerIDFile": "", "LxcConf": null, "Privileged": false, "PortBindings": { "80/tcp": [ { "HostIp": "", "HostPort": "9021" } ] }, I find it strange that the container believes /mnt/nfs/wordpress_a is mapped to "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7". From the kubelet log: Desired [10.101.4.15]: [{Namespace:etcd Name:c823da9e-4437-11e4-a3b1-0050568421eb Manifest:{Version:v1beta1 ID:c823da9e-4437-11e4-a3b1-0050568421eb UUID:c823da9e-4437-11e4-a3b1-0050568421eb Volumes:[{Name:wordpress Source:}] Containers:[{Name:php Image:internaluser/php53 Command:[] WorkingDir: Ports:[{Name: HostPort:9021 ContainerPort:80 Protocol:TCP HostIP:}] Env:[{Name:SERVICE_HOST Value:10.1.1.1}] Memory:0 CPU:0 VolumeMounts:[{Name:wordpress ReadOnly:false MountPath:/mnt/nfs/wordpress_a}] LivenessProbe: Lifecycle: Privileged:false}] RestartPolicy:{Always:0xa99a20 OnFailure: Never:}}}] Does anyone have experience with this sort of thing? I've been driving myself crazy troubleshooting this. Thanks! ### Response: Solved. The volumes syntax was incorrect. https://github.com/GoogleCloudPlatform/kubernetes/issues/1446
###Instruction: I'm trying to evaluate Kubernetes. I'm interested in running Kubernetes on a CoreOS cluster, but the official documentation doesn't have much, only two references to the CoreOS blog; I'm currently following the guide to running the Kubernetes example on CoreOS, part 1. My apiserver.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/apiserver Description=Kubernetes API Server [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/controller-manager Description=Kubernetes Controller Manager Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/controller-manager \ --etcd_servers=http://127.0.0.1:4001 \ --master=127.0.0.1:8080 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/apiserver \ --address=127.0.0.1 \ --port=8080 \ --etcd_servers=http://127.0.0.1:4001 \ --machines=127.0.0.1 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target My controller-manager.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/controller-manager Description=Kubernetes Controller Manager Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/controller-manager \ --etcd_servers=http://127.0.0.1:4001 \ --master=127.0.0.1:8080 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target My kubelet.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/kubelet Description=Kubernetes Kubelet Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/kubelet \ --address=127.0.0.1 \ --port=10250 \ --hostname_override=127.0.0.1 \ --etcd_servers=http://127.0.0.1:4001 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target My proxy.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/proxy Description=Kubernetes Proxy Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/proxy --etcd_servers=http://127.0.0.1:4001 --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target The problem arises when I create a Kubernetes redis pod.
When I execute command: /opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 -c kubernetes-coreos/pods/redis.json create /pods the error outputs after a long time waiting: {Kind:"", ID:"", CreationTimestamp:"", SelfLink:"", ResourceVersion:0x0}, Status:"failure", Details:"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"redis\", CreationTimestamp:\"\", SelfLink:\"\", ResourceVersion:0x0}, Labels:map[string]string{\"name\":\"redis\"}, DesiredState:api.PodState{Manifest:api.ContainerManifest{Version:\"v1beta1\", ID:\"redis\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"redis\", Image:\"registry.vc.datys.cu:5000/redis\", Command:[]string(nil), WorkingDir:\"\", Ports:[]api.Port{api.Port{Name:\"\", HostPort:6379, ContainerPort:6379, Protocol:\"\", HostIP:\"\"}}, Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:api.LivenessProbe{Enabled:false, Type:\"\", HTTPGet:api.HTTPGetProbe{Path:\"\", Port:\"\", Host:\"\"}, InitialDelaySeconds:0}}}}, Status:\"\", Host:\"\", HostIP:\"\", Info:api.PodInfo(nil)}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", Info:api.PodInfo(nil)}}", Code:500} NOTE: When I execute: sudo systemctl status proxy return: ● proxy.service - Kubernetes Proxy Loaded: loaded (/etc/systemd/system/proxy.service; disabled) Active: active (running) since Fri 2014-08-08 14:21:36 UTC; 8s ago Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 1036 (proxy) CGroup: /system.slice/proxy.service └─1036 /opt/kubernetes/bin/proxy --etcd_servers=http://127.0.0.1:4001 --logtostderr=true Aug 08 14:21:42 core-01 proxy[1036]: I0808 14:21:42.074694 01036 logs.go:38] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/ser...rted=true] Aug 08 14:21:42 core-01 proxy[1036]: E0808 14:21:42.074763 01036 etcd.go:115] Failed to get the key registry/services: 100: Key not found (/registry) [57] Aug 08 14:21:42 core-01 proxy[1036]: E0808 14:21:42.074791 01036 etcd.go:75] Failed to get any services: 100: Key not found (/registry) [57] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075337 01036 logs.go:38] etcd DEBUG: get [registry/services/specs http://127.0.0.1:4001] [%!s(MISSING)] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075501 01036 logs.go:38] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/services...rted=true] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075528 01036 logs.go:38] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/...thod GET] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.078524 01036 logs.go:38] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registr...rted=true] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.078824 01036 logs.go:38] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/ser...rted=true] Aug 08 14:21:44 core-01 proxy[1036]: E0808 14:21:44.078897 01036 etcd.go:115] Failed to get the key registry/services: 100: Key not found (/registry) [57] Aug 08 14:21:44 core-01 proxy[1036]: E0808 14:21:44.078925 01036 etcd.go:75] Failed to get any services: 100: Key not found (/registry) [57] Hint: Some lines were ellipsized, use -l to show in full. 
And when I execute: /opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 list /pods it returns: Name Image(s) Host Labels ---------- ---------- ---------- ---------- redis dockerfile/redis 127.0.0.1/ name=redis What's the problem? I appreciate any ideas or collaboration. EDIT 1: My redis.json: { "id": "redis", "desiredState": { "manifest": { "version": "v1beta1", "id": "redis", "containers": [{ "name": "redis", "image": "registry.vc.datys.cu:5000/redis", "ports": [{ "containerPort": 6379, "hostPort": 6379 }] }] } }, "labels": { "name": "redis" } } ### Response: The errors you are seeing in your log indicate that the pod you are trying to create has a port conflict with the Kubernetes API server: both are trying to use port 8080. Fix this by changing the pod to run on another port. You'll have to clean things up first, so remove the redis pod using the kubecfg command, update the pod.json file, and try again. Also, please follow the quickstart guide at https://github.com/kelseyhightower/kubernetes-coreos. Kubernetes is a fast-moving project, so that blog will become out of date fairly quickly.
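A hedged sketch of the cleanup-and-retry cycle described above, reusing the kubecfg invocation style from the question (the exact delete syntax is an assumption and may vary between builds of the old kubecfg tool):
# remove the existing redis pod
/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 delete pods/redis
# edit redis.json so its hostPort no longer collides with the apiserver's port, then recreate the pod
/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 -c kubernetes-coreos/pods/redis.json create /pods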
###Instruction: Can you think of Azure Resource Manager as the equivalent of what Kubernetes is for Docker? ### Response: I think that the two are slightly different (caveat: I have only cursory knowledge of Resource Manager). Azure Resource Manager lets you think about a collection of separate resources as a single composite application, much like Google's Deployment Manager. It makes it easier to create repeatable deployments and to make sense of a big collection of heterogeneous resources as belonging to a single app. Kubernetes, on the other hand, turns a collection of virtual machines into a new resource type (a cluster). It goes beyond configuration and deployment of resources and acts as a runtime environment for distributed apps. It has an API that can be used at runtime to deploy and wire in your containers and to dynamically scale your cluster up and down, and it will make sure that your intent is being met (if you ask for three running containers of a certain type, it will make sure that there are always three healthy containers of that type running).
###Instruction: On AWS, I'm hosting multiple (totally different) domains on EC2, covered by an ELB on top. I already have one wildcard SSL cert for one domain and its children (xxxx.site1.com). Can I now add one more single SSL cert (on the same ELB) for another, different domain, like www.site2.com? I'm asking because some articles say it won't work and will just crash. Please kindly advise. ### Response: No. The only way you could do it is if you use a second port for HTTPS connections (other than 443), which doesn't apply to real-world scenarios since 443 is the default port for HTTPS. Having said that, you can simply create a second ELB and assign your second certificate to it. You can also forward its traffic to the same backend servers that the first ELB is forwarding to. Hope this helps.
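A hedged sketch of the second-ELB approach using the classic ELB CLI; the load balancer name, subnet, instance ID, and certificate ARN are placeholders, and flags may differ slightly by CLI version:
# create a second classic ELB whose HTTPS listener carries the second certificate
aws elb create-load-balancer --load-balancer-name site2-elb --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/site2-cert" --subnets subnet-aaaa1111
# register the same backend instances that already sit behind the first ELB
aws elb register-instances-with-load-balancer --load-balancer-name site2-elb --instances i-0abc1234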
###Instruction: I'm trying to follow the directions to get the Google Cloud Platform Kubernetes GuestBook example running. I've got a "kubernetes-guestbook-example" project ID with billing enabled in the Google Developers Console under my account, and I do a "gcloud auth login" to ensure I'm running as that account. Step zero says that I should make sure I have "turned up a Kubernetes cluster". I think I've done this. I also run "gcloud config set project kubernetes-guestbook-example". Then I run "hack/dev-build-and-up.sh" from the kubernetes subdirectory and it produces: Building local go components Building release tree Packaging release Building launch script Uploading to Google Storage Release pushed (devel/jamesfremen/r20141001-192247). Starting cluster using provider: gce Release: gs://kubernetes-releases-68782/devel/jamesfremen/r20141001-192247 Project: kubernetes-guestbook-example (autodetected from gcloud config) Error: The resource 'projects/kubernetes-guestbook-example' was not found It seems to be a minor path issue. I think it's looking for examples/guestbook but I'm not sure how to configure it properly. Any help would be appreciated! ### Response: It's been a while, but I recall that the problem occurred because billing must be enabled for the application in the Google Developers Console. It's an easy catch once you get used to the platform, but non-obvious when you're climbing the learning curve.
###Instruction: So I figured I should start using Ansible Galaxy when possible, instead of writing my own roles. I just installed my first role and it was installed to /etc/local/ansible/roles (I am on OSX). Now I wonder how you install these roles where you actually need them. Do I just copy the role to where I need it, or is there an Ansible way of doing it? ### Response: Yes, you would copy them according to a sample project structure: site.yml webservers.yml fooservers.yml kubernetes.yaml roles/ common/ files/ templates/ tasks/ handlers/ vars/ meta/ webservers/ files/ templates/ tasks/ handlers/ vars/ meta/ kubernetes/ files/ templates/ tasks/ handlers/ vars/ meta/ Or you can just run ansible-galaxy with the -p ROLES_PATH or --roles-path=ROLES_PATH option to install it under /your/project/root. You can also use the /etc/local/ansible directory as your project root if you'd like to. Additionally, you can get help by running the command ansible-galaxy install --help
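A short sketch of the --roles-path route mentioned above; the role name is a placeholder for whatever you installed from Galaxy:
# install a Galaxy role directly into the roles/ directory of the current project
ansible-galaxy install -p ./roles username.rolename
# equivalent long form
ansible-galaxy install --roles-path=./roles username.rolename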
###Instruction: What exactly is the difference between Apache's Mesos and Google's Kubernetes? I understand both are server cluster management software. Can anyone elaborate where the main differences are - when would which framework be preferred? Why would you want to use Kubernetes on top of Mesosphere? ### Response: Kubernetes is an open source project that brings 'Google style' cluster management capabilities to the world of virtual machines, or 'on the metal' scenarios. It works very well with modern operating system environments (like CoreOS or Red Hat Atomic) that offer up lightweight computing 'nodes' that are managed for you. It is written in Golang and is lightweight, modular, portable and extensible. We (the Kubernetes team) are working with a number of different technology companies (including Mesosphere, which curates the Mesos open source project) to establish Kubernetes as the standard way to interact with computing clusters. The idea is to reproduce the patterns that we have seen people need when building cluster applications, based on our experience at Google. Some of these concepts include pods (a way to group containers together), replication controllers (a way to handle the lifecycle of containers), labels (a way to find and query containers), and services (a set of containers performing a common function). So with Kubernetes alone you will have something that is simple, easy to get up and running, portable and extensible, and that adds 'cluster' as a noun to the things that you manage in the lightest-weight manner possible. Run an application on a cluster, and stop worrying about an individual machine. In this case, a cluster is a flexible resource just like a VM. It is a logical computing unit. Turn it up, use it, resize it, turn it down quickly and easily. With Mesos, there is a fair amount of overlap in terms of the basic vision, but the products are at quite different points in their lifecycle and have different sweet spots. Mesos is a distributed systems kernel that stitches together a lot of different machines into a logical computer. It was born for a world where you own a lot of physical resources to create a big static computing cluster. The great thing about it is that lots of modern scalable data processing applications run well on Mesos (Hadoop, Kafka, Spark), and it is nice because you can run them all on the same basic resource pool, along with your new-age container-packaged apps. It is somewhat more heavyweight than the Kubernetes project, but it is getting easier and easier to manage thanks to the work of folks like Mesosphere. Now what gets really interesting is that Mesos is currently being adapted to add a lot of the Kubernetes concepts and to support the Kubernetes API. So it will be a gateway to getting more capabilities for your Kubernetes app (high-availability master, more advanced scheduling semantics, the ability to scale to a very large number of nodes) if you need them, and it is well suited to run production workloads (Kubernetes is still in an alpha state). When asked, I tend to say: Kubernetes is a great place to start if you are new to the clustering world; it is the quickest, easiest and lightest way to kick the tires and start experimenting with cluster-oriented development. It offers a very high level of portability since it is being supported by a lot of different providers (Microsoft, IBM, Red Hat, CoreOS, Mesosphere, VMware, etc.).
If you have existing workloads (Hadoop, Spark, Kafka, etc.), Mesos gives you a framework that lets you interleave those workloads with each other and mix in some of the new stuff, including Kubernetes apps. Mesos also gives you an escape valve if you need capabilities that are not yet implemented by the community in the Kubernetes framework.
###Instruction: I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together. Currently I have tried the following configuration: { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } However when I execute kubecfg -c app.json create /pods I get the following error: F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp": null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec :0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\ ", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil ), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500} How can I modify the configuration accordingly? Running kubernetes on a vagrant vm (yungsang/coreos). ### Response: The error in question here is "failed to find fit". This generally happens when you have a port conflict (try and use the same hostPort too many times or perhaps you don't have any worker nodes/minions. I'd suggest you either use the Vagrant file that is in the Kubernetes git repo (see http://kubernetes.io) as we have been trying to make sure that stays working as Kubernetes is under very active development. If you want to make it work with the CoreOS single machine set up, I suggest you hop on IRC (#google-containers on freenode) and try and get in touch with Kelsey Hightower.
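If you go the Vagrant route recommended above, here is a rough sketch of that era's workflow (assuming the repo's bundled Vagrantfile and enough local RAM for the VMs; exact steps may have changed since):
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes
# bring up a local master plus minion VMs using the Vagrant provider
export KUBERNETES_PROVIDER=vagrant
cluster/kube-up.sh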
###Instruction: I am not sure whether what I am trying to do is possible or the correct way. One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project that is shared with me with owner access. After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the Kubernetes workspace. I also set the project name as the default project in geconfig, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'. But when I try to list the existing pods using 'cluster/kubecfg.sh list pods' I see "F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe") I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful. ### Response: You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.
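A hedged sketch of that recovery step; the file name under /usr/share/nginx is an assumption based on the kubecfg file names mentioned in the question, so list the directory first and adjust accordingly:
# see what certificate material the master actually has
gcutil ssh --zone us-central1-b kubernetes-master 'ls /usr/share/nginx'
# copy one cert back to the workstation (repeat per file, with the real file names)
gcutil ssh --zone us-central1-b kubernetes-master 'sudo cat /usr/share/nginx/kubecfg.crt' > ~/.kubecfg.crt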
###Instruction: I tried to create a new cluster in Container Engine in the Google Developers Console. It finished pretty quickly with a yellow triangle with an exclamation point. I'm assuming that means it didn't work. Any idea what I could be doing wrong? ### Response: There are a few things that could go wrong. The best way to figure out what's wrong in your situation is to try the gcloud command line tool, which gives better error information. Information about how to install and use it is in Container Engine's documentation. Other than the default network being removed (as mentioned by Robert Bailey), you may be trying to create more VM instances than you have quota for. You can check what your quota is in the developer console under Compute > Compute Engine > Quota. You're most likely to go over quota on either CPUs or in-use IP addresses, since each VM created is given an ephemeral IP address.
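If you prefer to check the quota angle from the command line rather than the console, a small sketch (the region name is an assumption; pick the one your cluster lives in):
# shows per-region quota limits and current usage, including CPUs and in-use IP addresses
gcloud compute regions describe us-central1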
###Instruction: How do I run a docker image that I built locally on Google Container Engine? ### Response: You can push your image to Google Container Registry and reference it from your pod manifest. Detailed instructions, assuming you have DOCKER_HOST properly set up, a GKE cluster running the latest version of Kubernetes, and the Google Cloud SDK installed: 1. Set up some environment variables: gcloud components update kubectl gcloud config set project <your-project> gcloud config set compute/zone <your-cluster-zone> gcloud config set container/cluster <your-cluster-name> gcloud container clusters get-credentials <your-cluster-name> 2. Tag your image: docker tag <your-image> gcr.io/<your-project>/<your-image> 3. Push your image: gcloud docker push gcr.io/<your-project>/<your-image> 4. Create a pod manifest for your container, my-pod.yaml: id: my-pod kind: Pod apiVersion: v1 desiredState: manifest: containers: - name: <container-name> image: gcr.io/<your-project>/<your-image> ... 5. Schedule this pod: kubectl create -f my-pod.yaml Repeat from step (4) for each pod you want to run. You can have multiple definitions in a single file, using a line with --- as a delimiter.
###Instruction: I understand the Container Engine is currently on alpha and not yet complete. From the docs I assume there is no auto-scaling of pods (e.g. depending on CPU load) yet, correct? I'd love to be able to configure a replication controller to automatically add pods (and VM instances) when the average CPU load reaches a defined threshold. Is this somewhere on the near future roadmap? Or is it possible to use the Compute Engine Autoscaler for this? (if so, how?) ### Response: As we work towards a Beta release, we're definitely looking at integrating the Google Compute Engine AutoScaler. There are actually two different kinds of scaling: Scaling up/down the number of worker nodes in the cluster depending on # of containers in the cluster Scaling pods up and down. Since Kubernetes is an OSS project as well, we'd also like to add a Kubernetes native autoscaler that can scale replication controllers. It's definitely something that's on the roadmap. I expect we will actually have multiple autoscaler implementations, since it can be very application specific...
###Instruction: If I start a Google Container Engine cluster like this: gcloud container clusters --zone=$ZONE create $CLUSTER_NAME I get three worker nodes. How can I create a cluster with more? ### Response: It's possible to create a different number of worker nodes by using the --num-nodes option when you create the cluster, like this: gcloud container clusters --zone=$ZONE create $CLUSTER_NAME --num-nodes=5
###Instruction: I'm very interesting in the new Google Cloud Service: Google Container Engine, namely in be a able to write systems that can scale using containers' properties. I saw the StackOverflow questions: Autoscaling in Google Container Engine Increasing the cluster size in Google Container Engine And I understood that the auto-scale (and other features) are planned, however, I didn't see any release dates. When are the referred auto-scale features/integrations be released/available? When will the Google Container Engine reach Beta (leave Alpha)? Does Google Container Engine have a roadmap with release dates that can be consulted? ### Response: Kubernetes roadmap is here: https://github.com/kubernetes/kubernetes/blob/master/docs/roadmap.md Unfortunately, Google Container Engine hasn't released a roadmap yet.
###Instruction: There is a problem: I can't link my pod container with persistent storage. This is the config of my pod, where elastic is the name of the attached disk (same region, mounted and formatted as it should be). When I start the pod with this config I get this error: Unable to mount volumes for pod elastic.etcd I can link my container to any other type of volume, either emptyDir or hostDir, and all work fine, but not in the case of the mounted disk. And I really can't find a good example of persistentDisk volumes. id: elastic kind: Pod apiVersion: v1beta1 desiredState: manifest: version: v1beta1 id: elastic volumes: - name: elastic-persistent-storage source: persistentDisk: pdName : elastic fsType : ext4 containers: - name: elastic image: dockerfile/elasticsearch cpu: 1000 volumeMounts: - name: elastic-persistent-storage mountPath: /data ports: - name: elastic containerPort: 9200 hostPort: 9200 labels: name: elastic role: storage elastic is the name of the disk in the same project and same region, attached to the master node of the cluster. It is also formatted and mounted. Thanks! ### Response: There is an example of mounting a PD in the GitHub documentation: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#creating-a-pd PDs in GCE can only be attached to a single VM (in read/write mode), so if the disk is already attached to your master then it will not be possible to also attach it to the node on which your pod is scheduled. Try detaching the PD from the master and then scheduling the pod.
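A minimal sketch of the detach step suggested above; the zone and master instance name are assumptions matching a typical cluster layout, so adjust them to yours:
# detach the 'elastic' PD from the master so it can be attached to the node the pod is scheduled on
gcloud compute instances detach-disk kubernetes-master --disk elastic --zone us-central1-b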
There is problem , I can't link my pod container with persistent storage This is config of my pod, where elastic is the name of the attached disk (same region, mounted and formatted as should), when I start the pod with this config I have this error: Unable to mount volumes for pod elastic.etcd I could link my container to any other type of volume either emptyDir or hostDir and all work fine. But in the case of the mounted disk not. And I really can't find some good example about persitsentDisk volumes. id: elastic kind: Pod apiVersion: v1beta1 desiredState: manifest: version: v1beta1 id: elastic volumes: - name: elastic-persistent-storage source: persistentDisk: pdName : elastic fsType : ext4 containers: - name: elastic image: dockerfile/elasticsearch cpu: 1000 volumeMounts: - name: elastic-persistent-storage mountPath: /data ports: - name: elastic containerPort: 9200 hostPort: 9200 labels: name: elastic role: storage elastic is the name of the disk in the same project, same region and attached to master node of claster. Also it is formatted and mounted. Thanks!
It is probably due to a feature that is not yet implemented; see this issue: https://github.com/GoogleCloudPlatform/kubernetes/issues/1886 You can copy the files from /usr/share/nginx/... on the master into your home dir and try again.
###Instruction: I am not sure either what I am trying to do is possible or correct way. One of my colleague spinup kubernetes gce cluster (with 1 master and 4 minions.) in a project which is shared with me as owner access. After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of the at my home folder and setup the kubernetes workspace. I also set the project name as a default project in geconfig. and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master' But when I try to list of existing pods using 'cluster/kubecfg.sh list pods' I see "F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe") I tried to debug from my side but failed to come any conclusion. Any sort of clue will be helpful. ### Response: It is probably due to a not implemented feature, see this issue: https://github.com/GoogleCloudPlatform/kubernetes/issues/1886 you can copy the files from /usr/share/nginx/... on the master into your home dir and try again.
I am not sure whether what I am trying to do is possible or the correct way. One of my colleagues spun up a kubernetes gce cluster (with 1 master and 4 minions) in a project which is shared with me with owner access. After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the kubernetes workspace. I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'. But when I try to list the existing pods using 'cluster/kubecfg.sh list pods' I see "F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe") I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful.
The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes. For kubernetes to work, the following network setup must be satisfied: Every container should be able to talk to every other container, even across nodes. This also means that the bridge IP ranges for those containers must not overlap. Code running on any node that isn't in a container should be able to reach every container (and vice versa), even across nodes. It is not necessary (but useful) that computers on the network that aren't part of the cluster can reach the containers directly. There are a lot of ways to achieve this -- for instance the setup for vagrant sets up GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch you can probably just do a big layer 2 network with bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your setup) is to use something like flannel. We are working on making this stuff easier to start up (without using a mess of shell scripts) and are thinking of building something like flannel in so that there is a reasonable default.
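If you go the flannel route, a minimal sketch (the CIDR and etcd address here are assumptions; pick a range that does not collide with your own network) is to write flannel's network config into etcd and run the daemon on every node:
etcdctl set /coreos.com/network/config '{ "Network": "10.244.0.0/16" }'
flanneld -etcd-endpoints=http://127.0.0.1:4001
Each node then receives its own subnet out of that range, and you point Docker's bridge at it so container IPs don't overlap across hosts.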
###Instruction: I installed an 8-node kubernetes cluster (1 master + 7 minion) but I faced a networking problem among minions. I installed my cluster according to this step-by-step Fedora manual, so I use Fedora 20 with its testing repository to get kubernetes binaries. After installing, I wanted to try the guestbook example, but it seems to me there is a problem with the inter-container networking. Although containers/PODs are in running state and I can reach my 3 frontend containers (via browser) and the redis containers as well (via natcat), but the frontend, which not on the same host with the redis, cannot reach redis master. The frontend's PHP give back network exception. Can anybody help me why the containers cannot reach each other among the hosts? I hope I could describe my setup enough accurately and thanks in advance. ### Response: The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes. For kubernetes to work, the following network set up must be satisfied: Every container should be able to talk to every other container, even across nodes. This means also that the bridge IP range for those containers must not overlap. Code running on any node that isn't in a container should be able to reach every container (and vise-versa), even across nodes. It is not necessary (but useful) if computers on the network that aren't part of the cluster can reach the containers directly. There are a lot of ways to achieve this -- for instance the set up for vagrant sets up GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch you can probably just do a big layer 2 network w/ bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your set up) is to use something like flannel. We are working on making this stuff easier to start up (without using a mess of shell scripts) and are thinking of building something like flannel in so that there is a reasonable default.
I installed an 8-node kubernetes cluster (1 master + 7 minions) but I faced a networking problem among the minions. I installed my cluster according to this step-by-step Fedora manual, so I use Fedora 20 with its testing repository to get the kubernetes binaries. After installing, I wanted to try the guestbook example, but it seems to me there is a problem with the inter-container networking. Although the containers/PODs are in running state and I can reach my 3 frontend containers (via browser) and the redis containers as well (via netcat), the frontend that is not on the same host as the redis master cannot reach it. The frontend's PHP gives back a network exception. Can anybody tell me why the containers cannot reach each other across hosts? I hope I have described my setup accurately enough, and thanks in advance.
Kubernetes is really three things: 1) a way to dynamically schedule containers (actually, sets of containers called pods) to a cluster of machines; 2) a way to manage and horizontally scale a lot of those pods using labels and helpers (ReplicationController); 3) a way to communicate between sets of pods via services, expose a set of pods externally on a public IP and easily consume external services. This is necessary to deal with the horizontal scaling and the dynamic nature of how pods get placed/scheduled. This is all very much a tool set for managing compute across a set of machines. It isn't a full application PaaS. Kubernetes doesn't have any idea what an "application" is. Generally PaaS systems provide an easy way to take code and get it deployed and managed as an application. In fact, I expect to see specialized PaaS systems built on top of Kubernetes -- that is what RedHat OpenShift is doing. One way to think about Kubernetes is as a system for "logical" infrastructure (vs. traditional VM cloud systems, which are more focused on managing "physical" infrastructure).
###Instruction: I have read some introduction of these projects, but still cannot get a clear idea of the difference between Kubernetes and Flynn/Deis. Can anyone help? ### Response: Kubernetes is really three things: A way to dynamically schedule containers (actually, sets of containers called pods) to a cluster of machines. Manage and horizontally scale a lot of those pods using labels and helpers (ReplicationController) Communicate between sets of pods via services, expose a set of pods externally on a public IP and easily consume external services. This is necessary to deal with the horizontal scaling and the dynamic nature of how pods get placed/scheduled. This is all very much a tool set for managing compute across a set of machines. It isn't a full application PaaS. Kubernetes doesn't have any idea what an "application" is. Generally PaaS systems provide an easy way to take code and get it deployed and managed as an application. In fact, I expect to see specialized PaaS systems built on top of Kubernetes -- that is what RedHat OpenShift is doing. One way to think about Kubernetes is as a system for "logical" infrastructure (vs. traditional VM cloud systems which are
I have read some introductions to these projects, but still cannot get a clear idea of the difference between Kubernetes and Flynn/Deis. Can anyone help?
Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides, but there are a few tips that I have learned doing this myself. The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference, if you want to get info about a pod you would run something like: ./kubectl get pods. With kubectl you will also need to set an env variable so kubectl knows how to talk to the apiserver: KUBERNETES_MASTER=http://IPADDRESS:8080. The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit you can look at what's going on by running: journalctl -f -u kube-apiserver from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with: systemctl start kube-apiserver
###Instruction: I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos). I installed kubernetes according to the guide I found here and created the json for the pod using my images. When I execute sudo ./kubecfg list /pods I get the following error: F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods EDIT: Update Instead of running the commands myself I integrated into the vagrant file (as such) . This makes kubernetes work fine. HOWEVER after some time my vagrant ssh connection gets closed off. I reconnect and any kubernetes commands I specify result in the same error as above. EDIT 2: Update I managed to get it to run again, however I am unsure if it will run smoothly I had to re-execute the following commands. sudo systemctl start etcd sudo systemctl start download-kubernetes sudo systemctl start apiserver sudo systemctl start controller-manager sudo systemctl start kubelet sudo systemctl start proxy I believe it is in fact the apiserver that needs restarting What is the source of this "timeout"? (Where are any logs I can find for this matter) ### Response: Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides but there are a few tips that I have learned doing this myself. The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like: ./kubectl get pods. With kubectl you will also need to set an env variable so kubectl know how to talk to the apiserver: KUBERNETES_MASTER=http://IPADDRESS:8080. The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit you can look at what's goin on by running: journalctl -f -u kube-apiserver from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with: systemctl start kube-apiserver
I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos). I installed kubernetes according to the guide I found here and created the json for the pod using my images. When I execute sudo ./kubecfg list /pods I get the following error: F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused The same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods EDIT: Update Instead of running the commands myself, I integrated them into the Vagrantfile (as such). This makes kubernetes work fine. HOWEVER, after some time my vagrant ssh connection gets closed off. I reconnect, and any kubernetes commands I specify result in the same error as above. EDIT 2: Update I managed to get it to run again, however I am unsure if it will run smoothly. I had to re-execute the following commands: sudo systemctl start etcd sudo systemctl start download-kubernetes sudo systemctl start apiserver sudo systemctl start controller-manager sudo systemctl start kubelet sudo systemctl start proxy I believe it is in fact the apiserver that needs restarting. What is the source of this "timeout"? (Where are any logs I can find for this matter?)
I have not worked with Kubernetes, but as far as I know, you have to upload the containers to a docker image repository. You can use the official hub and upload your images there (public images are free to upload, and you can have one private repository for free or pay for more). Also, you can have a private docker image repository and configure kubernetes to use it. As I understand from this post, you can link a Google Cloud bucket with a docker registry running locally, then configure a kubernetes pod to point to that bucket, and all the rest of your pods can consume docker images from there.
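For example (the image name below is just a placeholder for your own Docker Hub account/repository), you would build and push the image you created locally, and then reference it from the pod manifest:
docker build -t youruser/webserver .
docker push youruser/webserver
After that, setting image: youruser/webserver in the pod's container spec lets the nodes pull it from the registry.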
###Instruction: I have been working with Docker the last days and I have created a basic Docker image with a Dockerfile. It just starts a web server. Now, I have been reading further and created an account at Google Application Engine and want to deploy this image to Kubernetes. I feel lost. I don't understand how my Dockerfiles that I have locally is ment to be transferred to this platform? Don't I use them at all? I have seen some examples of Pod configuration but as far I can see they refer to images at Docker Hub? Could someone point me in the right direction on what to do? ### Response: I have not worked with Kubernetes, but as far as I know, I think you have to upload to containers to a docker image repository. You can use the official hub and upload your images there (public images are free to upload, and you can have one private for free or pay for more). Also you can have a private docker image repository, and configure kubernetes to use it. As I understand from this post, you can run link a Google Cloud bucket with a docker repository running locally, then configure a kubernetes pod to point to that bucket, and all the rest of your pods can consume docker images from there.
I have been working with Docker for the last few days and I have created a basic Docker image with a Dockerfile. It just starts a web server. Now, I have been reading further, created an account at Google App Engine, and want to deploy this image to Kubernetes. I feel lost. I don't understand how the Dockerfiles that I have locally are meant to be transferred to this platform. Don't I use them at all? I have seen some examples of Pod configuration, but as far as I can see they refer to images on Docker Hub. Could someone point me in the right direction on what to do?
To answer your first question, multiple pods can be scheduled on each node. One of the best parts about Google Container Engine / Kubernetes is that it is really flexible, so you can structure your services in the way that works best for you. For your specific use case, I think that a single cluster would make sense because all of the applications that you want to run are closely related. You'll want to think a bit about choosing an appropriate size for your cluster (both the number of VMs and the size of each VM) to fit your entire workload. You can experiment with creating a single cluster for both your QA and Prod workloads, or you can split them across clusters. Until Kubernetes has better support for QoS (for scheduling pods), it probably makes more sense to keep the QA environment separate (and probably sized more modestly).
###Instruction: What is the usual way to organize pods in a cluster in Kubernetes? I have a Jenkins build server, Docker registry, Git repository and other development tools that I want to run in Google Container Engine. Do I create one cluster for each of them? Or can multiple pods be scheduled on each node? So my question is: Would you create one cluster that holds all these services, or multiple clusters? The same question applies to production, qa etc enviroments. Do I create one cluster for each enviroment or do I have them in the same cluster? ### Response: To answer your first question, multiple pods can be scheduled on each node. One of the best parts about Google Container Engine / Kubernetes is that it is really flexible, so you can structure your services in the way that works best for you. For your specific use case, I think that a single cluster would make sense because all of the applications that you want to run are closely related. You'll want to think a bit about choosing an appropriate size for your cluster (both the number of VMs and the size of each VM) to fit your entire workload. You can experiment with creating a single cluster for both your QA and Prod workloads, or you can split them across clusters. Until Kubernetes has better support for QoS (for scheduling pods), it probably makes more sense to keep the QA environment separate (and probably sized more modestly).
What is the usual way to organize pods in a cluster in Kubernetes? I have a Jenkins build server, Docker registry, Git repository and other development tools that I want to run in Google Container Engine. Do I create one cluster for each of them? Or can multiple pods be scheduled on each node? So my question is: would you create one cluster that holds all these services, or multiple clusters? The same question applies to production, QA, etc. environments. Do I create one cluster for each environment or do I keep them in the same cluster?
I think the "simple" answer to your question is that you will need shared storage under your Kubernetes cluster, so that every pod accesses the same data. Then it wouldn't matter where the pods are running or which pod is actually executing the service. Maybe another solution would be Flocker; they describe themselves in short: Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux. Anyway, I think the storage question on Kubernetes or any other dockerized infrastructure is very interesting. It looks like Google App Engine doesn't support sharing a data store between apps by default, as pointed out in this SO question.
###Instruction: How does pods that are controlled by a replication controller and "hidden" behind a service in Kubernetes write/read data? If I have an application that recieves images from the user that needs to be persisted, where do I store that? Because of the service in front I have no control over which node it is stored at if I use volumes. ### Response: I think the "simple" answer to your question is that you will need shared storage under you Kubernetes cluster, so that every pods access the same data. Then it wouldn't matter where the pods are running and which pod is actually executing the service. May be another solution would be Flocker, they describe themself in short: Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux. Anyway I think the storage question on Kubernetes or any other dockerized infrastructure is very interesting. It looks like the google-app-engine doesn't support sharing data store between their apps by default like they pointed out in this SO Question
How do pods that are controlled by a replication controller and "hidden" behind a service in Kubernetes write/read data? If I have an application that receives images from the user that need to be persisted, where do I store them? Because of the service in front, I have no control over which node the data is stored on if I use volumes.
This is still a work in progress. Please see: https://github.com/GoogleCloudPlatform/kubernetes/pull/2672 For a proposal that starts to add support for AWS ELBs to Kubernetes, we're working to get that pull request integrated. Thanks!
###Instruction: It seems like the best way to deploy a external facing application on Google Cloud would be to create an external load balancer with this line in the service configuration: { ... "createExternalLoadBalancer": true ... } This doesn't seem to work for AWS. I'm getting the following error when running the service create: requested an external service, but no cloud provider supplied I know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can set a domain name to it, but so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB. What's the recommended way of doing this on AWS? ### Response: This is still a work in progress. Please see: https://github.com/GoogleCloudPlatform/kubernetes/pull/2672 For a proposal that starts to add support for AWS ELBs to Kubernetes, we're working to get that pull request integrated. Thanks!
It seems like the best way to deploy an external-facing application on Google Cloud would be to create an external load balancer with this line in the service configuration: { ... "createExternalLoadBalancer": true ... } This doesn't seem to work for AWS. I'm getting the following error when running the service create: requested an external service, but no cloud provider supplied I know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can set a domain name to it, and so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB. What's the recommended way of doing this on AWS?
Kubernetes currently does not allow specification of container startup dependencies. There has been some discussion in GitHub issues 1996 and 1589 that might help you out.
###Instruction: I am trying to run two Dockers on the same Kubernetes pod and I want one of the Docker container always to run before the other. I remember learning about specifying such dependency on the pod configuration file, but can not find that now. Kubernetes documentation does not explain it either. Here is the example pod configuration with two containers I adopted from another Stackoverflow question. How should I change this pod configuration to run container type1 before type2? { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } Thanks in advance, Nodir. ### Response: Kubernetes currently does not allow specification of container startup dependencies. There has been some discussion in GitHub issues 1996 and 1589 that might help you out.
I am trying to run two Docker containers in the same Kubernetes pod, and I want one of the containers to always run before the other. I remember learning about specifying such a dependency in the pod configuration file, but cannot find that now. The Kubernetes documentation does not explain it either. Here is the example pod configuration with two containers that I adopted from another Stack Overflow question. How should I change this pod configuration to run container type1 before type2? { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } Thanks in advance, Nodir.
I don't think there is currently a better solution than what you are doing. There is already a related issue, kubernetes issue 1802, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.
###Instruction: I am trying to connect to a Docker container on Google Container Engine(GKE) from my local machine through the internet by TCP protocol. So far I have used Kubernetes services which gives an external IP address, so the local machine can connect to the container on GKE using the service. When we create a service, we can specify only one port and cannot specify the port range. Please see the my-ros-service.yaml below. In this case, we can access the container by 11311 port from outside of GCE. However, some applications that run on my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port number that the application uses and cannot create the Kubernetes services before I run the application. So far I have managed to connect to the container by creating many services which have different port while running the application. But this is not a realistic way to solve the problem. My question is that: How to connect to the application that exposes dynamic ports on Docker container from outside of the GCE by using Kubernetes service? If possible, can we create a service which exposes dynamic port for incoming connection before running the application which runs on the container? Any advice or information you could provide would be greatly appreciated. Thank you in advance. my-ros-service.yaml kind: Service apiVersion: v1beta1 id: my-ros-service port: 11311 selector: name: my-ros containerPort: 11311 createExternalLoadBalancer: true ### Response: I don't think there is currently a better solution than what you are doing. There is already a related issue, kubernetes issue 1802, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.
I am trying to connect to a Docker container on Google Container Engine(GKE) from my local machine through the internet by TCP protocol. So far I have used Kubernetes services which gives an external IP address, so the local machine can connect to the container on GKE using the service. When we create a service, we can specify only one port and cannot specify the port range. Please see the my-ros-service.yaml below. In this case, we can access the container by 11311 port from outside of GCE. However, some applications that run on my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port number that the application uses and cannot create the Kubernetes services before I run the application. So far I have managed to connect to the container by creating many services which have different port while running the application. But this is not a realistic way to solve the problem. My question is that: How to connect to the application that exposes dynamic ports on Docker container from outside of the GCE by using Kubernetes service? If possible, can we create a service which exposes dynamic port for incoming connection before running the application which runs on the container? Any advice or information you could provide would be greatly appreciated. Thank you in advance. my-ros-service.yaml kind: Service apiVersion: v1beta1 id: my-ros-service port: 11311 selector: name: my-ros containerPort: 11311 createExternalLoadBalancer: true
However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. I think you're talking about HA/replicated/sharded apps here. At the moment, in kubernetes, you can accomplish this by making an api call listing all the "endpoints" of the service; that will tell you where your peers are running. We'd eventually like to support the use case you describe in a more first-class manner. I filed https://github.com/GoogleCloudPlatform/kubernetes/issues/3419 to maybe get something more standardized started here.
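For illustration (the service name and apiserver address here are assumptions, and the exact path depends on which API version you are running), listing a service's endpoints is just a GET against the apiserver:
curl http://<apiserver>:8080/api/v1beta1/endpoints/my-service
The response contains the IP:port pairs of the pods currently backing that service, which your application can use to discover its peers at startup or on a refresh interval.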
###Instruction: We are looking into using Docker plus either Mesos/Marathon or Kubernetes for hosting a cluster. However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. All of the ones that I have seen need to know about at least one other node before they can join the cluster. Some need to know about every node. However, in Kubernetes and Mesos, there's no way to know what those IP addresses are ahead of time. So, are there any best practices for this? If it helps, some technologies we're looking into deploying as containers are ElasticSearch, ActiveMQ, and MongoDB. There may be others. ### Response: However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. I think you're talking about HA/replicated/sharded apps here. At the moment, in kubernetes, you can accomplish this by making an api call listing all the "endpoints" of the service; that will tell you where your peers are running. We'd eventually like to support the use case you describe in a more first-class manner. I filed https://github.com/GoogleCloudPlatform/kubernetes/issues/3419 to maybe get something more standardized started here.
We are looking into using Docker plus either Mesos/Marathon or Kubernetes for hosting a cluster. However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. All of the ones that I have seen need to know about at least one other node before they can join the cluster. Some need to know about every node. However, in Kubernetes and Mesos, there's no way to know what those IP addresses are ahead of time. So, are there any best practices for this? If it helps, some technologies we're looking into deploying as containers are ElasticSearch, ActiveMQ, and MongoDB. There may be others.
You can install Kubernetes on a normal Amazon EC2 server. The new container service is a separate offering by Amazon, called ECS. EDIT: AWS released in 2018 a new container service for Kubernetes called EKS: https://aws.amazon.com/eks/ Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
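For reference, the guide linked above boils down to something like the following at the time of writing (check the guide itself, since the exact steps change between releases):
export KUBERNETES_PROVIDER=aws
cluster/kube-up.sh
This provisions the master and minions as ordinary EC2 instances, independently of Amazon's ECS offering.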
###Instruction: I've read that AWS does not support Kubernetes and builds their own Docker orchestration engine EC2 Container Service. However, on Kubernetes getting-started -page there is a guide on how to run Kubernetes on AWS: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md Which is right? ### Response: You can install Kubernetes on a normal Amazon EC2 server. The new container service is a separate offering by Amazon, called ECS. EDIT: AWS released in 2018 a new container service for Kubernetes called EKS: https://aws.amazon.com/eks/ Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
I've read that AWS does not support Kubernetes and builds their own Docker orchestration engine EC2 Container Service. However, on Kubernetes getting-started -page there is a guide on how to run Kubernetes on AWS: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md Which is right?
The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster. Basically, it allows you to not worry about which specific machine in your datacenter each application runs on. Additionally, it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services, so that each layer in your application is decoupled from other layers and you can scale/update/maintain them independently. While it is possible to do many of these things at the application layer, such solutions tend to be one-off and brittle; it's much better to have a separation of concerns, where an orchestration system worries about how to run your application, and you worry about the code that makes up your application.
###Instruction: Kubernetes is billed as a container cluster "scheduler/orchestrator", but I have no idea what this means. After reading the Kubernetes site and (vague) GitHub wiki, the best I can tell is that its somehow figures out what VMs are available/capable of running your Docker container, and then deploys them there. But that is just my guess, and I haven't seen any concrete verbiage in their documentation to support that. So what is Kubernetes, exactly, and what are some specific problems that it solves? ### Response: The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster. Basically, it allows you to not worry about what specific machine in your datacenter each application runs on. Additionally it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services so that each layer in your application is decoupled from other layers so that you can scale/update/maintain them independently. While it is possible to do many of these things in application layer, such solutions tend to be one-off and brittle, it's much better to have separation of concerns, where an orchestration system worries about how to run your application, and you worry about the code that makes up your application.
Kubernetes is billed as a container cluster "scheduler/orchestrator", but I have no idea what this means. After reading the Kubernetes site and (vague) GitHub wiki, the best I can tell is that it somehow figures out what VMs are available/capable of running your Docker container, and then deploys them there. But that is just my guess, and I haven't seen any concrete verbiage in their documentation to support that. So what is Kubernetes, exactly, and what are some specific problems that it solves?
This behavior is due to a buggy implementation of the iter_lines method in the requests library. iter_lines iterates over the response content in chunk_size blocks of data using the iter_content iterator. If there are less than chunk_size bytes of data available for reading from the remote server (which will typically be the case when reading the last line of output), the read operation will block until chunk_size bytes of data are available. I have written my own iter_lines routine that operates correctly: import os def iter_lines(fd, chunk_size=1024): '''Iterates over the content of a file-like object line-by-line.''' pending = None while True: chunk = os.read(fd.fileno(), chunk_size) if not chunk: break if pending is not None: chunk = pending + chunk pending = None lines = chunk.splitlines() if lines and lines[-1]: pending = lines.pop() for line in lines: yield line if pending: yield(pending) This works because os.read will return less than chunk_size bytes of data rather than waiting for a buffer to fill.
###Instruction: I am trying to consume an event stream provided by the Kubernetes api using the requests module. I have run into what looks like a buffering problem: the requests module seems to lag by one event. I have code that looks something like this: r = requests.get('http://localhost:8080/api/v1beta1/watch/services', stream=True) for line in r.iter_lines(): print 'LINE:', line As Kubernetes emits event notifications, this code will only display the last event emitted when a new event comes in, which makes it almost completely useless for code that needs to respond to service add/delete events. I have solved this by spawning curl in a subprocess instead of using the requests library: p = subprocess.Popen(['curl', '-sfN', 'http://localhost:8080/api/watch/services'], stdout=subprocess.PIPE, bufsize=1) for line in iter(p.stdout.readline, b''): print 'LINE:', line This works, but at the expense of some flexibility. Is there a way to avoid this buffering problem with the requests library? ### Response: This behavior is due to a buggy implementation of the iter_lines method in the requests library. iter_lines iterates over the response content in chunk_size blocks of data using the iter_content iterator. If there are less than chunk_size bytes of data available for reading from the remote server (which will typically be the case when reading the last line of output), the read operation will block until chunk_size bytes of data are available. I have written my own iter_lines routine that operates correctly: import os def iter_lines(fd, chunk_size=1024): '''Iterates over the content of a file-like object line-by-line.''' pending = None while True: chunk = os.read(fd.fileno(), chunk_size) if not chunk: break if pending is not None: chunk = pending + chunk pending = None lines = chunk.splitlines() if lines and lines[-1]: pending = lines.pop() for line in lines: yield line if pending: yield(pending) This works because os.read will return less than chunk_size bytes of data rather than waiting for a buffer to fill.
I am trying to consume an event stream provided by the Kubernetes api using the requests module. I have run into what looks like a buffering problem: the requests module seems to lag by one event. I have code that looks something like this: r = requests.get('http://localhost:8080/api/v1beta1/watch/services', stream=True) for line in r.iter_lines(): print 'LINE:', line As Kubernetes emits event notifications, this code will only display the last event emitted when a new event comes in, which makes it almost completely useless for code that needs to respond to service add/delete events. I have solved this by spawning curl in a subprocess instead of using the requests library: p = subprocess.Popen(['curl', '-sfN', 'http://localhost:8080/api/watch/services'], stdout=subprocess.PIPE, bufsize=1) for line in iter(p.stdout.readline, b''): print 'LINE:', line This works, but at the expense of some flexibility. Is there a way to avoid this buffering problem with the requests library?
Update 2015-08-06: As of PR #12349 (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster. A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine. This is going to change significantly when Issue #6087 is implemented.
###Instruction: Kubernetes has master and minion nodes. Will (can) Kubernetes run specified Docker containers on the master node(s)? I guess another way of saying it is: can a master also be a minion? Thanks for any assistance. ### Response: Update 2015-08-06: As of PR #12349 (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster. A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine. This is going to change significantly when Issue #6087 is implemented.
Kubernetes has master and minion nodes. Will (can) Kubernetes run specified Docker containers on the master node(s)? I guess another way of saying it is: can a master also be a minion? Thanks for any assistance.
Update: as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see this SO answer for more details. Old answer: One approach is to create the VM with appropriate scopes (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for Google Cloud Storage and Google Compute Engine. Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so you have to be sure to set the right set of scopes at the time of VM instance creation.
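As a sketch (the instance name is a placeholder; storage-rw is the gcloud alias for the read-write Cloud Storage scope), creating the VM with the right scope up front looks roughly like:
gcloud compute instances create my-instance --scopes storage-rw
Containers started on that VM can then obtain OAuth tokens from the metadata server, so gsutil and your own code work against private buckets without an interactive gsutil config.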
###Instruction: What is the best way to deploy Google service account credentials inside a custom built CentOS Docker container for running either on Google's Container Engine or their 'container-vm'? This behavior happens automatically on the google/cloud-sdk container, which runs debian and includes things I'm not using such as app-eng/java/php. Ideally I am trying to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without loging in and authorizing every single time a large number of these containers are launched. For example, on a base Centos container running on GCE with custom code and gcloud/gsutil installed, when you run: docker run --rm -ti custom-container gsutil ls You are prompted to run "gsutil config" to gain authorization, which I expect. However, pulling down the google/cloud-sdk container onto the same GCE and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This seems to bypass running "gsutil config" when running the container on GCE to access private resources. I am looking to replicate that behavior in a minimal build Centos container for mass deployment. ### Response: Update: as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see this SO answer for more details. Old answer: One approach is to create the VM with appropriate scopes (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for Google Cloud Storage and Google Compute Engine. Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so you have to be sure to set the right set of scopes at the time of VM instance creation.
What is the best way to deploy Google service account credentials inside a custom built CentOS Docker container for running either on Google's Container Engine or their 'container-vm'? This behavior happens automatically on the google/cloud-sdk container, which runs debian and includes things I'm not using such as app-eng/java/php. Ideally I am trying to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without loging in and authorizing every single time a large number of these containers are launched. For example, on a base Centos container running on GCE with custom code and gcloud/gsutil installed, when you run: docker run --rm -ti custom-container gsutil ls You are prompted to run "gsutil config" to gain authorization, which I expect. However, pulling down the google/cloud-sdk container onto the same GCE and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This seems to bypass running "gsutil config" when running the container on GCE to access private resources. I am looking to replicate that behavior in a minimal build Centos container for mass deployment.
Container names and port names must conform to the RFC 1123 definition of a DNS label. Names must be no longer than 63 characters, must start and end with a lowercase letter or number, and may contain lowercase letters, numbers, and hyphens. Expressed as a regular expression: [a-z0-9]([-a-z0-9]*[a-z0-9])? Here's the applicable code in GitHub for checking container names, checking port names, and defining acceptable names.
###Instruction: What patterns are valid in kubernetes for the names of containers and ports? I had underscores in the names of ports and containers and got an error. Replacing the underscores with hyphens worked. ### Response: Container names and port names must conform to the RFC 1123 definition of a DNS label. Names must be no longer than 63 characters, must start and end with a lowercase letter or number, and may contain lowercase letters, numbers, and hyphens. Expressed as a regular expression: [a-z0-9]([-a-z0-9]*[a-z0-9])? Here's the applicable code in GitHub for checking container names, checking port names, and defining acceptable names.
What patterns are valid in kubernetes for the names of containers and ports? I had underscores in the names of ports and containers and got an error. Replacing the underscores with hyphens worked.
I don't see any examples .yaml files, but Kubernetes API v1 describes the lifecycle events in the same manner. Currently, only PostStart and PreStop are defined and you should be able to use them by adding a lifecycle section to a container in your pod definition. Based on reading the API definition, something like this should work (disclaimer: I haven't actually tried it myself): containers: - name: lifecycle image: busybox lifecycle: postStart: exec: command: - "touch" - "/var/log/lifecycle/post-start" preStop: httpGet: path: "/abort" port: 8080
###Instruction: I would like to try Kubernetes' hooks but I didn't find any example how I should do it. As far as I know, with this hooks I can run bash scripts in freshly created containers and prior to terminate them. I've found just a short documentation which say this is possible but that's all. Do somebody have an example or something useful info? Thanks in advance. ### Response: I don't see any examples .yaml files, but Kubernetes API v1 describes the lifecycle events in the same manner. Currently, only PostStart and PreStop are defined and you should be able to use them by adding a lifecycle section to a container in your pod definition. Based on reading the API definition, something like this should work (disclaimer: I haven't actually tried it myself): containers: - name: lifecycle image: busybox lifecycle: postStart: exec: command: - "touch" - "/var/log/lifecycle/post-start" preStop: httpGet: path: "/abort" port: 8080
I would like to try Kubernetes' hooks but I didn't find any example of how I should do it. As far as I know, with these hooks I can run bash scripts in freshly created containers and prior to terminating them. I've found just a short piece of documentation which says this is possible, but that's all. Does somebody have an example or some useful info? Thanks in advance.
You can change the labels on individual pods using the kubectl label command, documented here. Changing the label of a running pod should not cause it to be restarted, and services will automatically detect and handle label changes. So in other words, yes you can :)
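For example (the pod name and label values below are made up to match the scenario in the question), moving a pod from pool A to pool B is a single command:
kubectl label pods my-pod-3 type=B --overwrite
The --overwrite flag is needed because the pod already has a value for the type key; the services' endpoints pick up the change automatically.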
###Instruction: I wonder if it is possible to change labels of pods on the fly so services route requests to those pods based on new labels. For example I have two services A and B. Then I have 10 pods, where 5 have label type = A (matches service A) and the other 5 have label type = B (matches service B). At some point I want to change labels on pods to achieve a configuration of 2 with label type = A and 8 with label type = B. I want to know if I can just change the labels and services will be updated accordingly without having to stop and start new pods with different labels. ### Response: You can change the labels on individual pods using the kubectl label command, documented here. Changing the label of a running pod should not cause it to be restarted, and services will automatically detect and handle label changes. So in other words, yes you can :)
I wonder if it is possible to change labels of pods on the fly so services route requests to those pods based on new labels. For example I have two services A and B. Then I have 10 pods, where 5 have label type = A (matches service A) and the other 5 have label type = B (matches service B). At some point I want to change labels on pods to achieve a configuration of 2 with label type = A and 8 with label type = B. I want to know if I can just change the labels and services will be updated accordingly without having to stop and start new pods with different labels.
Autoscaling of containers is not yet supported and is not part of the near term 1.0 roadmap for Kubernetes (meaning that the core team isn't going to add it soon but external contributions are certainly welcome).
###Instruction: is it possible to autoscale docker containers, which contain application servers (like wildfly/tomcat/jetty/) within kubernetes ? For example at cpu & ram use or based on http requests ? If there is a build in feature for that i can't find it, or is it possible to write something like a configuration script for this ? If so where does the magic happen ? ### Response: Autoscaling of containers is not yet supported and is not part of the near term 1.0 roadmap for Kubernetes (meaning that the core team isn't going to add it soon but external contributions are certainly welcome).
Is it possible to autoscale docker containers, which contain application servers (like wildfly/tomcat/jetty), within kubernetes? For example based on CPU & RAM use, or based on http requests? If there is a built-in feature for that, I can't find it; or is it possible to write something like a configuration script for this? If so, where does the magic happen?
Disclosure: I'm a lead engineer on Kubernetes I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications, they have different histories and different approaches to solving the problem. Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers. In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system, for example CoreOS uses it to distribute the kubernetes agents and binaries out to the machines in a cluster in order to turn-up a kubernetes cluster. It is not really intended to solve the same distributed application development problems, think of it more like systemd/init.d/upstart for your cluster. It's not required if you run kubernetes, you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781 Join us on IRC @ #google-containers if you want to talk more.
###Instruction: I am relatively new to all these, but I'm having troubles getting a clear picture among the listed technologies. Though, all of these try to solve different problems, but do have things in common too. I would like to understand what are the things that are common and what is different. It is likely that the combination of few would be great fit, if so what are they? I am listing a few of them along with questions, but it would be great if someone lists all of them in detail and answers the questions. Kubernetes vs Mesos: This link What's the difference between Apache's Mesos and Google's Kubernetes provides a good insight into the differences, but I'm unable to understand as to why Kubernetes should run on top of Mesos. Is it more to do with coming together of two opensource solutions? Kubernetes vs Core-OS Fleet: If I use kubernetes, is fleet required? How does Docker-Swarm fit into all the above? ### Response: Disclosure: I'm a lead engineer on Kubernetes I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications, they have different histories and different approaches to solving the problem. Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers. In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system, for example CoreOS uses it to distribute the kubernetes agents and binaries out to the machines in a cluster in order to turn-up a kubernetes cluster. It is not really intended to solve the same distributed application development problems, think of it more like systemd/init.d/upstart for your cluster. It's not required if you run kubernetes, you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781 Join us on IRC @ #google-containers if you want to talk more.
I am relatively new to all of these, but I'm having trouble getting a clear picture of the listed technologies. Though all of these try to solve different problems, they do have things in common too. I would like to understand what is common and what is different. It is likely that a combination of a few would be a great fit; if so, what are they? I am listing a few of them along with questions, but it would be great if someone lists all of them in detail and answers the questions. Kubernetes vs Mesos: This link What's the difference between Apache's Mesos and Google's Kubernetes provides a good insight into the differences, but I'm unable to understand why Kubernetes should run on top of Mesos. Is it more to do with the coming together of two open-source solutions? Kubernetes vs Core-OS Fleet: If I use kubernetes, is fleet required? How does Docker-Swarm fit into all the above?
I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later. There is a hint at what I wanted inside the Cassandra example. Note the command entry below the image line:

id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
labels:
  name: cassandra

Despite finding the solution, it would be nice if there was somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
###Instruction: Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command. Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables. Is it possible to specify the command a kubernetes pod runs within the Docker image at startup? ### Response: I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later. There is a hint at what I wanted inside the Cassandra example. Note the command entry below the image line:

id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
labels:
  name: cassandra

Despite finding the solution, it would be nice if there was somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command. Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables. Is it possible to specify the command a kubernetes pod runs within the Docker image at startup?
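To make the answer above concrete for the general-purpose-container question, here is a minimal, hypothetical sketch in the same v1beta1 shape: it launches a stock busybox image with a command chosen at pod-creation time rather than baked into the image. The pod name, image and command are illustrative only, and field names differ in later API versions.

cat > sleeper-pod.yaml <<'EOF'
id: sleeper
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: sleeper
    containers:
      - name: sleeper
        image: busybox
        command:          # the command the container runs at startup
          - sleep
          - "3600"
labels:
  name: sleeper
EOF
kubectl create -f sleeper-pod.yaml

Swapping the command list is all that should be needed to reuse the same image for different cases.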
Apache Mesos is a 2-level scheduler. The purpose of a framework is to provide the intelligence of high-level scheduling. Marathon provides the ability to schedule a task in the cluster, queue that task for scheduling and re-queue tasks that have failed. It is great at keeping long running processes running. It is like the init of the datacenter. As such, it is commonly used to make sure other frameworks are up and running such as Kubernetes-Mesos or Jenkins. There are many applications for which this level of scheduling is insufficient. Marathon can and often is used for running things like Apache Kafka, however this often falls short in many failure modes. Additionally, Marathon doesn't care if task runs multiple times on the same node, however running multiple Kafka nodes on the same slave is a bad idea. Using Hadoop as another example (since you referred it), HDFS has several types of nodes that need to be managed; NameNode, DataNode and JournalNode. Marathon does not know the order to start these in, or if these can be co-located on the same node or not. It doesn't know how to scale this application. The HDFS framework manages that intelligence. As far as scheduling efficiency, I'm not sure that is the goal. Apache Mesos is a 2-level scheduler for a reason. It is a highly efficient 2-level scheduler. The value of 2-level scheduling is to abstract the type of concerns I described above to a higher-level scheduler (which is termed by Mesos as frameworks). Marathon is still a great way to schedule and ensure high availability to other frameworks.
###Instruction: I see Mesosphere building all kinds of applications on the Mesos Framework like Hadoop, Kubernetes, etc. but since there is the Marathon applications for long-running services, why not just use that? E.g. why not setup Kubernetes nodes on a bunch of Marathon services? Why implement Kubernetes directly on Framework API? Because scheduling is more efficient that way? Same question goes for Jenkins implementation, why not just run Jenkins master/slaves on top of Marathon... ### Response: Apache Mesos is a 2-level scheduler. The purpose of a framework is to provide the intelligence of high-level scheduling. Marathon provides the ability to schedule a task in the cluster, queue that task for scheduling and re-queue tasks that have failed. It is great at keeping long running processes running. It is like the init of the datacenter. As such, it is commonly used to make sure other frameworks are up and running such as Kubernetes-Mesos or Jenkins. There are many applications for which this level of scheduling is insufficient. Marathon can and often is used for running things like Apache Kafka, however this often falls short in many failure modes. Additionally, Marathon doesn't care if task runs multiple times on the same node, however running multiple Kafka nodes on the same slave is a bad idea. Using Hadoop as another example (since you referred it), HDFS has several types of nodes that need to be managed; NameNode, DataNode and JournalNode. Marathon does not know the order to start these in, or if these can be co-located on the same node or not. It doesn't know how to scale this application. The HDFS framework manages that intelligence. As far as scheduling efficiency, I'm not sure that is the goal. Apache Mesos is a 2-level scheduler for a reason. It is a highly efficient 2-level scheduler. The value of 2-level scheduling is to abstract the type of concerns I described above to a higher-level scheduler (which is termed by Mesos as frameworks). Marathon is still a great way to schedule and ensure high availability to other frameworks.
I see Mesosphere building all kinds of applications on the Mesos Framework like Hadoop, Kubernetes, etc. but since there is the Marathon applications for long-running services, why not just use that? E.g. why not setup Kubernetes nodes on a bunch of Marathon services? Why implement Kubernetes directly on Framework API? Because scheduling is more efficient that way? Same question goes for Jenkins implementation, why not just run Jenkins master/slaves on top of Marathon...
Auto scaling of pods is not yet available, but it's definitely on our roadmap, as mentioned by Brendan in a previous answer. It could actually be easily built outside of the core of Kubernetes, using the public Kubernetes API. If you'd rather wait for someone else to build it, though, it looks like a contributor has started planning a design for one.
###Instruction: Can Kubernetes automatically add or reduce the number of pods when it detects increases or decreases in load (i.e. CPU load, traffic)? If it's possible, how can I configure it? ### Response: Auto scaling of pods is not yet available, but it's definitely on our roadmap, as mentioned by Brendan in a previous answer. It could actually be easily built outside of the core of Kubernetes, using the public Kubernetes API. If you'd rather wait for someone else to build it, though, it looks like a contributor has started planning a design for one.
Can Kubernetes automatically add or reduce the number of pods when it detects increases or decreases in load (i.e. CPU load, traffic)? If it's possible, how can I configure it?
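As a rough illustration of the "build it outside the core" suggestion in the answer above, the sketch below polls a load signal and resizes a replication controller with kubectl. Everything here is hypothetical: the controller name, the threshold, and the get_current_load helper are placeholders, and the exact resize flags can differ between kubectl versions.

MIN=2
MAX=10
while true; do
  # Placeholder: replace with a real metric source (requests/sec, CPU, queue depth, ...)
  LOAD=$(get_current_load)
  DESIRED=$MIN
  if [ "$LOAD" -gt 100 ]; then
    DESIRED=$MAX
  fi
  # Resize the hypothetical "my-app" replication controller to the desired count
  kubectl resize --replicas="$DESIRED" replicationcontrollers my-app
  sleep 60
done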
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube). So, the client is really an extension of Kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a Kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair. The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
###Instruction: I have been messing around with openshift and reading as much documentation as I can. Yet, the authentication performed by default (using admin .kubeconfig) puzzles me. 1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig. 2) .kubeconfig (AFAIK) is used to authenticate against a kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig? Kind regards and thank you for your patience. ### Response: OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube). So, the client is really an extension of Kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a Kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair. The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
I have been messing around with openshift and reading as much documentation as I can. Yet, the authentication performed by default (using admin .kubeconfig) puzzles me. 1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig. 2) .kubeconfig (AFAIK) is used to authenticate against a kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig? Kind regards and thank you for your patience.
There's been a lot of talk within the project about creating a more straightforward feature to do this (and how to work around it in the meantime), but nothing official has been added yet, so these are really the two best options at the moment: You can put files containing the "manifest" section of your pod's config into each node's /etc/kubernetes/manifests directory. The kubelet on the machine will detect it and run it. You can use a host port and set the number of replicas to a number larger than the number of minions. It won't hurt the cluster to have a few too many replicas, as they won't be able to run anywhere until more nodes are added.
###Instruction: If you know the number of minions ahead of time, I suppose you could create x number of replicas, provided that you give a host port to ensure that there is one replica per minion, but is there a way to say that a pod should run on every minion if you don't know the number of minions ahead of time (or if minions are added later)? ### Response: There's been a lot of talk within the project about creating a more straightforward feature to do this (and how to work around it in the meantime), but nothing official has been added yet, so these are really the two best options at the moment: You can put files containing the "manifest" section of your pod's config into each node's /etc/kubernetes/manifests directory. The kubelet on the machine will detect it and run it. You can use a host port and set the number of replicas to a number larger than the number of minions. It won't hurt the cluster to have a few too many replicas, as they won't be able to run anywhere until more nodes are added.
If you know the number of minions ahead of time, I suppose you could create x number of replicas, provided that you give a host port to ensure that there is one replica per minion, but is there a way to say that a pod should run on every minion if you don't know the number of minions ahead of time (or if minions are added later)?
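A hedged sketch of the second option from the answer above: because a host port can be bound only once per node, an over-sized replication controller with a hostPort ends up with at most one replica per minion, and the surplus replicas simply wait for new nodes. The controller name, image and port are hypothetical.

cat > per-node-agent.yaml <<'EOF'
id: agent-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 50                    # comfortably larger than the expected node count
  replicaSelector:
    name: agent
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: agent
        containers:
          - name: agent
            image: example/agent  # hypothetical image
            ports:
              - containerPort: 9100
                hostPort: 9100    # binds the host port, so at most one pod per node
    labels:
      name: agent
labels:
  name: agent
EOF
kubectl create -f per-node-agent.yaml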
The paragraph you quoted describes where we hope to be in the future (where the future is defined in units of months, not years). We're not there yet, but the scheduler does have a number of useful features already, enough for a simple deployment. In the rest of this reply, I'll explain how the scheduler works today. The scheduler is not just an admission controller; for each pod that is created, it finds the "best" machine for that pod, and if no machine is suitable, the pod remains unscheduled until a machine becomes suitable. The scheduler is configurable. It has two types of policies, FitPredicate (see master/pkg/scheduler/predicates.go) and PriorityFunction (see master/pkg/scheduler/priorities.go). I'll describe them. Fit predicates are required rules, for example the labels on the node must be compatible with the label selector on the pod (this rule is implemented in PodSelectorMatches() in predicates.go), and the sum of the requested resources of the container(s) already running on the machine plus the requested resources of the new container(s) you are considering scheduling onto the machine must not be greater than the capacity of the machine (this rule is implemented in PodFitsResources() in predicates.go; note that "requested resources" is defined as pod.Spec.Containers[n].Resources.Limits, and if you request zero resources then you always fit). If any of the required rules are not satisfied for a particular (new pod, machine) pair, then the new pod is not scheduled on that machine. If after checking all machines the scheduler decides that the new pod cannot be scheduled onto any machine, then the pod remains in Pending state until it can be satisfied by one of the machines. After checking all of the machines with respect to the fit predicates, the scheduler may find that multiple machines "fit" the pod. But of course, the pod can only be scheduled onto one machine. That's where priority functions come in. Basically, the scheduler ranks the machines that meet all of the fit predicates, and then chooses the best one. For example, it prefers the machine whose already-running pods consume the least resources (this is implemented in LeastRequestedPriority() in priorities.go). This policy spreads pods (and thus containers) out instead of packing lots onto one machine while leaving others empty. When I said that the scheduler is configurable, I mean that you can decide at compile time which fit predicates and priority functions you want Kubernetes to apply. Currently, it applies all of the ones you see in predicates.go and priorities.go.
###Instruction: How does Kubernetes' scheduler work? What I mean is that Kubernetes' scheduler appears to be very simple. My initial thought is that this scheduler is just a simple admission control system, not a real scheduler. Is that correct? I found a short description, but it is not terribly informative: The kubernetes scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary. ### Response: The paragraph you quoted describes where we hope to be in the future (where the future is defined in units of months, not years). We're not there yet, but the scheduler does have a number of useful features already, enough for a simple deployment. In the rest of this reply, I'll explain how the scheduler works today. The scheduler is not just an admission controller; for each pod that is created, it finds the "best" machine for that pod, and if no machine is suitable, the pod remains unscheduled until a machine becomes suitable. The scheduler is configurable. It has two types of policies, FitPredicate (see master/pkg/scheduler/predicates.go) and PriorityFunction (see master/pkg/scheduler/priorities.go). I'll describe them. Fit predicates are required rules, for example the labels on the node must be compatible with the label selector on the pod (this rule is implemented in PodSelectorMatches() in predicates.go), and the sum of the requested resources of the container(s) already running on the machine plus the requested resources of the new container(s) you are considering scheduling onto the machine must not be greater than the capacity of the machine (this rule is implemented in PodFitsResources() in predicates.go; note that "requested resources" is defined as pod.Spec.Containers[n].Resources.Limits, and if you request zero resources then you always fit). If any of the required rules are not satisfied for a particular (new pod, machine) pair, then the new pod is not scheduled on that machine. If after checking all machines the scheduler decides that the new pod cannot be scheduled onto any machine, then the pod remains in Pending state until it can be satisfied by one of the machines. After checking all of the machines with respect to the fit predicates, the scheduler may find that multiple machines "fit" the pod. But of course, the pod can only be scheduled onto one machine. That's where priority functions come in. Basically, the scheduler ranks the machines that meet all of the fit predicates, and then chooses the best one. For example, it prefers the machine whose already-running pods consume the least resources (this is implemented in LeastRequestedPriority() in priorities.go). This policy spreads pods (and thus containers) out instead of packing lots onto one machine while leaving others empty. When I said that the scheduler is configurable, I mean that you can decide at compile time which fit predicates and priority functions you want Kubernetes to apply. Currently, it applies all of the ones you see in predicates.go and priorities.go.
How does Kubernetes' scheduler work? What I mean is that Kubernetes' scheduler appears to be very simple. My initial thought is that this scheduler is just a simple admission control system, not a real scheduler. Is that correct? I found a short description, but it is not terribly informative: The kubernetes scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary.
Disclosure: I'm a lead engineer on Kubernetes Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google. Built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
###Instruction: From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different? Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers? ### Response: Disclosure: I'm a lead engineer on Kubernetes Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google. Built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different? Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers?
Not sure I fully understand the question, but neither kubernetes (the framework on which Google Container Engine runs) nor docker require a particular cloud provider. AFAIK, you can use docker containers on any linux distro, and kubernetes supports a number of configurations for running on your own machines. See kubernetes getting started guides for details.
###Instruction: I'm researching: Docker Container Google Containers The goal is to use something of these 2 on our own physical boxes with Linux in the enterprise for Dev/Prod. However, I've read that Google reimplemented LXC (Linux Containers) and use their own lmctfy instead. Is it possible to use Google Containers on my Linux boxes without their cloud space? Your experience is highly appreciated. ### Response: Not sure I fully understand the question, but neither kubernetes (the framework on which Google Container Engine runs) nor docker require a particular cloud provider. AFAIK, you can use docker containers on any linux distro, and kubernetes supports a number of configurations for running on your own machines. See kubernetes getting started guides for details.
I'm researching: Docker Container Google Containers The goal is to use something of these 2 on our own physical boxes with Linux in the enterprise for Dev/Prod. However, I've read that Google reimplemented LXC (Linux Containers) and use their own lmctfy instead. Is it possible to use Google Containers on my Linux boxes without their cloud space? Your experience is highly appreciated.
Looks like there are a few ways that kubernetes decides how to spread containers, and these are in active development. Firstly, of course there have to be the necessary resources on any machine for the scheduler to consider bringing up a pod there. After that, kubernetes spreads pods by replication controller, attempting to keep the different instances created by a given replication controller on different nodes. It seems a method of scheduling that considers services and various other parameters was recently implemented. https://github.com/GoogleCloudPlatform/kubernetes/pull/2906 Though I'm not completely clear on exactly how to use it. Perhaps in coordination with this scheduler config? https://github.com/GoogleCloudPlatform/kubernetes/pull/4674 Probably the most interesting issue to me is that none of these scheduling priorities are considered during scale-down, only scale-up. https://github.com/GoogleCloudPlatform/kubernetes/issues/4301 That's a bit of a big deal; it seems like over time you could get weird distributions of pods because they stay wherever they were originally placed. Overall, I think the answer to my question at the moment is that this is an area of kubernetes that is in flux (as is to be expected with pre-v1). However, it looks like much of what I need will be done automatically with sufficient nodes, and proper use of replication controllers and services.
###Instruction: I'm looking at deploying Kubernetes on top of a CoreOS cluster, but I think I've run into a deal breaker of sorts. If I'm using just CoreOS and fleet, I can specify within the unit files that I want certain services to not run on the same physical machine as other services (anti-affinity). This is sort of essential for high availability. But it doesn't look like kubernetes has this functionality yet. In my specific use-case, I'm going to need to run a few clusters of elasticsearch machines that need to always be available. If, for any reason, kubernetes decides to schedule all of my elasticsearch node containers for a given ES cluster on a single machine, (or even the majority on a single machine), and that machine dies, then my elasticsearch cluster will die with it. That can't be allowed to happen. It seems like there could be work-arounds. I could set up the resource requirements and machine specs such that only one elasticsearch instance could fit on each machine. Or I could probably use labels in some way to specify that certain elasticsearch containers should go on certain machines. I could also just provision way more machines than necessary, and way more ES nodes than necessary, and assume kubernetes will spread them out enough to be reasonably certain of high availability. But all of that seems awkward. It's much more elegant from a resource-management standpoint to just specify required hardware and anti-affinity, and let the scheduler optimize from there. So does Kubernetes support anti-affinity in some way I couldn't find? Or does anyone know if it will any time soon? Or should I be thinking about this another way? Do I have to write my own scheduler? ### Response: Looks like there are a few ways that kubernetes decides how to spread containers, and these are in active development. Firstly, of course there have to be the necessary resources on any machine for the scheduler to consider bringing up a pod there. After that, kubernetes spreads pods by replication controller, attempting to keep the different instances created by a given replication controller on different nodes. It seems a method of scheduling that considers services and various other parameters was recently implemented. https://github.com/GoogleCloudPlatform/kubernetes/pull/2906 Though I'm not completely clear on exactly how to use it. Perhaps in coordination with this scheduler config? https://github.com/GoogleCloudPlatform/kubernetes/pull/4674 Probably the most interesting issue to me is that none of these scheduling priorities are considered during scale-down, only scale-up. https://github.com/GoogleCloudPlatform/kubernetes/issues/4301 That's a bit of a big deal; it seems like over time you could get weird distributions of pods because they stay wherever they were originally placed. Overall, I think the answer to my question at the moment is that this is an area of kubernetes that is in flux (as is to be expected with pre-v1). However, it looks like much of what I need will be done automatically with sufficient nodes, and proper use of replication controllers and services.
I'm looking at deploying Kubernetes on top of a CoreOS cluster, but I think I've run into a deal breaker of sorts. If I'm using just CoreOS and fleet, I can specify within the unit files that I want certain services to not run on the same physical machine as other services (anti-affinity). This is sort of essential for high availability. But it doesn't look like kubernetes has this functionality yet. In my specific use-case, I'm going to need to run a few clusters of elasticsearch machines that need to always be available. If, for any reason, kubernetes decides to schedule all of my elasticsearch node containers for a given ES cluster on a single machine, (or even the majority on a single machine), and that machine dies, then my elasticsearch cluster will die with it. That can't be allowed to happen. It seems like there could be work-arounds. I could set up the resource requirements and machine specs such that only one elasticsearch instance could fit on each machine. Or I could probably use labels in some way to specify that certain elasticsearch containers should go on certain machines. I could also just provision way more machines than necessary, and way more ES nodes than necessary, and assume kubernetes will spread them out enough to be reasonably certain of high availability. But all of that seems awkward. It's much more elegant from a resource-management standpoint to just specify required hardware and anti-affinity, and let the scheduler optimize from there. So does Kubernetes support anti-affinity in some way I couldn't find? Or does anyone know if it will any time soon? Or should I be thinking about this another way? Do I have to write my own scheduler?
As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally. One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.
###Instruction: I have a kubernetes cluster running with 2 minions. Currently I make my service accessible in 2 steps:

1. Start replication controller & pod.
2. Get minion IP (using kubectl get minions) and set it as publicIPs for the Service.

What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IP-s of individual minions. It also seems to bypass the load balancing capabilities of kubernetes services because clients would have to access services running on individual minions directly. To set up the replication controller & pod I use:

id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2
  replicaSelector:
    name: frontend-pod
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: frontend-pod
        containers:
          - name: sinatra-docker-demo
            image: madisn/sinatra_docker_demo
            ports:
              - name: http-server
                containerPort: 4567
    labels:
      name: frontend-pod

To set up the service (after getting the minion ip-s):

kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]

### Response: As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally. One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.
I have a kubernetes cluster running with 2 minions. Currently I make my service accessible in 2 steps:

1. Start replication controller & pod.
2. Get minion IP (using kubectl get minions) and set it as publicIPs for the Service.

What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IP-s of individual minions. It also seems to bypass the load balancing capabilities of kubernetes services because clients would have to access services running on individual minions directly. To set up the replication controller & pod I use:

id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2
  replicaSelector:
    name: frontend-pod
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: frontend-pod
        containers:
          - name: sinatra-docker-demo
            image: madisn/sinatra_docker_demo
            ports:
              - name: http-server
                containerPort: 4567
    labels:
      name: frontend-pod

To set up the service (after getting the minion ip-s):

kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]
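For completeness, here is what the createExternalLoadBalancer route mentioned in the answer could look like on a cloud provider that supports it (it is not honored by the local vagrant setup). This is a sketch only; apart from the added field it mirrors the service definition from the question.

cat > frontend-service.yaml <<'EOF'
kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
createExternalLoadBalancer: true   # ask the cloud provider for an external LB
labels:
  name: frontend
EOF
kubectl create -f frontend-service.yaml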
In the new API (v1beta3), we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!
###Instruction: I'm just starting with Kubernetes on a 2 node (master-minion) setup on 2 private cloud servers. I've installed it, did basic config and got it running some simple pods/services from the master to the minion. My question is: How can I use persistent storage with the pods when not using Google Cloud? For my first tests I got a Ghost Blog pod running, but if I tear down the pod the changes are lost. Tried adding a volume to the pod, but can't actually find any documentation about how it is done when not on GC. My try:

apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
  manifest:
    version: v1beta1
    id: ghost
    containers:
      - name: ghost
        image: ghost
        volumeMounts:
          - name: ghost-persistent-storage
            mountPath: /var/lib/ghost
        ports:
          - hostPort: 8080
            containerPort: 2368
    volumes:
      - name: ghost-persistent-storage
        source:
          emptyDir: {}

Found this: Persistent Installation of MySQL and WordPress on Kubernetes. Can't figure out how to add storage (NFS?) to my testing install. ### Response: In the new API (v1beta3), we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!
I'm just starting with Kubernetes on a 2 node (master-minion) setup on 2 private cloud servers. I've installed it, did basic config and got it running some simple pods/services from the master to the minion. My question is: How can I use persistent storage with the pods when not using Google Cloud? For my first tests I got a Ghost Blog pod running, but if I tear down the pod the changes are lost. Tried adding a volume to the pod, but can't actually find any documentation about how it is done when not on GC. My try:

apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
  manifest:
    version: v1beta1
    id: ghost
    containers:
      - name: ghost
        image: ghost
        volumeMounts:
          - name: ghost-persistent-storage
            mountPath: /var/lib/ghost
        ports:
          - hostPort: 8080
            containerPort: 2368
    volumes:
      - name: ghost-persistent-storage
        source:
          emptyDir: {}

Found this: Persistent Installation of MySQL and WordPress on Kubernetes. Can't figure out how to add storage (NFS?) to my testing install.
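A hedged sketch of what the NFS volume type mentioned in the answer might look like for the Ghost pod, rewritten against v1beta3. The NFS server address and export path are placeholders, and exact field names can shift between API versions, so treat this as a starting point rather than a definitive spec.

cat > ghost-nfs-pod.yaml <<'EOF'
apiVersion: v1beta3
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost
      ports:
        - containerPort: 2368
          hostPort: 8080
      volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
  volumes:
    - name: ghost-persistent-storage
      nfs:
        server: 10.0.0.5        # placeholder NFS server address
        path: /exports/ghost    # placeholder export path
        readOnly: false
EOF
kubectl create -f ghost-nfs-pod.yaml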
The problem you are experiencing is caused by the fact that you are unable to access the Docker socket /var/run/docker.sock as a non-root user. When you run sudo docker version you are running the Docker client as root so it does not experience this problem. This is a basic Unix permissions problem and there are the standard solutions:

- You could run the Kubernetes build as root with sudo make release.
- You can fix the permissions on the socket such that you are able to use Docker without sudo.

If you look at the permissions on the Docker socket, you will probably see something like:

$ ls -l /var/run/docker.sock
srw-rw----. 1 root docker 0 Mar 17 12:26 /var/run/docker.sock

This shows a socket that is readable by root and by members of the docker group. In this case, I am a member of the docker group so I can run the docker client without sudo. You could set up the same thing in your environment. Note that of course you always need to start the Docker daemon as root, but in general you would expect to have this configured to start automatically when your system boots, rather than starting it manually.
###Instruction: I was trying to build kubernetes from source: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md#building-from-source I have docker installed on my ubuntu.

royalharsh95@ubuntu:~$ sudo docker version
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a

I tried after sudo service docker start but got the same error.

royalharsh95@ubuntu:~$ cd kubernetes
royalharsh95@ubuntu:~/kubernetes$ make release
build/release.sh
+++ Verifying Prerequisites....
Can't connect to 'docker' daemon. please fix and retry.
Possible causes:
  - On Mac OS X, boot2docker VM isn't installed or started
  - On Mac OS X, docker env variable isn't set appropriately. Run: $(boot2docker shellinit)
  - On Linux, user isn't in 'docker' group. Add and relogin.
    - Something like 'sudo usermod -a -G docker royalharsh95'
    - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
  - On Linux, Docker daemon hasn't been started or has crashed
make: *** [release] Error 1

### Response: The problem you are experiencing is caused by the fact that you are unable to access the Docker socket /var/run/docker.sock as a non-root user. When you run sudo docker version you are running the Docker client as root so it does not experience this problem. This is a basic Unix permissions problem and there are the standard solutions:

- You could run the Kubernetes build as root with sudo make release.
- You can fix the permissions on the socket such that you are able to use Docker without sudo.

If you look at the permissions on the Docker socket, you will probably see something like:

$ ls -l /var/run/docker.sock
srw-rw----. 1 root docker 0 Mar 17 12:26 /var/run/docker.sock

This shows a socket that is readable by root and by members of the docker group. In this case, I am a member of the docker group so I can run the docker client without sudo. You could set up the same thing in your environment. Note that of course you always need to start the Docker daemon as root, but in general you would expect to have this configured to start automatically when your system boots, rather than starting it manually.
I was trying to build kubernetes from source: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md#building-from-source I have docker installed on my ubuntu.

royalharsh95@ubuntu:~$ sudo docker version
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a

I tried after sudo service docker start but got the same error.

royalharsh95@ubuntu:~$ cd kubernetes
royalharsh95@ubuntu:~/kubernetes$ make release
build/release.sh
+++ Verifying Prerequisites....
Can't connect to 'docker' daemon. please fix and retry.
Possible causes:
  - On Mac OS X, boot2docker VM isn't installed or started
  - On Mac OS X, docker env variable isn't set appropriately. Run: $(boot2docker shellinit)
  - On Linux, user isn't in 'docker' group. Add and relogin.
    - Something like 'sudo usermod -a -G docker royalharsh95'
    - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
  - On Linux, Docker daemon hasn't been started or has crashed
make: *** [release] Error 1
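Following the second option in the answer, the group fix looks roughly like this on the asker's Ubuntu box (log out and back in, or run newgrp docker, before retrying the build):

# the docker group usually exists already; create it if it does not
sudo groupadd docker 2>/dev/null || true
# add the build user to the group
sudo usermod -a -G docker royalharsh95
# after logging back in, both of these should work without sudo
docker version
cd kubernetes && make release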
According to https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis it uses sentinel to manage the failover. This reduces the problem to the "normal" sentinel-based solution. In this case I would recommend running HAProxy in the same container as the Sentinels and using a simple sentinel script to update the HAProxy config and issue a reload. A simple HAProxy config which only talks to the master can easily be achieved with a simple search, replace, reload script. Oh, and don't use the HAProxy check in that blog post. It doesn't account for or detect split-brain conditions. You could either go with a simple port check for availability, or write a custom check which queries each of the sentinels and only talks to the one with at least two sentinels reporting it as the master.
###Instruction: I have deployed a Redis Cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a redis cluster, IF you have static IPs. However, we don't have this when using kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deploying a new pod, respectively. However, kubernetes assigns a new IP to the new pod. How can we inject this new IP into the HAProxy healthchecks and remove the old master IP? I have the following setup.

+----+   +----+   +----+   +----+
| W1 |   | W2 |   | W3 |   | W4 |   Web application servers
+----+   +----+   +----+   +----+
    \       |       |      /
     \      |       |     /
      \     |       |    /
        +---------+
        | HAProxy |
        +---------+
       /      \       \
  +----+   +----+   +----+
  | P1 |   | P2 |   | P3 |   K8S pods = Redis + Sentinel
  +----+   +----+   +----+

Which is very similar to the setup described on the haproxy blog. ### Response: According to https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis it uses sentinel to manage the failover. This reduces the problem to the "normal" sentinel-based solution. In this case I would recommend running HAProxy in the same container as the Sentinels and using a simple sentinel script to update the HAProxy config and issue a reload. A simple HAProxy config which only talks to the master can easily be achieved with a simple search, replace, reload script. Oh, and don't use the HAProxy check in that blog post. It doesn't account for or detect split-brain conditions. You could either go with a simple port check for availability, or write a custom check which queries each of the sentinels and only talks to the one with at least two sentinels reporting it as the master.
I have deployed a Redis Cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a redis cluster, IF you have static IPs. However, we don't have this when using kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deploying a new pod, respectively. However, kubernetes assigns a new IP to the new pod. How can we inject this new IP into the HAProxy healthchecks and remove the old master IP? I have the following setup.

+----+   +----+   +----+   +----+
| W1 |   | W2 |   | W3 |   | W4 |   Web application servers
+----+   +----+   +----+   +----+
    \       |       |      /
     \      |       |     /
      \     |       |    /
        +---------+
        | HAProxy |
        +---------+
       /      \       \
  +----+   +----+   +----+
  | P1 |   | P2 |   | P3 |   K8S pods = Redis + Sentinel
  +----+   +----+   +----+

Which is very similar to the setup described on the haproxy blog.
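A standalone sketch of the "at least two sentinels agree" check suggested in the answer. The sentinel addresses and the master name mymaster are assumptions (mymaster is the default in the stock Redis example); wiring this into HAProxy, which passes the backend server address among its check arguments, would need a thin wrapper around it.

#!/bin/sh
# Usage: check-master.sh <candidate-ip>
SENTINELS="10.244.1.10 10.244.2.11 10.244.3.12"   # placeholder sentinel pod IPs
CANDIDATE="$1"
VOTES=0
for S in $SENTINELS; do
  # first line of the reply is the master's IP as seen by this sentinel
  MASTER=$(redis-cli -h "$S" -p 26379 sentinel get-master-addr-by-name mymaster | head -1)
  if [ "$MASTER" = "$CANDIDATE" ]; then
    VOTES=$((VOTES + 1))
  fi
done
# exit 0 (healthy) only when a majority of sentinels name this node as the master
[ "$VOTES" -ge 2 ]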
There's a little bit of magic involved that's intended to make DNS in Kubernetes more convenient from within a pod. Let me try to explain. The way that the DNS names are constructed within Kubernetes is <service-name>.<namespace>.kubernetes.local. This is why kubernetes.local is resolving on your node, but my-service-name isn't. Assuming your service is defined in the default namespace (it will be unless you explicitly created it in a different namespace), you should be able to resolve it at my-service-name.default.kubernetes.local. The docs around DNS assume that you care about how to resolve service names from within a pod rather than directly on the host. Within your pod, DNS should be set up to first search for names you specify relative to default.kubernetes.local and kubernetes.local, meaning that from within any pod in the cluster that isn't kube-dns (it's handled specially) you should be able to resolve your service using either my-service-name or my-service-name.default.kubernetes.local. If you want to try it out, attach to one of your cluster's fluentd pods using docker exec and try looking up your service from within the container. Note that the cluster domain suffix changed from kubernetes.local to cluster.local between versions 0.17.0 and 0.18.0, so check your cluster's version (using kubectl version) if your first attempt doesn't work.
###Instruction: I'm using google container engine and I can create pods and services in my cluster. But when I try to use the DNS feature (skydns) to lookup my services nothing is being found. If I log in to the non-master node, I can see the DNS container and can use 'host' command to do DNS lookup (installed with apt-get). But I can't find my service by it's name. It associates kubernetes.local with the IP of the service. Actually it associates kubernetes.local with the IP of every one of my services (I have 9). But it does not associate the service name "my-service-name". Anyone know the trick to get this to work? Either creating the service isn't causing skydns to create the DNS entry (maybe there is some magic to make it work)...or I'm just completely clueless (less magical, perhaps more likely). I don't know which. b ### Response: There's a little bit of magic involved that's intended to make DNS in Kubernetes more convenient from within a pod. Let me try to explain. The way that the DNS names are constructed within Kubernetes is <service-name>.<namespace>.kubernetes.local. This is why kubernetes.local is resolving on your node, but my-service-name isn't. Assuming your service is defined in the default namespace (it will be unless you explicitly created it in a different namespace), you should be able to resolve it at my-service-name.default.kubernetes.local. The docs around DNS assume that you care about how to resolve service names from within a pod rather than directly on the host. Within your pod, DNS should be set up to first search for names you specify relative to default.kubernetes.local and kubernetes.local, meaning that from within any pod in the cluster that isn't kube-dns (it's handled specially) you should be able to resolve your service using either my-service-name or my-service-name.default.kubernetes.local. If you want to try it out, attach to one of your cluster's fluentd pods using docker exec and try looking up your service from within the container. Note that the cluster domain suffix changed from kubernetes.local to cluster.local between versions 0.17.0 and 0.18.0, so check your cluster's version (using kubectl version) if your first attempt doesn't work.
I'm using google container engine and I can create pods and services in my cluster. But when I try to use the DNS feature (skydns) to lookup my services nothing is being found. If I log in to the non-master node, I can see the DNS container and can use 'host' command to do DNS lookup (installed with apt-get). But I can't find my service by it's name. It associates kubernetes.local with the IP of the service. Actually it associates kubernetes.local with the IP of every one of my services (I have 9). But it does not associate the service name "my-service-name". Anyone know the trick to get this to work? Either creating the service isn't causing skydns to create the DNS entry (maybe there is some magic to make it work)...or I'm just completely clueless (less magical, perhaps more likely). I don't know which. b
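To see the behaviour described in the answer, something like the following can be run from inside any pod's container (for example after docker exec-ing into it), assuming the image has nslookup available; my-service-name is a placeholder for your own service.

# short name: resolved via the search path configured inside the pod
nslookup my-service-name
# fully qualified form for a service in the default namespace
nslookup my-service-name.default.kubernetes.local
# on clusters at v0.18.0 or newer the suffix is cluster.local instead
nslookup my-service-name.default.cluster.local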
Update (June 2015): Kubernetes on GCE now uses managed instance groups which you can manually resize to add new nodes to your cluster. There isn't currently a way to add nodes to your existing Google Container Engine cluster. We are currently adding support to Kubernetes to allow clusters to have nodes dynamically added but the work isn't quite finished yet. Once the feature is available in Kubernetes you can expect that it will show up in Google Container Engine shortly after the next Kubernetes release. In the mean time, it should be possible to run more than two replicas of your node.js application on the existing two VMs.
###Instruction: I have two instances of an app container (happens to be a Node.JS app, but that shouldn't matter) running in a Kubernetes cluster on Google Container Engine. I'd like to scale it up to three instances. My cluster has a master and two minion nodes, with a replication controller and a load balancer service. The replication controller keeps my app container running happily on the two nodes. I can see that there is a handy gcloud alpha container kubectl resize command which lets me change the number of replicas, but I don't see how or if I can increase the size of the cluster itself, so that it can spin up another minion node. I only see gcloud commands to create, delete, list and describe clusters; nothing to resize them. If I can't resize my cluster, then to scale up I'd need to create a whole new cluster and kill the old one. Am I missing something? Also, are there plans to support auto-scaling? ### Response: Update (June 2015): Kubernetes on GCE now uses managed instance groups which you can manually resize to add new nodes to your cluster. There isn't currently a way to add nodes to your existing Google Container Engine cluster. We are currently adding support to Kubernetes to allow clusters to have nodes dynamically added but the work isn't quite finished yet. Once the feature is available in Kubernetes you can expect that it will show up in Google Container Engine shortly after the next Kubernetes release. In the mean time, it should be possible to run more than two replicas of your node.js application on the existing two VMs.
I have two instances of an app container (happens to be a Node.JS app, but that shouldn't matter) running in a Kubernetes cluster on Google Container Engine. I'd like to scale it up to three instances. My cluster has a master and two minion nodes, with a replication controller and a load balancer service. The replication controller keeps my app container running happily on the two nodes. I can see that there is a handy gcloud alpha container kubectl resize command which lets me change the number of replicas, but I don't see how or if I can increase the size of the cluster itself, so that it can spin up another minion node. I only see gcloud commands to create, delete, list and describe clusters; nothing to resize them. If I can't resize my cluster, then to scale up I'd need to create a whole new cluster and kill the old one. Am I missing something? Also, are there plans to support auto-scaling?
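Since the June 2015 update in the answer, the node VMs back onto a managed instance group that can be resized directly. The commands below are a sketch only; the group name and zone are placeholders, and the exact gcloud command group has moved around between releases, so check gcloud compute instance-groups managed --help on your version.

# find the instance group that backs the container cluster
gcloud compute instance-groups managed list
# grow it from 2 to 3 nodes (group name and zone are placeholders)
gcloud compute instance-groups managed resize my-cluster-group \
    --size 3 --zone us-central1-a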
fabric8 is a project that has a number of value-adds for OS3 / kubernetes platforms http://fabric8.io/ There is clustered ActiveMQ out of the box http://fabric8.io/guide/fabric8MQ.html As the project is in development, you may get best help on irc chat on #fabric8 on freenode - all the guys hang out there.
###Instruction: ActiveMQ built-in cluster discovery mechanisms are basically based on multicast (excepting LDAP here). Openshift v3 / Kubernetes don't support well multicast as it could be quite bad or misfunctioning on a public cloud infrastructure. Is there any existing option to enable network of activemq brokers discovery within Openshift v3 ? I saw the project jboss-openshift/openshift-ping enabling discovery for JGroups members on Openshift. I am looking for an equivalent for ActiveMQ. ### Response: fabric8 is a project that has a number of value-adds for OS3 / kubernetes platforms http://fabric8.io/ There is clustered ActiveMQ out of the box http://fabric8.io/guide/fabric8MQ.html As the project is in development, you may get best help on irc chat on #fabric8 on freenode - all the guys hang out there.
ActiveMQ built-in cluster discovery mechanisms are basically based on multicast (excepting LDAP here). Openshift v3 / Kubernetes don't support well multicast as it could be quite bad or misfunctioning on a public cloud infrastructure. Is there any existing option to enable network of activemq brokers discovery within Openshift v3 ? I saw the project jboss-openshift/openshift-ping enabling discovery for JGroups members on Openshift. I am looking for an equivalent for ActiveMQ.
You would typically only ever have one minion/host. So if you are deploying your minions on physical hardware, there is a 1:1 mapping between minions and physical hosts. If you are deploying into a virtual cluster on your laptop, you will want to make sure that each virtual minion has enough memory to run at least a single instance of whatever containers you plan on deploying. "How much is enough?" is a question that only you can answer.
###Instruction: I am planning to test Kubernetes locally, but would like to ask some theoretic questions before. I created a pipeline in python that takes as input a whole bunch of files from a directory, and created a docker image out of it (this is my Pod) What I understood from the documentation is that the Kubernetes scheduler will choose automatically the minion to deploy for a given task, my question is, using an 8G memory laptop, is there a 'rule' to follow before creating the minion (specifying the number of minions to deploy) based on the amount of memory available in a machine (regardless if it is a laptop or a cluster) ? Thanks ### Response: You would typically only ever have one minion/host. So if you are deploying your minions on physical hardware, there is a 1:1 mapping between minions and physical hosts. If you are deploying into a virtual cluster on your laptop, you will want to make sure that each virtual minion has enough memory to run at least a single instance of whatever containers you plan on deploying. "How much is enough?" is a question that only you can answer.
I am planning to test Kubernetes locally, but would like to ask some theoretic questions before. I created a pipeline in python that takes as input a whole bunch of files from a directory, and created a docker image out of it (this is my Pod) What I understood from the documentation is that the Kubernetes scheduler will choose automatically the minion to deploy for a given task, my question is, using an 8G memory laptop, is there a 'rule' to follow before creating the minion (specifying the number of minions to deploy) based on the amount of memory available in a machine (regardless if it is a laptop or a cluster) ? Thanks
On AWS you have to use your PRIVATE ip address with Kubernetes' services, since your instance is not aware of its public ip. The NAT-ing on amazon's side is done in such a way that your service will be accessible using this configuration. Update: please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is not relevant anymore. Please check the following documentation page for workarounds: https://kubernetes.io/docs/user-guide/services/
###Instruction: I started a cluster in aws following the guides and then went about following the guestbook. The problem I have is accessing it externally. I set the PublicIP to the ec2 publicIP and then use the ip to access it in the browser with port 8000 as specified in the guide. Nothing showed. To make sure it was actually the service that wasn't showing anything I then removed the service and set a host port to be 8000. When I went to the ec2 instance IP I could access it correctly. So it seems there is a problem with my setup or something. The one thing I can think of is, I am inside a VPC with an internet gateway. I didn't add any of my json files I used, because they are almost exactly the same as the guestbook example with a few changes to allow my ec2 PublicIP, and a few changes for the VPC. ### Response: On AWS you have to use your PRIVATE ip address with Kubernetes' services, since your instance is not aware of its public ip. The NAT-ing on amazon's side is done in such a way that your service will be accessible using this configuration. Update: please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is not relevant anymore. Please check the following documentation page for workarounds: https://kubernetes.io/docs/user-guide/services/
I started a cluster in aws following the guides and then went about following the guestbook. The problem I have is accessing it externally. I set the PublicIP to the ec2 publicIP and then use the ip to access it in the browser with port 8000 as specified in the guide. Nothing showed. To make sure it was actually the service that wasn't showing anything I then removed the service and set a host port to be 8000. When I went to the ec2 instance IP I could access it correctly. So it seems there is a problem with my setup or something. The one thing I can think of is, I am inside a VPC with an internet gateway. I didn't add any of my json files I used, because they are almost exactly the same as the guestbook example with a few changes to allow my ec2 PublicIP, and a few changes for the VPC.
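The update in the answer above notes that publicIPs was dropped in the v1 API; the usual workaround on AWS is to expose the service on a port of every node and open that port in the instance's security group. A minimal sketch, assuming v1-era manifests and the guestbook's name: frontend label:

```yaml
# Hypothetical NodePort service for the guestbook frontend. Traffic sent to
# <node public IP>:30080 is forwarded to a matching pod; the label selector
# and the node port value are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    name: frontend
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 80    # container port on the pods
    nodePort: 30080   # must fall in the cluster's node-port range
```

Because the node receives the packet on its private address (as described above), no publicIPs entry is needed: you hit the instance's public IP on the node port after allowing that port through the VPC security group.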
A service proxy runs on each node in the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP, where it will be captured by the proxy and directed to an appropriate pod. If you followed this description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes. As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, and pods, but this proxy isn't in the packet path for services that you create within the cluster.
###Instruction: I have recently started exploring Kubernetes and have done a practical implementation of pods, services and replication controllers on Google Cloud. I have some doubts about service and network access. First, where is the service deployed that acts as a load balancer for a group of pods? Second, does a request to an application running in a pod, made through the service load balancer, go through the master or directly to the minion nodes? ### Response: A service proxy runs on each node in the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP, where it will be captured by the proxy and directed to an appropriate pod. If you followed this description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes. As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, and pods, but this proxy isn't in the packet path for services that you create within the cluster.
I have recently started exploring Kubernetes and have done a practical implementation of pods, services and replication controllers on Google Cloud. I have some doubts about service and network access. First, where is the service deployed that acts as a load balancer for a group of pods? Second, does a request to an application running in a pod, made through the service load balancer, go through the master or directly to the minion nodes?
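A small, concrete illustration of the label-selector routing described in the answer: the proxy on each node matches the service's selector against pod labels, so the two objects only need to agree on that label. All names and the nginx image below are placeholder assumptions:

```yaml
# Hypothetical service/pod pair. The node-local proxy intercepts traffic to
# the service IP and forwards it to any ready pod labelled app: web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # must match the pod labels below
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx       # placeholder image serving on port 80
    ports:
    - containerPort: 80
```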
It sounds like your application is using its own master election scheme between the two VMs acting as a load balancer and you know internally which one is currently the master. This can be achieved today in Kubernetes using a service that spans both pods (master and standby) and a readiness probe that only returns success for the currently active master. Failure of a readiness probe removes the pod from the endpoints list, so no traffic will be directed to the node that isn't the master. When you need to do failover, the standby would report healthy to the readiness probe (and the master would report unhealthy or be unreachable) at which point traffic to the service would only land on the standby (now acting as the master). You can create the service that spans the two pods with an external IP such that it is reachable from outside of your cluster.
###Instruction: I have a distributed application running on virtual machines, among which I have one service running in active/passive mode. The active VM provides service via a public IP. Should the active VM fail, the public IP will be moved to the passive VM, which will become active and start providing service. How does this pattern fit a containerized application managed by Kubernetes? If I use a replication controller with replicas=1, then in case of node/minion failure the replication controller will reschedule the pod (= the VM in my current application) on another minion, but this would likely cause higher downtime compared with my current solution, where only the IP resource is moved. If I use a replication controller with replicas=2, then I would need two different configurations for the two pods (one with the public IP, the other without), which seems like an anti-pattern. Furthermore, is there no designed way in Kubernetes to support a virtual IP that moves around between pods? Or should I use replicas=2 and implement something myself to manage the IP (or maybe make use of Pacemaker? That would introduce another problem: there would be two cluster-management systems in my application, Kubernetes and Pacemaker/Corosync). So, how should this be done? ### Response: It sounds like your application is using its own master election scheme between the two VMs acting as a load balancer and you know internally which one is currently the master. This can be achieved today in Kubernetes using a service that spans both pods (master and standby) and a readiness probe that only returns success for the currently active master. Failure of a readiness probe removes the pod from the endpoints list, so no traffic will be directed to the node that isn't the master. When you need to do failover, the standby would report healthy to the readiness probe (and the master would report unhealthy or be unreachable) at which point traffic to the service would only land on the standby (now acting as the master). You can create the service that spans the two pods with an external IP such that it is reachable from outside of your cluster.
I have a distributed application running on virtual machines, among which I have one service running in active/passive mode. The active VM provides service via a public IP. Should the active VM fail, the public IP will be moved to the passive VM, which will become active and start providing service. How does this pattern fit a containerized application managed by Kubernetes? If I use a replication controller with replicas=1, then in case of node/minion failure the replication controller will reschedule the pod (= the VM in my current application) on another minion, but this would likely cause higher downtime compared with my current solution, where only the IP resource is moved. If I use a replication controller with replicas=2, then I would need two different configurations for the two pods (one with the public IP, the other without), which seems like an anti-pattern. Furthermore, is there no designed way in Kubernetes to support a virtual IP that moves around between pods? Or should I use replicas=2 and implement something myself to manage the IP (or maybe make use of Pacemaker? That would introduce another problem: there would be two cluster-management systems in my application, Kubernetes and Pacemaker/Corosync). So, how should this be done?
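A sketch of the readiness-probe arrangement described above: both pods sit behind one service, and each exposes an HTTP endpoint that only returns success on the instance that currently holds the master role, so the endpoints list always contains exactly the active pod. The image name and the /am-i-master path are assumptions about your application, not Kubernetes conventions:

```yaml
# Hypothetical active/standby pair behind one service; only the pod whose
# readiness probe succeeds (the current master) receives traffic.
apiVersion: v1
kind: Service
metadata:
  name: ha-service
spec:
  selector:
    app: ha-service          # both the master and the standby carry this label
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: ha-service-a         # a second pod, ha-service-b, would look the same
  labels:
    app: ha-service
spec:
  containers:
  - name: app
    image: example/ha-service:latest   # placeholder image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /am-i-master   # assumed endpoint: returns 200 only on the active instance
        port: 8080
      periodSeconds: 5
```

On failover the standby starts answering the probe and the old master stops, so the service's endpoints flip without any IP having to move.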
Currently, adding labels to the hosts and restricting your pods to nodes with the appropriate label (e.g. 'Large' / 'Small') is the best way to do this. For an example, see examples/node-selection Kubernetes has a resource model which, once implemented, will allow you to tell the scheduler what resources each pod needs and the system will ensure that the pod is placed on a node with available resources. Unfortunately, as you can tell from the design document on Github, this isn't fully implemented yet.
###Instruction: Based on the following setup of Kubernetes on Microsoft Azure, I was able to deploy my Docker containers using the same configuration settings. We have 2 categories of containers, front-end and back-end, where the back-end consists of compute-intensive processing. The latter we want to run on Large instances, whereas the front-end will be run on Small instances. What is the best option to separate these? My guess would be labeling the hosts, but I was not able to find this in the docs or in the examples. ### Response: Currently, adding labels to the hosts and restricting your pods to nodes with the appropriate label (e.g. 'Large' / 'Small') is the best way to do this. For an example, see examples/node-selection. Kubernetes has a resource model which, once implemented, will allow you to tell the scheduler what resources each pod needs, and the system will ensure that the pod is placed on a node with available resources. Unfortunately, as you can tell from the design document on Github, this isn't fully implemented yet.
Based on the following setup of Kubernetes on Microsoft Azure, I was able to deploy my Docker containers using the same configuration settings. We have 2 categories of containers, front-end and back-end, where the back-end consists of compute-intensive processing. The latter we want to run on Large instances, whereas the front-end will be run on Small instances. What is the best option to separate these? My guess would be labeling the hosts, but I was not able to find this in the docs or in the examples.
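To make the node-selection approach from the answer concrete: label the hosts once (for example with kubectl label nodes <node-name> size=large) and then constrain each pod category to the matching label. A minimal sketch, assuming a current-style pod spec; the size key and image name are placeholders:

```yaml
# Hypothetical back-end pod pinned to hosts labelled size=large; front-end
# pods would use size: small instead.
apiVersion: v1
kind: Pod
metadata:
  name: backend-worker
spec:
  nodeSelector:
    size: large                       # assumed node label
  containers:
  - name: worker
    image: example/backend:latest     # placeholder image
```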
You mentioned in your question that you are using google container engine, but in fact you are using the container vm (which is a bit different). If you want to use container engine, please check out the documentation to create a container cluster. I ran your example, and in /var/log/kubelet.log saw the following error: E0519 17:05:41.285556 2414 http.go:54] Failed to read URL: http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/google-cont ainer-manifest: received 'version: v1beta3 containers: - name: talks image: sheki/talks ports: - name: http-port containerPort: 8080 hostPort: 80' ', but couldn't parse as neither single (error unmarshaling JSON: json: cannot unmarshal string into Go value of type int: {Version:v1beta3 ID: UUID: Volumes:[] Containers:[{Name:talks Image:sheki/talks Entrypoint:[] Command:[] WorkingDir: Ports:[{Name:http-port HostPort:0 ContainerPort:8080 Proto col: HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} CPU:0 Memory:0 VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath: Privileged:false ImagePullPolicy: Capabilities:{Add:[] Drop:[]}}] RestartPolicy:{Always:<nil> OnFailure:<nil> Never:<nil>} D NSPolicy: HostNetwork:false}) or multiple manifests (error unmarshaling JSON: json: cannot unmarshal object into Go value of type []v1beta1.Container Manifest: []) nor single (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"containerPort":8080,"hostPort":"80'","name ":"http-port"}]}],"version":"v1beta3"}') or multiple pods (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"container Port":8080,"hostPort":"80'","name":"http-port"}]}],"version":"v1beta3"}'). It looks like the documentation for container vms is out of date.
###Instruction: I am trying to run a docker image in Google Container Engine. The instance comes up with no running docker images. I can ssh and run the docker commands and the service comes up. But nothing happens when I just launch the instance from the terminal. Can some one take a look at what I am doing wrong. My docker file looks like FROM golang RUN mkdir -p /app COPY . /app RUN go get golang.org/x/tools/cmd/present ENTRYPOINT cd /app && /go/bin/present -http=":8080" EXPOSE 8080 containers.yaml looks like version: v1beta3 containers: - name: talks image: sheki/talks ports: - name: http-port containerPort: 8080 hostPort: 80' The command to launch the instance is gcloud compute instances create zoop \ --image container-vm \ --metadata-from-file google-container-manifest=containers.yaml \ --zone us-central1-a \ --machine-type f1-micro ### Response: You mentioned in your question that you are using google container engine, but in fact you are using the container vm (which is a bit different). If you want to use container engine, please check out the documentation to create a container cluster. I ran your example, and in /var/log/kubelet.log saw the following error: E0519 17:05:41.285556 2414 http.go:54] Failed to read URL: http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/google-cont ainer-manifest: received 'version: v1beta3 containers: - name: talks image: sheki/talks ports: - name: http-port containerPort: 8080 hostPort: 80' ', but couldn't parse as neither single (error unmarshaling JSON: json: cannot unmarshal string into Go value of type int: {Version:v1beta3 ID: UUID: Volumes:[] Containers:[{Name:talks Image:sheki/talks Entrypoint:[] Command:[] WorkingDir: Ports:[{Name:http-port HostPort:0 ContainerPort:8080 Proto col: HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} CPU:0 Memory:0 VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath: Privileged:false ImagePullPolicy: Capabilities:{Add:[] Drop:[]}}] RestartPolicy:{Always:<nil> OnFailure:<nil> Never:<nil>} D NSPolicy: HostNetwork:false}) or multiple manifests (error unmarshaling JSON: json: cannot unmarshal object into Go value of type []v1beta1.Container Manifest: []) nor single (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"containerPort":8080,"hostPort":"80'","name ":"http-port"}]}],"version":"v1beta3"}') or multiple pods (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"container Port":8080,"hostPort":"80'","name":"http-port"}]}],"version":"v1beta3"}'). It looks like the documentation for container vms is out of date.
I am trying to run a docker image in Google Container Engine. The instance comes up with no running docker images. I can ssh and run the docker commands and the service comes up. But nothing happens when I just launch the instance from the terminal. Can some one take a look at what I am doing wrong. My docker file looks like FROM golang RUN mkdir -p /app COPY . /app RUN go get golang.org/x/tools/cmd/present ENTRYPOINT cd /app && /go/bin/present -http=":8080" EXPOSE 8080 containers.yaml looks like version: v1beta3 containers: - name: talks image: sheki/talks ports: - name: http-port containerPort: 8080 hostPort: 80' The command to launch the instance is gcloud compute instances create zoop \ --image container-vm \ --metadata-from-file google-container-manifest=containers.yaml \ --zone us-central1-a \ --machine-type f1-micro
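The root cause in the error above is the stray quote in hostPort: 80', which turns the value into a string where the manifest parser expects an integer. Assuming the rest of the manifest stays as posted, a cleaned-up containers.yaml would look like:

```yaml
version: v1beta3
containers:
  - name: talks
    image: sheki/talks
    ports:
      - name: http-port
        containerPort: 8080
        hostPort: 80        # integer, no trailing quote
```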
Update (2016-07-15) With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development. You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
###Instruction: Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such). During development you want to be as close as possible to production environment with some important changes: Deployed locally (or at least somewhere where you and only you can access) Use latest source code on page refresh (supposing its a website; ideally page auto-refresh on local file save which can be done if you mount source code and use some stuff like Yeoman). Similarly one may want a non-public environment to do continuous integration. Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work? ### Response: Update (2016-07-15) With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development. You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such). During development you want to be as close as possible to production environment with some important changes: Deployed locally (or at least somewhere where you and only you can access) Use latest source code on page refresh (supposing its a website; ideally page auto-refresh on local file save which can be done if you mount source code and use some stuff like Yeoman). Similarly one may want a non-public environment to do continuous integration. Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?
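A sketch of the hostPath arrangement the answer describes for local development: the pod mounts a directory from the node, so saving a file on disk is immediately visible to the web server on the next request. The nginx image and the paths are assumptions; any static server and any source directory would do:

```yaml
# Hypothetical dev pod serving source straight from the host machine.
apiVersion: v1
kind: Pod
metadata:
  name: dev-web
spec:
  containers:
  - name: web
    image: nginx                        # placeholder static web server
    volumeMounts:
    - name: src
      mountPath: /usr/share/nginx/html  # where nginx serves files from
  volumes:
  - name: src
    hostPath:
      path: /home/me/project/public     # assumed path on the local node
```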
Unfortunately, the best answer we currently have is to create a new cluster and move your resources over, then delete the old one. We are very actively working on making cluster upgrades reliable (both nodes and the master), but upgrades are unlikely to work for the majority of currently existing clusters.
###Instruction: I began to try Google Container Engine recently. I would like to upgrade the Kubernetes cluster to the latest version available, if possible without downtime. Is there any way to do this? ### Response: Unfortunately, the best answer we currently have is to create a new cluster and move your resources over, then delete the old one. We are very actively working on making cluster upgrades reliable (both nodes and the master), but upgrades are unlikely to work for the majority of currently existing clusters.
I began to try Google Container Engine recently. I would like to upgrade the Kubernetes cluster to the latest version available, if possible without downtime. Is there any way to do this?
I'll try to break down the unique aspects of each container orchestration framework on Mesos. Use Docker Swarm if: You want to use the familiar Docker API to launch Docker containers on Mesos. Swarm may eventually provide an API to talk to Kubernetes (even K8s-Mesos) too. See: http://www.techrepublic.com/article/docker-and-mesos-like-peanut-butter-and-jelly/ Use Kubernetes-Mesos if: You want to launch K8s Pods, which are groups of containers co-scheduled and co-located together, sharing resources. You want to launch a service alongside one or more sidekick containers (e.g. log archiver, metrics monitor) that live next to the parent container. You want to use the K8s label-based service-discovery, load-balancing, and replication control. See http://kubernetesio.blogspot.com/2015/04/kubernetes-and-mesosphere-dcos.html Use Marathon if: You want to launch Docker or non-Docker long-running apps/services. You want to use Mesos attributes for constraint-based scheduling. You want to use Application Groups and Dependencies to launch, scale, or upgrade related services. You want to use health checks to automatically restart unhealthy services or rollback unhealthy deployments/upgrades. You want to integrate HAProxy or Consul for service discovery. You want to launch and monitor apps through a web UI or REST API. You want to use a framework built from the start with Mesos in mind. Use Chronos if: You want to launch Docker or non-Docker tasks that are expected to exit. You want to schedule a task to run at a specific time/schedule (a la cron). You want to schedule a DAG workflow of dependent tasks. You want to launch and monitor jobs through a web UI or REST API. You want to use a framework built from the start with Mesos in mind.
###Instruction: I'm looking for some pros and cons of whether to go with Marathon and Chronos, Docker Swarm or Kubernetes when running Docker containers on DC/OS. For example, when is it better to use Marathon/Chronos than Kubernetes and vice versa? Right now I'm mostly into experimenting but hopefully we'll start using one of these services in production after the summer. This may disqualify Docker Swarm since I'm not sure if it'll be production ready by then. What I like about Docker Swarm is that it's essentially just "Docker commands" and you don't have to learn something completely new. We're already using docker-compose and that will work out of the box with Docker Swarm (at least in theory) so that would be a big plus. My main concern with Docker Swarm is if it'll cover all use cases required to run a system in production. ### Response: I'll try to break down the unique aspects of each container orchestration framework on Mesos. Use Docker Swarm if: You want to use the familiar Docker API to launch Docker containers on Mesos. Swarm may eventually provide an API to talk to Kubernetes (even K8s-Mesos) too. See: http://www.techrepublic.com/article/docker-and-mesos-like-peanut-butter-and-jelly/ Use Kubernetes-Mesos if: You want to launch K8s Pods, which are groups of containers co-scheduled and co-located together, sharing resources. You want to launch a service alongside one or more sidekick containers (e.g. log archiver, metrics monitor) that live next to the parent container. You want to use the K8s label-based service-discovery, load-balancing, and replication control. See http://kubernetesio.blogspot.com/2015/04/kubernetes-and-mesosphere-dcos.html Use Marathon if: You want to launch Docker or non-Docker long-running apps/services. You want to use Mesos attributes for constraint-based scheduling. You want to use Application Groups and Dependencies to launch, scale, or upgrade related services. You want to use health checks to automatically restart unhealthy services or rollback unhealthy deployments/upgrades. You want to integrate HAProxy or Consul for service discovery. You want to launch and monitor apps through a web UI or REST API. You want to use a framework built from the start with Mesos in mind. Use Chronos if: You want to launch Docker or non-Docker tasks that are expected to exit. You want to schedule a task to run at a specific time/schedule (a la cron). You want to schedule a DAG workflow of dependent tasks. You want to launch and monitor jobs through a web UI or REST API. You want to use a framework built from the start with Mesos in mind.
I'm looking for some pros and cons of whether to go with Marathon and Chronos, Docker Swarm or Kubernetes when running Docker containers on DC/OS. For example, when is it better to use Marathon/Chronos than Kubernetes and vice versa? Right now I'm mostly into experimenting but hopefully we'll start using one of these services in production after the summer. This may disqualify Docker Swarm since I'm not sure if it'll be production ready by then. What I like about Docker Swarm is that it's essentially just "Docker commands" and you don't have to learn something completely new. We're already using docker-compose and that will work out of the box with Docker Swarm (at least in theory) so that would be a big plus. My main concern with Docker Swarm is if it'll cover all use cases required to run a system in production.
For me the difference between a master and a minion are the processes that get started and the orchestration (the master signals the minions that it is done and they can now come up, the minions wait for the signal). To bring up a minion after things are up would not be any different, except the signaling is already done, so the minion will just come up. This looked like a reasonable experiment to me, so I thought I'd try it! I am running in the digital ocean vps, using coreos (stable) and cloud-config to bring up fleet + flanneld + k8s. On the master I have: etcd fleet flanneld docker kube-apiserver kube-controller-manager kube-scheduler kube-register The important process here is kube-register. I look at its output with : journalctl -u kube-register to see when minions register. To see all the minions that I currently have : core@pa1 ~/tmp $ kubectl get minions NAME LABELS STATUS 104.236.214.244 <none> Ready 104.236.31.77 <none> Ready On the minion for processes I have: fleet flanneld docker kube-proxy kube-kubelet I think the kube-kubelet is the thing that registers with the master, but I could be wrong. I do know that the master constantly queries the kubelet for healthz, so it knows if a minion is ready for work. Anyway, I have a script that creates my cloud config for the master and for the minions. I just modified that script to 'skip' the master cloud config and just do the minion config. I run it and here is my minions now: core@pa1 / $ kubectl get minions NAME LABELS STATUS 104.236.214.244 <none> Ready 104.236.31.77 <none> Ready 45.55.164.39 <none> Ready So, long story even longer, yes, it is possible to add a minion to an existing node cluster. I don't know how you are doing discovery, but, whatever method you used to 'introduce' your minion to the master in the first place will work.
###Instruction: I have a Kubernetes cluster running on 3 servers, a master and 2 minions. I would like to add another minion. Is it possible to add a minion without having to do the complete installation again? So far when searching for guides to do this, I can only find excellent guides on getting the whole cluster up. ### Response: For me the difference between a master and a minion are the processes that get started and the orchestration (the master signals the minions that it is done and they can now come up, the minions wait for the signal). To bring up a minion after things are up would not be any different, except the signaling is already done, so the minion will just come up. This looked like a reasonable experiment to me, so I thought I'd try it! I am running in the digital ocean vps, using coreos (stable) and cloud-config to bring up fleet + flanneld + k8s. On the master I have: etcd fleet flanneld docker kube-apiserver kube-controller-manager kube-scheduler kube-register The important process here is kube-register. I look at its output with : journalctl -u kube-register to see when minions register. To see all the minions that I currently have : core@pa1 ~/tmp $ kubectl get minions NAME LABELS STATUS 104.236.214.244 <none> Ready 104.236.31.77 <none> Ready On the minion for processes I have: fleet flanneld docker kube-proxy kube-kubelet I think the kube-kubelet is the thing that registers with the master, but I could be wrong. I do know that the master constantly queries the kubelet for healthz, so it knows if a minion is ready for work. Anyway, I have a script that creates my cloud config for the master and for the minions. I just modified that script to 'skip' the master cloud config and just do the minion config. I run it and here is my minions now: core@pa1 / $ kubectl get minions NAME LABELS STATUS 104.236.214.244 <none> Ready 104.236.31.77 <none> Ready 45.55.164.39 <none> Ready So, long story even longer, yes, it is possible to add a minion to an existing node cluster. I don't know how you are doing discovery, but, whatever method you used to 'introduce' your minion to the master in the first place will work.
I have a Kubernetes cluster running on 3 servers, a master and 2 minions. I would like to add another minion. Is it possible to add a minion without having to do the complete installation again? So far when searching for guides to do this, I can only find excellent guides on getting the whole cluster up.
A replication controller ensures that the desired number of pods with the same template are kept running in the system. The replication controller itself does not know anything about what it is running, and doesn't have any special hooks for containers running databases. This means that if you want to run a container with a database with more than one replica, then it is easiest to run a database that can natively do replication and discovery (possibly with the injection of some environment variables). An alternative is to run a pod with two containers, where one container is a vanilla database, and the second "side-car" container is used to implement the necessary replication / synchronization / master election or whatever extra functionality you need to provide to make the database run in a clustered environment. This is more flexible (you can run a database that wasn't initially designed to run in a clustered environment) but also requires more custom work to make it scale.
###Instruction: If I set up a replication controller for something like a database, how does it keep the data in the replicas in sync? If one of the replicas goes down, how does it bring it back up with the latest data? ### Response: A replication controller ensures that the desired number of pods with the same template are kept running in the system. The replication controller itself does not know anything about what it is running, and doesn't have any special hooks for containers running databases. This means that if you want to run a container with a database with more than one replica, then it is easiest to run a database that can natively do replication and discovery (possibly with the injection of some environment variables). An alternative is to run a pod with two containers, where one container is a vanilla database, and the second "side-car" container is used to implement the necessary replication / synchronization / master election or whatever extra functionality you need to provide to make the database run in a clustered environment. This is more flexible (you can run a database that wasn't initially designed to run in a clustered environment) but also requires more custom work to make it scale.
If I set up a replication controller for something like a database, how does it keep the data in the replicas in sync? If one of the replicas goes down, how does it bring it back up with the latest data?
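A sketch of the side-car pattern from the answer: one pod template with two containers, a vanilla database plus a helper that handles replication and discovery, replicated by an ordinary replication controller. The redis image is just a stand-in for "a vanilla database", and the sync side-car image is hypothetical:

```yaml
# Hypothetical replication controller running a database container alongside a
# side-car that implements replication/master election for it.
apiVersion: v1
kind: ReplicationController
metadata:
  name: db
spec:
  replicas: 2
  selector:
    app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: database
        image: redis                     # placeholder "vanilla database"
        ports:
        - containerPort: 6379
      - name: replication-sidecar
        image: example/db-sync:latest    # hypothetical helper container
```

Note that the controller only keeps two such pods alive; keeping their data in sync is entirely the job of the database and the side-car.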
Since you are in China, you won't be able to fetch images from gcr.io (including the pause image). If you want to compile your own pause image and upload it to an image registry that you have access, you can specify --pod_infra_container_image="<registry>/pause:latest" when launching the kubelet.
###Instruction: I followed Kubernetes' guestbook and changed the image value like this, wanting to pull images from Docker Hub: "image": "redis" But it failed. The log says: Failed to create pod infra container: image pull failed for gcr.io/google_c... And I set the image-preloading instruction: "imagePullPolicy":"Never" I am sure the minion has the redis image, but it also failed and still pulled from gcr.io. ### Response: Since you are in China, you won't be able to fetch images from gcr.io (including the pause image). If you want to compile your own pause image and upload it to an image registry that you have access to, you can specify --pod_infra_container_image="<registry>/pause:latest" when launching the kubelet.
I followed Kubernetes' guestbook and changed the image value like this, wanting to pull images from Docker Hub: "image": "redis" But it failed. The log says: Failed to create pod infra container: image pull failed for gcr.io/google_c... And I set the image-preloading instruction: "imagePullPolicy":"Never" I am sure the minion has the redis image, but it also failed and still pulled from gcr.io.
Docker should create this file for you. Are you running this command on your OS X machine? or are you running it inside the boot2docker VM? I think you need to: boot2docker ssh Then: sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' You need to make sure that command runs on the Vagrant Linux box that boot2docker creates, not your OS X machine. Hope that helps!
###Instruction: I am currently trying to set up kubernetes on a multi-docker container on CoreOS stack for AWS. To do this I need to set up etcd for flannel and am currently using this guide but am having problems at the first stage where I am suggested to run sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' The problem is the 1st command docker -d -H unix:///var/run/docker-bootstrap.sock from within boot2docker. There is no docker-bootstrap.sock file in this directory and this error is thrown: FATA[0000] An error occurred trying to connect: Post https:///var/run/docker-bootstrap.sock/v1.18/containers/create: dial unix /var/run/docker-bootstrap.sock: no such file or directory Clearly the unix socket did not connect to this nonexistent socket. I will note this is a very similar problem to this ticket and other tickets regarding the FATA[0000] though none seem to have asked the question in the way I currently am. I am not an expert in unix sockets, but I am assuming there should be a file where there is not. Where can I get this file to solve my issue, or what is the recommended steps to resolve this. specs: running OSX Yosemite but calling all commands from boot2docker ### Response: Docker should create this file for you. Are you running this command on your OS X machine? or are you running it inside the boot2docker VM? I think you need to: boot2docker ssh Then: sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' You need to make sure that command runs on the Vagrant Linux box that boot2docker creates, not your OS X machine. Hope that helps!
I am currently trying to set up kubernetes on a multi-docker container on CoreOS stack for AWS. To do this I need to set up etcd for flannel and am currently using this guide but am having problems at the first stage where I am suggested to run sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' The problem is the 1st command docker -d -H unix:///var/run/docker-bootstrap.sock from within boot2docker. There is no docker-bootstrap.sock file in this directory and this error is thrown: FATA[0000] An error occurred trying to connect: Post https:///var/run/docker-bootstrap.sock/v1.18/containers/create: dial unix /var/run/docker-bootstrap.sock: no such file or directory Clearly the unix socket did not connect to this nonexistent socket. I will note this is a very similar problem to this ticket and other tickets regarding the FATA[0000] though none seem to have asked the question in the way I currently am. I am not an expert in unix sockets, but I am assuming there should be a file where there is not. Where can I get this file to solve my issue, or what is the recommended steps to resolve this. specs: running OSX Yosemite but calling all commands from boot2docker
If you install Docker on Windows (see the Docker instructions), you can then follow the guide to run Kubernetes locally via Docker and then once you are comfortable with that try running Multi-Node Kubernetes Using Docker.
###Instruction: Are there any up-to-date guides, or VM images of some Linux VM + Kubernetes that I could run on Windows? Both VMWare, VirtualBox or Vagrant images would help. I'm trying to set up a development environment. (There is no production environment yet, but it will be most likely self-hosted.) I tried installing several Vagrant templates for Kubernetes linked from their github documentation, but they were specifically marked as not supported on Windows; I tried compiling Kubernetes 0.15 from source under CoreOS and Boot2Docker, but ran into problems with either. Since my ops skill set is relatively low, I'd sleep easier if I could use a template that was set up by someone who knew what they're doing. ### Response: If you install Docker on Windows (see the Docker instructions), you can then follow the guide to run Kubernetes locally via Docker and then once you are comfortable with that try running Multi-Node Kubernetes Using Docker.
Are there any up-to-date guides, or VM images of some Linux VM + Kubernetes that I could run on Windows? Both VMWare, VirtualBox or Vagrant images would help. I'm trying to set up a development environment. (There is no production environment yet, but it will be most likely self-hosted.) I tried installing several Vagrant templates for Kubernetes linked from their github documentation, but they were specifically marked as not supported on Windows; I tried compiling Kubernetes 0.15 from source under CoreOS and Boot2Docker, but ran into problems with either. Since my ops skill set is relatively low, I'd sleep easier if I could use a template that was set up by someone who knew what they're doing.
So, I finally got this to work using the following .json files: frontend-service.json { "id": "frontend", "kind": "Service", "apiVersion": "v1beta1", "port": 443, "selector": { "name": "frontend" }, "publicIPs": [ "${servicehost}" ] } frontend-controller.json { "id": "frontend-controller", "kind": "ReplicationController", "apiVersion": "v1beta1", "desiredState": { "replicas": 1, "replicaSelector": {"name": "frontend"}, "podTemplate": { "desiredState": { "manifest": { "version": "v1beta1", "id": "frontend-controller", "containers": [{ "name": "pydio-v4", "image": "starblade/pydio-v4", "ports": [{"containerPort": 443, "hostPort": 31000}] }] } }, "labels": {"name": "frontend"} }}, "labels": {"name": "frontend"} } I now have pydio with SSL running in a Mesos-Kubernetes env on GCE. Going to run some tests using more hostPorts to see if I can get more than one replica running on one host. At this point I can resize up to 3. Hope this helps someone. Thanks, TT
###Instruction: I was wondering if anyone knows how to create a json file that would be the same as running: docker run -p 80:80 -p 443:443 starblade/pydio-v4 I trying something very ambitious, I want to start my docker container in kubernetes-mesos cluster but can't seem to get the ports correct in the json file, alas I am still very new to this. Thanks, TT Here are my json files: ` { "id": "frontend-controller", "kind": "ReplicationController", "apiVersion": "v1beta1", "desiredState": { "replicas": 3, "replicaSelector": {"name": "frontend"}, "podTemplate": { "desiredState": { "manifest": { "version": "v1beta1", "id": "frontend-controller", "containers": [{ "name": "pydio-v4", "image": "starblade/pydio-v4", "ports": [{"containerPort": 10001, "protocol": "TCP"}] }] } }, "labels": {"name": "frontend"} }}, "labels": {"name": "frontend"} } { "id": "frontend", "kind": "Service", "apiVersion": "v1beta1", "port": 80, "port": 443, "targetPort": 10001, "selector": { "name": "frontend" }, "publicIPs": [ "${servicehost}" ] } Docker container Env info pulled from docker inspect command: "Env": [ "FRONTEND_SERVICE_HOST=10.10.10.14", "FRONTEND_SERVICE_PORT=443", "FRONTEND_PORT=tcp://10.10.10.14:443", "FRONTEND_PORT_443_TCP=tcp://10.10.10.14:443", "FRONTEND_PORT_443_TCP_PROTO=tcp", "FRONTEND_PORT_443_TCP_PORT=443", "FRONTEND_PORT_443_TCP_ADDR=10.10.10.14", "KUBERNETES_SERVICE_HOST=10.10.10.2", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.10.10.2:443", "KUBERNETES_PORT_443_TCP=tcp://10.10.10.2:443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "KUBERNETES_PORT_443_TCP_ADDR=10.10.10.2", "KUBERNETES_RO_SERVICE_HOST=10.10.10.1", "KUBERNETES_RO_SERVICE_PORT=80", "KUBERNETES_RO_PORT=tcp://10.10.10.1:80", "KUBERNETES_RO_PORT_80_TCP=tcp://10.10.10.1:80", "KUBERNETES_RO_PORT_80_TCP_PROTO=tcp", "KUBERNETES_RO_PORT_80_TCP_PORT=80", "KUBERNETES_RO_PORT_80_TCP_ADDR=10.10.10.1", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "PYDIO_VERSION=6.0.5" ], "ExposedPorts": { "443/tcp": {}, "80/tcp": {} }, ` The pod and service both start and run ok. However I am unable to access the running Pydio site on any of the master, minion or frontend ips. Note: I am running a modified version of the this docker container: https://registry.hub.docker.com/u/kdelfour/pydio-docker/ My container has been tested and it runs as expected. You should see the login screen once it is running. Please let me know if I can provide any other information. Thanks again. ### Response: So, I finally got this to work using the following .json files: frontend-service.json { "id": "frontend", "kind": "Service", "apiVersion": "v1beta1", "port": 443, "selector": { "name": "frontend" }, "publicIPs": [ "${servicehost}" ] } frontend-controller.json { "id": "frontend-controller", "kind": "ReplicationController", "apiVersion": "v1beta1", "desiredState": { "replicas": 1, "replicaSelector": {"name": "frontend"}, "podTemplate": { "desiredState": { "manifest": { "version": "v1beta1", "id": "frontend-controller", "containers": [{ "name": "pydio-v4", "image": "starblade/pydio-v4", "ports": [{"containerPort": 443, "hostPort": 31000}] }] } }, "labels": {"name": "frontend"} }}, "labels": {"name": "frontend"} } I now have pydio with SSL running in a Mesos-Kubernetes env on GCE. Going to run some tests using more hostPorts to see if I can get more than one replica running on one host. At this point I can resize up to 3. Hope this helps someone. Thanks, TT
I was wondering if anyone knows how to create a json file that would be the same as running: docker run -p 80:80 -p 443:443 starblade/pydio-v4 I trying something very ambitious, I want to start my docker container in kubernetes-mesos cluster but can't seem to get the ports correct in the json file, alas I am still very new to this. Thanks, TT Here are my json files: ` { "id": "frontend-controller", "kind": "ReplicationController", "apiVersion": "v1beta1", "desiredState": { "replicas": 3, "replicaSelector": {"name": "frontend"}, "podTemplate": { "desiredState": { "manifest": { "version": "v1beta1", "id": "frontend-controller", "containers": [{ "name": "pydio-v4", "image": "starblade/pydio-v4", "ports": [{"containerPort": 10001, "protocol": "TCP"}] }] } }, "labels": {"name": "frontend"} }}, "labels": {"name": "frontend"} } { "id": "frontend", "kind": "Service", "apiVersion": "v1beta1", "port": 80, "port": 443, "targetPort": 10001, "selector": { "name": "frontend" }, "publicIPs": [ "${servicehost}" ] } Docker container Env info pulled from docker inspect command: "Env": [ "FRONTEND_SERVICE_HOST=10.10.10.14", "FRONTEND_SERVICE_PORT=443", "FRONTEND_PORT=tcp://10.10.10.14:443", "FRONTEND_PORT_443_TCP=tcp://10.10.10.14:443", "FRONTEND_PORT_443_TCP_PROTO=tcp", "FRONTEND_PORT_443_TCP_PORT=443", "FRONTEND_PORT_443_TCP_ADDR=10.10.10.14", "KUBERNETES_SERVICE_HOST=10.10.10.2", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.10.10.2:443", "KUBERNETES_PORT_443_TCP=tcp://10.10.10.2:443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "KUBERNETES_PORT_443_TCP_ADDR=10.10.10.2", "KUBERNETES_RO_SERVICE_HOST=10.10.10.1", "KUBERNETES_RO_SERVICE_PORT=80", "KUBERNETES_RO_PORT=tcp://10.10.10.1:80", "KUBERNETES_RO_PORT_80_TCP=tcp://10.10.10.1:80", "KUBERNETES_RO_PORT_80_TCP_PROTO=tcp", "KUBERNETES_RO_PORT_80_TCP_PORT=80", "KUBERNETES_RO_PORT_80_TCP_ADDR=10.10.10.1", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "PYDIO_VERSION=6.0.5" ], "ExposedPorts": { "443/tcp": {}, "80/tcp": {} }, ` The pod and service both start and run ok. However I am unable to access the running Pydio site on any of the master, minion or frontend ips. Note: I am running a modified version of the this docker container: https://registry.hub.docker.com/u/kdelfour/pydio-docker/ My container has been tested and it runs as expected. You should see the login screen once it is running. Please let me know if I can provide any other information. Thanks again.
After you build the docker image, you can "side-load" it into your locally available images by running docker load -i /path/to/image.tar. Once you've done this, Kubernetes will be able to load the image without reaching out to an external hub.
###Instruction: I started Kubernetes master and minion on local machine using Vagrant. I can create a json file for my Kubernetes pod where I can start several public containers. However, one Docker container is local one, ontop on java:8-jdk, configured with DockerFile. How can I reference this local Docker container in the kubernetes json pod so Kubernetes can run it? In other words, does Kubernetes support docker build ;) ### Response: After you build the docker image, you can "side-load" it into your locally available images by running docker load -i /path/to/image.tar. Once you've done this, Kubernetes will be able to load the image without reaching out to an external hub.
I started Kubernetes master and minion on local machine using Vagrant. I can create a json file for my Kubernetes pod where I can start several public containers. However, one Docker container is local one, ontop on java:8-jdk, configured with DockerFile. How can I reference this local Docker container in the kubernetes json pod so Kubernetes can run it? In other words, does Kubernetes support docker build ;)
How are the replica set admin helpers implemented? The rs.* replica set admin helpers in the mongo shell are wrappers for MongoDB commands which you can send from any driver. You can see which command(s) each shell helper wraps by referring to the MongoDB documentation: rs.initiate() provides a wrapper around the replSetInitiate database command. rs.add() provides a wrapper around some of the functionality of the replSetReconfig database command and the corresponding mongo shell helper rs.reconfig(). rs.conf() wraps the replSetGetConfig database command. Note that the mongo shell helpers may do some extra validation or manipulation of configs as they are intended to be used via the interactive mongo shell. You can confirm how any of the shell helpers are implemented by invoking the command in the shell without trailing parentheses, eg: > rs.initiate function (c) { return db._adminCommand({ replSetInitiate: c }); } Calling replica set database commands from Node.js The equivalent logic can be implemented via the Node.js driver API using command(): // Rough equivalent of rs.initiate() var MongoClient = require('mongodb').MongoClient; MongoClient.connect('mongodb://localhost:27017/test', function(err, db) { // Use the admin database for commands var adminDb = db.admin(); // Default replica set conf var conf = {}; adminDb.command({replSetInitiate: conf}, function(err, info) { console.log(info); }); }); Note: it doesn't have to be from a node application, if someone knows of another way of getting the same thing done, please share your thoughts. Rather than reimplementing the replica set helpers in Node.js, you could invoke a mongo shell with the --eval command to run the shell helper (tip: include --quiet to suppress unnecessary messages). For example, calling from your Node app: var exec = require('child_process').exec; var rsAdmin = exec('mongo --eval "var res = rs.initiate(); printjson(res)" --quiet', function (error, stdout, stderr) { // output is in stdout console.log(stdout); });
###Instruction: I'm looking to automate the process of setting up a MongoDb replica set via a sidecar when using Docker and Kubernetes. The above setup isn't terribly important, what it boils down to is that I need to be able to call the mongo replica set commands (e.g. rs.initiate(), rs.add('anotherserver'), rs.conf(), rs.reconfig(), etc) from a node.js application. Note: it doesn't have to be from a node application, if someone knows of another way of getting the same thing done, please share your thoughts. UPDATE: I was able to get this working and have made the sidecar open source for others to use. https://github.com/leportlabs/mongo-k8s-sidecar https://registry.hub.docker.com/u/leportlabs/mongo-k8s-sidecar ### Response: How are the replica set admin helpers implemented? The rs.* replica set admin helpers in the mongo shell are wrappers for MongoDB commands which you can send from any driver. You can see which command(s) each shell helper wraps by referring to the MongoDB documentation: rs.initiate() provides a wrapper around the replSetInitiate database command. rs.add() provides a wrapper around some of the functionality of the replSetReconfig database command and the corresponding mongo shell helper rs.reconfig(). rs.conf() wraps the replSetGetConfig database command. Note that the mongo shell helpers may do some extra validation or manipulation of configs as they are intended to be used via the interactive mongo shell. You can confirm how any of the shell helpers are implemented by invoking the command in the shell without trailing parentheses, eg: > rs.initiate function (c) { return db._adminCommand({ replSetInitiate: c }); } Calling replica set database commands from Node.js The equivalent logic can be implemented via the Node.js driver API using command(): // Rough equivalent of rs.initiate() var MongoClient = require('mongodb').MongoClient; MongoClient.connect('mongodb://localhost:27017/test', function(err, db) { // Use the admin database for commands var adminDb = db.admin(); // Default replica set conf var conf = {}; adminDb.command({replSetInitiate: conf}, function(err, info) { console.log(info); }); }); Note: it doesn't have to be from a node application, if someone knows of another way of getting the same thing done, please share your thoughts. Rather than reimplementing the replica set helpers in Node.js, you could invoke a mongo shell with the --eval command to run the shell helper (tip: include --quiet to suppress unnecessary messages). For example, calling from your Node app: var exec = require('child_process').exec; var rsAdmin = exec('mongo --eval "var res = rs.initiate(); printjson(res)" --quiet', function (error, stdout, stderr) { // output is in stdout console.log(stdout); });
I'm looking to automate the process of setting up a MongoDb replica set via a sidecar when using Docker and Kubernetes. The above setup isn't terribly important, what it boils down to is that I need to be able to call the mongo replica set commands (e.g. rs.initiate(), rs.add('anotherserver'), rs.conf(), rs.reconfig(), etc) from a node.js application. Note: it doesn't have to be from a node application, if someone knows of another way of getting the same thing done, please share your thoughts. UPDATE: I was able to get this working and have made the sidecar open source for others to use. https://github.com/leportlabs/mongo-k8s-sidecar https://registry.hub.docker.com/u/leportlabs/mongo-k8s-sidecar
Update: kubectl rolling-update has been deprecated and the replacement command is kubectl rollout. Also note that since I wrote the original answer the Deployment resource has been added and is a better choice than ReplicaSets as the rolling update is performed server side instead of by the client. You should use kubectl rolling-update. We recently added a feature to do a "simple rolling update" which will update the image in a replication controller without renaming it. It's the last example shown in the kubectl help rolling-update output: // Update the pods of frontend by just changing the image, and keeping the old name $ kubectl rolling-update frontend --image=image:v2 This command also supports recovery -- if you cancel your update and restart it later, it will resume from where it left off. Even though it creates a new replication controller behind the scenes, at the end of the update the new replication controller takes the name of the old replication controller so it appears as pure update rather than switching to an entirely new replication controller.
###Instruction: What is the preferred way of updating a set of pods (e.g. after making code changes & pushing underlying docker image to docker hub) controlled by a replication controller in kubernetes cluster? I can see 2 ways: Deleting & re-creating replication controller manually Using kubectl rolling-update With the rolling-update I have to change the replication controller name. Since I'm storing replication controller definition in YAML file and not generating it manually, having to change the file to push out a code update seems to bring about bad habits like alternating between 2 names for the replication controller (e.g. controllerA and controllerB) to avoid name conflict. What is the better way? ### Response: Update: kubectl rolling-update has been deprecated and the replacement command is kubectl rollout. Also note that since I wrote the original answer the Deployment resource has been added and is a better choice than ReplicaSets as the rolling update is performed server side instead of by the client. You should use kubectl rolling-update. We recently added a feature to do a "simple rolling update" which will update the image in a replication controller without renaming it. It's the last example shown in the kubectl help rolling-update output: // Update the pods of frontend by just changing the image, and keeping the old name $ kubectl rolling-update frontend --image=image:v2 This command also supports recovery -- if you cancel your update and restart it later, it will resume from where it left off. Even though it creates a new replication controller behind the scenes, at the end of the update the new replication controller takes the name of the old replication controller so it appears as pure update rather than switching to an entirely new replication controller.
What is the preferred way of updating a set of pods (e.g. after making code changes & pushing underlying docker image to docker hub) controlled by a replication controller in kubernetes cluster? I can see 2 ways: Deleting & re-creating replication controller manually Using kubectl rolling-update With the rolling-update I have to change the replication controller name. Since I'm storing replication controller definition in YAML file and not generating it manually, having to change the file to push out a code update seems to bring about bad habits like alternating between 2 names for the replication controller (e.g. controllerA and controllerB) to avoid name conflict. What is the better way?
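The update in the answer points at Deployments as the successor to the client-side rolling-update; with a Deployment the rolling update is driven server-side and the YAML file you keep in version control never needs a name change. A minimal sketch (names, replica count and image tag are placeholders):

```yaml
# Hypothetical Deployment: editing the image tag and re-applying this file
# performs the rolling update, replacing the old kubectl rolling-update flow.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example/frontend:v2   # bump this tag and re-apply to roll out
```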
Turns out the example master config is looking for etcd2.service, while the actual file is etcd.service, so I changed it in the example units and everything worked. EDIT: The reason why this worked, and why it was an issue to begin with, is that I was using the CoreOS Vagrant box for Parallels, which is roughly 300 builds behind the current stable CoreOS build, so it was missing etcd2 altogether.
###Instruction: So I'm trying to setup a master Kubernetes node on coreos in vagrant. I'm using the example master cloud-config, found here https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/master.yaml with the addition of this as the first units: - name: etcd.service command: start - name: fleet.service command: start - name: docker-tcp.socket command: start enable: true content: | [Unit] Description=Docker Socket for the API [Socket] ListenStream=2375 Service=docker.service BindIPv6Only=both [Install] WantedBy=sockets.target Once I vagrant up and vagrant ssh, I run sudo systemctl status kube-apiserver and find that kube-apiserver is down due to the fact it can't find etcd.service; however when I do ps -ef | grep etcd etcd is clearly running. Is there some specific location for etcd.service in systemd or do I have to add a content field to the unit in the cloud-config or something else? ### Response: Turns out the example master config is looking for etcd2.service, while the actual file is etcd.service so I changed it in the example units and everything worked. EDIT The reason why this worked and was an issue to begin with was that I was using the coresos vagrant box for parallels which is 300~ builds old from the current stable coreos build so it was missing etcd2 all together.
So I'm trying to setup a master Kubernetes node on coreos in vagrant. I'm using the example master cloud-config, found here https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/master.yaml with the addition of this as the first units: - name: etcd.service command: start - name: fleet.service command: start - name: docker-tcp.socket command: start enable: true content: | [Unit] Description=Docker Socket for the API [Socket] ListenStream=2375 Service=docker.service BindIPv6Only=both [Install] WantedBy=sockets.target Once I vagrant up and vagrant ssh, I run sudo systemctl status kube-apiserver and find that kube-apiserver is down due to the fact it can't find etcd.service; however when I do ps -ef | grep etcd etcd is clearly running. Is there some specific location for etcd.service in systemd or do I have to add a content field to the unit in the cloud-config or something else?
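For an up-to-date CoreOS image the unit to reference is etcd2.service; on the old Parallels box described in the edit, pointing everything at etcd.service was the equivalent fix. A hedged fragment of the master cloud-config for the current case (the remaining units are assumed unchanged from the example):

```yaml
# Relevant slice of the master cloud-config on a recent CoreOS build, where
# the bundled unit is etcd2.service rather than etcd.service.
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```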
TL;DR: Use the Internal IP of your node as the public IP in your service definition. If you enable verbose logging on the kube-proxy you will see that it appears to be creating the appropriate IP tables rule: I0602 04:07:32.046823 24360 roundrobin.go:98] LoadBalancerRR service "default/app-frontend-service:" did not exist, created I0602 04:07:32.047153 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 10.119.244.130/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970] I0602 04:07:32.048446 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 10.119.244.130:80 I0602 04:07:32.049525 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970] I0602 04:07:32.050872 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970] I0602 04:07:32.052247 24360 proxier.go:595] Opened iptables from-containers portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80 I0602 04:07:32.053222 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970] I0602 04:07:32.054491 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970] I0602 04:07:32.055848 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80 Listing the iptables entries using -L -t shows the public IP turned into the reverse DNS name like you saw: Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- anywhere 10.119.240.2 /* default/kubernetes: */ tcp dpt:https redir ports 50353 REDIRECT tcp -- anywhere 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 54605 REDIRECT udp -- anywhere 10.119.240.10 /* default/kube-dns:dns */ udp dpt:domain redir ports 37723 REDIRECT tcp -- anywhere 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:domain redir ports 50126 REDIRECT tcp -- anywhere 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:http redir ports 36970 REDIRECT tcp -- anywhere 36.156.251.23.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 36970 But adding the -n option shows the IP address (by default, -L does a reverse lookup on the ip address, which is why you see the DNS name): Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- 0.0.0.0/0 10.119.240.2 /* default/kubernetes: */ tcp dpt:443 redir ports 50353 REDIRECT tcp -- 0.0.0.0/0 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:80 redir ports 54605 REDIRECT udp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns */ udp dpt:53 redir ports 37723 REDIRECT tcp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:53 redir ports 50126 REDIRECT tcp -- 0.0.0.0/0 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970 REDIRECT tcp -- 0.0.0.0/0 23.251.156.36 /* default/app-frontend-service: */ tcp dpt:80 redir ports 
36970 At this point, you can access the service from within the cluster using both the internal and external IPs: $ curl 10.119.244.130:80 app-frontend-5pl5s $ curl 23.251.156.36:80 app-frontend-5pl5s Without adding a firewall rule, attempting to connect to the public ip remotely times out. If you add a firewall rule then you will reliably get connection refused: $ curl 23.251.156.36 curl: (7) Failed to connect to 23.251.156.36 port 80: Connection refused If you enable some iptables logging: sudo iptables -t nat -I KUBE-PORTALS-CONTAINER -m tcp -p tcp --dport 80 -j LOG --log-prefix "WTF: " And then grep the output of dmesg for WTF it's clear that the packets are arriving on the 10. IP address of the VM rather than the ephemeral external IP address that had been set as the public IP on the service. It turns out that the problem is that GCE has two types of external IPs: ForwardingRules (which forward with the DSTIP intact) and 1-to-1 NAT (which actually rewrites the DSTIP to the internal IP). The external IP of the VM is the later type so when the node receives the packets the IP tables rule doesn't match. The fix is actually pretty simple (but non-intuitive): Use the Internal IP of your node as the public IP in your service definition. After updating your service.yaml file to set publicIPs to the Internal IP (e.g. 10.240.121.42) you will be able to hit your application from outside of the GCE network.
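A rough sketch of that workaround, reusing the service.yaml mentioned above (the IP shown is only an example):

# find the node's internal IP (the INTERNAL_IP column)
gcloud compute instances list
# edit service.yaml so publicIPs lists that internal address, e.g.
#   publicIPs:
#     - 10.240.121.42
# then recreate the service
kubectl delete -f service.yaml
kubectl create -f service.yaml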
###Instruction: I'm using Google's Container Engine service, and got a pod running a server listening on port 3000. I set up the service to connect port 80 to that pod's port 3000. I am able to curl the service using its local and public ip from within the node, but not from outside. I set up a firewall rule to allow port 80 and send it to the node, but I keep getting 'connection refused' from outside the network. I'm trying to do this without a forwarding rule, since there's only one pod and it looked like forwarding rules cost money and do load balancing. I think the firewall rule works, because when I add the createExternalLoadBalancer: true to the service's spec, the external IP created by the forwarding rule works as expected. Do I need to do something else? Set up a route or something? controller.yaml kind: ReplicationController apiVersion: v1beta3 metadata: name: app-frontend labels: name: app-frontend app: app role: frontend spec: replicas: 1 selector: name: app-frontend template: metadata: labels: name: app-frontend app: app role: frontend spec: containers: - name: node-frontend image: gcr.io/project_id/app-frontend ports: - name: app-frontend-port containerPort: 3000 targetPort: 3000 protocol: TCP service.yaml kind: Service apiVersion: v1beta3 metadata: name: app-frontend-service labels: name: app-frontend-service app: app role: frontend spec: ports: - port: 80 targetPort: app-frontend-port protocol: TCP publicIPs: - 123.45.67.89 selector: name: app-frontend Edit (additional details): Creating this service adds these additional rules, found when I run iptables -L -t nat Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http redir ports 56859 REDIRECT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 56859 Chain KUBE-PORTALS-HOST (1 references) target prot opt source destination DNAT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859 DNAT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859 I don't fully understand iptables, so I'm not sure how the destination port matches my service. I found that the DNS for 89.67.45.123.bc.googleusercontent.com resolves to 123.45.67.89. kubectl get services shows the IP address and port I specified: NAME IP(S) PORT(S) app-frontend-service 10.247.243.151 80/TCP 123.45.67.89 Nothing recent from external IPs is showing up in /var/log/kube-proxy.log ### Response: TL;DR: Use the Internal IP of your node as the public IP in your service definition. 
If you enable verbose logging on the kube-proxy you will see that it appears to be creating the appropriate IP tables rule: I0602 04:07:32.046823 24360 roundrobin.go:98] LoadBalancerRR service "default/app-frontend-service:" did not exist, created I0602 04:07:32.047153 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 10.119.244.130/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970] I0602 04:07:32.048446 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 10.119.244.130:80 I0602 04:07:32.049525 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970] I0602 04:07:32.050872 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970] I0602 04:07:32.052247 24360 proxier.go:595] Opened iptables from-containers portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80 I0602 04:07:32.053222 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970] I0602 04:07:32.054491 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970] I0602 04:07:32.055848 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80 Listing the iptables entries using -L -t shows the public IP turned into the reverse DNS name like you saw: Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- anywhere 10.119.240.2 /* default/kubernetes: */ tcp dpt:https redir ports 50353 REDIRECT tcp -- anywhere 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 54605 REDIRECT udp -- anywhere 10.119.240.10 /* default/kube-dns:dns */ udp dpt:domain redir ports 37723 REDIRECT tcp -- anywhere 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:domain redir ports 50126 REDIRECT tcp -- anywhere 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:http redir ports 36970 REDIRECT tcp -- anywhere 36.156.251.23.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 36970 But adding the -n option shows the IP address (by default, -L does a reverse lookup on the ip address, which is why you see the DNS name): Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- 0.0.0.0/0 10.119.240.2 /* default/kubernetes: */ tcp dpt:443 redir ports 50353 REDIRECT tcp -- 0.0.0.0/0 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:80 redir ports 54605 REDIRECT udp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns */ udp dpt:53 redir ports 37723 REDIRECT tcp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:53 redir ports 50126 REDIRECT tcp -- 0.0.0.0/0 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970 REDIRECT tcp -- 0.0.0.0/0 23.251.156.36 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970 At this point, you can access the service from within the cluster using both the 
internal and external IPs: $ curl 10.119.244.130:80 app-frontend-5pl5s $ curl 23.251.156.36:80 app-frontend-5pl5s Without adding a firewall rule, attempting to connect to the public ip remotely times out. If you add a firewall rule then you will reliably get connection refused: $ curl 23.251.156.36 curl: (7) Failed to connect to 23.251.156.36 port 80: Connection refused If you enable some iptables logging: sudo iptables -t nat -I KUBE-PORTALS-CONTAINER -m tcp -p tcp --dport 80 -j LOG --log-prefix "WTF: " And then grep the output of dmesg for WTF it's clear that the packets are arriving on the 10. IP address of the VM rather than the ephemeral external IP address that had been set as the public IP on the service. It turns out that the problem is that GCE has two types of external IPs: ForwardingRules (which forward with the DSTIP intact) and 1-to-1 NAT (which actually rewrites the DSTIP to the internal IP). The external IP of the VM is the later type so when the node receives the packets the IP tables rule doesn't match. The fix is actually pretty simple (but non-intuitive): Use the Internal IP of your node as the public IP in your service definition. After updating your service.yaml file to set publicIPs to the Internal IP (e.g. 10.240.121.42) you will be able to hit your application from outside of the GCE network.
I'm using Google's Container Engine service, and got a pod running a server listening on port 3000. I set up the service to connect port 80 to that pod's port 3000. I am able to curl the service using its local and public ip from within the node, but not from outside. I set up a firewall rule to allow port 80 and send it to the node, but I keep getting 'connection refused' from outside the network. I'm trying to do this without a forwarding rule, since there's only one pod and it looked like forwarding rules cost money and do load balancing. I think the firewall rule works, because when I add the createExternalLoadBalancer: true to the service's spec, the external IP created by the forwarding rule works as expected. Do I need to do something else? Set up a route or something? controller.yaml kind: ReplicationController apiVersion: v1beta3 metadata: name: app-frontend labels: name: app-frontend app: app role: frontend spec: replicas: 1 selector: name: app-frontend template: metadata: labels: name: app-frontend app: app role: frontend spec: containers: - name: node-frontend image: gcr.io/project_id/app-frontend ports: - name: app-frontend-port containerPort: 3000 targetPort: 3000 protocol: TCP service.yaml kind: Service apiVersion: v1beta3 metadata: name: app-frontend-service labels: name: app-frontend-service app: app role: frontend spec: ports: - port: 80 targetPort: app-frontend-port protocol: TCP publicIPs: - 123.45.67.89 selector: name: app-frontend Edit (additional details): Creating this service adds these additional rules, found when I run iptables -L -t nat Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http redir ports 56859 REDIRECT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 56859 Chain KUBE-PORTALS-HOST (1 references) target prot opt source destination DNAT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859 DNAT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859 I don't fully understand iptables, so I'm not sure how the destination port matches my service. I found that the DNS for 89.67.45.123.bc.googleusercontent.com resolves to 123.45.67.89. kubectl get services shows the IP address and port I specified: NAME IP(S) PORT(S) app-frontend-service 10.247.243.151 80/TCP 123.45.67.89 Nothing recent from external IPs is showing up in /var/log/kube-proxy.log
You can set the flag --insecure-bind-address=0.0.0.0 when starting kube-apiserver to allow access to the unauthenticated api endpoint running on port 8080 to your network (by default it is only accessible on localhost).
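A sketch of what that looks like in practice; the exact flag spellings varied between releases, so treat these as assumptions to check against your binaries' --help output:

# on the master: expose the unauthenticated API on all interfaces
kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 ...
# on each minion: point kubelet and kube-proxy at the plain-HTTP endpoint
kubelet --api_servers=http://<master-ip>:8080 ...
kube-proxy --master=http://<master-ip>:8080

This avoids TLS entirely, which is only reasonable for an isolated dev network.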
###Instruction: I'm trying to setup a kubernetes cluster for a development environment (local vms). Because it's development I'm not using working certs for the api-server. It would seem I have to use the secure connection in order to connect minion daemons such as kube-proxy and kubelet to the master's kube-apiserver. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the unsecure connection or ignoring that the certs are bad, I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. Etcd is working, it shows any entry from both master and minions and the logs show attempts at handshakes but definitely failing due to bad certs. ### Response: You can set the flag --insecure-bind-address=0.0.0.0 when starting kube-apiserver to allow access to the unauthenticated api endpoint running on port 8080 to your network (by default it is only accessible on localhost).
I'm trying to setup a kubernetes cluster for a development environment (local vms). Because it's development I'm not using working certs for the api-server. It would seem I have to use the secure connection in order to connect minion daemons such as kube-proxy and kubelet to the master's kube-apiserver. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the unsecure connection or ignoring that the certs are bad, I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. Etcd is working, it shows any entry from both master and minions and the logs show attempts at handshakes but definitely failing due to bad certs.
All the containers in a pod are bound to the same network namespace. This means that (a) they all have the same IP address and (b) that localhost is the same across all the containers. In other words, if you have Apache running in one container in a pod and MySQL running in another, you can access MySQL at localhost:3306 from the Apache container (and you could access Apache at localhost:80 from the MySQL container). While the containers share networking, they do not share filesystems. If you want to share files between containers you will need to make use of volumes. There is a simple volume example here.
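A minimal two-container pod to illustrate this, written in the v1beta3 style used elsewhere in these answers (names and images are placeholders; nginx ends up reaching php-fpm at 127.0.0.1:9000):

cat <<'EOF' > web-pod.yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: php-fpm
    image: php:5-fpm
    ports:
    - containerPort: 9000
EOF
kubectl create -f web-pod.yaml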
###Instruction: When I define multiple containers in a pod/pod template, like one container running nginx and another php-fpm, how can they access each other? Do I have to define some links in the definition (I could not find docs explaining all available config options) or can they access each other by default? If yes, what values do I have to put in the config files? I read about sharing a network namespace, but I'm not sure what that really means, and I also could not find any example for it. ### Response: All the containers in a pod are bound to the same network namespace. This means that (a) they all have the same IP address and (b) that localhost is the same across all the containers. In other words, if you have Apache running in one container in a pod and MySQL running in another, you can access MySQL at localhost:3306 from the Apache container (and you could access Apache at localhost:80 from the MySQL container). While the containers share networking, they do not share filesystems. If you want to share files between containers you will need to make use of volumes. There is a simple volume example here.
When I define multiple containers in a pod/pod template, like one container running nginx and another php-fpm, how can they access each other? Do I have to define some links in the definition (I could not find docs explaining all available config options) or can they access each other by default? If yes, what values do I have to put in the config files? I read about sharing a network namespace, but I'm not sure what that really means, and I also could not find any example for it.
The reason for this issue was that I hadn't set the kubernetes_master environment variable properly. As there is an SSH tunnel between the kubectl client and the API, the kubernetes master environment variable should be set to localhost.
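For example (host and port are placeholders for whatever your tunnel uses; the KUBERNETES_MASTER variable is honored by kubectl of this vintage, while newer clients use kubectl config instead):

# forward a local port to the apiserver through the EC2 host
ssh -f -N -L 8080:127.0.0.1:8080 core@<ec2-public-ip>
# point kubectl at the local end of the tunnel
export KUBERNETES_MASTER=http://localhost:8080
kubectl api-versions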
###Instruction: I am trying to run kubernetes on EC2 and I used the CoreOS alpha channel AMI. I configured a Kubectl ssh tunnel for the communication between the Kubectl client and the Kubernetes API. But when I try the kubectl api-versions command, I am getting the following error. Couldn't get available api versions from server: Get http://MyIP:8080/api: dial tcp MyIP:8080: connection refused MyIP - this has been set accordingly. What could be the reason for this? ### Response: The reason for this issue was that I hadn't set the kubernetes_master environment variable properly. As there is an SSH tunnel between the kubectl client and the API, the kubernetes master environment variable should be set to localhost.
I am trying to run kubernetes on EC2 and I used the CoreOS alpha channel AMI. I configured a Kubectl ssh tunnel for the communication between the Kubectl client and the Kubernetes API. But when I try the kubectl api-versions command, I am getting the following error. Couldn't get available api versions from server: Get http://MyIP:8080/api: dial tcp MyIP:8080: connection refused MyIP - this has been set accordingly. What could be the reason for this?
In the official documentation I found this: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod Apparently I was missing a security token that I didn't need in a previous version of Kubernetes. From that, I devised what I think is a simpler solution than running a proxy or installing golang on my container. See this example that gets the information from the API for the current container: KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME I also include a simple binary, jq (http://stedolan.github.io/jq/download/), to parse the JSON for use in bash scripts.
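For instance, piping the same call through jq to pull out a single field (the field path is just an example):

KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME \
  | jq -r '.status.podIP'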
###Instruction: I used to be able to curl https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1beta3/namespaces/default/ as my base URL, but in kubernetes 0.18.0 it gives me "unauthorized". The strange thing is that if I used the external IP address of the API machine (http://172.17.8.101:8080/api/v1beta3/namespaces/default/), it works just fine. ### Response: In the official documentation I found this: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod Apparently I was missing a security token that I didn't need in a previous version of Kubernetes. From that, I devised what I think is a simpler solution than running a proxy or installing golang on my container. See this example that gets the information, from the api, for the current container: KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME I also use include a simple binary, jq (http://stedolan.github.io/jq/download/), to parse the json for use in bash scripts.
I used to be able to curl https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1beta3/namespaces/default/ as my base URL, but in kubernetes 0.18.0 it gives me "unauthorized". The strange thing is that if I used the external IP address of the API machine (http://172.17.8.101:8080/api/v1beta3/namespaces/default/), it works just fine.
After a bit of searching I figured out a solution to the problem. A lot of the AWS configuration goes into the config-default.sh file. However, there is no option to set the Access Key ID or Secret Access Key there (which might make some sense, as the cluster/aws/config-default.sh file is part of the source code and the credentials should be saved somewhere else, in a safer place). I realized after reading the kubernetes/cluster/aws/util.sh source code that kubernetes actually calls the aws command line tool internally to make changes to AWS infrastructure. So having the AWS command line tool installed and configured correctly will do the job of solving this problem. Once I issued the following command: aws configure and answered the prompts for entering the ID/Key, it saved these values to this file: ~/.aws/credentials See details here This solved my problem and kube-up.sh worked perfectly after I did this.
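Roughly, the sequence looks like this (the provider value assumes you are bringing the cluster up on AWS):

# store credentials where the aws CLI (and therefore kube-up.sh) can find them
aws configure             # prompts for Access Key ID, Secret Access Key, region
cat ~/.aws/credentials    # sanity check
# then re-run the cluster bring-up against AWS
export KUBERNETES_PROVIDER=aws
./cluster/kube-up.sh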
###Instruction: I am trying to setup a Kubernetes cluster on my AWS account using the kube-up.sh setup script that is bundled with kubernetes source at kubernetes/cluster/kube-up.sh But when I ran kube-up.sh I am getting the following error: pranjal:~/go/src/github.com/GoogleCloudPlatform/kubernetes/cluster$ ./kube-up.sh Starting cluster using os distro: ubuntu Starting cluster using provider: aws ... calling verify-prereqs ... calling kube-up Uploading to Amazon S3 Creating kubernetes-staging-6b790c161af2b2c39939b542c73b775a make_bucket failed: s3://kubernetes-staging-6b790c161af2b2c39939b542c73b775 I am sure that my the tool is not able to read my AWS Access Key and Secret. I stored it in .aws/config. I am not sure where I should set it for it to be able to read it correctly and work. ### Response: After a bit of searching I figured out a solution to the problem. A lot of the AWS configuration goes into the config-default.sh file. However there is no option to set the Access Key ID, Secret Access Key there (which might make some sense as cluster/aws/config-default.sh file is a part of the source code and the credentials should be saved somewhere else, at a safer place) I realized after seeing the kubernetes/cluster/aws/util.sh source code that kubernetes actually calls the aws command line tool internally to make changes to AWS infrastructure. So having the AWS command line tool installed and configured correctly will do the job of solving this problem. Once I issued the following command: aws configure and answered prompts for entering ID/Key it saved these values to this file: ~/.aws/credentials See details here This solved my problem and kube-up.sh worked perfectly after I did this.
I am trying to setup a Kubernetes cluster on my AWS account using the kube-up.sh setup script that is bundled with the kubernetes source at kubernetes/cluster/kube-up.sh But when I run kube-up.sh I get the following error: pranjal:~/go/src/github.com/GoogleCloudPlatform/kubernetes/cluster$ ./kube-up.sh Starting cluster using os distro: ubuntu Starting cluster using provider: aws ... calling verify-prereqs ... calling kube-up Uploading to Amazon S3 Creating kubernetes-staging-6b790c161af2b2c39939b542c73b775a make_bucket failed: s3://kubernetes-staging-6b790c161af2b2c39939b542c73b775 I am sure that the tool is not able to read my AWS Access Key and Secret. I stored it in .aws/config. I am not sure where I should set it for it to be able to read it correctly and work.
If a TCP connection can't be opened to the pod's IP on the given port, a different pod will be connected to instead. In other words, as long as the pod closes the socket that was listening on the port, no requests should be sent to it after that point.
###Instruction: I have a piece of code to run on a k8s cluster. I need to shut down those k8s nodes when all of my code running in the pods has finished. I let my code serve on a port until its job is completed, and I keep the program running to avoid the replication controller starting another pod, and I defined a service in k8s to route the requests. Externally, I wrote a script to ping the service until the service can't respond with code 200, then I will shut down those k8s nodes to save resources. My question is: when my code in the pod no longer serves the port, will the k8s service still route incoming requests to that pod or not? And, is there any other way to achieve the equivalent result? ### Response: If a TCP connection can't be opened to the pod's IP on the given port, a different pod will be connected to instead. In other words, as long as the pod closes the socket that was listening on the port, no requests should be sent to it after that point.
I have a piece of code to run on a k8s cluster. I need to shut down those k8s nodes when all of my code running in the pods has finished. I let my code serve on a port until its job is completed, and I keep the program running to avoid the replication controller starting another pod, and I defined a service in k8s to route the requests. Externally, I wrote a script to ping the service until the service can't respond with code 200, then I will shut down those k8s nodes to save resources. My question is: when my code in the pod no longer serves the port, will the k8s service still route incoming requests to that pod or not? And, is there any other way to achieve the equivalent result?
It appears that you need to run gcloud components update kubectl. I'm not sure why (this wasn't previously required).
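After that, a quick sanity check (the path is just the usual SDK layout):

gcloud components update kubectl
which kubectl     # should resolve to .../google-cloud-sdk/bin/kubectl
kubectl version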
###Instruction: I installed gcloud SDK and everything went fine. However I'm trying to use kubectl and it hasn't been installed. I run the following commands: gcloud components update preview All components are up to date. gcloud components update alpha All components are up to date. So seems like everything is up to date. However command kubectl is not there. This is not a PATH issue because I checked the file in gcloud's bin folder and it is there as it is in other of my installations where it was installed correctly. I'm running Ubuntu 14.04 on AWS. ### Response: It appears that you need to run gcloud components update kubectl. I'm not sure why (this didn't used to be required).
I installed gcloud SDK and everything went fine. However I'm trying to use kubectl and it hasn't been installed. I run the following commands: gcloud components update preview All components are up to date. gcloud components update alpha All components are up to date. So seems like everything is up to date. However command kubectl is not there. This is not a PATH issue because I checked the file in gcloud's bin folder and it is there as it is in other of my installations where it was installed correctly. I'm running Ubuntu 14.04 on AWS.
I found this problem on GitHub: https://github.com/GoogleCloudPlatform/kubernetes/issues/9270 Here it was suggested to use the code in HEAD. I did that and now it is working fine.
###Instruction: I installed a Kubernetes cluster by following the instructions here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md Everything looks fine the first time. I'm able to see the nodes, pods, deploy new pods, etc. The problem shows up when I stop the cluster and try to start it again. I'm restarting the cluster as indicated in the documentation: vagrant halt ./cluster/kube-up.sh When I do that I see the following error: Comment: Source file salt://kubelet/kubeconfig not found ... Minion did not return. [No response] Then, when I check the status of the nodes it says the minion is NotReady. If I have VirtualBox open while I run kube-up.sh, I see that the error is thrown before the minion VM is started. So it sounds like the minion is not running when it tries to configure it. That's just an observation; I'm not sure what the problem is. In order to solve this issue I have to destroy the cluster and create it again, which downloads and installs everything again, making it very slow to use. ### Response: I found this problem on GitHub: https://github.com/GoogleCloudPlatform/kubernetes/issues/9270 Here it was suggested to use the code in HEAD. I did that and now it is working fine.
I installed a Kubernetes cluster by following the instructions here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md Everything looks fine the first time. I'm able to see the nodes, pods, deploy new pods, etc. The problem shows up when I stop the cluster and try to start it again. I'm restarting the cluster as indicated in the documentation: vagrant halt ./cluster/kube-up.sh When I do that I see the following error: Comment: Source file salt://kubelet/kubeconfig not found ... Minion did not return. [No response] Then, when I check the status of the nodes it says the minion is NotReady. If I have VirtualBox open while I run kube-up.sh, I see that the error is thrown before the minion VM is started. So it sounds like the minion is not running when it tries to configure it. That's just an observation; I'm not sure what the problem is. In order to solve this issue I have to destroy the cluster and create it again, which downloads and installs everything again, making it very slow to use.
Kubernetes supports scaling, but not auto-scaling. The addition and removal of pods in a Kubernetes cluster is performed by replication controllers. The size of a replication controller can be changed by updating the replicas field. This can be performed in a couple of ways: Using kubectl, you can use the scale command. Using the Kubernetes API, you can update your config with a new value in the replicas field. Kubernetes has been designed for auto-scaling to be handled by an external auto-scaler. This is discussed under responsibilities of the replication controller in the Kubernetes docs.
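For example, against a replication controller named frontend (the name and count are placeholders; older clients may spell this command kubectl resize instead of kubectl scale):

kubectl scale rc frontend --replicas=5
kubectl get rc frontend    # confirm the new desired count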
###Instruction: I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I want to run Kubernetes on my public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to make more compute for new pods that might be needed? Or will it only use a certain number of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod - I'm interested to know where that pool of compute comes from. Is it configured when setting up Kubernetes? Or will it scale by itself and create new machines as needed? I hope I managed to be coherent. Thanks! ### Response: Kubernetes supports scaling, but not auto-scaling. The addition and removal of pods in a Kubernetes cluster is performed by replication controllers. The size of a replication controller can be changed by updating the replicas field. This can be performed in a couple of ways: Using kubectl, you can use the scale command. Using the Kubernetes API, you can update your config with a new value in the replicas field. Kubernetes has been designed for auto-scaling to be handled by an external auto-scaler. This is discussed under responsibilities of the replication controller in the Kubernetes docs.
I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I want to run Kubernetes on my public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to make more compute for new pods that might be needed? Or will it only use a certain number of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod - I'm interested to know where that pool of compute comes from. Is it configured when setting up Kubernetes? Or will it scale by itself and create new machines as needed? I hope I managed to be coherent. Thanks!
You can use the describe command: kubectl describe pod [podname] That will specify which containers are in the pod, along with other information.
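For example (the pod name is a placeholder; the jsonpath output option requires a reasonably recent client):

kubectl describe pod mypod
# or print just the container names:
kubectl get pod mypod -o jsonpath='{.spec.containers[*].name}'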
###Instruction: I am able to get a list of all pods running on a kubernetes cluster using: kubectl get pods How do I get all the containers running on a particular pod? ### Response: You can use the describe command: kubectl describe pod [podname] That will specify which containers are in the pod, along with other information.
I am able to get a list of all pods running on a kubernetes cluster using: kubectl get pods How do I get all the containers running on a particular pod?
If you want to access the container directly using the node VM's IP address, you need to specify a host port in addition to a container port, e.g. kubectl run-container wordpress --image=tutum/wordpress --port=80 --hostport=80 Alternatively, you can access wordpress via the proxy running on the master by running kubectl proxy and then pointing your web browser at http://localhost:8001/api/v1beta3/proxy/namespaces/default/pods/wordpress-3gaq6.
###Instruction: I'm trying to setup example from Running Wordpress with a Single Pod. I've done Before You Begin section: $ gcloud config list [compute] zone = europe-west1-c [core] account = user@email.com disable_usage_reporting = False project = com-project-default I've done the steps from the tutorial: "Step 1: Create your cluster" logs here "Step 2: Create your pod" logs here "Step 3: Allow external traffic" logs here More logs: $kubectl get pods - log (toggle text wrapping) $gcloud compute firewall-rules list - log So, when I try to connect to http://104.155.7.213/ I'm receiving "This web page is not available: ERR_CONNECTION_REFUSED". I tried to add "Allow HTTP traffic" explicitly to the node in Compute Engine VMs dashboard and also I tried to use "kubectl run" instead of deprecated "kubectl run-container", but it doesn't help. Also sometimes I'm receiving "last termination: exit code 2" (1 or 2) in "message" column when run "kubectl get pods" (but not this time)` Info: GKE from June 10, 2015 $ kubectl version Client Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.1", GitCommit:"befd1385e5af5f7516f75a27a2628272bb9e9f36", GitTreeState:"clean"} Server Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.2", GitCommit:"1f12b893876ad6c41396222693e37061f6e80fe1", GitTreeState:"clean"} $ gcloud version Google Cloud SDK 0.9.64 alpha 2015.06.02 bq 2.0.18 bq-nix 2.0.18 compute 2015.06.09 core 2015.06.09 core-nix 2015.06.02 dns 2015.06.02 gcloud 2015.06.09 gcutil-msg 2015.06.09 gsutil 4.13 gsutil-nix 4.12 kubectl kubectl-linux-x86_64 0.18.1 preview 2015.06.09 sql 2015.06.09 Thank you for your help in advance! ### Response: If you want to access the container directly using the node VM's IP address, you need to specify a host port in addition to a container port, e.g. kubectl run-container wordpress --image=tutum/wordpress --port=80 --hostport=80 Alternatively, you can access wordpress via the proxy running on the master by running kubectl proxy and then pointing your web browser at http://localhost:8001/api/v1beta3/proxy/namespaces/default/pods/wordpress-3gaq6.
I'm trying to setup example from Running Wordpress with a Single Pod. I've done Before You Begin section: $ gcloud config list [compute] zone = europe-west1-c [core] account = user@email.com disable_usage_reporting = False project = com-project-default I've done the steps from the tutorial: "Step 1: Create your cluster" logs here "Step 2: Create your pod" logs here "Step 3: Allow external traffic" logs here More logs: $kubectl get pods - log (toggle text wrapping) $gcloud compute firewall-rules list - log So, when I try to connect to http://104.155.7.213/ I'm receiving "This web page is not available: ERR_CONNECTION_REFUSED". I tried to add "Allow HTTP traffic" explicitly to the node in Compute Engine VMs dashboard and also I tried to use "kubectl run" instead of deprecated "kubectl run-container", but it doesn't help. Also sometimes I'm receiving "last termination: exit code 2" (1 or 2) in "message" column when run "kubectl get pods" (but not this time)` Info: GKE from June 10, 2015 $ kubectl version Client Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.1", GitCommit:"befd1385e5af5f7516f75a27a2628272bb9e9f36", GitTreeState:"clean"} Server Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.2", GitCommit:"1f12b893876ad6c41396222693e37061f6e80fe1", GitTreeState:"clean"} $ gcloud version Google Cloud SDK 0.9.64 alpha 2015.06.02 bq 2.0.18 bq-nix 2.0.18 compute 2015.06.09 core 2015.06.09 core-nix 2015.06.02 dns 2015.06.02 gcloud 2015.06.09 gcutil-msg 2015.06.09 gsutil 4.13 gsutil-nix 4.12 kubectl kubectl-linux-x86_64 0.18.1 preview 2015.06.09 sql 2015.06.09 Thank you for your help in advance!
Kubectl can automate the process of rolling updates for you. Check out the docs here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_rolling-update.md A rolling update of an existing replication controller foo running Docker image bar:1.0 to image bar:2.0 can be as simple as running kubectl rolling-update foo --image=bar:2.0.
###Instruction: I have a kubernetes cluster, and I am wondering what the best practice is for updating containers. I know the idea is to tear down the old containers and put up new ones, but is there a one-liner I can use? Do I have to remove the replication controller or pod(s) and then spin up new ones (pods or replication controllers)? With this I am using a self-hosted private registry that I know I have to build from the Dockerfile and then push to anyway; this I can automate with gulp (or any other build tool), but can I automate the kubernetes update/tear down and up? ### Response: Kubectl can automate the process of rolling updates for you. Check out the docs here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_rolling-update.md A rolling update of an existing replication controller foo running Docker image bar:1.0 to image bar:2.0 can be as simple as running kubectl rolling-update foo --image=bar:2.0.
I have a kubernetes cluster, and I am wondering what the best practice is for updating containers. I know the idea is to tear down the old containers and put up new ones, but is there a one-liner I can use? Do I have to remove the replication controller or pod(s) and then spin up new ones (pods or replication controllers)? With this I am using a self-hosted private registry that I know I have to build from the Dockerfile and then push to anyway; this I can automate with gulp (or any other build tool), but can I automate the kubernetes update/tear down and up?
The environment variables for a given service are put in every container that is started after the service was created. For example, if you create a pod foo and then later a service bar, the pod's containers won't have any environment variables for bar. If you instead create service bar and then a pod foo, the pod's containers should have environment variables something like: BAR_PORT=tcp://10.167.240.1:80 BAR_SERVICE_HOST=10.167.240.1 You can test this out by attaching a terminal to one of your containers, as explained here.
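A quick way to see exactly which variables a running container received (the pod name is a placeholder; the -p flag matches the kubectl exec syntax of this era):

kubectl exec -p my-node-pod -- env | grep -i service
# in Node.js these are read as plain variables, e.g. process.env.BAR_SERVICE_HOST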
###Instruction: I'm creating a kubernetes cluster, and in it I have several services. I know based on https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#discovering-services I have two options. use the environment variables set by the kubelet. use skydns I want to try to use the environment variables first before I go adding another dependency into the mix. However, I'm unsure where the environment variables are for each service. I haven't found them when doing env or sudo env on the kubelet. Are they within a certain container and/or pod? If so do I have to link the other pods to that one to get its environment variables for services? I have several NodeJS services in containers, so I'm wondering if talking to each service would require this to get the ip: process.env('SERVICE_X_PUBLIC_IPV4') once I have the environment variable thing sorted out. Not as important, but related, how does this all work across multiple nodes? ### Response: The environment variables for a given service are put in every container that is started after the service was created. For example, if you create a pod foo and then later a service bar, the pod's containers won't have any environment variables for bar. If you instead create service bar and then a pod foo, the pod's containers should have environment variables something like: BAR_PORT=tcp://10.167.240.1:80 BAR_SERVICE_HOST=10.167.240.1 You can test this out by attaching a terminal to one of your containers, as explained here.
I'm creating a kubernetes cluster, and in it I have several services. I know based on https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#discovering-services I have two options. use the environment variables set by the kubelet. use skydns I want to try to use the environment variables first before I go adding another dependency into the mix. However, I'm unsure where the environment variables are for each service. I haven't found them when doing env or sudo env on the kubelet. Are they within a certain container and/or pod? If so do I have to link the other pods to that one to get its environment variables for services? I have several NodeJS services in containers, so I'm wondering if talking to each service would require this to get the ip: process.env('SERVICE_X_PUBLIC_IPV4') once I have the environment variable thing sorted out. Not as important, but related, how does this all work across multiple nodes?
It is a known issue - https://github.com/kubernetes/kubernetes/issues/9180 The kubelet webserver times out.
###Instruction: I have a k8s cluster with 3 minions, a master, and haproxy in front. When I use kubectl exec -p $POD -i -t -- bash -il to access bash in the pod (it is a single container in this case) I get in, and after something like 5 mins I get dropped out of the terminal. If I re-enter the container I can see my old bash process running, with a new one started for my new connection. Is there a way to prevent this from happening? When I'm using docker exec it works fine and doesn't drop me, so I guess it comes from kubernetes. As a bonus question - is there a way to increase the characters per line when using kubectl exec? I get truncated output that is different from docker exec. Thanks in advance! ### Response: It is a known issue - https://github.com/kubernetes/kubernetes/issues/9180 The kubelet webserver times out.
I have a k8s cluster with 3 minions, a master, and haproxy in front. When I use kubectl exec -p $POD -i -t -- bash -il to access bash in the pod (it is a single container in this case) I get in, and after something like 5 mins I get dropped out of the terminal. If I re-enter the container I can see my old bash process running, with a new one started for my new connection. Is there a way to prevent this from happening? When I'm using docker exec it works fine and doesn't drop me, so I guess it comes from kubernetes. As a bonus question - is there a way to increase the characters per line when using kubectl exec? I get truncated output that is different from docker exec. Thanks in advance!
You can force exclusivity by creating pod definitions that are unable to schedule on the same machine. The easiest way to do that is to assign each pod the same host port. Once you have the same host port set for both of your pod definitions, if you create two replication controllers with two replicas each, then the scheduler will run 2 copies of 2 pods spread across 4 machines.
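Concretely, that means the ports section in each replication controller's pod template claims the same host port, something like the fragment below (the numbers are arbitrary):

ports:
- containerPort: 80
  hostPort: 31000

With two replicas per controller, the same hostPort in both templates, and four nodes, the scheduler cannot place two of these pods on one machine.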
###Instruction: I'd like to run two pods in exclusive nodes. For instance, I have 4 nodes (node-1, node-2, node-3, node-4) and 2 pods (pod-1, pod-2). I want only one pod to run in each node and each pod to run in two nodes, e.g. pod-1 in node-1 and node-2, pod-2 in node-3 and node-4. Is there a way to configure this way? ### Response: You can force exclusivity by creating pod definitions that are unable to schedule on the same machine. The easiest way to do that is to assign each pod the same host port. Once you have the same host port set for both of your pod definitions, if you create two replication controllers with two replicas each, then the scheduler will run 2 copies of 2 pods spread across 4 machines.
I'd like to run two pods in exclusive nodes. For instance, I have 4 nodes (node-1, node-2, node-3, node-4) and 2 pods (pod-1, pod-2). I want only one pod to run in each node and each pod to run in two nodes, e.g. pod-1 in node-1 and node-2, pod-2 in node-3 and node-4. Is there a way to configure this way?
It doesn't appear that local-up-cluster.sh supports DNS out of the box. For DNS to work, the kubelet needs to be passed the flags --cluster_dns=<ip-of-dns-service> and --cluster_domain=cluster.local at startup. These flags aren't included in the set of flags passed to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. To fix this, you can modify the script to add these two flags to the kubelet, and then when you create a DNS service, you need to make sure that you set the same IP address that you passed to the --cluster_dns flag as the portalIP field of the service spec (see an example here).
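Roughly, the change looks like this (the IP is a placeholder, and the exact kubelet invocation inside hack/local-up-cluster.sh differs between releases, so treat it as a sketch):

# the kubelet started by the script gains two flags
kubelet ... --cluster_dns=10.0.0.10 --cluster_domain=cluster.local
# and the skydns service spec must pin the matching address
#   portalIP: 10.0.0.10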
###Instruction: I follow the example at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns But I cannot get the nslookup output as the example. When execute kubectl exec busybox -- nslookup kubernetes It suppose to return Server: 10.0.0.10 Address 1: 10.0.0.10 Name: kubernetes Address 1: 10.0.0.1 But I only get nslookup: can't resolve 'kubernetes' Server: 10.0.2.3 Address 1: 10.0.2.3 error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1 My Kubernetes is running on a VM, and its ifconfig output is as below: docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99 inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0 inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:50 errors:0 dropped:0 overruns:0 frame:0 TX packets:34 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2899 (2.8 KB) TX bytes:2343 (2.3 KB) eth0 Link encap:Ethernet HWaddr 08:00:27:ed:09:81 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4735 errors:0 dropped:0 overruns:0 frame:0 TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:367445 (367.4 KB) TX bytes:280749 (280.7 KB) eth1 Link encap:Ethernet HWaddr 08:00:27:1f:0d:84 inet addr:192.168.144.17 Bcast:192.168.144.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:330 (330.0 B) TX bytes:1746 (1.7 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:127976 errors:0 dropped:0 overruns:0 frame:0 TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:13742978 (13.7 MB) TX bytes:13742978 (13.7 MB) veth142cdac Link encap:Ethernet HWaddr e2:b6:29:d1:f5:dc inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:18 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1336 (1.3 KB) TX bytes:1336 (1.3 KB) Here is the steps I tried to start the Kubernetes: vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh +++ [0623 11:18:47] Building go targets for linux/amd64: cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/kubelet cmd/hyperkube cmd/kubernetes plugin/cmd/kube-scheduler cmd/kubectl cmd/integration cmd/gendocs cmd/genman cmd/genbashcomp cmd/genconversion cmd/gendeepcopy examples/k8petstore/web-server github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test +++ [0623 11:18:52] Placing binaries curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused API SERVER port is free, proceeding... Starting etcd etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null Waiting for etcd to come up. +++ [0623 11:18:53] etcd: {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}} Waiting for apiserver to come up +++ [0623 11:18:55] apiserver: { "kind": "PodList", "apiVersion": "v1beta3", "metadata": { "selfLink": "/api/v1beta3/pods", "resourceVersion": "11" }, "items": [] } Local Kubernetes cluster is running. 
Press Ctrl-C to shut it down. Logs: /tmp/kube-apiserver.log /tmp/kube-controller-manager.log /tmp/kube-proxy.log /tmp/kube-scheduler.log /tmp/kubelet.log To start using your cluster, open up another terminal/tab and run: cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true cluster/kubectl.sh config set-context local --cluster=local cluster/kubectl.sh config use-context local cluster/kubectl.sh Then in a new terminal window, I executed: cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true cluster/kubectl.sh config set-context local --cluster=local cluster/kubectl.sh config use-context local After that, I created the busybox Pod as kubectl create -f busybox.yaml The content of the busybox.yaml is from https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md ### Response: It doesn't appear that local-cluster-up.sh supports DNS out of the box. For DNS to work, the kubelet needs to be passed the flags --cluster_dns=<ip-of-dns-service> and --cluster_domain=cluster.local at startup. This flag isn't included in the set of flags passed to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. To fix this, you can modify the script to add these two flags to the kubelet and then when you create a DNS service, you need to make sure that you set the same ip address that you passed to the --cluster_dns flag as the portalIP field of the service spec (see an example here).
I follow the example at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns But I cannot get the nslookup output as the example. When execute kubectl exec busybox -- nslookup kubernetes It suppose to return Server: 10.0.0.10 Address 1: 10.0.0.10 Name: kubernetes Address 1: 10.0.0.1 But I only get nslookup: can't resolve 'kubernetes' Server: 10.0.2.3 Address 1: 10.0.2.3 error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1 My Kubernetes is running on a VM, and its ifconfig output is as below: docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99 inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0 inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:50 errors:0 dropped:0 overruns:0 frame:0 TX packets:34 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2899 (2.8 KB) TX bytes:2343 (2.3 KB) eth0 Link encap:Ethernet HWaddr 08:00:27:ed:09:81 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4735 errors:0 dropped:0 overruns:0 frame:0 TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:367445 (367.4 KB) TX bytes:280749 (280.7 KB) eth1 Link encap:Ethernet HWaddr 08:00:27:1f:0d:84 inet addr:192.168.144.17 Bcast:192.168.144.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:330 (330.0 B) TX bytes:1746 (1.7 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:127976 errors:0 dropped:0 overruns:0 frame:0 TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:13742978 (13.7 MB) TX bytes:13742978 (13.7 MB) veth142cdac Link encap:Ethernet HWaddr e2:b6:29:d1:f5:dc inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:18 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1336 (1.3 KB) TX bytes:1336 (1.3 KB) Here is the steps I tried to start the Kubernetes: vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh +++ [0623 11:18:47] Building go targets for linux/amd64: cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/kubelet cmd/hyperkube cmd/kubernetes plugin/cmd/kube-scheduler cmd/kubectl cmd/integration cmd/gendocs cmd/genman cmd/genbashcomp cmd/genconversion cmd/gendeepcopy examples/k8petstore/web-server github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test +++ [0623 11:18:52] Placing binaries curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused API SERVER port is free, proceeding... Starting etcd etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null Waiting for etcd to come up. +++ [0623 11:18:53] etcd: {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}} Waiting for apiserver to come up +++ [0623 11:18:55] apiserver: { "kind": "PodList", "apiVersion": "v1beta3", "metadata": { "selfLink": "/api/v1beta3/pods", "resourceVersion": "11" }, "items": [] } Local Kubernetes cluster is running. Press Ctrl-C to shut it down. 
Logs: /tmp/kube-apiserver.log /tmp/kube-controller-manager.log /tmp/kube-proxy.log /tmp/kube-scheduler.log /tmp/kubelet.log To start using your cluster, open up another terminal/tab and run: cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true cluster/kubectl.sh config set-context local --cluster=local cluster/kubectl.sh config use-context local cluster/kubectl.sh Then in a new terminal window, I executed: cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true cluster/kubectl.sh config set-context local --cluster=local cluster/kubectl.sh config use-context local After that, I created the busybox Pod as kubectl create -f busybox.yaml The content of the busybox.yaml is from https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md
How about the shell script below, which is run by a Docker container? https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh
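For illustration, a sketch in the same spirit as that script (hedged: this is not the linked script itself; BACKEND_HOST/BACKEND_PORT and MYSERVICE are made-up placeholders -- Kubernetes exposes each service to containers as <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT environment variables):

#!/bin/sh
# Fill the service address injected by Kubernetes into an nginx config template,
# then run nginx in the foreground so the container keeps running.
sed -e "s/{{BACKEND_HOST}}/${MYSERVICE_SERVICE_HOST}/g" \
    -e "s/{{BACKEND_PORT}}/${MYSERVICE_SERVICE_PORT}/g" \
    /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'

The template's proxy_pass line would then read proxy_pass http://{{BACKEND_HOST}}:{{BACKEND_PORT}}; and gets rewritten before nginx starts.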
###Instruction: I'm trying to run a docker container with nginx on a kubernetes cluster. I'm using the environment variable service discovery for all my other containers, so I would like to keep it consistent and not have to bring something like skydns into the mix just because of this. Is it possible to access environment variables in nginx such that I can tell it to proxy-pass to a kubernetes service? ### Response: How about the shell script below, which is run by a Docker container? https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh
I'm trying to run a docker container with nginx on a kubernetes cluster. I'm using the environment variable service discovery for all my other containers, so I would like to keep it consistent and not have to bring something like skydns into the mix just because of this. Is it possible to access environment variables in nginx such that I can tell it to proxy-pass to a kubernetes service?
Currently, I have to manually register the minion prior to spinning up the minion instance. This is because there is an open issue as of right now not allowing the minion to self-register in certain cases. UPDATE Now I'm using kube-register to register each minion/node on start of the kubelet service.
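For reference, a hedged sketch of what manual registration can look like (the kind/apiVersion depend on your release -- very early builds used Minion objects, later ones Node -- and the name below is just the node IP from the question):

node.json:
{
  "kind": "Node",
  "apiVersion": "v1beta3",
  "metadata": { "name": "10.211.55.126" }
}

kubectl create -f node.json

kube-register simply automates this step by creating the node record whenever a new machine shows up.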
###Instruction: I have a dev kubernetes cluster setup where I have a minion running kube-proxy and kubelet. Both only start if it can connect to the master's apiserver, which it can. However I am getting error updating node status, will retry: error getting node "10.211.55.126": minion "10.211.55.126" not found I notice prior to that I get this: Server rejected event '&api.Event followed by a large json object with mostly empty string values. repeatedly when I try running the minion's kubelet. I have it pointing to a private ip and it is reporting that it can't find the public ip. I imagine this is an etcd issue but I'm not sure, or maybe it's flanneld? Update 1 I managed to get past the initial error by registering the minion (node?) with the master. This allowed it to receive pods from the master and run the containers; however, the minion is still not fully connected, resulting in the master continuously pushing more pods to the minion. The kubelet process is reporting: Cannot get host IP: Host IP unknown; known addresses: []. Is there a flag to run kubelet with to give it the host ip? ### Response: Currently, I have to manually register the minion prior to spinning up the minion instance. This is because there is an open issue as of right now not allowing the minion to self-register in certain cases. UPDATE Now I'm using kube-register to register each minion/node on start of the kubelet service.
I have a dev kubernetes cluster setup where I have a minion running kube-proxy and kubelet. Both only start if it can connect to the master's apiserver, which it can. However I am getting error updating node status, will retry: error getting node "10.211.55.126": minion "10.211.55.126" not found I notice prior to that I get this: Server rejected event '&api.Event followed by a large json object with mostly empty string values. repeatedly when I try running the minion's kubelet. I have it pointing to a private ip and it is reporting that it can't find the public ip. I imagine this is an etcd issue but I'm not sure, or maybe it's flanneld? Update 1 I managed to get past the initial error by registering the minion (node?) with the master. This allowed it to receive pods from the master and run the containers; however, the minion is still not fully connected, resulting in the master continuously pushing more pods to the minion. The kubelet process is reporting: Cannot get host IP: Host IP unknown; known addresses: []. Is there a flag to run kubelet with to give it the host ip?
I think you laid out the issues pretty well. The two kinds of scaling you described are called "vertical scaling" (increasing memory or CPU of a single instance) and "horizontal scaling" (increasing number of instances). On availability: As you observed, you can achieve pretty good availability even with a single container, thanks to auto-restart (at the node level or replication controller level). But it can never be 100% because you will always have the downtime associated with restarting the process, either on the same machine or (if the machine failed) on a new machine. In contrast, horizontal scaling (running multiple replicas of the container) allows effectively "zero downtime" from the end-user's perspective, assuming you have some kind of load balancing or failover mechanism in place among the replicas, and your application is written in a way that allows replication. On scalability: This is highly application-dependent. For example, vertically scaling CPU for a single-threaded application will not increase the workload it can handle, but running multiple replicas of it behind a load balancer (horizontal scaling) will. On the other hand, some applications aren't written in a way that allows them to be replicated, so for those vertical scaling is your only choice. Many applications (especially "cloud native" applications) are amenable to both horizontal and vertical scaling, but the details are application-dependent. Note that once you need to scale beyond the workload that a single node can handle (due to CPU or memory), you have no choice but to replicate (horizontal scaling). So the short answer to your question is that people replicate for both availability and scalability.
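As a concrete illustration of horizontal scaling, a replication controller just declares how many identical pods you want (hedged sketch: the names and image are placeholders, and the apiVersion should match your cluster):

kind: ReplicationController
apiVersion: v1beta3
metadata:
  name: my-app
spec:
  replicas: 3              # horizontal scaling: three identical pods
  selector:
    name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app
        ports:
        - containerPort: 80

Vertical scaling, by contrast, means changing the machine type (or the resources given to the single container) rather than the replicas count.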
###Instruction: When using Kubernetes to manage your docker containers, particularly when using the replication controller, when should you increase an image's running container instances to more than 1? I understand that Kubernetes can spawn as many container replicas as needed in the replication controller configuration file, but why spawn multiple running containers (for the same image) when you can just increase the Compute VM size? I would think that when you need more compute power you would increase the machine's CPU / RAM, and only when you reach the max available compute power allowed (approx 32 cores currently at Google) would you need to spawn multiple containers. However, it would seem as if spawning multiple containers regardless of VM size would provide a more highly available service, but Kubernetes will respawn failed containers even in a single-container replication controller environment. So what I can't figure out is: for what reason would I want more than 1 running container (for the same image) other than running out of VM Instance Compute size? ### Response: I think you laid out the issues pretty well. The two kinds of scaling you described are called "vertical scaling" (increasing memory or CPU of a single instance) and "horizontal scaling" (increasing number of instances). On availability: As you observed, you can achieve pretty good availability even with a single container, thanks to auto-restart (at the node level or replication controller level). But it can never be 100% because you will always have the downtime associated with restarting the process, either on the same machine or (if the machine failed) on a new machine. In contrast, horizontal scaling (running multiple replicas of the container) allows effectively "zero downtime" from the end-user's perspective, assuming you have some kind of load balancing or failover mechanism in place among the replicas, and your application is written in a way that allows replication. On scalability: This is highly application-dependent. For example, vertically scaling CPU for a single-threaded application will not increase the workload it can handle, but running multiple replicas of it behind a load balancer (horizontal scaling) will. On the other hand, some applications aren't written in a way that allows them to be replicated, so for those vertical scaling is your only choice. Many applications (especially "cloud native" applications) are amenable to both horizontal and vertical scaling, but the details are application-dependent. Note that once you need to scale beyond the workload that a single node can handle (due to CPU or memory), you have no choice but to replicate (horizontal scaling). So the short answer to your question is that people replicate for both availability and scalability.
When using Kubernetes to manage your docker containers, particularly when using the replication controller, when should you increase an image's running container instances to more than 1? I understand that Kubernetes can spawn as many container replicas as needed in the replication controller configuration file, but why spawn multiple running containers (for the same image) when you can just increase the Compute VM size? I would think that when you need more compute power you would increase the machine's CPU / RAM, and only when you reach the max available compute power allowed (approx 32 cores currently at Google) would you need to spawn multiple containers. However, it would seem as if spawning multiple containers regardless of VM size would provide a more highly available service, but Kubernetes will respawn failed containers even in a single-container replication controller environment. So what I can't figure out is: for what reason would I want more than 1 running container (for the same image) other than running out of VM Instance Compute size?
I'm fairly sure there isn't a hard cap of 5 containers per pod, so there's likely some other reason why the scheduler can't find a node to run your pod on. You should be able to find a message saying why the pod is still pending by running kubectl describe pod $PODNAME to see the most recent 'event' that happened to the pod, or by running kubectl get events to see all the recent events from the cluster.
###Instruction: I've got a Pod configuration from Docker that involves 7 nodes. It gets stuck in Pending state unless I remove two of the containers from the config. It doesn't matter which two I remove. It only works with five containers, which seems like a hard limit that I can't find documented. How do I run more than 5 containers in a kubernetes Pod on Google Container Engine? ### Response: I'm fairly sure there isn't a hard cap of 5 containers per pod, so there's likely some other reason why the scheduler can't find a node to run your pod on. You should be able to find a message saying why the pod is still pending by running kubectl describe pod $PODNAME to see the most recent 'event' that happened to the pod, or by running kubectl get events to see all the recent events from the cluster.
I've got a Pod configuration from Docker that involves 7 nodes. It gets stuck in Pending state unless I remove two of the containers from the config. It doesn't matter which two I remove. It only works with five containers, which seems like a hard limit that I can't find documented. How do I run more than 5 containers in a kubernetes Pod on Google Container Engine?
You can read the cluster config using kubectl. This will contain the username and password for the UI. kubectl config view
###Instruction: I've installed a kubernetes cluster (using Google's Container Engine) and I noticed a service listening on port 443 on the master server. Tried to access it but it requires username and password, so any ideas what these credentials are? ### Response: You can read the cluster config using kubectl. This will contain the username and password for the UI. kubectl config view
I've installed a kubernetes cluster (using Google's Container Engine) and I noticed a service listening on port 443 on the master server. Tried to access it but it requires username and password, so any ideas what these credentials are?
The manifest that the documentation is referring to is here. Copy this file onto your local machine (or find it on your system if you've already downloaded a copy of the git repository). You can edit the file using your favorite text editor and then run kubectl create -f pod.yaml to tell the system to create the pod.
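For illustration, a minimal manifest along those lines (hedged: the names and the hostPort value are arbitrary choices, and the apiVersion should match your cluster):

apiVersion: v1beta3
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080       # publishes nginx on the host so a browser can reach it

After kubectl create -f pod.yaml, the nginx start page should be reachable at http://localhost:8080/ on a local single-node setup (or http://<node-ip>:8080/ otherwise).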
###Instruction: I'm new to Kubernetes. I installed it on my local Ubuntu 14.04 machine. I want to run an nginx server and I want to see it in my browser. I'm following this section. It says: However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run curl within the docker container (try docker exec). I tried the instructions below to check that the server is running. # docker exec -it d0ef46bcdb8b bash root@nginx:/# service ngix status nginx is running. Now I want to see it in a webpage. You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein: cluster/kubectl.sh create -f examples/pod.yaml But I don't know how to edit the manifest. How do I get to nginx through the browser? ### Response: The manifest that the documentation is referring to is here. Copy this file onto your local machine (or find it on your system if you've already downloaded a copy of the git repository). You can edit the file using your favorite text editor and then run kubectl create -f pod.yaml to tell the system to create the pod.
I'm new to Kubernetes. I installed it on my local Ubuntu 14.04 machine. I want to run an nginx server and I want to see it in my browser. I'm following this section. It says: However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run curl within the docker container (try docker exec). I tried the instructions below to check that the server is running. # docker exec -it d0ef46bcdb8b bash root@nginx:/# service ngix status nginx is running. Now I want to see it in a webpage. You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein: cluster/kubectl.sh create -f examples/pod.yaml But I don't know how to edit the manifest. How do I get to nginx through the browser?
Update: Privileged mode is now enabled by default starting with the 1.1 release of Kubernetes which is now available in Google Container Engine. Running privileged containers (including the NFS server in that example) isn't currently possible in Google Container Engine. We are looking at ways to solve this (adding a flag when creating your cluster to allow privileged containers; making privileged containers part of admission control; etc). For now, if you need to run privileged containers you'll need to launch your own cluster using the GCE provider.
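For completeness, a hedged sketch of what enabling this on a self-managed cluster involves: privileged containers must be allowed on both the API server and every kubelet (flag spelling can vary slightly between releases):

kube-apiserver ... --allow-privileged=true
kubelet ... --allow-privileged=true

If you bring the cluster up with the provided GCE scripts, look for the corresponding setting in the cluster configuration rather than editing the service units by hand.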
###Instruction: I am using a Kubernetes cluster deployed through Google Container Engine (GKE) from the Google Cloud Developer's Console, cluster version 0.19.3. I would like to run a privileged container, like in the Kubernetes NFS Server example: apiVersion: v1 kind: Pod metadata: name: nfs-server labels: role: nfs-server spec: containers: - name: nfs-server image: jsafrane/nfs-data ports: - name: nfs containerPort: 2049 securityContext: privileged: true Since the default Google Container Engine configuration does not allow privileged containers, the Kubernetes API immediately returns the following error: Error from server: Pod "nfs-server" is invalid: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20a027396)true' How can I allow privileged containers in my Google Container Engine cluster? ### Response: Update: Privileged mode is now enabled by default starting with the 1.1 release of Kubernetes which is now available in Google Container Engine. Running privileged containers (including the NFS server in that example) isn't currently possible in Google Container Engine. We are looking at ways to solve this (adding a flag when creating your cluster to allow privileged containers; making privileged containers part of admission control; etc). For now, if you need to run privileged containers you'll need to launch your own cluster using the GCE provider.
I am using a Kubernetes cluster deployed through Google Container Engine (GKE) from the Google Cloud Developer's Console, cluster version 0.19.3. I would like to run a privileged container, like in the Kubernetes NFS Server example: apiVersion: v1 kind: Pod metadata: name: nfs-server labels: role: nfs-server spec: containers: - name: nfs-server image: jsafrane/nfs-data ports: - name: nfs containerPort: 2049 securityContext: privileged: true Since the default Google Container Engine configuration does not allow privileged containers, the Kubernetes API immediately returns the following error: Error from server: Pod "nfs-server" is invalid: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20a027396)true' How can I allow privileged containers in my Google Container Engine cluster?
An alternative to specifying all mesh seed IP addresses is to use the asinfo tip command. Please see: http://www.aerospike.com/docs/reference/info/#tip The tip command: asinfo -v 'tip:host=172.16.121.138;port=3002' The above command could be added to a script or orchestration tool with the correct IPs. You may also find additional info on the Aerospike Forum: Aerospike Forum
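For illustration, a hedged sketch of how that scripting might look (PEER_IPS is assumed to be produced by whatever discovery step you already have, e.g. a list of the other Aerospike pod IPs):

# Tell the local node about every other node in the mesh.
for ip in $PEER_IPS; do
  asinfo -v "tip:host=${ip};port=3002"
done

Run against a freshly started node, this seeds the mesh heartbeat without hard-coding every seed address in aerospike.conf.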
###Instruction: I would like to run an Aerospike cluster on Docker containers managed by Kubernetes on CoreOS on Google Compute Engine (GCE). But since GCE does not permit multicast, I have to use Mesh heartbeat as described here, which has to be set up by specifying all nodes' IP addresses and ports; it seems so inflexible to me. Are there any recommended cloud-config settings for an Aerospike cluster on Kubernetes/CoreOS/GCE that keep the cluster flexible? ### Response: An alternative to specifying all mesh seed IP addresses is to use the asinfo tip command. Please see: http://www.aerospike.com/docs/reference/info/#tip The tip command: asinfo -v 'tip:host=172.16.121.138;port=3002' The above command could be added to a script or orchestration tool with the correct IPs. You may also find additional info on the Aerospike Forum: Aerospike Forum
I would like to run an Aerospike cluster on Docker containers managed by Kubernetes on CoreOS on Google Compute Engine (GCE). But since GCE does not permit multicast, I have to use Mesh heartbeat as described here, which has to be set up by specifying all nodes' IP addresses and ports; it seems so inflexible to me. Are there any recommended cloud-config settings for an Aerospike cluster on Kubernetes/CoreOS/GCE that keep the cluster flexible?
kube-up.sh created a managed instance group with size 4 which caused 4 nodes to be created. If you delete a node, the managed instance group will replace it so that you always have 4 nodes. You can change the number of nodes in the managed instance group by navigating to the "Instance Groups" link in the sidebar (under Compute -> Compute Engine), clicking on the group name, clicking Edit Group, and then changing the integer in the Number of Instances field. If you want to delete your cluster, including all node VMs, you should run kube-down.sh.
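The same resize can also be done from the command line (hedged: the group name is a placeholder -- use whatever name kube-up.sh created in your project, together with your cluster's zone):

gcloud compute instance-groups managed resize kubernetes-minion-group \
    --size 2 --zone us-central1-b

Deleting individual VMs while leaving the size unchanged has no lasting effect, because the group recreates them.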
###Instruction: I created a cluster in Google Compute Engine using the command: ./kube-up.sh Kubernetes has created 1 master and 4 minion servers. I tried deleting two minions in Google Cloud, but they are recreated. I also tried deleting the Kubernetes minions with kubectl and then deleting the VM. This fails. kubectl delete nodes kubernetes-minion-XXX ### Response: kube-up.sh created a managed instance group with size 4 which caused 4 nodes to be created. If you delete a node, the managed instance group will replace it so that you always have 4 nodes. You can change the number of nodes in the managed instance group by navigating to the "Instance Groups" link in the sidebar (under Compute -> Compute Engine), clicking on the group name, clicking Edit Group, and then changing the integer in the Number of Instances field. If you want to delete your cluster, including all node VMs, you should run kube-down.sh.
I created a cluster in Google Compute Engine using the command: ./kube-up.sh Kubernetes has created 1 master and 4 minion servers. I tried deleting two minions in Google Cloud, but they are recreated. I also tried deleting the Kubernetes minions with kubectl and then deleting the VM. This fails. kubectl delete nodes kubernetes-minion-XXX
There are a variety of reasons for why you would scale an application up or down. The Kubernetes project is looking to provide auto-scaling in the future as a feature to dynamically size up and size down (potentially to 0) a replication controller in response to observed traffic. For a good discussion on auto-scaling, see the following write-up: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md
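Until auto-scaling lands, scaling is a manual (or scripted) step against the replication controller. A hedged example -- the controller name is a placeholder, and depending on the client release the verb is scale or the older resize:

kubectl scale rc my-frontend --replicas=5

Lowering --replicas sizes the controller back down; the controller creates or deletes pods as needed to match the declared count.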
###Instruction: When using Kubernetes to manage your docker containers, particularly when using the replication controller, when should you increase an image's running container instances to more than 1? I understand that Kubernetes can spawn as many container replicas as needed in the replication controller configuration file, but why spawn multiple running containers (for the same image) when you can just increase the Compute VM size? I would think that when you need more compute power you would increase the machine's CPU / RAM, and only when you reach the max available compute power allowed (approx 32 cores currently at Google) would you need to spawn multiple containers. However, it would seem as if spawning multiple containers regardless of VM size would provide a more highly available service, but Kubernetes will respawn failed containers even in a single-container replication controller environment. So what I can't figure out is: for what reason would I want more than 1 running container (for the same image) other than running out of VM Instance Compute size? ### Response: There are a variety of reasons for why you would scale an application up or down. The Kubernetes project is looking to provide auto-scaling in the future as a feature to dynamically size up and size down (potentially to 0) a replication controller in response to observed traffic. For a good discussion on auto-scaling, see the following write-up: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md
When using Kubernetes to manage your docker containers, particularly when using the replication controller, when should you increase an image's running container instances to more than 1? I understand that Kubernetes can spawn as many container replicas as needed in the replication controller configuration file, but why spawn multiple running containers (for the same image) when you can just increase the Compute VM size? I would think that when you need more compute power you would increase the machine's CPU / RAM, and only when you reach the max available compute power allowed (approx 32 cores currently at Google) would you need to spawn multiple containers. However, it would seem as if spawning multiple containers regardless of VM size would provide a more highly available service, but Kubernetes will respawn failed containers even in a single-container replication controller environment. So what I can't figure out is: for what reason would I want more than 1 running container (for the same image) other than running out of VM Instance Compute size?
There's an example in OpenShift: https://github.com/openshift/postgresql/tree/master/examples/replica The principle is the same in pure Kube (it's not using anything truly OpenShift specific, and you can use the images in plain docker)
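One common way to model this in Kubernetes terms (a hedged structural sketch only, not a description of the linked example -- names, labels and the apiVersion are placeholders, and the actual replication wiring lives inside the images): a single master pod or single-replica controller labelled role: master, a replication controller for the standbys labelled role: standby, and a service that always resolves to the current master:

kind: Service
apiVersion: v1beta3
metadata:
  name: postgresql-master
spec:
  selector:
    name: postgresql
    role: master           # standbys stream from whatever pod carries this label
  ports:
  - port: 5432

Failover then amounts to promoting a standby and relabelling it so the service selector picks it up.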
###Instruction: I'm still wrapping my head around Kubernetes and how that's supposed to work. Currently, I'm struggling to understand how to model something like a PostgreSQL cluster with streaming replication, scaling out and automatic failover/failback (pgpool-II, repmgr, pick your poison). My main problem with the approach is the dual nature of a PostgreSQL instance, configuration-wise -- it's either a master or a cold/warm/hot standby. If I increase the number of replicas, I'd expect them all to come up as standbys, so I'd imagine creating a postgresql-standby replication controller separately from a postgresql-master pod. However, I'd also expect one of those standbys to become a master in case the current master is down, so it's a common postgresql replication controller after all. The only idea I've had so far is to put the replication configuration on an external volume and manage the state and state changes outside the containers. (In the case of PostgreSQL the configuration would probably already be on a volume inside its data directory, which itself is obviously something I'd want on a volume, but that's beside the point.) Is that the correct approach, or is there any other cleaner way? ### Response: There's an example in OpenShift: https://github.com/openshift/postgresql/tree/master/examples/replica The principle is the same in pure Kube (it's not using anything truly OpenShift specific, and you can use the images in plain docker)
I'm still wrapping my head around Kubernetes and how that's supposed to work. Currently, I'm struggling to understand how to model something like a PostgreSQL cluster with streaming replication, scaling out and automatic failover/failback (pgpool-II, repmgr, pick your poison). My main problem with the approach is the dual nature of a PostgreSQL instance, configuration-wise -- it's either a master or a cold/warm/hot standby. If I increase the number of replicas, I'd expect them all to come up as standbys, so I'd imagine creating a postgresql-standby replication controller separately from a postgresql-master pod. However, I'd also expect one of those standbys to become a master in case the current master is down, so it's a common postgresql replication controller after all. The only idea I've had so far is to put the replication configuration on an external volume and manage the state and state changes outside the containers. (In the case of PostgreSQL the configuration would probably already be on a volume inside its data directory, which itself is obviously something I'd want on a volume, but that's beside the point.) Is that the correct approach, or is there any other cleaner way?