Response | Instruction | Prompt
---|---|---
You are using the Hyperkit minikube driver, which uses the /usr/local/bin/hyperkit command line (in reality it uses the xhyve hypervisor). So a simple:

$ ps -Af | grep hyperkit
0 9445 1 0 1:07PM ttys002 1:45.27 /usr/local/bin/hyperkit -A -u -F /Users/youruser/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2caa5ca9-d55c-11e8-92a0-186590def269 -s 2:0,virtio-blk,/Users/youruser/.minikube/machines/minikube/minikube.rawdisk -s 3,ahci-cd,/Users/youruser/.minikube/machines/minikube/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/youruser/.minikube/machines/minikube/tty,log=/Users/youruser/.minikube/machines/minikube/console-ring -f kexec,/Users/youruser/.minikube/machines/minikube/bzimage,/Users/youruser/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube

will tell you how many Hyperkit processes/VMs you are running. AFAIK, minikube only supports one, but you could have another one if you have Docker for Mac installed. Then, if you follow "How to access the VM created by docker's HyperKit?", you can connect to the VM and see what's running inside:

$ sudo screen /Users/youruser/.minikube/machines/minikube/tty
Welcome to minikube
minikube login: root
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
# docker ps
... <== shows a bunch of K8s containers | I run a Kubernetes cluster on my mac using the latest Docker community edition. I usually do:

$ minikube start --vm-driver=hyperkit

and it works well for me. Today, I ran that command multiple times in a script. Now, how do I know how many minikube VMs are running on a mac? How do I delete all but one of them? Can I see a list of all minikube VMs running?

$ minikube status

shows:

minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.64.3

Is running minikube start twice not harmful? I am running minikube version v0.30.0 on Mac OS High Sierra.

$ kubectl version

shows:

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:20:58Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}

Thanks for reading. | How do I see a list of all minikube clusters running in Docker on my mac?
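As a hedged aside to the answer above (not part of the original thread): if you suspect stray VMs, these standard commands count the hyperkit processes and reset minikube completely; the driver flag matches the question's setup.

ps -Af | grep '[h]yperkit' | wc -l     # one hyperkit process per running VM
minikube delete                        # removes the minikube VM and its state
minikube start --vm-driver=hyperkit    # recreate it cleanly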
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes control and commits/pushes the image. | I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:

CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")

but I'm having trouble with the second part: committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is? EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host? | How can I run a docker container and commit the changes once a script completes?
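A minimal sketch of the foreground approach described in that answer; the image name and the sleep workload are placeholders taken from the question, not a tested script.

sudo docker run --name myjob my-image /bin/sh -c "sleep 10"   # runs in the foreground and returns when the command ends
sudo docker commit myjob my-image:updated                     # persist the container's changes as a new image tag
sudo docker rm myjob                                          # clean up the stopped container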
Try this command to get only error.log:

docker logs -f nginx 1>/dev/null

And this one for access.log:

docker logs -f nginx 2>/dev/null | The Nginx Dockerfile is configured to send error.log to /dev/stderr:

RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log

When we run docker logs --tail=10 -f nginx it shows a combination of both the error log and the access log. Is there a docker command so I can only see the logs of error.log or stderr? | In Nginx docker how do we see log only from error.log
You need to add ARG gp to your Dockerfile:

...
ARG gp
EXPOSE $gp
...

https://docs.docker.com/engine/reference/builder/#arg

Worth mentioning that this isn't going to expose the port when you're running it via compose, though; you would need to add a ports instruction to your docker-compose.yml for that. | I'm trying to parametrize my Dockerfiles at build time and use arguments in Docker Compose. For example, in Docker Compose I have defined one service called bpp as follows:

bpp:
build:
context: .
dockerfile: Dockerfile.bpp
args:
gp : 8080
image: serv/bpp
restart: always
depends_on:
- data
links:
- data

I'm trying to pass an argument named gp to Dockerfile.bpp, where I'm using the argument when starting a Python application, exposing a port, etc.
For example, in Dockerfile.bpp I'm trying to expose port gp as follows:

EXPOSE gp

However, when building the Dockerfile with the command docker-compose build I get the following error:

ERROR: Service 'bpp' failed to build: Invalid containerPort: gp

It seems that the argument gp is not visible in the Dockerfile. Any suggestions? | Passing arguments for Dockerfiles using Docker compose
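To make the ARG flow concrete, here is a hedged sketch (the base image and default value are assumptions, not from the question): the build argument must be declared before it is used, and publishing the port still needs a ports entry in the compose file.

cat > Dockerfile.bpp <<'EOF'
FROM python:3
ARG gp=8080
EXPOSE $gp
EOF
docker-compose build     # compose passes "args: gp: 8080" into the build
# add "ports: - 8080:8080" to the service if the port must be reachable from the host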
Run docker-compose build after changing docker-compose.yml and then docker-compose up. | I have this folder structure:

/home/me/composetest
/home/me/composetest/mywildflyimage

Inside composetest I have this docker-compose.yml:

web:
image: test/mywildfly
container_name: wildfly
ports:
- "8080:8080"
- "9990:9990"Inside mywildflyimage I have this docker image:FROM jboss/wildfly
EXPOSE 8080 9990
ADD standalone.xml /opt/jboss/wildfly/standalone/configuration/
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]If i rundocker built -t test/mywildfly .
docker-compose upEverything works great, and the management part is minded to 0.0.0.0 (-bmanagement 0.0.0.0 part of the CMD command).If I change my docker-compose.yml:web:
build: mywildflyimage
container_name: wildfly
ports:
- "8080:8080"
- "9990:9990"and run
docker-compose upIt still boots, but the admin part is not bound to 0.0.0.0 anymore (this is the default behaviour for the image I inherited from).Why does it stop working when I use thebuildcommand in the docker-compose.ml?EDIT: It seems that it is ignoring all my docker file commands. | Docker compose ignores my Dockerfile when I use the build command |
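A short sketch of the workflow from the answer, assuming the directory layout in the question:

cd /home/me/composetest
docker-compose build      # re-runs the Dockerfile in ./mywildflyimage
docker-compose up
# or combine both steps:
docker-compose up --build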
Most probably the UID on your host for myuser does not match the UID for myuser inside the container.

Solution

If you want to write from within your container into a directory of your host machine, you must first create a myuser user on your host and check its UID via

$ sudo su - myuser -c "id"
uid=1000(myuser) gid=100(users) groups=100(users)

In this example UID=1000 and GID=100. Now you will need to create a folder ~/log/nginx with owner/group of myuser on your host:

$ sudo mkdir ~/log/nginx
$ sudo chown myuser ~/log/nginx
$ sudo chmod -R 0700 ~/log/nginx/Afterwards you can create a Dockerfile and your user with the same UID/GID.RUN useradd myuser -u 1000 -g 100 -m -s /bin/bash
USER myuserNow you should be able to write to your mounted volume with the specified user. You can check this via:docker run -v $(pwd)/log/nginx:/var/log/nginx --rm -it mynginx:v1 /bin/bashif you can now write to/var/log/nginx | I have Dockerfile with myuser from nginx image and I want to mount logs on mounted location, I am using docker-compose to start the container. My requirement is to use non-root user only and no sudo.My dockerfile with myuser, image tag I create is mynginx:v1RUN addgroup mygroup
RUN adduser myuser --disabled-password
USER myuser

Non-working docker-compose with mynginx image with myuser:

version: "2"
services:
nginx:
container_name: nginx
image: mynginx:v1
ports:
- "8888:80"
volumes:
- ./log/nginx:/var/log/nginx

Although the directory gets mounted, the nginx log files access.log and error.log are not seen on the host machine. Docker logs gives the below:

nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
2021/04/09 12:46:08 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2021/04/09 12:46:08 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)

However, if I do the same with the official nginx image, which runs as the root user, things work correctly. Working docker-compose with the official nginx image with root user:

version: "2"
services:
nginx:
container_name: nginx
image: nginx
ports:
- "8888:80"
volumes:
- ./log/nginx:/var/log/nginx

Tried to look at various options but no luck so far. | docker can not write on mounted volume with non-root user
In your docker-compose.yml file you are exposing ports from your containers on your host's network space by declaring them in the ports array, such as:

ports:
- "3306:3306"

If you omit this part of the configuration, your containers will still be able to reach each other privately, but the ports won't be bound on your host machine, avoiding the port collision you are facing. If you need to expose your ports to the host for some or all of your services, you'll have to handle the collisions yourself by changing the bound port on the host side. For instance, to avoid a port collision on port 3306 you could simply do:

ports:
- "3307:3306" | I hope the title is descriptive enough. I am trying to execute my node app (that uses mongo and mysql) in docker. I am using docker-compose to start the app with the docker-compose.yml file below:

version: "3.3"
services:
app:
container_name: app
restart: always
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
links:
- mongo
- mysql
mongo:
container_name: mongo
image: mongo
ports:
- "27017:27017"
mysql:
container_name: mysql
image: mysql
ports:
- "3306:3306"Whenever I try to start this usingdocker-compose upI get the following error:ERROR: for mysql Cannot start service mysql: driver failed programming external connectivity on endpoint mysql (785b03daaa662bb3c344025f89fd28f49eabb43104b1c9a16ab425ab5120309f): Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use
ERROR: for mysql Cannot start service mysql: driver failed programming external connectivity on endpoint mysql (785b03daaa662bb3c344025f89fd28f49eabb43104b1c9a16ab425ab5120309f): Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use
ERROR: Encountered errors while bringing up the project.

I did a little bit of research and it seems that gitlab-runner is using the mysql service. My understanding was that if I run this setup through docker containers they are isolated from the host system, so I won't have any port conflicts. The only ports that I am exposing are the ones in my Dockerfile (in my case 3000). Am I missing something in my docker-compose.yml? What else could be wrong? | How do I avoid 'port collision' when using docker?
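As an added aside (not from the original answer): on a Linux host you can first check what is already holding the port before remapping, for example:

sudo ss -ltnp | grep 3306     # shows the process bound to 3306 (e.g. a local gitlab-runner mysql)
# then either stop that service or remap the host side in docker-compose.yml, e.g. "3307:3306"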
Unfortunately, it seems like this is currently not supported. | I'm trying to deploy my container to a docker swarm cluster (docker engine 1.12.1). The features of docker swarm mode really are exciting, such as clustering docker and multi-host networking. However, I find something that can't be achieved in swarm mode so far (docker 1.12.x) which works well when using docker run to start a container. My host has eth0 for the intranet network and eth1 for the internet network. I would like to only publish the service deployed by docker service create on the intranet network, but the service listens on both the eth0 and eth1 interfaces after creating the service via docker service create --name my_web --publish 8000:80 my_web_image. Any solution/workaround to achieve the my_web service only listening on the eth0 interface? | How to bind the published port to specific eth[x] in docker swarm mode
Your Dockerfile works for me; it installs all plugins and builds the image successfully:

Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
> git depends on workflow-scm-step:1.14.2,mailer:1.17,matrix-project:1.7.1,ssh-credentials:1.12,parameterized-trigger:2.4;resolution:=optional,scm-api:1.2,token-macro:1.11;resolution:=optional,promoted-builds:2.27;resolution:=optional,credentials:2.1.4,git-client:1.21.0
Downloading plugin: workflow-scm-step from https://updates.jenkins.io/download/plugins/workflow-scm-step/latest/workflow-scm-step.hpi
...
Removing intermediate container 4f895c203944
Successfully built 31d58d1f586f

Try docker build --no-cache in case there's an issue with one of the layers in your image cache, or set up an automated build on Docker Hub and build it on Docker's servers. | I have a Dockerfile for a custom Jenkins master like so:

FROM jenkins
MAINTAINER me
USER root
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
RUN apt-get update \
&& apt-get install -y sudo \
&& apt-get install -y vim \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
# COPY plugins.txt /usr/share/jenkins/plugins.txt
# RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountStartup=100 --handlerCountMax=300"
RUN /usr/local/bin/install-plugins.sh git:2.6.0

Everything works fine until the RUN /usr/local/bin/install-plugins.sh git:2.6.0 line. I get an error installing the plugins:

Creating initial locks...
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
Downloading plugin: git-plugin from https://updates.jenkins.io/download/plugins/git-plugin/2.6.0/git-plugin.hpi
Failed to download plugin: git or git-plugin
WAR bundled plugins:
Installed plugins:
*:
Some plugins failed to download!
Not downloaded: git
The command '/bin/sh -c /usr/local/bin/install-plugins.sh git:2.6.0' returned a non-zero code: 1

Am I doing something wrong or is this an issue with Jenkins/Docker? | When building Jenkins in Docker plugins fail to install
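A quick restatement of the suggestion above as a command (the image tag is a placeholder):

docker build --no-cache -t my-jenkins .   # rebuild without reusing possibly broken cached layers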
Simply link B to A:

docker run -p 8081:8081 --link AppA --name AppB image2

Then you can access the REST service using AppA:8080. The reason is that Docker containers run on their own subnet (normally 172.17.0.0-255) and they cannot access the network that your host is on. Also, localhost would be the container itself, not the host. | I have two applications, one of which has a RESTful interface that is used by the other. Both are running on the same machine. Application A runs in a docker container. I am running it using the command line:

docker run -p 40000:8080 --name AppA image1

When I test Application B outside a docker container (in other words, before it is dockerized) Application B successfully executes all RESTful requests and receives responses without problems. Unfortunately, when I dockerize and run Application B within a container:

docker run -p 8081:8081 --name AppB image2

whenever I attempt to send a RESTful request to Application A, I get the following:

Connect to localhost:40000 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused

Of course, I also tried making Application B connect using my machine's IP address. When I do that, I get the following failure:

Connect to 192.168.1.101:40000 failed: No route to Host

Has anyone seen this kind of behavior before? What causes an application that communicates perfectly well with another dockerized application outside a docker container to fail to communicate with that same dockerized application once it is itself dockerized? Someone please advise... | REST request from one docker container to another fails
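A hedged aside not taken from the answer: on current Docker versions, a user-defined bridge network is the usual replacement for --link, and container names resolve over DNS:

docker network create appnet
docker run -d --network appnet --name AppA -p 40000:8080 image1
docker run -d --network appnet --name AppB -p 8081:8081 image2
# inside AppB, call the service as http://AppA:8080 instead of localhost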
Keep in mind that the docker run argument order is mandatory:

$ docker help run
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

The environment settings fall under OPTIONS:

-e, --env list Set environment variables
--env-file list Read in a file of environment variables

Hence:

docker run --env-file stage.env deleter:local

will import the environment variables as expected. | Given this Dockerfile:

FROM alpine:3.7
ENV LAST_UPDATED=2018-02-22
ARG XDG_CACHE_HOME=/tmp/cache/
RUN apk update && \
apk add libxslt && \
apk add sed && \
apk add py-pip && \
apk add mariadb-client && \
apk add bash bash-doc bash-completion && \
pip install httpie && \
rm -rf /var/cache/apk/*
WORKDIR /usr/deleter/
COPY delete.sh ./
ENTRYPOINT ["/usr/deleter/delete.sh"]

I expected to be able to pass multiple variables through an .env file with the key=value format.

$ cat stage.env
MYSQL_DATABASE=database
MYSQL_HOST=127.0.0.1:3306
MYSQL_PASSWORD=password
MYSQL_PORT=3306
MYSQL_USER=a_user

My delete.sh only looks like this:

#!/bin/bash
set -e
set -o pipefail
echo "hello world"
echo ${MYSQL_DATABASE} ${MYSQL_HOST} ${MYSQL_PASSWORD} ${MYSQL_PORT} ${MYSQL_USER}
echo "ALL VARIABLES"
env

I expected to see the env variables, yet they are all empty. The --env-file option seems to be not working. The output of the script is:

hello world
ALL VARIABLES
HOSTNAME=f52c5c2aa22b
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/usr/deleter
LAST_UPDATED=2018-02-22
SHLVL=1
HOME=/root
_=/usr/bin/env

I create and run the docker container via:

docker build -t deleter:local
docker run deleter:local --env-file stage.env

I tried --env-file stage.env, --env-file=stage.env and --env-file ./stage.env, yet I don't see anything being included nor any error thrown. I also tried it with the absolute path. The stage.env is on the same level as my Dockerfile. The env file is valid; I can source it on my local machine and access the variables there. Where is my mistake? | How to pass through environment variables in docker run through an env file?
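Restating the fix from the answer as a runnable pair of commands (the trailing dot for the build context is an assumption):

docker build -t deleter:local .
docker run --env-file stage.env deleter:local   # options must come before the image name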
The problem does not appear when importing to the LocalMachine store:

Import-Certificate -FilePath C:\myCertificateToAdd.cert -CertStoreLocation Cert:\LocalMachine\Root\

Like this, the certificate is imported for every "CurrentUser" on the machine. If this is OK, as it is for the typical Docker container, the problem is solved. | How can I add a .cer certificate inside a Docker container? It has to be done via PowerShell since the container has no interface to open mms.exe.

This is a good tutorial for .pfx certificates. Since I have a .cer file without a private key, I have to adapt it slightly.
The PowerShell command from the documentation

Import-Certificate -FilePath C:\myCertificateToAdd.cert -CertStoreLocation Cert:\CurrentUser\Root\

gets stuck whenever called. | Add SSL Certificate to Windows Docker Container
After a bit of research I found out that the WordPress container sets its ports once, since it needs to save the URLs (localhost:7006) in the db because I am persisting the db data.

I ran docker-compose up once with the default port 80:80 configuration, which caused localhost:80 (or localhost) to be saved in the db. So when I changed the ports again and ran docker-compose up, I actually messed up the URLs that are stored in the mysql db container linked with my WordPress container.

I ran docker-compose down --volumes (this causes the persisted data destruction) and then changed the ports of my WordPress container in docker-compose.yml. Running docker-compose up again created my WordPress container live on port 7006 (localhost:7006).

wordpress:
depends_on:
- db
image: wordpress:4.7.1
restart: always
volumes:
- ./wp-content:/var/www/html/wp-content
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_PASSWORD: p4ssw0rd!
ports:
- 7006:80 # Expose http and https
- 8443:443
networks:
- wp_nwk

IMPORTANT: I am just playing with docker, so I don't want to save my volumes data. Anyone wanting to keep their data must not use docker-compose down --volumes.

It's running on the desired port now. | I want to map some random port on my computer, e.g. localhost:7006, to my WordPress docker container's port 80. When I change the port of WordPress from 80:80 to 7006:80 it not only stops working on localhost (port 80) but also doesn't respond on localhost:7006. The docker-compose.yml file looks like this:

version: '3'
services:
wordpress:
depends_on:
- db
image: wordpress:4.7.1
restart: always
volumes:
- ./wp-content:/var/www/html/wp-content
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_PASSWORD: p4ssw0rd!
ports:
- 80:80 # Expose http and https
- 8443:443
networks:
- wp_nwk
db:
image: mysql:5.7
restart: always
volumes:
- db_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: p4ssw0rd!
networks:
- wp_nwk
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- 7005:80
environment:
PMA_HOST: db
MYSQL_ROOT_PASSWORD: p4ssw0rd!
networks:
- wp_nwk
networks:
wp_nwk:
volumes:
db_data: | Docker: I can't map ports other than 80 to my WordPress container |
Try docker system prune --all if you don't see any containers or images with docker ps and docker images, but be careful: it removes all cache and unused containers, images and networks. docker ps -a and docker images -a show you all the containers and images, including ones that are currently not running or not in use. Check the docs if the problem persists: Clean unused docker resources | I have been using the VSCode Remote Container Plugin for some time without issue. But today when I tried to open my project the remote container failed to open with the following error:

Command failed: docker exec -w /home/vscode/.vscode-server/bin/9833dd88 24d0faab /bin/sh -c echo 34503 >.devport
rejected promise not handled within 1 second: Error: ENOSPC: no space left on device, mkdir '/home/vscode/.vscode-server/data/logs/20191209T160810

It looks like the container is out of disk space but I'm not sure how to add more. Upon further inspection I am a bit confused. When I run df from inside the container it shows that I have used 60G of disk space, but the size of my root directory is only ~9G.

$ df
Filesystem Size Used Avail Use% Mounted on
overlay 63G 61G 0 100% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda1 63G 61G 0 100% /etc/hosts
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
$ du -h --max-depth=1 /
9.2G /

What is the best way to resolve this issue? | VSCode Remote Container - Error: ENOSPC: No space left on device
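A short sketch expanding on the pruning advice above; the --volumes flag is an extra, destructive option not mentioned in the original answer:

docker system df                        # see how much space images, containers and volumes use
docker system prune --all               # remove unused containers, images and networks
docker system prune --all --volumes     # caution: additionally deletes unused volumes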
I would never recommend a hard memory limit while running a container in ECS. Plus, you cannot determine the memory for the idle state of the container, so it is better to look at some benchmarks for Nginx, while Node memory varies from application to application; poorly written code might consume more memory than a good, well-managed application.

NGINX used one worker, 15% CPU and 1MB of memory to serve 11,500 requests per second.

Benchmarks have shown NGINX is lightweight. Now, based on your traffic: EXPECTED_REQUESTS/11500 = required memory.

Memory for Node.js is really critical and totally depends on your code: if the application does not close files or requests properly, it will hit the max memory sooner than expected, so go for a memory reservation.

memoryReservation

The soft limit (in MiB) of memory to reserve for the container. When system memory is under contention, Docker attempts to keep the container memory to this soft limit; however, your container can consume more memory when needed.

For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed. (ECS memoryReservation)

So it is better not to set the hard limit, which is called memory:

The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. | I want to create a task definition in AWS ECS. How much memory and CPU do I need to run nginx in a container? And nodejs in another container? nginx - just a proxy from 80 to 3000. nodejs - simple services that call Atlas MongoDB. | How much memory and cpu nginx and nodejs in each container needs?
The way the grace period works is that the main docker process is immediately sent a SIGTERM signal, and then it is allowed a certain amount of time to exit on its own before it is more forcefully shutdown. If your app is quitting right away, it is because it quits when it gets this signal.Your app could catch the SIGTERM signal, and then quit on its own after all the open operations complete. Or it could catch the SIGTERM signal and just do nothing and wait for it to be forced down a different way. | I have a .NET Core console application running in a docker container that I am deploying through Kubernetes.
When I update the deployment image, I would like to keep the existing pod around for a while, without accepting new connections, but keep existing connections alive for a period to allow existing users to finish. Looking at the Kubernetes docs, I thought that terminationGracePeriodSeconds was the property to add, but it doesn't seem to be working. As soon as I change the image listed in the deployment, the existing pod is dropped; the grace period is not applied. Does anyone have any ideas as to what I'm doing wrong in this instance? I can't see anything in the docs. Bit from my .yml file below:

spec:
terminationGracePeriodSeconds: 60
containers:
- name: myApplication | terminationGracePeriodSeconds not |
I found a solution to my problem. I specified docker running on IP x and port y, but docker then only listens on that socket. I had to add another -H flag with the unix socket in order to listen to local requests:

sudo /usr/bin/docker daemon -H tcp://0.0.0.0:5555 -H unix:///var/run/docker.sock | I'm new to docker and want to start it in daemon mode listening on a specific IP address and port. In the documentation it is said that this can be done by writing sudo /usr/bin/docker daemon -H 0.0.0.0:5555. It then says that I can list running containers with the command docker ps. If I try this I get the following message:

Get http:///var/run/docker.sock/v1.20/containers/json?all=1: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS? Is your docker daemon up and running?

I cannot interact with it. I've searched for a solution but with no luck. Any suggestions? P.S. How can I run this daemon in the background? I tried appending an & but I'm stuck on the output till pressing ctrl+c. Thanks in advance | Correct way to start docker daemon listening to specific port
I'd think changing www-data's user id to your host user's id is a good solution, as permissions for the host user are fairly easy to set up.

# change www-data's UID inside a Dockerfile
RUN usermod -u [USERID] www-data

User id 1000 is the default for most Linux systems afaik (501 on mac); you can run id -u on the host system to find out. You could then log into the container to run symfony commands as www-data:

docker exec -it -u www-data [CONTAINER] bash
I guess passing it via--build-argto docker-compose would be the waydocker-compose build --build-arg USERID=$(id -u)...but haven't managed to access that var in the Dockerfile yet. | I have a symfony setup for docker with docker-compose which is working well except when i runcache:clearfrom console, the webserver cant access the files.I can circumvent the permission problem by uncommentingumask(0000);in console and web/app_dev.php but i would like to run symfony as recommended.What i do is spin up the containersdocker-compose upThen i enter the container. The container contains the apache, php and the code via a data volume.docker exec -i -t apache_1 /bin/bashApparently i am logged in as root then and when i runapp/console cache:clearall files in cache belong to user root. www-data as webserver user now cant access the files anymore.I also can circumvent this by logging in as www-data then the files generated by the cache:clear belong to www-data and the webserver can access them.docker exec -u www-data -i -t apache_1 /bin/bashBut this has the downside that i dont land in bash but in/usr/sbin/nologinand dont have things like bash_history and so on.Searching around i found this as part of the Dockerfile to solve the permission issue but it as no effect for me.RUN usermod -u 1000 www-dataIf i understand correct this switches the user 1000 to www-data, but as i am root when i login to the container this does not work, i assume.So why am irootwhen i login to the container and how is thisusermodsuppose to work?the docker-compose.yml:proxy:
image: jwilder/nginx-proxy:latest
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- "80:80"
elastic:
build: docker/elasticsearch
ports:
- "9200:9200"
volumes:
- data/elasticsearch:/usr/local/elasticsearch/data
apache:
build: docker/apachephp
environment:
- VIRTUAL_HOST=myapp.dev
volumes:
- ./code:/var/www/app
- ./dotfiles/.bash_history:/.bash_history
- ./logs:/var/www/app/app/logs
links:
- elastic
expose:
- "80" | symfony docker permission problems for cache files |
Start with the syntax of thedocker runcommand, which is:docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]This means if you run:DOCKER_COMMAND='bash -c "(cd build && make)"'
docker run --rm -it myImage "$DOCKER_COMMAND"You are passing the entirety of the$DOCKER_COMMANDvariable as theCOMMAND. You are asking Docker to find a file matching the namebash -c "(cd build && make)", so it should be no surprise that it fails. It doesn't have anything to do with "docker run doesn't understand the substitution". This is all related to the way your shell parses command lines before executing them.When you remove the quotes around$DOCKER_COMMAND, you end up calling it like this (I'm putting each argument on a separate line to make it obvious):docker
run
--rm
-it
myImage
bash
-c
"(cd
build
&&
make)"And that's not going to work, because bash is going to try to run the script"(cd, which should make obvious the reason for theunexpected EOF while looking for matching"'error. Bash's-c` option only takes a single argument, but because of the way shell expansion works it's getting 4.You could do it this way:DOCKER_COMMAND='cd build && make'
docker run --rm -it myImage bash -c "$DOCKER_COMMAND"(I've removed the parentheses around your command because they don't do anything the way you're using them.)This way, you're callingdocker runwith a command ofbash, and you're giving bash's-coption a single argument (the contents of the$DOCKER_COMMANDvariable). | The issue I'm facing is how to pass a command with arguments todocker run. The problem is thatdocker rundoes not take command plus arguments as a single string. They need to be provided as individual first-class arguments todocker run, such as:#!/bin/bash
docker run --rm -it myImage bash -c "(cd build && make)"However consider the command and argument as the value of a variable:#!/bin/bash -x
DOCKER_COMMAND='bash -c "(cd build && make)"'
docker run --rm -it myImage "$DOCKER_COMMAND"Unfortunately this doesn't work becausedocker rundoesn't understand the substitution:+ docker run --rm -it myImage 'bash -c "(cd build && make)"'
docker: Error response from daemon: oci runtime error: exec: "bash -c \"(cd build && make)\"": stat bash -c "(cd build && make)": no such file or directory.A slight change, removing the quotation ofDOCKER_COMMAND:#!/bin/bash -x
DOCKER_COMMAND='bash -c "(cd build && make)"'
docker run --rm -it myImage $DOCKER_COMMANDResults in:+ docker run --rm -it myImage 'bash -c "(cd build && make)"'
build: -c: line 0: unexpected EOF while looking for matching `"'
build: -c: line 1: syntax error: unexpected end of fileHow can I expand a string from a variable so that it is passed as a distinct command and arguments todocker runinside a script? | Passing a command with arguments as a string to docker run |
I got it working by adding a container_name for the db container. My db container had a different name (app_name_db_1) and I was connecting to a container named db. After giving the hard-coded container_name (db), it started working. | I'm running a ruby on rails application in a docker container. I want to create and then restore the database dump in a postgres container.
But I'mBelow is what I've done so far:1)Added bash script in/docker-entrypoint-initdb.dfolder. Script is just to create database:psql -U docker -d postgres -c 'create database dbname;'RESULT:Database created but rails server exited with code 0. Error:web_1 exited with code 02)Added script to be executed beforedocker-compose up.# Run docker db container
echo "Running db container"
docker-compose run -d db
# Sleep for 10 sec so that container have time to run
echo "Sleep for 10 sec"
sleep 10
echo 'Copying db_dump.gz to db container'
docker cp db_dump/db_dump.gz $(docker-compose ps -q db):/
# Create database `dbname`
echo 'Creating database `dbname`'
docker exec -i $(docker-compose ps -q db) psql -U docker -d postgres -c 'create database dbname;'
echo 'importing database `dbname`'
docker exec -i $(docker-compose ps -q db) bash -c "gunzip -c /db_dump.gz | psql -U postgres dbname"RESULT: Database created and restored data. But another container runs while running web application server usingdocker-compose up.docker--compose.yml:version: '2'
services:
db:
image: postgres
environment:
- POSTGRES_PASSWORD=docker
- POSTGRES_USER=docker
web:
build: .
command: bundle exec rails s -p 3000 -b '0.0.0.0' -d
image: uname/application
links:
- db
ports:
- "3000:3000"
depends_on:
- db
tty: trueCan some one please help to create and import database?EDIT:I've tried one more approach by addingPOSTGRES_DB=db_nameenvironment variable indocker-compose.ymlfile so that database will be created and after running the application (docker-compose up), I'll import the database. But getting an error:web_1 exited with code 0.I'm confused why I'm getting this error (in first and third approach), seems to be something is messed up indocker-composefile. | Error: Postgres database import in docker container |
First guess is that the python program is explicitly binding to the loopback IP address 127.0.0.1, which disallows any remote connections. Check the docs for that python mock tornado server for something like --bind=0.0.0.0 and adjust accordingly. You can confirm if this is the case by doing a docker exec and, in the container, running netstat -ntlp | grep 8888 and seeing which IP is bound. If it's 127.0.0.1, that confirms that is indeed the problem. | So here is the situation, I have a container running built with this dockerfile:

FROM python:2-onbuild
EXPOSE 8888
CMD [ "nohup", "mock-server", "--dir=/usr/src/app", "&" ]I run it with this command:docker build -t mock_server .
docker run -d -p 8888:8888 --name mocky mock_serverI am using it on a mac so boot2docker is going and I hit it from the boot2docker ip on 8888. I tried boot2docker ssh and hitting the container from there. I randocker exec -it mocky bashandps auxshows:USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.9 113316 18576 ? Ss 15:16 0:00 /usr/local/bin/python2 /usr/local/bin/mock-server --dir=/usr/src/app &
root 5 1.6 0.1 21916 3440 ? Ss 17:52 0:00 bash
root 9 0.0 0.1 19180 2404 ? R+ 17:53 0:00 ps auxWhen I cURL it:curl -I -XGET localhost:8888/__manage
HTTP/1.1 200 OK
Content-Length: 183108
Set-Cookie: flash_msg_success=; expires=Thu, 04 Sep 2014 17:54:58 GMT; Path=/
Set-Cookie: flash_msg_error=; expires=Thu, 04 Sep 2014 17:54:58 GMT; Path=/
Server: TornadoServer/4.2.1
Etag: "efdb5b362491b8e4b8347b97ccafeca02db8d27d"
Date: Fri, 04 Sep 2015 17:54:58 GMT
Content-Type: text/html; charset=UTF-8So I the app is running inside the container but I can't get anything from outside it. What can be done here? | Docker container published ports not accessible? |
This is a rather broadly asked question, so I will (and can) answer only in a rather broad manner.There are a lot of key concepts that have changed. These are the most important ones and you'll need some time to get into it, but they are a big improvement to OpenShift v2.:Cartridges vs. Docker ContainersGears vs. Kubernetes PodsBroker vs. Kubernetes MasterRelease ofRed Hat Enterprise Linux Atomic
HostWhen you'll study the links below you will understand, that (really exaggerated) OpenShift v3 has basically nothing to do with v2 besides the name, the logo and the PaaS focus. But it's still a great tool and IMO has set new standards in the PaaS-world. (No, I don't work for RedHat ;)What's New:https://docs.openshift.com/enterprise/3.0/whats_new/overview.htmlhttps://docs.openshift.com/enterprise/3.0/architecture/overview.htmlFor starters; Docker & Kubernetes:https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/Pretty new:Creating a Kubernetes Cluster to Run Docker Formatted Container ImagesEDIT 2016_06_30:
Sorry for necro'ing this old post, but I wanted to add this quick, fun andveryinformative video about Kubernetes:https://youtu.be/4ht22ReBjno | Closed. This question needs to be morefocused. It is not currently accepting answers.Want to improve this question?Update the question so it focuses on one problem only byediting this post.Closed8 years ago.Improve this questionI'm searching for a main difference between OpenShift V3 and V2.
Is OpenShift V2 working like this?:https://www.openshift.com/walkthrough/how-it-worksAnd how are Docker and Kubernetes working in V3?Can someone give me a clear explanation about the build-up of OpenShift V2 and V3 | OpenShift V3 vs. OpenShift V2 [closed] |
Mount a volume in your container mapped to the desired path on your host:

docker run -d -v /host/path:/python_app/output your_docker_image

Where /python_app/output is the path inside the container where your app is writing the pdf file. Note that /host/path should have enough permissions:

chmod 777 /host/path | I have a python app running in a docker container and it generates a pdf file. I want to store the generated pdf file in a given path on the host machine. I am not sure how this can be achieved. Any ideas? | Save a file generated by app running on docker to a given path in the host machine
When a Docker container is run, it runs theENTRYPOINT(only), passing theCMDas command-line parameters, and when theENTRYPOINTcompletes the container exits. In the Dockerfile theENTRYPOINThas to be JSON-array syntax for it to be able to see theCMDarguments, and the script itself needs to actually run theCMD, typically with a line likeexec "$@".The single simplest thing you can do to clean this up is not to try to go back and forth between environment variables and positional parameters. TheENTRYPOINTscript will be able to directly read theENVvariables you set in the Dockerfile (or override withdocker run -eoptions). So if you delete the first lines of the script that set these variables from positional parameters, and make sure to run theCMD#!/bin/sh
# delete the lines that set CONTAINER_NAME et al.
rm -f /etc/nginx/sites-enabled/default
sed -ri 's@CONTAINER_NAME@'${CONTAINER_NAME}'@' /etc/nginx/sites-available/ssl
...
# and add this at the end
exec "$@"and then change the Dockerfile to not pass positional parameters but do use JSON-array syntax forENTRYPOINTENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]that should get you off the ground.It's worth considering how much of this you actually need to be configurable. For instance, would you ever need a path different from the default/etc/nginx/certsinside the isolated container filesystem space? Usually with the standardnginxDocker Hub image you work with it by injecting an entire complete configuration file and if you choose to do that it simplifies your Docker setup.Other generic suggestions: remove theVOLUMEdeclarations (they potentially cause confusing behavior later in the Dockerfile and leak anonymous volumes and aren't otherwise necessary); don't make executable files world-writable (chmod 0755, not0777);RUN apt-get update && apt-get installin the same Dockerfile command. | I write shell script file and use this with docker ENTRYPOINT
but when I run docker image, it just stops without any error log because of entrypoint code linemy DockerfileFROM ubuntu:16.04
MAINTAINER limtaegeun <[email protected]>
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
ENV CONTAINER_NAME nodejs
ENV SERVER_NAME myserver.com
ENV PEM_PATH /etc/nginx/certs/cert.pem
ENV KEY_PATH /etc/nginx/certs/cert.key
WORKDIR /etc/nginx
ADD ./sites-available/ssl /etc/nginx/sites-available/ssl
ADD ./docker-entrypoint.sh /etc/nginx/docker-entrypoint.sh
RUN chmod 777 /etc/nginx/docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT /etc/nginx/docker-entrypoint.sh ${CONTAINER_NAME} ${SERVER_NAME} ${PEM_PATH} ${KEY_PATH}
CMD ["nginx"]docker-entrypoint.sh#!/bin/sh
CONTAINER_NAME=$1
SERVER_NAME=$2
PEM_PATH=$3
KEY_PATH=$4
rm -f /etc/nginx/sites-enabled/default
sed -ri 's@CONTAINER_NAME@'${CONTAINER_NAME}'@' /etc/nginx/sites-available/ssl
sed -ri 's@SERVER_NAME@'${SERVER_NAME}'@' /etc/nginx/sites-available/ssl
sed -ri 's@PEM_PATH@'${PEM_PATH}'@' /etc/nginx/sites-available/ssl
sed -ri 's@KEY_PATH@'${KEY_PATH}'@' /etc/nginx/sites-available/ssl
# cp -f sites-available/ssl sites-available/default
ln -s /etc/nginx/sites-available/ssl /etc/nginx/sites-enabled/defaultmy docker run commanddocker run -v /home/ubuntu/Docker-nginx-cloudflare-ssl-proxy/certs:/etc/nginx/certs \
--name nginx-ssl -p 443:443 -p 80:80 --network nginx-net --rm -d nginx-cloudfare-ssl-proxywhat is the problem?? | How to use docker ENTRYPOINT with shell script file combine parameter |
TLDR:-tshould not be used unnecessarily.I think for your pg_dump, the-tis corrupting the data written to db.dump. For that matter, the-iis also redundant since pg_dump does not need to read from stdin.For your pg_restore, you need neither option. If you redirect stdin from outside the container, then you need-i.I had the same problem and solved it by using neither-inor-tfor the pg_dump.I think the following fixed pg_dump should work for your case:$ docker exec my_postgres_container pg_dump -Fc -U postgres -d postgress > db.dumpIncidentally for the pg_restore, you don't need to copy the dump file to the container. You can just redirect stdin from the dump file on the host:$ docker exec -i my_postgres_container pg_restore -Fc -c -U postgres -d postgres < db.dump | I am testing a backup/restore procedure for my postgres DB inside a docker container.I dump my db like this:$ docker exec -ti my_postgres_container pg_dump -Fc -U postgres > db.dumpAfterwards, I try to restore it like this:$ docker cp db.dump my_postgres_container:/db.dump
$ docker exec -ti my_postgres_container pg_restore -U postgres -c -d postgres db.dumpThe command returns without output or errors, but nothing happens.So instead, I tried to restore it manually like this:$ docker cp db.dump my_postgres_container:/db.dump
$ docker exec -ti my_postgres_container bash
root@fdaad610bee3:/# pg_restore -U postgres -c -d postgres db.dump
Segmentation fault (core dumped)Why is pg_restore segfaulting when trying to read my DB dump? | Why is pg_restore segfaulting in Docker? |
With recent versions of Docker you can see the space used with:

docker system df

and prune it with:

docker system prune

The above command combines the prune command that exists for volumes, containers, images and networks:

docker volume prune
docker container prune
docker image prune
docker network prune

Each command has a --help option documenting a -f (--force) option to avoid asking you questions. It must be used with care.

On older versions of Docker I ran this script:

#!/bin/bash
# Remove dead containers (and their volumes)
docker ps -f status=dead --format '{{ .ID }}' | xargs -r docker rm -v
# Remove dangling volumes
docker volume ls -qf dangling=true | xargs -r docker volume rm
# Remove untagged ("") images
docker images --digests --format '{{.Repository}}:{{.Tag}}@{{.Digest}}' | sed -rne 's/([^>]):@/\1@/p' | xargs -r docker rmi
# Remove dangling images
docker images -qf dangling=true | xargs -r docker rmi
# Remove temporary files
rm -f /var/lib/docker/tmp/* | I have a problem with storage. The folder /var/lib/docker/devicemapper/ is taking 50% of my storage. In the folder /var/lib/docker/devicemapper/mnt, I have many empty folders. How can I properly clean docker devicemapper and remove all unused mappings? | How to clean docker devicemapper folder properly ?
You copied the contents of the .git folder into the /dist directory, not the .git folder itself. If you want to copy the folder, specify the target as the folder you want to create:

COPY .git/ ./.git/ | We have got a node.js/typescript project and now I am supposed to provide data to our sonarqube analysis.
RUN apk --update add openjdk8-jre
WORKDIR /dist
COPY package*.json ./
RUN npm install
COPY src/ ./src/
COPY test/ ./test/
COPY ts*.json ./
COPY sonar-project.properties ./
COPY test.sh /
COPY .git/ ./
RUN chmod +x /test.sh
CMD ["sh", "/test.sh"]I have read, that the .dockerignore file can be used to exclude files or folder but we are not using such a file and I also tried creating one that only contains node_modules, but that also did not work.Has anybody got any idea how to include it or any tipp what to google? I only find "how to exclude .git from the docker container" but reverting those tipps did not help so far.Edit: To make it even stranger usingCOPY . ./includes the .git folder into the image. | Why can't I copy my .git folder into my docker container |
You can't. From the v1 API specs:"ports": [
{
"name": "string",
"hostPort": 0,
"containerPort": 0,
"protocol": "string",
"hostIP": "string"
}
]Each port is uniquely identified and exposing host ports would be an anti-pattern in Kubernetes. | In docker, I can expose a range of ports using "-p 65000-65050:65000-65050". How do I achieve this for kubernetes in a pod.yml or replication-controller.yml? | How to allow a range of ports in Kubernetes in containerPort variable? |
I got it to work using the setup from: https://github.com/jupyter/docker-stacks/tree/master/minimal-notebook

The trick was to install tini and put the following code into a start-notebook.sh script:

#!/bin/bash
exec jupyter notebook &> /dev/null &this is than added to the path with:COPY start-notebook.sh /usr/local/bin/andRUN chmod +x /usr/local/bin/start-notebook.shThen I could setCMD ["start-notebook.sh"]to start up the container with jupyter running in the background on start. | I am trying to run a jupyter notebook in the background without printing anything to the console. I found this solution in aquestionfor bash:jupyter notebook &> /dev/null &But I am running jupyter in a docker container and want it to start in the background viaCMD. How can I do the same in sh? | Run Jupyter Notebook in the Background on Docker |
Each step of the Dockerfile is run in it's own container that is discarded when that step is done, and volumes are discarded when the last (in this case only) container that uses them is deleted after it's command finishes.This makes volumes poorly suited to use in Dockerfilesbecause they loose their contents half way through. Docker files are intended to be able to be run anywhere, and if they used Volumes that persisted it would make this harder. On the other hand if you really want this, just back the volume with a directory on the host.PS: Initializing the host's data directory is best done outside of the Docker file.Last time I needed this I left this step out of the docker file because the idea of this step is to prepare thehostto run the Image produced by this Dockerfile. Then I made a container with docker run and within that container I ran the usual DB setup stuff.docker run -v /var/lib/mysql:/raid/.../mysql ...
/usr/bin/mysql_install_db
mysql_secure_installationNow when this container is moved to a new host, that Data dir can either be brought with it, or created using the same process on that host. Or if, as in my example, you wanted another mysql db for some other application you don't have to repeat the container creation.The important idea is to keep the container creation and host setup seperate. | Please consider the following Dockerfile:FROM phusion/baseimage
VOLUME ["/data"]
RUN touch /data/HELLO
RUN ls -ls /dataProblem: "/data" directory does not contain "HELLO" file. Moreover, any other attempts to write to volume directory (via echo, mv, cp, ...) are unsuccessful - the directory is always empty. No error messages shown.I could not find anything in documentation or on stackoverflow regarding this problem.Is this something well-known or new?docker versionreturns:Client version: 1.2.0
Client API version: 1.14
Go version (client): go1.3.1
Git commit (client): fa7b24f
OS/Arch (client): linux/amd64
Server version: 1.2.0
Server API version: 1.14
Go version (server): go1.3.1
Git commit (server): fa7b24f | Writing to docker volume from Dockerfile does not work |
Docker Swarm and Docker Compose are fundamentally different animals. Compose is a build tool that lets you define and configure a group of related containers, whereas swarm is an orchestration tool that manages multiple docker engines in a way that lets you treat them (somewhat) as a single unit. Swarm exposes an API that is mostly compatible with the Docker Remote API, which allows existing applications to use Swarm to scale horizontally without having to completely overhaul the existing interface to the container engine.That said, much of the functionality in Docker Compose that overlaps with Docker Swarm has been added incrementally. Compose has grown over time, and the distinction between the two has narrowed a bit. Swarm was eventually integrated into the Docker engine, and Docker Stack was introduced, allowingcompose.ymlfiles to be read directly by Docker, without using Compose.So the real question might be:what is the difference between docker compose and docker stack?Not a whole lot. Compose is actually a separate project, written in Python that uses the Docker API under the hood. Stack does much of the same things as Compose, but is integrated into Docker. Stack also wants pre-built images, while compose will handle those image builds for you, which makes compose very handy for development.What you are dealing with might be a product of a time when these 2 tools were a lot more distinct. Docker Swarm is part of Docker, and it allows for easy scaling if needed (even if you don't need it now, it might be good down the road). On the other hand, Compose (in my opinion anyway) is much more useful for development situations where you are making frequent tweaks to your images, and rebuilding. | Is there a reason to usedocker-swarminstead ofdocker-composefor deploying a single host in production?I'm currently rewriting an existing application. My predecessors set up the application using docker-swarm. But I do not understand why: the application will only consist of a single host running a couple of services. These services will only supply some local information on the customer network via a REST-Api to a kubernetes cluster (so no real load or reason to add additional hosts).I looked through the Docker website and could not find a reason to usedocker-swarmto deploy a single host, apart from testing a deployment on a single host dev environment.Are there benefits of usingdocker-swarmcompared todocker-composeregarding deployment, networking, etc...? | docker-swarm vs.docker-compose on single host in production |
If you're like myself and followed the examples on the Actix website, you might have written something like this, or some variation thereof:

fn main() {
HttpServer::new(|| {
App::new()
.route("/", web::get().to(index))
.route("/again", web::get().to(index2))
})
.bind("127.0.0.1:8088")
.unwrap()
.run()
.unwrap();
}

The issue here is that you're binding to a specific IP, rather than using 0.0.0.0 to bind to all IPs on the host container. I had the same issue as you and solved it by changing my code to:

fn main() {
HttpServer::new(|| {
App::new()
.route("/", web::get().to(index))
.route("/again", web::get().to(index2))
})
.bind("0.0.0.0:8088")
.unwrap()
.run()
.unwrap();
}

This might not be the issue for you; I couldn't know without seeing the code that runs the server. | I'm trying to make a docker container for my Rust programme. Let's look at the Dockerfile:

FROM debian
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install git curl g++ build-essential
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y
WORKDIR /usr/src/app
RUN git clone https://github.com/unegare/rust-actix-rest.git
RUN ["/bin/bash", "-c", "source $HOME/.cargo/env; cd ./rust-actix-rest/; cargo build --release; mkdir uploaded"]
EXPOSE 8080
ENTRYPOINT ["/bin/bash", "-c", "echo 'Hello there!'; source $HOME/.cargo/env; cd ./rust-actix-rest/; cargo run --release"]cmd to run:docker run -it -p 8080:8080 rust_rest_api/devbut curl from outsidecurl -i -X POST -F files[][email protected]127.0.0.1:8080/uploadresults intocurl: (56) Recv failure: Соединение разорвано другой сторонойi.e. refused by the other side of the channelbut inside the container:root@43598d5d9e85:/usr/src/app# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
actix_003 6 root 3u IPv4 319026 0t0 TCP localhost:http-alt (LISTEN)but running the programme without docker works properly and processes the same request from curl adequately.and inside the container:root@43598d5d9e85:/usr/src/app# curl -i -X POST -F files[][email protected]127.0.0.1:8080/upload
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
content-length: 70
content-type: application/json
date: Wed, 24 Jul 2019 08:00:54 GMT
{"keys":["uploaded/5nU1nHznvKRGbkQaWAGJKpLSG4nSAYfzCdgMxcx4U2mF.jpg"]}What is the problem from outside? | Rust actix_web inside docker isn't attainable, why? |
Try to delete all stopped containers:

docker rm -f $(docker ps -a -q)

then delete the volume. You can see stopped containers using docker ps -a; using docker ps will return only running containers.

EDIT: since you are on Windows, list stopped containers:

docker ps -a

then delete the stopped containers (you need to replace CONTAINER_ID with your real ones):

docker rm -f CONTAINER_ID_1 CONTAINER_ID_2 | When I'm trying to remove a volume I get this error:

Error response from daemon: remove myvol: volume is in use -
[2a177cb40a405db9f245fccd776dcdeacc d266ad624daf7cff510c9a1a1716fe]

But both docker ps and docker container ls return an empty list. I've tried restarting the docker daemon. I use Docker Toolbox on Windows 10. | Docker: Error response from daemon: remove myvol: volume is in use
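A sketch of the cleanup sequence implied by the answer above ("myvol" is the volume name from the question; removing all containers is destructive):

docker ps -a                        # stopped containers can still hold a reference to the volume
docker rm -f $(docker ps -a -q)     # remove them all
docker volume rm myvol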
As far as I know it's not possible to access things outside of your build context. You might have some luck by mixing the dockerfile directive with the context directive in your compose file in the root dir of your project, as follows:

build:
context: .
dockerfile: A/Dockerfile

You may wish to include a .dockerignore in the project root dir to prevent the entire project being sent to the docker daemon, resulting in potentially much slower builds. | In my Maven project I have the following structure:

docker/
docker-compose.yml
A/
Dockerfile
B/
Dockerfile
src/
target/
foo.warIn A's Dockerfile I need access to war in/targetfolder with the following command:COPY ../../target/foo.war /usr/local/tomcat/webapps/foo.warwhen I rundocker-compose uptt gives me errorfailed to build: COPY failed: Forbidden path outside the build
context: ../../target/foo.wardocker-compose.ymlversion: '3.6'
services:
fooA:
build: ./docker/A
ports:
- "8080:8080"
depends_on:
- fooB
fooB:
build: ./docker/fooB
ports:
- "5433:5433"Can you tell me how to solve this? I don't want copy war file manually after every project build. | Access to outside of context in Dockerfile |
Actually, I found that if I comment out the full Environment line it works for the private registry but not for docker hub anymore (of course, no more proxy). And here is the final solution that works for both the private registry and the docker hub public registry: in the NO_PROXY environment variable, only the domain name should be used, not the FQDN (including the "archive." hostname prefix). Here is my config file now:

[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:8000/" "NO_PROXY=localhost,127.0.0.1,docker-registry.mycompany.com"Note that there is no more "archive." nor "portus." prefix in NO_PROXY anymore, just the domain name starting from "docker-registry".As I saw the docker login command line including "archive." prefix, it was misleading and I thought it had to be in the NO_PROXY environment variable... but no, it should not.Hope it helps someone. I wish I found the answer on google before, but I didn't so I'm just posting it here, it might help someone. | We have a private docker registry at work (based on portus, but whatever) and I try to push an image to this registry but it doesn't work. It fails with the following error message:$ sudo docker login archive.docker-registry.mycompany.com
Username: mylogin
Password:
Error response from daemon: Get https://archive.docker-registry.mycompany.com/v1/users/:
net/http: TLS handshake timeout
$I already configured the proxy in /etc/systemd/system/docker.service.d/http-proxy.conf (my docker is on centos 7):[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:8000/" "NO_PROXY=localhost,127.0.0.1,archive.docker-registry.mycompany.com"but it still fails.I tried to use HTTPS_PROXY instead of HTTP_PROXY using either http or https in url, I tried to download certificate manually and configure them in system (update-ca-certs) but it keeps failing.When I changed this configuration file, as root, I executed:# systemctl daemon-reload
# systemctl restart docker | docker login behind proxy on private registry gives TLS handshake timeout |
The only time you need something like supervisord (or another process supervisor) in a Docker container is if you need to start up multiple independent processes inside the container when it starts. For example, if you needed to start both nginx and gunicorn in the same container, you would need to investigate some sort of process supervisor. However, a much more common solution would be to place these two services in two separate containers. A tool like docker-compose helps manage multi-container applications. If a container exits because the main process exits, Docker will restart that container if you configured a restart policy when you first started it (e.g., via docker run --restart=always ...). | I'm running django with gunicorn inside docker; my entry point for docker is:
CMD ["gunicorn", "myapp.wsgi"]
Assuming there is already a process that runs the docker container when the system starts and restarts it when it stops, do I even need to use supervisord? If gunicorn crashes, won't it crash the container and then restart? | Is supervisord needed for docker+gunicorn+nginx?
It worked for me like this. Create a new MariaDB container:
docker container run \
--name sql-maria \
-e MYSQL_ROOT_PASSWORD=12345 \
-e MYSQL_USER=username \
-e MYSQL_PASSWORD=12345 \
-e MYSQL_DATABASE=dbname \
-p 3306:3306 \
-d mariadb:10Watch the logs and wait for mariadb server is updocker container logs -f sql-mariaThe tail of the log should look something like this2020-02-04 20:02:44 0 [Note] mysqld: ready for connections.Use a client of your choice to connect to mariadb. I'm using mysql client heremysql -h 127.0.0.1 -p -u username dbnameIf you are on a unix-based system it is mandatory to use the loopback address 127.0.0.1 instead of localhost | I've created a docker container containing an instance of mariadb, but i cannot access to the database from my phisical machine:I've got the ip address from docker inspect and the port from docker ps but Sequel Pro gave me the connection failed message (same thing with Visual Studio Code). Obviously from inside the docker container I can connect myself to the database engine.Where am i wrong? Thanks so much to everyone! :)[EDIT]Thanks to all comments...if I try to expose the port, the container doesn't run :/ | how to remote access to mariadb on docker? |
Following @Alex Blex's answer: it works when you serve on all interfaces.
php bin/console server:run 0.0.0.0:8000 | I have a php docker container in which my symfony project lives. Here is my docker-compose.yml:
php-fpm:
build: ./php
container_name: php-fpm
links:
- db
ports:
- 9000:9000
- 8448:8448
- 8000:8000
working_dir: /var/www/html/
volumes:
- ../app:/var/www/html
volumes_from:
- data
tty: true
env_file:
- ./docker.env
entrypoint: /entrypoint.shI want to launch my symfony project with this command:php bin/console server:run localhost:8000But it's not working when I want to access the url. I have this error message:The localhost page isn’t workinglocalhost didn’t send any data.How can I fix that?PS: I'm using docker for macAndphp bin/console -vvv server:run localhost:8000outputs:[2016-08-06 14:09:53] php.DEBUG: fsockopen(): unable to connect to
localhost:8000 (Connection refused)
{"type":2,"file":"/var/www/html/symfony-test/vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Command/ServerCommand.php","line":59,"level":28928}[OK] Server running on http://localhost:8000// Quit the server with CONTROL-C.RUN '/usr/local/bin/php' '-S' 'localhost:8000'
'/var/www/html/symfony-test/vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Resources/config/router_dev.php' | Symfony server:run in php Docker container |
gcloud init is a wrapper command which runs:
gcloud config configurations create MY_CONFIG
gcloud config configurations activate MY_CONFIG
gcloud auth login
gcloud config set project MY_PROJECTwhich allows user to choose configuration, login (via browser) and choose a project.For your use case you probably do not want to usegcloud init, instead you should download service account key file fromhttps://console.cloud.google.com/iam-admin/serviceaccounts/project?project=MY_PROJECT, make it accessible inside docker container and activate it viagcloud auth activate-service-account --key-file my_service_account.json
gcloud config set project MY_PROJECT | I have made a Dockerfile for deploying my node.js application into google container engine .It looks like as belowFROM node:0.12
COPY google-cloud-sdk /google-cloud-sdk
RUN /google-cloud-sdk/bin/gcloud init
COPY bpe /bpe
CMD cd /bpe;npm startI should use gcloud init inside Dockerfile because my node.js application is using gcloud-node module for creating buckets in GCS .
When i am using the above dockerfile and doing docker built it is failing with following errorssudo docker build -t gcr.io/[PROJECT_ID]/test-node:v1 .
Sending build context to Docker daemon 489.3 MB
Sending build context to Docker daemon
Step 0 : FROM node:0.12
---> 57ef47f6c658
Step 1 : COPY google-cloud-sdk /google-cloud-sdk
---> f102b82812f5
Removing intermediate container 4433b0f3627f
Step 2 : RUN /google-cloud-sdk/bin/gcloud init
---> Running in 21aead97cf65
Welcome! This command will take you through the configuration of gcloud.
Your current configuration has been set to: [default]
To continue, you must log in. Would you like to log in (Y/n)?
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute&access_type=offline
ERROR: There was a problem with web authentication.
ERROR: (gcloud.auth.login) invalid_grant
ERROR: (gcloud.init) Failed command: [auth login --force --brief] with exit code [1]I done it working by hard coding the authentication key inside google-cloud-sdk source code.Please let me know the proper way to solve this issue . | not able to perform gcloud init inside dockerfile |
It seems you are running Docker for Windows using "Windows Containers". If you switch to "Linux containers" you'll see the "Shared Drives" option. Take a look at this video. According to the Docker documentation: shared drives for Windows containers are not implemented. Volume mounting requires shared drives for Linux containers (not for
Windows containers).
Update: since 2018, Docker for Desktop uses a new UI. I recorded a new video showing how to solve this problem.
Update: if you are using WSL2 you will experience the same problem. Take a look at this video. | How do I set the shared drives in Docker for Windows? I am using the latest version 18, Stable and Edge. My settings screen is shown below; it's missing some options like Shared Drives, Advanced and Network, which are shown in the second image. Why am I missing these options?
My settings:
Screen from a website: | How to set the shared drives in Docker for Windows?
Docker does not use LXC (not since Docker 0.9) but libcontainer (now runc), a built-in execution driver which manipulates namespaces, control groups, capabilities, apparmor profiles, network interfaces and firewalling rules – all in a consistent and predictable way, and without depending on LXC or any other userland package.
A docker image represents a set of files which will run as a container in their own memory, disk and user space, while accessing the host kernel. This differs from a VM, which does not access the host kernel but includes its own hardware/software stack through its hypervisor. A container just has to set limits (disk, memory, cpu) in the host; an actual VM has to build an entire new host.
That docker image (group of files) can be anything, as long as:
it does not depend on host libraries (since it is isolated in its own disk space, it does not have access to host files, unless volumes are mounted)
it does only system calls: see "What is meant by shared kernel in Docker?"
That means an image can be anything: another linux distro, or even a single executable file. Any executable compiled in Go (https://golang.org/), for instance, could be packaged in its own docker image without any linux distro:
FROM scratch
COPY my_go_exe /
ENTRYPOINT /my_go_exe
scratch is the "empty" image, and a Go executable is statically linked, so it is self-contained and only depends on system calls to the kernel. | As I understand it, a Docker image (and consequently, a container) can be instantiated from different Linux distributions, such as Ubuntu, CentOS and others. Let's say my Docker host is running standard Ubuntu 14.04. What happens if I use a container that is not based on the same Linux distribution? Not 14.04? Not Ubuntu (or any other Debian-based distro)? What are the disadvantages of using different base images for the images you use? (Let's say I use Image A that uses Ubuntu as a base image, Image B that uses Debian as a base image and Image C that uses CentOS as a base image.) Bonus question: how can I tell what base image was used for an image if the developer didn't specify it in the Docker Hub description? | What happens when the Linux distributions of the Docker host and the Docker image are different?
I had the same error message. For me the fix was to cross-build for the right architecture, in my case amd64, like this:
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o [OUTPUT] . | I am attempting to create a container with my Go binary in it, for use as a database migrator. If I run the binary it works perfectly; however, I am struggling to put it into a container and run it in my docker-compose stack. Below is my Dockerfile.
FROM golang:1.11 AS build_base
WORKDIR /app
ENV GO111MODULE=on
# We want to populate the module cache based on the go.{mod,sum} files.
COPY go.mod .
COPY go.sum .
RUN go mod download
FROM build_base AS binary_builder
# Here we copy the rest of the source code
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
#In this last stage, we start from a fresh Alpine image, to reduce the image size and not ship the Go compiler in our production artifacts.
FROM alpine AS database-migrator
# We add the certificates to be able to verify remote instances
RUN apk add ca-certificates
COPY --from=binary_builder /app /app
ENTRYPOINT ["/app/binary-name"]When I run my docker-compose stack the MySQL database gets setup correctly but I receive this error in the logs for my database migrator container.data-migrator_1 | standard_init_linux.go:190: exec user process caused "exec format error" | standard_init_linux.go:190: exec user process caused "exec format error" when running Go binary |
You've set:
dockerfile: .
Just try to use a relative path to your Dockerfile from the set context:
context: ../../
dockerfile: ./folder1/folder2/Dockerfile | I got a docker-compose file in which I want to set a context and docker file to look something like this:build:
context:
dockerfile: For now my file is in the root folder so its simply:build:
context: .
dockerfile: .This way it does work.The structure of the project is something like this:./
- folder1/
- folder2/
docker-compose.yaml
DockerFileI want to copy files as part of the commands in the DockerFile and I want the paths to be relative to the root folder of the project.How with this project structure do I set the context to be the root folder of the project? I tried doingcontext: ../../but I then got an error:Error response from daemon: unexpected error reading Dockerfile: read (path): is a directoryHow do I set the context correctly? | Setting context in docker-compose file for a parent folder |
In order to start the container after a reboot you need to add the --restart=always property to your container start script. For example:
docker run -d -p 80:5000 --restart=always image_name | I have the following systemd script:
[Unit]
Description=Hub docker container
After=docker.service
[Service]
User=root
ExecStart=/home/hub/hub.sh
ExecStop=/bin/docker stop hub
ExecStopPost=/bin/docker rm hub
[Install]
WantedBy=multi-user.targetRunning the command:systemctl start/stop hubworks fine. I also created the symlink by usingsystemctl enable hub. Why doesn't my service start up after I reboot the entire laptop? I followed the docker guide so that Docker starts up on reboot, but for some reason my container doesn't start up. Am I missing a field in my script?The command I am using my ExecStart, "/home/hub/hub.sh" script is:docker run --net=host --restart=always --name hub -t hubAfter reboot I get the following when I type systemctl status hub:● hub.service - Hub docker container
Loaded: loaded (/etc/systemd/system/hub.service; enabled; vendor preset: disabled)
Active: inactive (dead) | Docker container doesn't start after reboot with enabling systemd script |
Docker cleanup job is rather non-existing and you are basically in charge of doing it yourself. There are ways of doing that as pointed out inthis blog-post, yet I rather use third-party scripts, e.g.:docker-cleanto clean up some of the mess docker leaves behind. | I have few issues with storage spaces. I deleted few big files such as log files (after find unix of big files).The problem is that delete manually some file of Docker (in /var/lib/docker/...). After deletion of Docker files, I can see that the space left does not change. Docker does not release space.I restart the service Docker and I the problem persit.How can I force Docker to release space from (devicemapper, volume, images, ...) ? | How to Force Docker to release storage space after manual delete of file in volumes and containers? |
Solved it.By running the command using the -i and -t parameters you can be allowed to enter the password. using all 3 methods.so basicallydocker run -i -t | I am trying to create a docker image for my java application. At startup this application needs to be given a password (currently via console).I tried several methods of obtaining input however they have all failed. Is this a limitation of docker and if so is there a workaround?For this snippet:Console console = System.console();
if(console == null){
System.out.println("console is null!!");
} else {
System.out.println("Input password: ");
char[] password = console.readPassword("Pass: ");
}System.console()is returningnull.For this snippet:System.out.println("Creating InputStreamReader");
InputStreamReader s = new InputStreamReader(System.in);
System.out.println("Creating BufferedReader");
BufferedReader r = new BufferedReader(s);
System.out.println("Input password: ");
String password = r.readLine();
System.out.println("Password: "+password);the input is automatically skipped, (resulting in the String password to be null) with the program continuing execution as if there was no input requested. (password isnull)For this snippet:Scanner s = new Scanner(System.in);
System.out.println("Input password: ");
String password = s.next();I getException in thread "main" java.util.NoSuchElementException
at java.util.Scanner.throwFor(Scanner.java:907)
at java.util.Scanner.next(Scanner.java:1416)
at com.docker.test.DockerTest.testScanner(DockerTest.java:49)
etc...I am running the program from within my image usingdocker run test/plaintest1my dockerfile is as followsFROM centos
RUN yum install -y java-1.7.0-openjdk
ADD DockerTest.jar /opt/ssm
ENTRYPOINT ["java","-jar","/opt/ssm/DockerTest.jar"]
CMD [""] | Docker Java Application failing at obtaining input from console |
You can use the on-failure policy. The on-failure policy is a bit interesting, as it allows you to tell Docker to restart a container if the exit code indicates an error but not if the exit code indicates success. You can also specify a maximum number of times Docker will automatically restart the container, e.g. on-failure:3 will retry 3 times.
There is also unless-stopped: the unless-stopped restart policy behaves the same as always with one exception: when a container is stopped and the server is rebooted or the Docker service is restarted, the container will not be restarted.
Hope this helps with your problem. Thank you! | I have a docker-compose.yml file with the following:
services:
kafka_listener:
build: .
command: bundle exec ./kafka foreground
restart: always
# other servicesThen I start containers with:docker-compose up -dOn my amazon instance kafka-server (for example) fails to start sometimes, so./kafka foregoundscript fails. When typingdocker psI see a message:Restarting (1) 11 minutes ago. I thought docker should restart failed container instantly, but it seems it doesn't. After all, container has been restarted in about 30 minutes since first failed attempt.Is there any way to tell Docker-Compose to restart container instantly after failure? | docker-compose restart interval |
Use a base image with .NET Core SDK installed. For example:microsoft/dotnet
microsoft/dotnet:1.1.2-sdkYou can't rundotnet testin a Runtime-based image without SDK. This is why an SDK-based image is required. Here is a fully-workableDockerfileexample:FROM microsoft/dotnet
WORKDIR /app
COPY . .
RUN dotnet restore
# run tests on docker build
RUN dotnet test
# run tests on docker run
ENTRYPOINT ["dotnet", "test"]RUNcommands are executed during a docker image build process.ENTRYPOINTcommand is executed when a docker container starts. | I have a .NET Core application containing MSTest unit tests. What would the command be to execute all tests using this Dockerfile?FROM microsoft/dotnet:1.1-runtime
ARG source
COPY . .
ENTRYPOINT ["dotnet", "test", "Unittests.csproj"]Folder structure is:/Dockerfile
/Unittests.csproj
/tests/*.cs | How to run .NET unit tests in a docker container |
There is a good reason for this: it's being interpreted as two commands. Try wrapping the printf command in a command string:docker exec my_docker bash -c 'printf "%sTest" >> /usr/local/src/test.txt'The key is that you've used a bash operator. Similar to any time you run something like:echo one two >> file.txtThe ">>" operator doesn't get passed as an argument to echo (like "one" and "two" do). Instead it executes your echo command and appends its output to a file. In this case, the ">>" operator is doing the same to your docker exec, and trying to output the results to/usr/local/src/scores.txtand reporting that the directory does not exist (on the host, not the container). | I am trying to run the following command for an existing docker container:docker exec my_docker printf '%sTest' >> /usr/local/src/test.txtIt gives me the following error:-bash: /usr/local/src/test.txt: No such file or directoryWhile when I do the following:docker exec -it my_docker bashAnd type the same command, everything works just fine. Is there anything that I am missing here? | Docker exec printf gives No such file or directory error |
I have created a Dockerfile for this exact purpose:
FROM php:7.3-apache
ENV ACCEPT_EULA=Y
RUN apt-get update && apt-get install -y gnupg2
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get -y --no-install-recommends install msodbcsql17 unixodbc-dev
RUN pecl install sqlsrv
RUN pecl install pdo_sqlsrv
RUN docker-php-ext-enable sqlsrv pdo_sqlsrv
COPY . /var/www/html/Enjoy! | I have a simple docker file, as follows:FROM php:7.2-apache
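For illustration (my addition, the image tag is just a placeholder), you can build this image and confirm the extensions are loaded:
docker build -t php-sqlsrv .
docker run --rm php-sqlsrv php -m | grep -i sqlsrv   # should list sqlsrv and pdo_sqlsrv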
COPY src/ /var/www/html/Normally to install drivers for Mongo or MySQL connectivity I would do so by adding something like the below to the dockerfile:docker-php-ext-install mongoOn this occasion I want to connect my php application to a SQL Server database, and I understand the best way to do this for php 7.x is by using the PDO driver, however I am unfamiliar with how to do configure this in the dockerfile.I've tried doing a pecl install, like adding:RUN pecl install sqlsrv pdo_sqlsrvHowever this fails with a combination of errors that do not seem to point me in the right direction.I'm just looking for a simple way to get this done in a dockerfile or by using docker run.For added info, here's the error I'm getting:/tmp/pear/temp/sqlsrv/shared/xplat.h:30:17: fatal error: sql.h: No such file or directory
#include
^
compilation terminated.
Makefile:194: recipe for target 'conn.lo' failed
make: *** [conn.lo] Error 1
ERROR: `make' failed
The command '/bin/sh -c pecl install sqlsrv pdo_sqlsrv && docker-php-ext-enable pdo_sqlsrv' returned a non-zero code: 1Thanks all | Install / Configure SQL Server PDO driver for PHP docker image |
Credit to @Hans Kilian:
Add extra_hosts to the docker-compose file.
Change the URL to use host.docker.internal instead of localhost.
Change the service to serve on 0.0.0.0 instead of localhost. | I have an API running on my host machine on port 8000. Meanwhile, I have a docker compose cluster with one container that's supposed to connect to said API. To get the URL for the request, I use "host.docker.internal:8000" on my Windows machine and it works wonderfully. However, I have a Linux deployment server and there, "host.docker.internal" doesn't resolve to anything, causing a connection error to the API. I saw in another post on stackoverflow that you solve this on Linux by adding the following to your docker-compose.yaml:
services:
service_name:
extra_hosts:
- host.docker.internal:host-gatewayThis added the docker0 IP to/etc/hosts, but when I try to do a GET request, the resulting message is:Failed to connect to host.docker.internal port 8000: Connection refusedI'm really confused right now. I don't know if this is a firewall issue, a docker issue, a docker compose issue, a docker on linux issue. Please help... | Connection Refused from Request Inside Docker Compose |
If I usenativein an image built by Dockerhub, I guess this will use the spec of the machine used by Dockerhub, and this will impact the image binary available for download?That's true. When the docker image is built, it is done on the host machine and using its resources, so-march=nativeand-mtune=nativewill take the specs of the host machine.For building docker images that may be used by a wide audience, and making them work as on many (X86) targets as possible, it's best to use a common instruction set. If you need to specifymarchandmtune, these would probably be the safest choice:-march=x86-64 -mtune=genericThere may be some performance hits compared to-march=native -mtune=nativein certain cases, but fortunately, on most applications, this change could go almost unnoticed (specific applications may be more affected, especially if they depend on a small piece of kernel functions that GCC is able to optimize well, for example by utilizing the CPU vector instruction sets).For reference, check this detailed benchmark comparison by Phoronix:GCC Compiler Tests At A Variety Of Optimization Levels Using Clear LinuxIt compares about a dozen benchmarks with GCC 6.3 using different optimization flags. Benchmarks run on an Intel Core-I7 6800K machine, which supports modern Intel instruction sets including SSE, AVX, BMI, etc. (seeherefor the complete list). Specifically,-O3vs.-O3 -march=nativeis the interesting metric.
You could see that in most benchmarks, the advantage of-O3 -march=nativeover-O3is minor to negligible (and in one case,-O3wins...).To conclude,-march=x86-64 -mtune=genericis a decent choice for Docker images and should provide good portability and a typically minor performance hit. | When compiling in a docker image (i.e. in the dockerfile), what shouldmarchandmtunebe set to?Note this is not about compiling in a running container, but compiling when the container is being built (e.g. building tools from source when the image is run).For example, currently when I rundocker buildand install R packages from source I get loads of (could beg++/gcc/f95...):g++ -std=gnu++14 [...] -O3 -march=native -mtune=native -fPIC [...]If I usenativein an image built by Dockerhub, I guess this will use the spec of the machine used by Dockerhub, and this will impact the image binary available for download?This is related tothis similar question about VMsbut containers aren't VMs. | mtune and march when compiling in a docker image |
I think cargo is using the wrong linker because it does not detect that this is a cross-compilation. Try adding ENV RUSTFLAGS='-C linker=x86_64-linux-gnu-gcc' to your Dockerfile:
FROM rust:latest AS builder
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN apt-get install -y build-essential
RUN yes | apt install gcc-x86-64-linux-gnu
# Create appuser
ENV USER=my-user
ENV UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/sbin/nologin" \
--no-create-home \
--uid "${UID}" \
"${USER}"
WORKDIR /my-service
COPY ./ .
# set correct linker
ENV RUSTFLAGS='-C linker=x86_64-linux-gnu-gcc'
RUN cargo build --target x86_64-unknown-linux-musl --release | I am trying to generate an image for my Rust service from a Mac M1 Silicon to be run on my x86_64 box in a Kubernetes cluster.This is my Dockerfile:FROM rust:latest AS builder
RUN rustup target add x86_64-unknown-linux-musl
RUN apt update && apt install -y musl-tools musl-dev
RUN apt-get install -y build-essential
RUN yes | apt install gcc-x86-64-linux-gnu
# Create appuser
ENV USER=my-user
ENV UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/sbin/nologin" \
--no-create-home \
--uid "${UID}" \
"${USER}"
WORKDIR /my-service
COPY ./ .
RUN cargo build --target x86_64-unknown-linux-musl --release
...But keep getting the following error:#20 45.20 error: linking with `cc` failed: exit status: 1
[...]
#20 45.20 = note: "cc" "-m64" "/usr/local/rustup/toolchains/1.55.0-aarch64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/rcrt1.o"
[...]
#20 45.20 = note: cc: error: unrecognized command-line option '-m64' | Apple M1 to Linux x86_64: unrecognized command-line option '-m64' |
(This answer is the formalized version of my comment.) Try to use %FirefoxVersion%:
ARG FirefoxVersion
RUN powershell -Command iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'));
RUN choco install -y firefox --version %FirefoxVersion% --ignore-checksumsReason:The error message"The command 'cmd /S /C choco install ...' returned a non-zero code: 1"indicates that thechoco installcommand is executed on cmd.exe (Windows' Command Prompt). Dockerfile'sARGvalue can be treated as an environment variable. On cmd.exe,%...%stands for env var. | I would like to pass an argument in my dockerfile to build my docker image. I've seen in other post and docker manual how to do this but it doesn't work in my case.
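For completeness (taking the build command from the question, not from my Dockerfile changes), the image is still built the same way, passing the version as a build argument:
docker build -t myimage --build-arg FirefoxVersion=61.0.1 .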
Here is an extract of my code where i use my argument:ARG FirefoxVersion
RUN powershell -Command iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'));
RUN choco install -y firefox --version $FirefoxVersion --ignore-checksumsI build my image with this command in powershellPrompt :docker build -t myimage --build-arg FirefoxVersion=61.0.1 .Finally I have this error :'$FirefoxVersion' is not a valid version string.
Parameter name: version
The command 'cmd /S /C choco install -y firefox --version $FirefoxVersion -- ignore-checksums' returned a non-zero code: 1Is someone know what is wrong with my code?
Thanks. | How to use the ARG instruction of Dockerfile for Windows image |
Indeed, the remote API does not have a 'detach' mode as the 'attach' mode is an extra endpoint.If you want to run in detach mode with the remote API, simply create and start your container without attaching to it.If the container still shuts down immediately, usedocker logs to check for errors. The problem might have nothing to do withdetach. | I'm trying to call docker commands via remote api.Docker remote api does not seem to have 'Detached mode' option.http://docs.docker.io/en/latest/commandline/command/run/I could use this app in the bash, and I would like to use this via remote api.https://github.com/grigio/docker-stringer | What is equivalent remote api command to 'docker run -d'? |
There is no docker environment variable named “MODEL_CONFIG_FILE” (that’s a tensorflow/serving variable, see docker imagelink), so the docker image will only use the default docker environment variables ("MODEL_NAME=model" and "MODEL_BASE_PATH=/models"), and run the model “/models/model” at startup of the docker image.
"config.conf" should be used as input at "tensorflow/serving" startup.
Try to run something like this instead:docker run -p 8500:8500 8501:8501 \
--mount type=bind,source=/path/to/models/first/,target=/models/first \
--mount type=bind,source=/path/to/models/second/,target=/models/second \
--mount type=bind,source=/path/to/config/config.conf,target=/config/config.conf\
-t tensorflow/serving --model_config_file=/config/config.conf | Having seenthisgithub issue andthisstackoverflow post I had hoped this would simply work.It seems as though passing in the environment variableMODEL_CONFIG_FILEhas no affect. I am running this throughdocker-composebut I get the same issue usingdocker-run.The error:I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating models.
I tensorflow_serving/model_servers/server_core.cc:558] (Re-)adding model: model
E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable modelThe DockerfileFROM tensorflow/serving:nightly
COPY ./models/first/ /models/first
COPY ./models/second/ /models/second
COPY ./config.conf /config/config.conf
ENV MODEL_CONFIG_FILE=/config/config.confThe compose fileversion: '3'
services:
serving:
build: .
image: testing-models
container_name: tfThe config filemodel_config_list: {
config: {
name: "first",
base_path: "/models/first",
model_platform: "tensorflow",
model_version_policy: {
all: {}
}
},
config: {
name: "second",
base_path: "/models/second",
model_platform: "tensorflow",
model_version_policy: {
all: {}
}
}
} | Serving multiple tensorflow models using docker |
When you write:run: build
docker run -v $(CURDIR)/project:/project app-serverin a makefile make expects that that recipe will create a file by the name ofrun. make will then check that file's timestamp against the timestamp of its prerequisite files to determine if the recipe needs to be run the next time.Similarly with thebuildtarget you have in your makefile.build: Dockerfile
docker build -t app-server .Neither of those recipes create files with the name of the target however. This means that make cannot use the timestamp of that file to determine whether it needs to re-run the recipe. As such make has to assume that it needs to re-run the recipe (because assuming otherwise would mean the rule would never run).If you runmake -rRdyou will see what make thinks is going on and you should see indication of what I've just said.The solution to your problem, therefore, is to create stamp files in each of those targets.Simply addingtouch $@(optionally prefixed with@to silence the default make echoing of commands it runs) to each of those targets should be enough to get this to work for you.That being said it might make sense to putsudoon each of the recipe lines that need it instead of runningmakewithsudoif you don't want the stamp files to be owned as root as well.For the record this is discussed in the GNU Make Manual as section4.8 Empty Target Files to Record Events. | I playing with Docker and make utility and try to write rule which rebuilds docker image only on Dockerfile change.My project structure looks like:tree .
.
├── Dockerfile
├── Makefile
└── project
└── 1.jsMy Dockerfile is pretty simple:FROM ubuntu
RUN apt-get update
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get update
RUN apt-get install -y build-essential nodejs
VOLUME ["/project"]
ENTRYPOINT ["cat"]
CMD ["project/1.js"]It just creates simple ubuntu image with nodejs installation and run a script from shared directory.Now I want to run this image from Makefile. When I change a Dockerfile I want to rebuild the image.
Makefile looks like:default: run
run: build
docker run -v $(CURDIR)/project:/project app-server
build: Dockerfile
docker build -t app-server .Now when I executesudo makecommand it rebuild an image every time.How can I force make to execute build task only when Dockerfile changed? | Docker with make: build image on Dockerfile change |
According to the docker documentation, the recommended way to specify port mappings is as quoted string declarations, especially when a container port lower than 60 is used (otherwise YAML can parse a value like xx:yy as a base-60 number). | I'm trying to publish 2 ports of a simple docker container to run some tests. Here are the steps to reproduce the issue. My simple Dockerfile:
FROM bash:4
RUN echo okBuilt usingdocker build . -t essaiMy first version for the docker-compose.yml file, this one works:version: '3'
services:
essai:
image: essai
ports:
- 25432:5432But when I try to publish a second port like this:version: '3'
services:
essai:
image: essai
ports:
- 25022:22
- 25432:5432I get this strange error message:$ docker-compose up Creating network "sandbox_default" with the
default driver Creating sandbox_essai_1 ... Creating sandbox_essai_1
... errorERROR: for sandbox_essai_1 Cannot create container for service essai:
invalid port specification: "1501342"ERROR: for essai Cannot create container for service essai: invalid
port specification: "1501342" ERROR: Encountered errors while bringing
up the project.Where does it find the port1501342?Funny thing is when I write my docker-compose like this:version: '3'
services:
essai:
image: essai
ports:
- "25022:22"
- 25432:5432It works.What's the magic with these double quotes and the port number coming out of nowhere? | docker-compose: publish multiple ports |
I believe your container is running as some specific user other than root.In your docker-compose.yml you can add user: rootSeedocker-compose-reference | I have this image that writes into the /temp/config and I wanted to map those data into a shared volume in my hostdocker-compose downversion: '2'
services:
service-test:
image: service-test:latest
container_name: service-test
volumes:
- source_data:/temp/config/
volumes:
source_data:When my service-test:latest image tries to write into the /temp/config, I am getting a Permission Denied error.Question, how do I make this host shared volume writable?I checked the shared volume usingdocker volume inspect source_dataand I noticed that it has no write functionality.
This is a linux based distro.UPDATE 2:To verify this, I tried checking the permissions on the shared volume
and I noticed that it has no write permissions also.bash-4.2$ docker inspect volume service-test_source_data
[
{
"Driver": "local",
"Labels": null,
"Mountpoint": "/scratch/docker/volumes/service-test_source_data/_data",
"Name": "configservice-test_config_data",
"Options": {},
"Scope": "local"
}
]
bash-4.2$ ls -l /scratch/docker/volumes/service-test_source_data/
**drwxr-xr-x** 1 root root 0 Apr 18 01:43 _data | Docker Compose Make Shared Volume Writable Permission Denied |
It is possible to make a chroot inside a container... but, as mentioned in "debootstrap inside a docker container", you might need torun with the privileged mode.docker run --privilegedBy default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container.This is because by default a container is not allowed to access any devices, but a “privileged” container is given access to all devices.There was ahuge discussion for requesting docker to support privileged operations.So far, it is not happening. | I've a commercial app, that is shipped in a chroot environment : the startup script is making the chroot, and starting the exe.The App is pretty complex, and also for support purposes, I don't want to change the all environment.Is it possible to run chroot, and start the service in docker ? Or are the two incompatible ? | run chroot within docker |
The "shebang" line at the start of a script says what interpreter to use to run it. In your case, your script has specified#!/bin/bash, but Alpine-based Docker images don't typically include GNU bash; instead, they have a more minimal/bin/shthat includes just the functionality in the POSIX shell specification.Your script isn't using any of the non-standard bash extensions, so you can just change the start of the script to#!/bin/sh | DockerfileFROM python:3.7.4-alpine
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV LANG C.UTF-8
MAINTAINER "[email protected]"
RUN apk update && apk add postgresql-dev gcc musl-dev
RUN apk --update add build-base jpeg-dev zlib-dev
RUN pip install --upgrade setuptools pip
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
#CMD ["gunicorn", "--log-level=DEBUG", "--timeout 90", "--bind", "0.0.0.0:8000", "express_proj.wsgi:application"]
ENTRYPOINT ["./docker-entrypoint.sh"]docker-entrypoint.sh#!/bin/bash
# Prepare log files and start outputting logs to stdout
touch /code/gunicorn.log
touch /code/access.log
tail -n 0 -f /code/*.log &
# Start Gunicorn processes
echo Starting Gunicorn.
exec gunicorn express_proj.wsgi:application \
--name express \
--bind 0.0.0.0:8000 \
--log-level=info \
--log-file=/code/gunicorn.log \
--access-logfile=/code/access.log \
--workers 2 \
--timeout 90 \
"$@"Getting Errorstandard_init_linux.go:211: exec user process caused "no such file or directory"
Need help.
Some saying to use dos2unix(i do not know hoe to use it.) | standard_init_linux.go:211: exec user process caused "no such file or directory"? |
I found a simple workaround to this. Just create a Git user on the host machine and provide a proxy script that executes the given Git commands in the GitLab container using the host's SSH daemon and the.ssh/authorized_keysfrom the container volume.On the host machine, add the usergitusing the same UID & GID as in the GitLab docker container (998) and set your GitLabdatadirectory as the user's home:useradd -u 998 -s /bin/bash -d /your/gitlab/path/data gitAdd thegituser to the docker groupusermod -G docker gitAdd a proxy script/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shellon the host machine with the following contents:#!/bin/bash
docker exec -i -u git sh -c "SSH_CONNECTION='$SSH_CONNECTION' SSH_ORIGINAL_COMMAND='$SSH_ORIGINAL_COMMAND' $0 $1" | I would like to configure sshd on my host machine to forward public key logins of a certain user to a Docker container that runs its own sshd service.To give some context, I have GitLab running in a Docker container and I dislike opening another port on the host machine for the SSH GitLab communication but instead have sshd on the host machine redirect user and key directly to the port the GitLab exposes on the local machine.My idea is to do something like this:Match User git
ForceCommand ssh -p git@localhost
...Help is greatly appreciated! | Have sshd forward logins of git user to a (GitLab) Docker container |
I turned my code-based app into a container by looking at some commands from this guide;https://learn.microsoft.com/en-us/azure/app-service/tutorial-custom-container?pivots=container-linuxThe important steps:Login and select subscription etcEnable Identity and assign AcrPull role so the App Service
can fetch the imageThis command:az webapp config container set --name --resource-group --docker-custom-image-name .azurecr.io/appsvc-tutorial-custom-image:latest --docker-registry-server-url https://.azurecr.ioNow the webapp "Deployment Center" shows Single Container with credentials, registry and Webhook URL setup | I have set up an Azure App Service (Linux) publish method being Code and have set up the appropriate pipeline to build and deploy my code (nodejs).
Now I need more control on the host running my code (need poppler). On dev + test I have created new App Services and have chosen Docker Container as publish methodMy question: for my PROD instance, is it possible to change the publish method of my existing App Service or do I have to create a new App Service ?Assuming the latter, I would need to update the client to point to the new App Service URL. To avoid that, could I first delete the existing App Service and create a new one with the same name ? This would make me lose all stats and logs.Any alternative suggestions? | Is it possible to convert the publish method of an App Service from Code to Docker? |
I'd add a bash script that has the commands you want to run during startup and use that as the default entry point in your image. It's usually best practice to call this scriptentrypoint.sh#!/usr/bin/env bash
python manage.py db upgrade
flask run --host=0.0.0.0And then, in your Dockerfile, replace the last line with the followingRUN chmod u+x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]If you want to run the upgrade command only in Docker compose then instead of changing the default entry point in the image you can just override it in the compose file like thisweb:
links:
- "db"
build: .
ports:
- "5000:5000"
volumes:
- .:/code
depends_on:
- db
entrypoint: /code/entrypoint.sh
env_file:
- .env | I have a project with the following structure:proj
src
application
app.py
manage.py
migrations
Dockerfile
docker-compose.yamlMy goal is to run migrations from the application directory to create tables in the database during docker-compose.python manage.py db upgradeDockerfileFROM python:3.7-alpine
ADD requirements.txt /code/
WORKDIR /code
RUN apk add --no-cache postgresql-dev gcc python3 musl-dev && \
pip3 install -r requirements.txt
ADD . /code
EXPOSE 5000
WORKDIR /code/src/application
CMD ["flask", "run", "--host=0.0.0.0"]docker-compose.yaml---
version: "3"
services:
web:
links:
- "db"
build: .
ports:
- "5000:5000"
volumes:
- .:/code
depends_on:
- db
env_file:
- .env
db:
image: postgres:10
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=app
ports:
- "5432:5432"
expose:
- 5432How can I do that? | How to run flask_migrate in Docker |
On Windows, Linux containers are created inside a virtual machine that runs on the Windows host OS. This virtual machine gets assigned an IP. When doing the curl, you should use this IP instead of localhost. Here, localhost means the Windows host and not the virtual machine that we intend to hit on port 8080. To find the IP assigned to the virtual machine, run the docker-machine ls command. You will get output similar to the following:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v18.05.0-ceNote the IP in the above command output underURL-- it would be a different IP when you run the command on your machine. Then use it to do the curl:curl -i 192.168.99.100:8080 | I am new to docker. I am trying to get a simple node app running on docker. However I am facing an issue with the docker port publish.Docker version- 18.03.0-ce, build 0520e24302My simple app code:'use strict';
const express = require('express');
// Constants
const PORT = 8081;
const HOST = '0.0.0.0';
// App
const app = express();
app.get('/', (req, res) => {
res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);My docker file:FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8081
CMD [ "npm", "start" ]My docker ps output- 0.0.0.0:8080->8081/tcp, loving hugleOutput from curl command from my local- Failed to connect to localhost port 8080: Connection refused. | Docker port mapping is not working on windows 10 |
Full disclosure: I'm the author of Dockernel.By usingDockernelPut the following in a file calledDockerfile, in a separate directory.FROM python:3.7-slim-buster
RUN pip install --upgrade pip ipython ipykernel
CMD python -m ipykernel_launcher -f $DOCKERNEL_CONNECTION_FILEThen issue the following commands:docker build --tag my-docker-image /path/to/the/dockerfile/dir
pip install dockernel
dockernel install my-docker-imageYou should now see "my-docker-image" option when creating a new notebook in Jupyter.ManuallyIt is possible to do this kind of thing without much additional implementation/tooling, it just requires a bit of manual work:Use the followingDockerfile:FROM python:3.7-slim-buster
RUN pip install --upgrade pip ipython ipykernelBuild the image usingdocker build --tag my-docker-image .Create a directory for your kernelspec, e.g.~/.local/share/jupyter/kernels/docker_test(%APPDATA%\jupyter\kernels\docker_teston Windows)Put the following kernelspec intokernel.jsonfile in the directory you created (Windows users might need to changeargva bit){
"argv": [
"/usr/bin/docker",
"run",
"--network=host",
"-v",
"{connection_file}:/connection-spec",
"my-docker-image",
"python",
"-m",
"ipykernel_launcher",
"-f",
"/connection-spec"
],
"display_name": "docker-test",
"language": "python"
}Jupyter should now be able spin up a container using the docker image specified above. | I want to switch my notebook easily between different kernels. One use case is to quickly test a piece of code in tensorflow 2, 2.2, 2.3, and there are many similar use cases. However I prefer to define my environments as dockers these days, rather than as different (conda) environments.Now I know that you can start jupyter in a container, but that it not what I want. I would like to just clickKernel > use kernel > TF 2.2 (docker), and let jupyter connect to a kernel running in this container.Is something like that around? I have usedlivyto connect to remote spark kernels via ssh, so it feels like this should be possible. | Jupyter starting a kernel in a docker container? |
As I mentionedin this comment, the solution should be adding a proper user inside the container. Jenkins uses984:984for uid/gid on my machine (but may be different on yours - login to the host Jenkins is running on and executesudo -u jenkins id -ato detect them), so you need to replicate it in the container that should be run by Jenkins:FROM python:3.7
RUN mkdir /home/jenkins
RUN groupadd -g 984 jenkins
RUN useradd -r -u 984 -g jenkins -d /home/jenkins jenkins
RUN chown jenkins:jenkins /home/jenkins
USER jenkins
WORKDIR /home/jenkins
CMD ["/bin/bash"]Of course, since you aren't therootuser in the container anymore, either create a virtual environment:$ docker run --rm -it jenkins/python /bin/bash
jenkins@d0dc87c39810:~$ python -m venv myenv
jenkins@d0dc87c39810:~$ source myenv/bin/activate
jenkins@d0dc87c39810:~$ pip install numpyor use the--userargument:$ docker run --rm -it jenkins/python /bin/bash
jenkins@d0dc87c39810:~$ pip install --user --upgrade pip
jenkins@d0dc87c39810:~$ pip install --user numpyetc.Alternatively, youcan(but in most cases shouldn't) enter the container asroot, but withjenkinsgroup:$ docker run --user 0:984 ...This way, although the modified files will still change the owner, their group ownership will still be intact, so Jenkins will be able to clean up the files (or you can do it yourself, viash 'rm -f modified_file'in theJenkinsfile. | I have thisDockerfile:FROM python:3.7
CMD ["/bin/bash"]and thisJenkinsfile:pipeline {
agent {
dockerfile {
filename 'Dockerfile'
}
}
stages {
stage('Install') {
steps {
sh 'pip install --upgrade pip'
}
}
}This causes the following error:The directory '/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting pip
Downloading https://files.pythonhosted.org/packages/d8/f3/413bab4ff08e1fc4828dfc59996d721917df8e8583ea85385d51125dceff/pip-19.0.3-py2.py3-none-any.whl (1.4MB)
Installing collected packages: pip
Found existing installation: pip 19.0.2
Uninstalling pip-19.0.2:
Could not install packages due to an EnvironmentError: [Errno 13]
Permission denied: '/usr/local/bin/pip'
Consider using the `--user` option or check the permissions.I have tried to user the--user, with no success.I had some luck using args--user 0:0on the docker jenkinsfile declaration, but this creates directories and files owned by root which can not be deleted by the user Jenkins at the next run.I don't want to do thepip installon the Dockerfile since in reality the Install step is running a make file instead of the simplification I used above, that I want to use in other contexts.I've also seen advice to change theHOME environment var, and this seems to fix the first 2 warnings about the parent directoy not being owned by current user, but not theErrno 13part. | How to pip install in a docker image with a jenkins pipline step? |
Look at docker events - there is an event for container 'die'. There is also an HTTP interface to get the same information programmatically - see here. You may want to do a web search for 'docker orchestration' - many projects are springing up to manage multiple containers in the way you describe. | I am running multiple named docker containers (200+) on my VM host.
I have a manager script/code that is supposed to manage the containers from the host.
I would like to know if there is any event-based mechanism to get notified when a container stops/fails. So that I can restart the stopped container.One solution I could think of is doing a periodic docker inspect and looking atState.PidorState.Runningto confirm the status.But,instead of periodic polling, it would be better if the manager is notified with pid/name when a container fails so that, the particular container alone can be restarted.On a general note, are there ways to programmatically monitor the status of a process from a different process that is not the parent ? | How to programmatically monitor if a docker container exited? |
I'm not sure why docker-io suddenly disappeared, but the same version previously available through the epel repository can be installed directly from this rpm hosted by Docker:
[root@server]# yum install
https://get.docker.com/rpm/1.7.1/centos-6/RPMS/x86_64/docker-engine-1.7.1-1.el6.x86_64.rpm
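(Illustrative addition, assuming the stock sysvinit setup this rpm ships with on CentOS 6: you would then start the daemon and enable it at boot.)
[root@server]# service docker start
[root@server]# chkconfig docker on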
[root@server]# docker --version
Docker version 1.7.0, build 0baf609 | For some time, thedocker-iopackage has been used to install Docker on CentOS 6.Since early this month, this package no longer appears to be available:[[email protected]:0 yum.repos.d]# yum install docker-io
Loaded plugins: fastestmirror, presto
Setting up Install Process
Determining fastest mirrors
* base: mirror.intergrid.com.au
* extras: mirror.ventraip.net.au
* updates: mirror.ventraip.net.au
base | 3.7 kB 00:00
base/primary_db | 4.7 MB 00:00
epel | 4.7 kB 00:00
epel/primary_db | 6.0 MB 00:00
extras | 3.4 kB 00:00
extras/primary_db | 28 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 3.2 MB 00:00
No package docker-io available.
Error: Nothing to dodocker-iowas previously part of the epel repository and has been the recommended way to install Docker (albeit, an older version) on CentOS 6 in anumberofplaces.Is there any other way Docker can be installed on CentOS 6? | Installing Docker on CentOS 6 after removal of docker-io |
Put the following directive to the server block where you listen for port 443.error_page 497 https://$host:$server_port$request_uri;This directive implies that when "The plain HTTP request was sent to HTTPS port" happens, redirect it to https version of current hostname, port and URI.Kinda hacky but works. | I'm running nginx in docker. HTTPS works fine but when I explicitly make HTTP request I get the following error400 Bad Request
The plain HTTP request was sent to HTTPS portnginx.conf is as followsworker_processes auto ;
events {}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/main.access.log;
server {
listen 80;
location / {
return 301 https://localhost:3000$request_uri;
}
}
server {
listen 443 ssl;
server_name localhost:3000;
root /var/www/html;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
location / {
try_files $uri /index.html;
}
}
}I run this container usingdocker run -p 3000:443 -it -d --name nginxtest nginx-testand get the following errordocker file is as followsFROM nginx:latest
COPY ./build /var/www/html
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./ssl /etc/nginx/ssl
EXPOSE 443
CMD [ "nginx","-g","daemon off;" ]Weird thing is that sometimes it works perfectly fine, and all of a sudden it stops working and won't even work if I recreate the containers.Even tried doing the following. Still no luckserver {
listen 80;
server_name localhost:3000
return 301 https://localhost:3000$request_uri;
}Another odd thing when I run the following docker commanddocker run -p 3000:443 -p 3001:80 -it -d --name nginxtest nginx-testand go to localhost:3001 it redirects me to https just fine but other things do break.
Sorry for the long question | Nginx HTTP not redirecting to HTTPS 400 Bad Request "The plain HTTP request was sent to HTTPS port" |
I got this working! I was having the same issue with you when you seeReading json config file path: /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json ... Cannot access /etc/cwagentconfig: lstat /etc/cwagentconfig: no such file or directoryValid Json input schema.What you need to do is put your config file in /etc/cwagentconfig. A functioning dockerfile:FROM amazon/cloudwatch-agent:1.230621.0
COPY config.json /etc/cwagentconfigWhere config.json is some cloudwatch agent configuration, such as given by LinPy's answer.You can ignore the warning about/opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json, or you can also COPY the config.json file to that location in the dockerfile as well.I will also share how I found this answer:I needed this run in ECS as a sidecar, and I could only find docs on how to run it in kubernetes. Following this documentation:https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-StatsD.htmlI decided to download all the example k8s manifests, when I saw this one:apiVersion: v1
kind: Pod
metadata:
namespace: default
name: amazonlinux
spec:
containers:
- name: amazonlinux
image: amazonlinux
command: ["/bin/sh"]
args: ["-c", "sleep 300"]
- name: cloudwatch-agent
image: amazon/cloudwatch-agent
imagePullPolicy: Always
resources:
limits:
cpu: 200m
memory: 100Mi
requests:
cpu: 200m
memory: 100Mi
volumeMounts:
- name: cwagentconfig
mountPath: /etc/cwagentconfig
volumes:
- name: cwagentconfig
configMap:
name: cwagentstatsdconfig
terminationGracePeriodSeconds: 60So I saw that the volume mountcwagentconfigmounts to/etc/cwagentconfigand that's from thecwagentstatsdconfigconfigmap, andthat'sjust the json file. | From the docker hub there is animagewhich is maintained by amazon.Any one knowhow to configure and start the containeras I cannot find any documentation | How to start the cloudwatch agent in container? |
Have you tried this option-s, --storage-path"Configures storage path [$MACHINE_STORAGE_PATH]"?You can see it in docker-machine --help. | My development machine is a laptop with a smallish SSD and a huge external disk. Ideally I'd like docker-machine to use the external drive rather than filling up my internal disk.I know that I can hack it with mounts and so on but is there a way to make the docker-machine command use a directory that I specify instead of defaulting to~/.docker/machine/? | How can I make docker-machine create a VM in a specific location |
docker has all you need to build images and run containers. You can create your own image by writing a Dockerfile or by pulling it from the docker hub.In the Dockerfile you specify another image as the basis for your image, run command install things. Images can have tags, for example the ubuntu image can have the latest or 12.04 tag, that can be specified withubuntu:latestnotation.Once you have built the image withdocker build -t image-name .you can create containers from that image with `docker run --name container-name image-name.docker psto see running containersdocker rm to remove containers | I am getting into Docker and am trying to better understand how it works out there in the "real world".It occurs to me that, in practice:You need a way to version Docker imagesYou need a way to tell the Docker engine (running on a VM) to stop/start/restart a particular containerYou need a way to tell the Docker engine which version of a image to runDoes Docker ship with built-in commands for handling each of these? If not what tools/strategies are used for accomplishing them? Also, when I build a Docker image (via, say,docker build -t myapp .), what file type is produced and where is it located on the machine? | Docker image versioning and lifecycle management |
-v /Users/M/Projects/Docker/nginx-example/nginx.conf:/etc/nginx:royou are attempting to mount a file to a directory - change that to:-v /Users/M/Projects/Docker/nginx-example/nginx.conf:/etc/nginx/nginx.conf:roand you should be fine. Take a look at the examples in theDocker Volumes DocsAs well,pwdshould work in the path. The shell expands this before the docker command is run, just like math and inner parenthesis, inner sub-commands are run first. | I'm trying to run nginx within a docker container whilst mounting the configuration and static html files for it to serve up. Very simple stuff as far as I'm aware, but I keep getting an error about the directory not being a directory?I'm running this example on my Mac using the latest version of Boot2Docker.I have the following folder structure:% tree ~/Projects/Docker/nginx-example
.
├── html
│ └── test.html
└── nginx.conf
1 directory, 2 files The contents of the nginx.conf is as follows: http {
server {
listen *:80; # Listen for incoming connections from any interface on port 80
server_name ""; # Don't worry if "Host" HTTP Header is empty or not set
root /usr/share/nginx/html; # serve static files from here
}
} I try to run the container (from within the ~/Projects/Docker/nginx-example directory) like so: docker run --name nginx-container \
-v /Users/M/Projects/Docker/nginx-example/html:/usr/share/nginx/html:ro \
-v /Users/M/Projects/Docker/nginx-example/nginx.conf:/etc/nginx:ro \
-P -d nginxOriginally I had tried something like-v $(pwd)/html:/usr/share/nginx/html:roto keep the command shorter, but when it didn't work I thought I'd be explicit just in case there was some funky sub shell issue I wasn't aware ofAnd I get the following outputfc41205914098d236893a3b4e20fa89703567c666ec1ff29f123215dfbef7163
Error response from daemon:
Cannot start container fc41205914098d236893a3b4e20fa89703567c666ec1ff29f123215dfbef7163:
[8] System error: not a directoryDoes anyone have any idea of what I'm missing?Mac Boot2Docker Volume issue?I'm aware there is an issue with mounting volumes into containers when using Boot2Docker (although I'm led to believe this has long been resolved)i.e.Mount volume to Docker image on OSXBut I followed the instructions there regardless, and it still didn't work | Mounting nginx conf as a docker volume causes system error boot2docker |
EDIT- New answer from Katalon supportI got a new response from Katalon support that says:First of all, I would to sorry for my answer due to I'm not giving out the proper one based on your question. I've reviewed again your question and see Katalon Studio have Linux version (http://download.katalon.com/4.8.0/Katalon_Studio_Linux_64-4.8.tar.gz) for console mode execution and it's ideally to package it into your dockerfile.That's more like it, and with the documentation here it should be pretty straightforward to get it working with Docker:https://docs.katalon.com/display/KD/Console+Mode+ExecutionHope this answer resolve your question better :).END EDITORIGINALI created a ticket on the Katalon Studio website asking this same question, and I got this (laughable) response:First of all, there is no Dockerfile for Katalon Studio. It will be hard and complicated to do this and we also do not have a plan to do it in the future :). But we will try to consider with your request to see if there is any applicable adjustment to this case.In other words, no Docker solution. It's too bad that we can't use it for our CI stuff, since I had good results with the prototyping I did.Oh well. | I have a Katalon test suite setup and it runs great in the UI and from the CLI on the machine where I have Katalon studio installed.I have Jenkins CI server running in a docker container, and I would like to setup a job to run my test suite on that Jenkins server.What runtime do I need on the Jenkins server so it can run a Katalon job? Is there a runtime or a plugin for Jenkins for this?If not, is there a docker container for Katalon that I can use to remotely run the job via jenkins, like the SonarQube stuff? | How do I run Katalon test suite in Jenkins inside Docker |
As clearly documented in Networking in Compose: "Networked service-to-service communication uses the CONTAINER_PORT." Thus you should use the container ports to communicate between the containers: http://bob:5000 and http://alice:5000. | I just started working with docker-compose and am currently struggling with communication between the different services. I have 2 services, alice and bob. I want these to be able to send http requests to each other. As far as I understood, services should be able to reach each other by using the service name as hostname. Unfortunately, alice in my example is not able to reach bob on http://bob:5557, and bob is not able to reach alice on http://alice:5556. What am I not understanding correctly? Is it even possible to make http requests between services? This is my docker-compose.yml file: version: '3'
services:
alice:
build: blockchain
ports:
- "5556:5000"
environment:
NAME: Alice
bob:
build: blockchain
ports:
- "5557:5000"
environment:
NAME: Bob | Communicating between different docker services in docker-compose |
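As a quick check of the answer above, assuming the two services from the question are up and curl is available inside the images (both of which are assumptions), the services should reach each other by service name and container port:
# run from the host, in the directory containing docker-compose.yml
docker-compose exec alice curl http://bob:5000
docker-compose exec bob curl http://alice:5000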
Yes. You can mount a socket into a container using a volume mount. And multiple containers can mount the same volume, whether that's a named volume or a host mount, to share the socket between the containers. You see this frequently with containers that mount the docker socket today, e.g.docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock busyboxwill run a container with the docker socket mounted.Notes on the docker.sock itself:The above is an example of mounting a socket, replace the docker.sock with the name of your own application's socket.Yes, the above gives the container access to manage docker, effectively root on the host. You see this with tools to manage docker packaged as containers. You are implicitly trusting them with root access on the server, not unlike trusting code downloaded with apt or rpm on the host. Be selective on what you give this access to. | I’m newbie to Docker, but i’d like to know: is it possible to connect one container from another container on Linux machine (any) with UNIX sockets?
For example i have one container for application core and second containers which covers database things.
A second example is two containers with application code, where the first container can trigger some events in the second. Performance really matters for me in both cases.
If it's impossible to do it this way, is there any solution for these problems? Thanks! | Connection between docker containers via UNIX sockets
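A minimal sketch of the shared-volume approach from the answer above; the image names and the socket directory are placeholders, not taken from the question:
# create a named volume that will hold the unix socket
docker volume create app-sockets
# the core application creates its socket under /var/run/app inside the volume
docker run -d --name app-core -v app-sockets:/var/run/app my-core-image
# the second container mounts the same volume and connects to the same socket path
docker run -d --name app-db -v app-sockets:/var/run/app my-db-image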
AWS CDK depricated therepositoryNameproperty onDockerImageAsset. There are a few issues on GitHub referencing the problem. Seethis commentfrom one of the developers:At the moment the CDK comes with 2 asset systems:The legacy one (currently still the default), where you get to specify a repositoryName per asset, and the CLI will create and push to whatever ECR repository you name.The new one (will become the default in the future), where a single ECR repository will be created by doing cdk bootstrap and all images will be pushed into it. The CLI will not create the repository any more, it must already exist. IIRC this was done to limit the permissions required for deployments. @eladb, can you help me remember why we chose to do it this way?There is a request for a new construct that will allow you to deploy to a custom ECR repository at(aws-ecr-assets) ecr-deployment #12597.Use CaseI would like to use this feature to completely deploy my local image source code to ECR for me using an ECR repo that I have previously created in my CDK app or more importantly outside the app using an arn. The biggest problem is that the image cannot be completely abstracted into the assets repo because of auditing and semantic versioning.There is also a third party solution athttps://github.com/wchaws/cdk-ecr-deploymentif you do not want to wait for the CDK team to implement the new construct. | I am trying to do something that seems fairly logical and straight forward.I am using the AWS CDK to provision an ecr repo:repository = ecr.Repository(
self,
id="Repo",
repository_name=ecr_repo_name,
removal_policy=core.RemovalPolicy.DESTROY
)I then have a Dockerfile which lives at the root of my project that I am trying to push to the same ECR repo in the deployment.I do this in the same service code with:assets = DockerImageAsset(
self,
"S3_text_image",
directory=str(Path(__file__).parent.parent),
repository_name=ecr_repo_name
) The deployment is fine and goes ahead and the ECR repo is created, but the image is pushed to a default location, aws-cdk/assets. How do I make the deployment send my Dockerfile to the exact ECR repo I want it to live in? | aws cdk push image to ecr
You can use a heredoc with the docker exec command: docker exec -i CONTAINER_NAME bash <<'EOF'
cat /dev/null > /usr/local/tomcat/logs/app.log
exit
EOFTo use variables:logname='/usr/local/tomcat/logs/app.log'then use as:docker exec -i CONTAINER_NAME bash < "$logname"
exit
EOF | I would like to write a bash script that automates the following: Get inside the running container: docker exec -it CONTAINER_NAME /bin/bash Execute some commands: cat /dev/null > /usr/local/tomcat/logs/app.log
exit The problematic part is when docker exec is executed. The new shell is created, but the other commands are not executed. Is there a way to solve it? | How to execute commands in docker container as part of bash shell script
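A simpler variant of the answer above that avoids the heredoc entirely, assuming the same container name and log path as in the question:
docker exec CONTAINER_NAME sh -c 'cat /dev/null > /usr/local/tomcat/logs/app.log'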
I see two possibility:1) Make sure your ip_forward is set to 1 (sysctl -w net.ipv4.ip_forward=1)2) Make sure it is not a DNS issue: trydocker run base ping google.com, if it does not work, you can set custom dns server:docker run -dns 8.8.8.8 base ping google.com. | I am trying to build a docker image by using the ones in the repository however i haven't been able to run 'apt-get update' 'apt-get install' commands because it seems that the container is not connected to the internet. I think the problem is caused by the fact that i am using a wireless connection. Is there a way to configure the docker or lxc to use the wireless network instead of the ethernet. | How to configure docker to be able to have internet access via wireless connection? |
Omit the build on the base docker-compose.yml, and place it in a docker-compose.override.yml file. When you run docker-compose up it reads the overrides automatically. Extracted from the Docker Compose Documentation. Since your docker-compose.yml file must have either build or image, we'll use image, which has less priority, resulting in: version: '2'
services:
web:
image: repo
[...] Now let's move on to docker-compose.override.yml, the one that will run by default (meaning the docker-compose up or docker-compose run web command). By default we want it to build the image from our Dockerfile, so we can do this simply by using build: . version: '2'
services:
web:
build: . The production one, docker-compose.prod.yml, run by using docker-compose -f docker-compose.yml -f docker-compose.prod.yml up, will be similar to this one, except that in this case we want it to take the image from the Docker repository: version: '2'
services:
web:
image: repo Since we already have the same image: repo on our base docker-compose.yml file we can omit it here (but that's completely optional). | Having a base docker-compose.yml like the following: version: '2'
services:
web:
build: .
... How can I extend it to use an image instead? docker-compose.prod.yml: version: '2'
services:
web:
image: username/repo:tag Running it with docker docker-compose -f docker-compose.yml -f docker-compose.prod.yml up still prompts: Building web Step 1/x : FROM ... I tried with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --no-build: ERROR: Service 'web' needs to be built, but --no-build was passed. I'm expecting the "Pulling from name/repo" message instead. Which options do I have? Or do I need to create a complete duplicate file to handle this slight modification? | Docker Compose how to extend service with build to use an image instead
You can use the sidecar pattern following the instructions here: https://support.rancher.com/hc/en-us/articles/360041568712-How-to-troubleshoot-using-the-namespace-of-a-container#sidecar-container-0-2 In short, do this to find a node where a coredns pod is running: kubectl -n kube-system get po -o wide | grep coredns ssh to one of those nodes, then: docker ps -a | grep coredns Copy the Container ID to clipboard and run: ID=
docker run -it --net=container:$ID --pid=container:$ID --volumes-from=$ID alpine sh You will now be inside the "sidecar" container and can poke around, i.e. cat /etc/coredns/Corefile | I have a running k8s cluster with two replicas of CoreDNS. But when I try to enter the bash prompt of the POD it's throwing me the below error: # kubectl exec -it coredns-5644d7b6d9-285bj -n kube-system sh
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "94f45da89fa5493a8283888464623788ef5e832dc31e0d89e427e71d86391fd6": OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknownBut i am able to login to other pods without any issues. I tried with nsenter with kernel process ID it works but it only works for network related openrations like,# nsenter -t 24931 -n ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if5: mtu 1400 qdisc noqueue state UP group default
link/ether 7a:70:99:aa:53:6c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.2/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::7870:99ff:feaa:536c/64 scope link
valid_lft forever preferred_lft forever How to enter into this pod using kubectl and get rid of that error? | How to get into CoreDNS pod kubernetes?
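On newer clusters (Kubernetes 1.23+), an alternative to the docker-based sidecar in the answer above is an ephemeral debug container. This is a sketch that reuses the pod name from the question and assumes the container inside the pod is named coredns:
kubectl -n kube-system debug -it coredns-5644d7b6d9-285bj --image=busybox:1.28 --target=coredns -- sh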
it is not a full answer to your question but we (JBoss Tools team) started working on this and here are a few blogs about what is possible todo today and where we are with Eclipse docker tooling.http://tools.jboss.org/blog/2015-03-02-getting-started-with-docker-and-wildfly.htmlhttp://tools.jboss.org/blog/2015-03-03-docker-and-wildfly-2.htmlhttp://tools.jboss.org/blog/2015-03-30-Eclipse_Docker_Tooling.html | I'm looking for a way to integrate Docker containers with the Eclipse platform.
I would like to run all build/test/debug command inside containers and use same containers in Continuous Integration build and later in production.The simplest way I looked on, was just to configure custom command but besides permissions problem (docker must run as sudo/root) it doesn't give me all the flexibility of real integration.Any ideas on the best way to proceed? | Eclipse - Docker integration |
One of the key features most assume with a multi-tenancy tool is isolation between each of the tenants. They should not be able to see or administer each others containers and/or data.The docker-ce engine is a sysadmin level tool out of the box. Anyone that can start containers with arbitrary options has root access on the host. There are 3rd party tools like twistlock that connect with an authz plugin interface, but they only provide coarse access controls, each person is either allowed or disallowed from an entire class of activities, like starting containers, or viewing logs. Giving users access to either the TLS port or docker socket results in the users being lumped into a single category, there's no concept of groups or namespaces for the users connecting to a docker engine.For multi-tenancy, docker would need to add a way to define users, and place them in a namespace that is only allowed to act on specific containers and volumes, and restrict options that allow breaking out of the container like changing capabilities or mounting arbitrary filesystems from the host. Docker's enterprise offering, UCP, does begin to add these features by using labels on objects, but I haven't had the time to evaluate whether this would provide a full multi-tenancy solution. | I watchedthis YouTube video on Dockerand at 22:00 the speaker (a Docker product manager) says:"You're probably thinking 'Docker does not support multi-tenancy'...and you are right!"But never is any explanation of why actually given. So I'm wondering: what did he mean by that?Why Docker doesn't support multi-tenancy?!If you Google "Docker multi-tenancy" you surprisingly get nothing! | Why doesn't Docker support multi-tenancy? |
You don't need a service for things outside the cluster. Depending on the networking model you're using, the docker container (ie kubernetes pod) should be able to connect to the MySQL container normally via the bridge that Docker sets up. Check the host has connectivity on port 3306, and it does, simply put in the DNS name (your kube-dns pod should forward any non kubernetes based requests on to the hosts resolv.conf of the host it was scheduled on) | I am running a kubernetes cluster in my centos machine.
I donot want to create a pod for mysql. MySQL is installed in another machine in same network (Machine is not in kubernates private network).How can I access the mysql service from the pods running in kubernetes cluster ?I have tried with service and end point with below configuration. But, No luck.apiVersion: v1
kind: Service
metadata:
name: database
spec:
ports:
- port: 13080
targetPort: 13080
protocol: TCP
---
kind: Deployment
apiVersion: v1
metadata:
name: database
subsets:
- addresses:
- ip: XX.XX.XX.XX
ports:
- port: 13080
---
kind: ReplicationController
metadata:
name: test
spec:
replicas: 1
selector:
app: test
template:
metadata:
name: test
labels:
app: test
spec:
containers:
- name: my_pods
image: my_pods
env:
- name: DATABASE_HOST
value: database
- name: DATABASE_PORT
value: "13080"
- name: DATABASE_USER
value: "SAAS"
- name: DATABASE_PASSWORD
value: "SAAS"
- name: DATABASE_NAME
value: "SAASDB"
ports:
- containerPort: 8080
imagePullSecrets:
- name: my-secret
---
apiVersion: v1
kind: Service
metadata:
name: test-service
labels:
name: test-service
spec:
type: NodePort
ports:
- port: 11544
targetPort: 8080
nodePort: 30600
selector:
name: test | How to access mysql outside my kubernetes cluster? |
I believe it is by design that host cannot reach its own containers through a macvlan network. I leave it to others to explain why exactly this is so, but to verify that this is where your problem lies, you can try to ping your container at192.168.2.74from another host on the network or even from another container or vm on the same host. If you can reach the container from other machines but not from the host, everything is working as it should.According tothis blog post, you can nevertheless allow for host-container communication by creating a macvlan interface on the hostsub-interface and then create a macvlan interface in host in order to let it access the macvlan that the container is in.I have not tried this myself yet and I'm not sure about the exact consequences, so I quote the instructions fromthe blog posthere so that others can add to it where necessary:Create a macvlan interface on host sub-interface:docker network create -d macvlan \
--subnet=192.168.0.0/16 \
--ip-range=192.168.2.0/24 \
-o macvlan_mode=bridge \
-o parent=eth2.70 macvlan70 Create a container on that macvlan interface: docker run -d --net=macvlan70 --name nginx nginx Find the ip address of the container: docker inspect nginx | grep IPAddress
“SecondaryIPAddresses”: null,
“IPAddress”: “”,
“IPAddress”: “192.168.2.1”,At this point, we cannot ping container IP “192.168.2.1” from host machine.Now, let’s create macvlan interface in host with address “192.168.2.10” in same network.sudo ip link add mymacvlan70 link eth2.70 type macvlan mode bridge
sudo ip addr add 192.168.2.10/24 dev mymacvlan70
sudo ifconfig mymacvlan70 upNow, we should be able to ping the Container IP as well as access “nginx” container from host machine.$ ping -c1 192.168.2.1
PING 192.168.2.1 (192.168.2.1): 56 data bytes
64 bytes from 192.168.2.1: seq=0 ttl=64 time=0.112 ms
— 192.168.2.1 ping statistics —
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.112/0.112/0.112 ms | Im trying to understand the "macvlan" network from docker. I create a new network:docker network create -d macvlan \
--subnet=192.168.2.0/24 \
--gateway=192.168.2.1 \
-o parent=eno1 \
pub_netAnd start new container with the new network:docker run --rm -d --net=pub_net --ip=192.168.2.74 --name=whoami -t jwilder/whoamiWhen i try to access the service from the container or ping it i get:curl: (7) Failed to connect to 192.168.2.74 port 8000: no route to hostTested with Ubuntu 16.04, Ubuntu 18.04 & CentOS 7.
Neither from the docker host itself or other clients on the network can reach the container.I followed the example fromt he docker site:https://docs.docker.com/network/network-tutorial-macvlan/#bridge-exampleWhat im missing ?I read hereBind address in Docker macvlanto execute these commands (no clue what they do):sudo ip link add pub_net link eno1 type macvlan mode bridge
sudo ip addr add 192.168.2.22/24 dev pub_netBut this does nothing on my machine(s) | docker macvlan - no route to host (container) |
Run gitlab-runner register multiple times. It will always append new configurations to the same /etc/gitlab-runner/config.toml file. | I would like to use the same host computer to execute Docker builds using the shell executor, as described in the link below, and normal builds using the docker executor. I would like to be able to start builds of both types on the same host. I would like to use the debian package provided for Ubuntu and installed via apt from the repository. https://docs.gitlab.com/ce/ci/docker/using_docker_build.html In other words, if I run a project to build docker containers, the shell executor should run the commands against docker. If I build a source code project, the docker executor should run my build inside a docker container. Can someone please describe the steps required to achieve such a configuration. | How to run a shell and docker executor on the same unix host?
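For example, registering the same host twice, once per executor; the URL, token and descriptions below are placeholders:
gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ --registration-token TOKEN \
  --description "shell-builds" --executor shell
gitlab-runner register --non-interactive \
  --url https://gitlab.example.com/ --registration-token TOKEN \
  --description "docker-builds" --executor docker --docker-image docker:stable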
Docker doesn't run native on Windows. It actually creates a Linux VM where it runs the docker daemon. You can see this VM with VirtualBox (assuming you like many others use VirtualBox for virtualization).For this reason, in order to get your setup you need to modify this VM. You need to make sure its network interface is in NAT mode and then in the advance settings you can forward your port (2375) from host to guest. Restart Docker and it should work. | I'm new to Docker. My Docker Desktop for Windows version is 19.03.5.
I want to expose port 2375 from Docker desktop for windows, but if I use the GUI setting,that only can be accessed via tcp://127.0.0.1, My inner IP address 192.168.3.9 doesn't work.https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon.The document said to edit theC:\ProgramData\Docker\config\daemon.jsonand add"hosts": ["tcp://0.0.0.0:2375"], but it's doesn't work for any IP address, I'm very sure I did it as the document.So what should I do can make access via tcp://192.168.3.9 from another computer which in the same subnet? | how to expose 2375 from Docker desktop for windows |
Don't forget to map the container port to a host port: docker run --name some-mongo -p 27017:27017 -d mongo Then docker-machine ip gives me 192.168.99.100. Typing mongo 192.168.99.100 in a terminal prints: MongoDB shell version: 3.2.4
connecting to: 192.168.99.100/test
Server has startup warnings:
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten]
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten]
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-08-22T07:35:20.214+0000 I CONTROL [initandlisten]I also tested with robomongo. I can connect without a problem. | I am running mongo docker image that I pulled fromdocker hub mongo imageIt works ok but when I start Robomongo I cannot connect to localhost. With following error message:Cannot connect to the MongoDB at localhost:27017.Error:
Network is unreachableI appreciate any help, thanks.EDIT: I solved the issue by using the following command:docker run -p 27017:27017 --name mongo_instance_001 -d mongo | Cannot connect Robomongo using MongoDB docker image |
Finally solved. 1) Delete the daemon.json file from the /etc/docker folder. 2) Restart the docker service. | I am following the below url for the logging driver: https://docs.docker.com/engine/admin/logging/overview/#configure-the-default-logging-driver Now I want to remove this logging driver. I have removed the file (daemon.json) from the /etc/docker folder too. But when I build a container, the system always shows me the warning WARNING: no logs are available with the 'none' log driver How can I get rid of this warning? | WARNING: no logs are available with the 'none' log driver
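If, instead of deleting daemon.json, you want to keep the file and just restore the default logging behaviour, a minimal /etc/docker/daemon.json along these lines should also clear the warning; the log-opts values are only an example:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}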
There's a nice documentation on how to integrate Spring Boot with Docker: https://spring.io/guides/gs/spring-boot-docker/ Basically you define your dockerfile in src/main/docker/Dockerfile and configure the docker-maven-plugin like this:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.11</version>
  <configuration>
    <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>
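With that plugin configuration in place, the image is typically built with the plugin's build goal (add -DpushImage if you also want to push it):
mvn clean package docker:build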
Dockerfile: FROM frolvlad/alpine-oraclejre8:slim
VOLUME /tmp
ADD gs-spring-boot-docker-0.1.0.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]Note that in this exampleFROM frolvlad/alpine-oraclejre8:slimis a small-footprinted image which is based on Alpine Linux.You should also be able to use the standard Java 8 image (which is based on Debian and might have an increased footprint) as well. An extensive list of available Java Baseimages can be found here:https://github.com/docker-library/docs/tree/master/openjdk. | What Docker base image (FROM) for Java Spring Boot application?I am just starting with docker, and I see thatFROMinsideDockerfilecan define image for Java likeFROM java:8If I am building using Gradle (or Maven) is the better base image to start to avoid configuring later what is common for Gradle/Maven project?And of course Spring Boot application is just .jar file inside build output folder, there should be less questions about how to run with Docker (for Java project built with standard build tools) | What Docker base image (`FROM`) for Java Spring Boot? |
I got the same issue and resolved it by changing the minikube base driver from docker to hyperv. Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All Your pc will restart; after that you can say minikube config set driver hyperv Then minikube start will start you with that driver. This worked for me. | I am trying to access a simple minikube cluster from the browser, but I keep getting the following: ❗ Because you are using a Docker driver on windows, the terminal needs to be open to run it. I've created an external service for the cluster with the port number of 30384, and I'm running minikube in a docker container. I'm following the "Hello Minikube" example to create my deployment. Step 1: I created the deployment: kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 Step 2: I created the external service: kubectl expose deployment hello-node --type=LoadBalancer --port=8080 Step 3: I ran the service, and that's where I stuffed up
"minikube service hello-nodeThe full return message:❗ Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 2.3796077s💡 Restarting the docker service may improve performance.🏃 Starting tunnel for service hello-node.🎉 Opening service default/hello-node in default browser...❗ Because you are using a Docker driver on windows, the terminal needs to be open to run it.I tried to run the service to make it accessible from the browser, however, I wasn't able to. | Unable to access my minikube cluster from the browser (❗ Because you are using a Docker driver on windows, the terminal needs to be open to run it.) |
Hareem asked his question a while back, and there don't seem to be any good answers yet. I'm a noobie as well, and I too want to learn how to use a generic wordpress container that I can push to Amazon or test locally. I'm very new to docker, so this seems like a tall order!GoalFor now, I'll start collecting some resources here. Maybe they will help Hareem, and others like myself. This document will turn into a complete answer, or prompt someone else to give their version of an answer (which I'm sure is not quite so complex.)The Docker.io IndexFirst, the Docker index is a repository of already existing Docker.io components. Of these, there is a wordpress unit that seems relevant here:jbfink -Wordpress 3.5.2.Docker on EC2There is as yet no official Docker support for Ec2. However, the Docker community suggests an install path using a tool called Vagrant. The instructions for this live here:Docker Doc -Installing on Amazon EC2Work In ProgressThis is not a complete answer to the question. As of right now this only presents a couple of easy to locate resources, and perhaps goes against guidelines. Please bear with this!Things that need to be answered:How do we run / test the wordpress container(s) locally?How do we push the container(s) up to the EC2 instance?How do we wire the EC2 wordpress containers up to their own domains?Hopefully I will answer these questions - contributions and forks are welcome. I think Hareem's question is worth answering! | I just started playing around with Docker.io. Its a great platform for sure. I have an issue i need some help with. I ran a medium instance on ec2 setup docker. Now i want to run 2 wordpress blog independent of each other using docker.io on top of the medium instance.Please if someone can kindly guide me to resolve this issue i will extremely gratefulMany Thanks Indeed
Hareem HaqueUpdated:Basically, what i am trying to do is run two nodes for docker (node 1 & node 2). I run another node (node3: private repo for docker). What i am looking to accomplish is i run two blogs (wordpress on node1). I export the docker images to node3 (updates/exports are done very rarely)Since i am going to run wordpress i was hoping to run wordpress within Nginx and since node1/node2 will run 80 web i can put a physical node (nginx reverse proxy) in front of the two nodes and have the blogs run in ha mode.I am hoping that this experiment work so i that i can get rid of the xen cloud platform we have in office. Its to bulky and I have to manage alot of components.
I would rather export/backup docker image with my live data once in a blue moon and not have to worry about failover and vm management.The problem is that i have a novice when it comes to running docker and thus i am currently running around like a head less chicken with no idea where to properly begin.I would be extremely grateful if you can provide any guidance/assistance indeed.Best Regards
Hareem Haque | How to run 2 wordpress blogs using docker on ec2 |
As a security precaution, system devices are not exposed by default inside Docker containers. You can expose specific devices to your container using the --device option to docker run, as in: docker run --device /dev/i2c-0 --device /dev/i2c-1 myimage You can remove all restrictions with the --privileged flag: docker run --privileged myimage This will expose all of /dev to your container, and remove other restrictions as well (e.g., you will be able to change the network configuration in the container and mount new filesystems). | I am trying to use the i2c pins on a raspberry pi inside a docker container. I install all my modules using RUN but when I use the CMD to run my python program I get an error that says Traceback (most recent call last):
file "test.py", line 124, in
bus = smbus.SMBus(1)
IOError: [Errno 2] No such file or directory If I run this on my raspberry pi and not in my container it works fine. But when I turn off the i2c pins on my raspberry pi it gives me the same error when running it. So I know it has to do with my i2c pins being activated. Does anyone know how to resolve this problem? | I2C inside a docker container
Check if you have thensentertool. It should be in theutil-linuxpackage, after version 2.23. Note: unfortunately, Debian and Ubuntu still ship with util-linux 2.20.If you havensenter, it's relatively easy. First, find the PID of the first process of the container (actually, any PID will do, but this is just easier and safer):PID=$(docker inspect --format '{{.State.Pid}}' my_container_id)Then, enter like this:nsenter --target $PID --mount --uts --ipc --net --pidVoilà! Note, however, thatnsenterwon't honor capabilities.If you don't havensenter(e.g. if you are using Debian or Ubuntu, or your distro has too old util-linux), you can download util-linux and compile it. I have ansenterbinary, maybe I can upload it to the Docker registry if that could help anyone.Another option is to usensinit, a helper tool for libcontainer. I don't think that there is a lot of documentation fornsinitsince it's very new, but checkhttps://asciinema.org/a/8090for an example. You will need a Go build environment. | In Docker releases previous to v0.9.0, you could attach(inject) a process into a container by using lxc-attach. For example:docker run -d ubuntu:12.04
docker inspect {{containerhash}} | grep ID
// "ID": "d846ae242838de66f12414fbc8807acb3c77778bdb81babab7115261f4242284"
sudo lxc-attach -n d846ae242838de66f12414fbc8807acb3c77778bdb81babab7115261f4242284 -- /bin/bash This no longer works because of the 0.9.0 switch to libcontainer. How can we do this via libcontainer? There is an option to switch to lxc with a startup option, but I'd like to know how this can be accomplished via libcontainer. | Attaching process to Docker libcontainer container
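Note for anyone reading this today: since Docker 1.3 the built-in docker exec command covers this use case directly, without nsenter or lxc-attach:
docker exec -it my_container_id /bin/bash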
Deleting the npm install tags from the .csproj, as suggested in this thread https://github.com/dotnet/sdk/issues/9593 by user PKLeso, resolved the problem. This will delete the frontend from your container completely, if I remember correctly. However, if you want to keep it within the container, just make sure that npm install on your frontend leaves no errors, because otherwise the MSB3073 error occurs. | I was containerizing my .Net + React.js application but during the process I have encountered an unexpected error. I got myself acquainted with similar posts but none of the solutions solved my problem. Since the build log is quite long I have placed it in pastebin: https://pastebin.com/PhfYW3zm The dockerfile which I am using comes from the official documentation, and that's why it comes to me as a surprise that it does not work: https://learn.microsoft.com/en-us/visualstudio/containers/container-tools-react?view=vs-2022 The Dockerfile itself: FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
WORKDIR /src
COPY ["WebApp/WebApp.csproj", "WebApp/"]
RUN dotnet restore "WebApp/WebApp.csproj"
COPY . .
WORKDIR "/src/WebApp"
RUN dotnet build "WebApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApp.dll"] | error MSB3073: The command "npm install" exited with code 1 |
I just looked through what relevant literature (Adrian Mouat'sDocker, Liz Rice'sContainer Security) has to say on the topic and added my own thoughts to it:The main intention behind the much cited best practice to run containers as non-root is to avoid container breakouts via vulnerabilities in the application code. Naturally, if your application runs as root and then your container has access to the host, e.g. via a bind mount volume, a container breakout is possible. Likewise, if your application has rights to execute system libraries with vulnerabilities on your container file system, a denial of service attack looms.Against these risks youareprotected with your approach of usingrunuser, since your application would not have rights on the host's root file system. Similarly, your application could not be abused to call system libraries on the container file system or even execute system calls on the host kernel.However, if somebody attaches to your container withexec, hewouldbe root, since the container main process belongs to root. This might become an issue on systems with elaborate access right concepts like Kubernetes. Here, certain user groups might be granted a read-only view of the cluster including the right to exec into containers. Then, as root, they will have more rights than necessary, including possible rights on the host.In conclusion, I don't have strong security concerns regarding your approach, since it mitigates the risk of attacks via application vulnerabilities by running the application as non-root. The fact that you run to container main process as root, I see as a minor disadvantage that only creates problems in niche access control setups, where not fully trusted subjects get read-only access to your system. | Is it considered a secure practice to run root privilegedENTRYPOINT ["/bin/sh", entrypoint.sh"], that later switches to non-root user before running the application?More context:There are a number of articles (1,2,3) suggesting that running the container as non-root user is a best practice in terms of security. This can be achieved using theUSER appusercommand, however there are cases (4,5) when running the container as root and only switching to non-root in the anentrypoint.shscript is the only way to go around, eg:#!/bin/sh
chown -R appuser:appgroup /path/to/volume
exec runuser -u appuser "$@"and in Dockerfile:COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "entrypoint.sh"]
CMD ["/usr/bin/myapp"]When callingdocker top containerI can see two processes, one root and one non-rootPID USER TIME COMMAND
5004 root 0:00 runuser -u appuser /usr/bin/myapp
5043 1000 0:02 /usr/bin/myapp Does it mean my container is running with a vulnerability given that root process, or is it considered secure? I found little discussion on the subject (6,7) and none seem definitive. I've looked for similar questions on StackOverflow but couldn't find anything related (8,9,10) that would address the security aspect. | Docker - is it safe to switch to non-root user in ENTRYPOINT?
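For comparison, a widely used equivalent of the runuser entrypoint discussed above relies on gosu (or su-exec on Alpine); this is a sketch, not taken from the original question:
#!/bin/sh
chown -R appuser:appgroup /path/to/volume
exec gosu appuser "$@"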
Don't use localhost (basically an alias to 127.0.0.1) as your server address within a Docker container. If you do this, only 'localhost' (i.e. any service within the Docker container's network) can reach it. Drop the hostname to ensure it can be accessed outside the container: // Addr: "localhost:" + port, // unreachable outside container
Addr: ":" + port, // i.e. ":3000" - is accessible outside the container | I have a Go server which something like that. Router is Gorilla MUXvar port string
if port = os.Getenv("PORT"); port == "" {
port = "3000"
}
srv := &http.Server{
Handler: router,
Addr: "localhost:" + port,
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
}
fmt.Println("Server is running on port " + port)
log.Fatal(srv.ListenAndServe())Dockerfile is# Build Go Server
FROM golang:1.14 AS go-build
WORKDIR /app/server
COPY cmd/ ./cmd
COPY internal/ ./internal
COPY go.mod ./
COPY go.sum ./
RUN go build ./cmd/main.go
CMD ["./main"]I got successful a build. I ran it with following commanddocker run -p 3000:3000 baaf0159d0cdAnd I got following output. Server is runningServer is running on port 3000But when I tried to send request with curl I got empty response>curl localhost:3000
curl: (52) Empty reply from server Why is the server not responding properly? I have other routes which I did not put here and they are not responding correctly either. I am on MacOS by the way. | Go server empty response in Docker container
As I mentioned in comments, workaround here could be to use-vinstead of--mountDifferences between -v and --mount behaviorBecause the -v and --volume flags have been a part of Docker for a long time, their behavior cannot be changed. This means that there is one behavior that is different between -v and --mount.If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host,-v creates the endpoint for you. It is always created as a directory.If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host,Docker does not automatically create it for you, but generates an error.If you use docker swarm then it's well documentedhereIf you bind mount a host path into your service’s containers, the path must exist on every swarm node. The Docker swarm mode scheduler can schedule containers on any machine that meets resource availability requirements and satisfies all constraints and placement preferences you specify.Worth to check thisgithub issue comment. | I try to buildistio(1.6.0+) using Jenkins and get an error:docker: Error response from daemon: invalid mount config for type "bind":
bind mount source path does not exist: /home/jenkins/.docker The slave contains the .docker directory: 13:34:42 + ls -a /home/jenkins
13:34:42 .
13:34:42 ..
13:34:42 agent
13:34:42 .bash_logout
13:34:42 .bash_profile
13:34:42 .bashrc
13:34:42 .cache
13:34:42 .docker
13:34:42 .gitconfig
13:34:42 .jenkins
13:34:42 .m2
13:34:42 .npmrc
13:34:42 .oracle_jre_usage
13:34:42 postgresql-9.4.1212.jar
13:34:42 .ssh
13:34:42 workspace Parts of the Istio script: export CONDITIONAL_HOST_MOUNTS=${CONDITIONAL_HOST_MOUNTS:-}
if [[ -d "${HOME}/.docker" ]]; then
CONDITIONAL_HOST_MOUNTS+="--mount type=bind,source=${HOME}/.docker,destination=/config/.docker,readonly "
fi
"${CONTAINER_CLI}" run --rm \
-u "${UID}:${DOCKER_GID}" \
--sig-proxy=true \
${DOCKER_SOCKET_MOUNT:--v /var/run/docker.sock:/var/run/docker.sock} \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
$CONTAINER_OPTIONS \
--env-file <(env | grep -v ${ENV_BLOCKLIST}) \
-e IN_BUILD_CONTAINER=1 \
-e TZ="${TIMEZONE:-$TZ}" \
--mount "type=bind,source=${PWD},destination=/work" \
--mount "type=volume,source=go,destination=/go" \
--mount "type=volume,source=gocache,destination=/gocache" \
${CONDITIONAL_HOST_MOUNTS} \
-w /work "${IMG}" "$@" ... Have you tried to use -v instead of --mount, do you have any error then? ❗️ I changed --mount to -v and the error disappeared: -v ${HOME}/.docker:/config/.docker | Invalid mount config for type "bind": bind mount source path does not exist: /home/jenkins/.docker (Istio)
Installing version 0.12.4 (I had 0.12.2.2 before) solved the problem. See How can I install the latest wkhtmltopdf on Ubuntu 16.04? for the steps. | When I use wkhtmltopdf (version 0.12.2.4, installed via apt-get) within a Docker container it fails with QXcbConnection: Could not connect to display (When I set the environment variable DISPLAY=unix0, I get QXcbConnection: Could not connect to display unix0, which makes sense as no Xserver seems to be installed). There seems to be a headless version (source) and I thought that would mean that I don't need an Xserver. (xvfb seems to be another option, but I'm not sure how to run it / what to install.) How can I run wkhtmltopdf in a Docker container, if I can't change the base image to openlabs/docker-wkhtmltopdf? | How to use wkhtmltopdf with Docker
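If upgrading is not an option, the xvfb route mentioned in the question generally works as well; a sketch for a Debian/Ubuntu-based image (package names assumed):
# in the Dockerfile
RUN apt-get update && apt-get install -y xvfb wkhtmltopdf
# at runtime, wrap the call in a virtual X server
xvfb-run -a wkhtmltopdf input.html output.pdf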