Docker and Docker Compose for PHP development with GitHub and Digital Ocean deployment

I have always missed an easy-to-follow tutorial on Docker, so I decided to create one myself. I hope it will help you understand why Docker is such a popular tool and why more and more developers are choosing Docker over Vagrant and other solutions.

When it comes to PHP development, you basically have three options for preparing a development environment for your project. The oldest way is to install the individual services on your development machine manually. With different versions of these services on the staging and production environments, you can run into dependency problems rather quickly. Also, it’s quite a challenge to manage different projects with different requirements on a single computer.

You can solve this problem with virtual machines. Download VirtualBox and set up your environment individually for every project, so they won’t interfere, because they will be totally separated. When you need to deploy your work along with your environment to a remote server, you will just provision the whole virtual machine. Vagrant can help you with this process because it allows you to deploy your project directly, for example to a DigitalOcean Droplet. But the problem is that you are working with full-blown operating systems, even though they are virtualized.

What if there is another way? What if you didn’t need a full operating system encapsulated in a virtual machine to keep your projects separated, and yet could have the same development environment everywhere: on your local machine, on the testing server, and even on the production server? Welcome to the world of Docker!

To better understand the difference between Docker and VM-based solution, take a look at the image below:

Docker versus Virtual Machines

Docker can help you as a developer in three areas by:

  1. eliminating the “it works on my machine” problem once and for all because it will package dependencies with your apps in the container for portability and predictability during development, testing, and deployment,
  2. allowing you to deploy both microservices and traditional apps anywhere without costly rewrites by isolating apps in containers; this will eliminate conflicts and enhance security,
  3. streamlining collaboration with system operators and getting updates into production faster.

Interested? Great! Let’s give it a try.

Docker Installation

This one is easy: all you need to do is download a package for your operating system. In my case, I downloaded Docker for Mac. You will install Docker like any other application. Once you see Docker happily humming in your top bar, you can start building awesomeness with me!

I will use the Atom editor along with the terminal-plus package. If you want to follow my setup, be aware that you need to tweak terminal-plus a bit to make it work. There is a minor issue, but it can be easily fixed like this:

  1. Go to ~/.atom/packages/terminal-plus/package.json and locate dependencies section.
  2. Remove the commit id (the #… suffix) at the end of the pty.js entry, so the entry points at the plain git URL.
  3. Go to the terminal and run these commands:

Restart the Atom editor and terminal-plus should work now! Of course, you can choose your own text editor and terminal client. Everything will work the same.

Create your first Docker image

The easiest way to create a Docker image is with a Dockerfile, which is something like a recipe for building an image.

An image is something like a blueprint. You can use one blueprint to create many objects, like cars or houses. Similarly, you can use one image to create many containers.

Let’s take a look at how easy it is to create a development environment based on PHP 5 and the Apache web server.

Create a new folder on your Desktop and name it docker-apache-php5.

Inside the docker-apache-php5 folder, create a new folder called src, where we will add a new file called phpinfo.php.

Create this file and put this code inside:
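A minimal version of phpinfo.php could look like this (the exact file was not preserved, but the next paragraph tells us it calls a single function):

```php
<?php
// Output a detailed report of the PHP configuration
phpinfo();
```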

This is a very simple piece of PHP code that calls just one function, but this function will output a nice table with the detailed configuration of the development environment we are creating.

Create another file, this time directly inside docker-apache-php5 folder and call it Dockerfile (just like this, no extension). Inside this file, we will write some directives for Docker.

Your directory structure should look like this now:
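Something like this (sketched from the steps above):

```
docker-apache-php5/
├── Dockerfile
└── src/
    └── phpinfo.php
```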

We want to build our development environment on PHP 5 and the Apache web server. The best way is to start with an image that is already available. In the case of PHP, there is probably no better source than the official image.

Docker images are available on Docker Hub. Sign up for a free account and once you are in, you can search for images.

Go ahead and type php in the search box.

Select php official image and let’s take a look at the details:

We want to use the 5.6-apache version, which is PHP 5.6 including the Apache web server. This is convenient because we don’t need to install Apache separately.

Go ahead and click on the link (5.6/apache/Dockerfile) which will get you to GitHub Repository of this image. Just take a look at the Dockerfile.

Don’t panic, our Dockerfile will have just a couple of lines because we will take advantage of the hard work of the PHP team and use their image. This doesn’t mean that you can’t sit down and write your own image based on Debian Linux, but why would you want to waste your time when you can just use what’s already done?

In order to use this image, go to your Dockerfile and write this line of code:
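Given the explanation that follows, the line is:

```dockerfile
FROM php:5.6-apache
```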

This says that our own image will be based on the 5.6-apache image created and maintained by PHP team. Sweet.

Type this highlighted line below:
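The line, quoted again later in the article, is:

```dockerfile
COPY src/ /var/www/html/
```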

Make sure there is a space between src/ and /var. It’s very important! This line says that we want the content of the src folder we created a few minutes ago to be copied to /var/www/html/. But you might wonder why, and where that folder is located.

This is the folder structure that will be created while our image is built, or more specifically, while we create a container from that image. I showed you the Dockerfile for the 5.6-apache image on purpose. Remember the line FROM debian:jessie?

Our image will be based on the 5.6-apache image, but even this image is based on another image. In this case, it is the debian:jessie image. So basically, the PHP team grabbed the debian:jessie image and added their own modifications with their Dockerfile, just like we are adding our own modifications to the 5.6-apache image with our Dockerfile.

The point is that we are all adding layers to the basic debian:jessie image which is a Linux distribution and as you probably know, Linux file system starts with root (/) followed by specific subfolders. Mac OS is based on UNIX and it works similarly. Your Desktop is actually located in /Users/your-name/Desktop. In my case, it is /Users/zavrelj/Desktop.

Now, for a web content, Apache web server uses a directory called html which is stored inside www directory inside var directory. That’s how it is and because we know it, we can say that once our image with Apache web server is initialized or spun up to create a container, we can safely copy the content of our src folder to /var/www/html folder on Debian because it will be there since the Apache web server will create it during its own installation.

Give yourself a pause and let this all sink in. It’s a really important concept.

Ok, the last line we will add to our Dockerfile looks like this:
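Based on the explanation below, it is:

```dockerfile
EXPOSE 80
```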

It says that we want the port 80 to be available for incoming requests.

Your Dockerfile should look like this now:
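Putting the three instructions together:

```dockerfile
FROM php:5.6-apache
COPY src/ /var/www/html/
EXPOSE 80
```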

Once you have this, save the changes and open your favorite terminal app. In the terminal, set your docker-apache-php5 folder as the working directory. I expect you to know how to work with the command line.

If you don’t, just open the terminal and type:
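The command in question is simply:

```shell
pwd
```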

It will show you your current working directory. If you have followed me step by step so far and you are on a Mac, this command should get you to the right directory:
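Assuming you created the folder on your Desktop as described above:

```shell
cd ~/Desktop/docker-apache-php5
```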

Copy this code and paste it into the terminal, then hit Enter. To make sure you are in the right directory, type ls in the terminal and hit Enter. If you see something like this, you’re good to go:

It’s important to be in the right directory, where the Dockerfile is saved. We will now build our image from the Dockerfile.

Type this line in terminal:
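Based on the explanation that follows (the -t option and the trailing dot), the command is:

```shell
docker build -t php-image .
```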

This command will build our image. The -t option lets you give the image a custom name; in our case, it will be php-image, because I want you to be able to distinguish between images and containers from the very beginning. Finally, the dot at the end of this command means that the Dockerfile is located in the current directory. That’s why we wanted to get there!

If everything went right, you should see something similar in your terminal:

Docker has just created an image and assigned it an ID, in my case 576a14c36bc9. To see the list of all your images, just type this command in your terminal:
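The listing command is:

```shell
docker images
```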

As you can see, there are two images, and they are sorted by the date of creation. The first one is our image we have just built, the second one is the image Docker pulled from Docker Hub. As you can see, its name is php and the tag is 5.6-apache, together it makes php:5.6-apache which is exactly what we wrote in the first line of our Dockerfile! Docker needed to pull this image first in order to create our own image. That’s why we have two images even though we have created only one.

Now we need to create a container from our php-image. Our image is just like a snapshot. To be able to actually work with your services like PHP, you need to spin up the container from that image.

Type this in your terminal:
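The command, quoted again later in the article, is:

```shell
docker run -p 80:80 -d --name php-container php-image
```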

This will create a container from our php-image.

-p 80:80 is port mapping, remember how we exposed 80 in the Dockerfile? Well, now we need to tell the container to use the exposed port 80 and deliver its content to the port 80 of our localhost.

-d stands for detached mode, which sends the process to the background so you can still use the same terminal window,

--name allows us to give our container a name of our choice; otherwise, Docker would pick one for us randomly,

And finally, at the end of this command is the name of the image from which we want to create our container.

Remember that the image name must come only after all the options! To make sure that your new container is up and running, type this command in the terminal:
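That command is:

```shell
docker ps
```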

You should see this:

You can see the container’s name, the image it was created from, ports, ID, and status.

Since our container is waiting for some work, let’s make it do its job! Open your web browser and type localhost/phpinfo.php

You should get this:

It means that everything works and we are running PHP 5.6.30 on our local web server! Great work!

Adding database

Unfortunately, this container won’t work with a database, because all we have is PHP and the Apache web server. To add a database server like MySQL to our development environment, we have to create a container for the database and connect it to our php-container, so that the two containers, or rather the services inside them, can talk to each other.

Let’s take a look at how this can be done. We will start again in Docker Hub and search for mysql:

And sure enough, there is an official repository maintained by the MySQL team. Let’s create our own mysql-container by running this code:
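A sketch of that command; the MYSQL_ROOT_PASSWORD environment variable is required by the mysql image, and the value secret matches the root password used later in this article:

```shell
docker run -d --name mysql-container -e MYSQL_ROOT_PASSWORD=secret mysql
```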

When you run this code, Docker will first look for mysql:latest image on your computer. If it’s not available, it will pull it from Docker Hub first and then spin up the container from it. This is very important because Docker is trying to save your disk space. If mysql:latest image is already on your computer, Docker will use it instead of downloading yet another copy.

Remember this, we will come back to this concept later.

Type docker ps again. You should now see two containers and both are running.

This demonstrates that you can immediately spin up a container from an already existing image. Only if you want to create your own image do you need to build it first and then run it to spin up a container from it.

Let’s write some MySQL code to see if MySQL is working. Since we are running PHP 5.6, we can use the mysql_connect function. Even though it’s deprecated in PHP 5 and completely removed from PHP 7, it will be just fine for our testing purposes.

In the src directory, create a new file named mysql.php and place this content in it:
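A minimal sketch of mysql.php; the hostname (the link alias used later), credentials, and the selected database are my assumptions, while the success message matches the one quoted further down:

```php
<?php
// "mysql" is the alias we will give the linked mysql-container;
// credentials match the ones used for the container's setup.
$link = mysql_connect('mysql', 'root', 'secret');
if (!$link) {
    die('Could not connect: ' . mysql_error());
}
// Selecting a database proves the server answers queries
if (!mysql_select_db('mysql', $link)) {
    die('Could not select database: ' . mysql_error());
}
echo 'Successfully connected to the database server! Database Users selected!';
```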

This is very simple PHP code. It tries to connect to the database server with the credentials we provided. If the connection cannot be established, it will display an error. If the connection is successful and the database can be selected, it will display a success message.

Now, try to go to localhost/mysql.php in your web browser. You should get this message:

Our mysql.php file cannot be found, even though it is in the same directory as the phpinfo.php file, which can be found just fine.

The problem is that even though we added a new file and thus changed the content of our project, we are still running the old php-container based on the original php-image which has no clue about the changes we have just made.

To fix this, we need to rebuild our php-image and spin up a new container from this updated image. If you think now that this is a lot of hassle, it is, but just for now. You will truly appreciate a feature called volumes I will introduce later, once you go with me through this hell.

Stop the php-container we have created from php-image by running this command:
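The command is:

```shell
docker stop php-container
```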

To list all containers, even those that are not running, use docker ps -a command. You can see that php-container has exited:

Once the container is stopped, we can remove it with this command:
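That is:

```shell
docker rm php-container
```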

You can remove a container even while it is running; in that case, you need to add the -f option to the command above. Once the php-container is removed, you can remove the php-image as well. If you tried to remove php-image while the php-container still existed, Docker would protest.

Ok, now we can rebuild our php-image again and the only reason for that is to copy our new mysql.php file into /var/www/html folder. Remember the instruction from Dockerfile? Here it is again: COPY src/ /var/www/html/

This is why we did all of this, to get our new mysql.php copied from src folder to /var/www/html folder. There is another, way better solution, though, and we will get to it soon. Let’s build our image again:

and spin up the updated container from it:
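The same two commands as before:

```shell
docker build -t php-image .
docker run -p 80:80 -d --name php-container php-image
```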

Now, navigate to localhost/mysql.php from your web browser. The file apparently exists, but we have another problem:

The PHP image is very lightweight; it doesn’t include most PHP extensions, and mysql is one of those missing. That means that PHP doesn’t know about any function called mysql_connect(). To fix this, we need to add the mysql extension to our php-image first. But don’t be scared, we won’t undergo the same painful process again.

You can directly rebuild the image and then run the container without the painful process of stopping the container, removing it, and rebuilding the image. But I didn’t tell you sooner because I wanted you to try all these commands, so you know how to manage containers and images. I hope you will forgive me this pesky move 🙂

Go to your Dockerfile and add this highlighted line at the bottom:
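For the php:5.6 base image, that line is most likely the standard extension-install helper:

```dockerfile
RUN docker-php-ext-install mysql
```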

This will add mysql extension to our PHP image. Now just run this command in terminal:
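That is the same build command as before:

```shell
docker build -t php-image .
```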

You can see that in Step 4/4, a mysql extension has been added to our image:

If you list all images with docker image ls, you can see that php-image was created only a few seconds ago. This means that if you build an image with the same name, the original image is overwritten by the new one.

The same can’t be done with the container, though. If you try and run this command now while the original container is still running…
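That is, running the same command again:

```shell
docker run -p 80:80 -d --name php-container php-image
```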

…you will get this error message:

You need to stop and remove the currently running php-container first. As I mentioned already, you can do this at the same time by using -f option:
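Stop and remove in one step:

```shell
docker rm -f php-container
```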

Now you can spin up the php-container again, but this time the updated php-image will be used.

You might wonder if you could just spin up a container with a different name and keep the original one running.
You could do that! The problem is that you would have to map a different port as well, because port 80 would still be taken by the original php-container and thus unavailable for a new mapping.
This could, of course, be solved by using, for example, -p 81:80 instead of -p 80:80. Finally, you would have to explicitly type the port into the web browser, like this: localhost:81/mysql.php.
Port 80 is the default, which is why it doesn’t have to be written explicitly, unlike other ports.

OK, navigate to localhost/mysql.php from your web browser again. Even though we get another warning now, we are getting closer because the new error message comes directly from mysql_connect() function. That means that it exists and PHP knows about it. But it seems like there is a problem with a network address:

The reason for this error is the fact that we have two separate containers. One for PHP (php-container) and one for MySQL (mysql-container). And they don’t know about each other, they don’t talk to each other. Let’s fix this. Stop php-container once again and remove it at the same time:
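As before:

```shell
docker rm -f php-container
```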

Run this code:
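A sketch of that command; the alias mysql after the colon is my assumption (it must match the hostname used in mysql.php):

```shell
docker run -p 80:80 -d --name php-container --link mysql-container:mysql php-image
```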

You are already familiar with this code except for the link part. It says that we want to link our php-container with mysql-container. Now, navigate to localhost/mysql.php in your web browser; this time you should see this:

Perfect! Now, in order to be able to modify the content of our src folder without the need to rebuild images all the time, we will add -v option to our docker run command. So for the last time, stop and remove php-container:
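Stop and remove in one step:

```shell
docker rm -f php-container
```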

Run it again with this new option:
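A sketch of the command with the volume option added; the host path assumes the project lives on your Desktop as described earlier:

```shell
docker run -p 80:80 -d --name php-container --link mysql-container:mysql \
    -v ~/Desktop/docker-apache-php5/src/:/var/www/html/ php-image
```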

This option should be quite familiar. We used something similar in our Dockerfile to tell our image to copy the content of our src folder to the Apache web server default directory inside the container.

Well, this time, we will create a volume, which means that those two locations will stay in sync. Actually, we will mount our folder from the Desktop to a location inside the container. Once you make any kind of change in the src folder, it will be automatically available in the /var/www/html folder of the Apache web server.

Let’s test this! Go to your mysql.php file and add “AMAZING!” at the end of the echo like this:

echo "Successfully connected to the database server! Database Users selected! AMAZING!";

Save the file and refresh the browser! Isn’t that amazing? 🙂

Docker Compose

So far, we did it all manually. We configured and built images, created containers, and linked them together. If you work with two or three containers, it is doable, even though we have spent quite some time on it. However, if you need to set up an environment with many more containers, it becomes very tedious to go through all those steps manually every time.

Luckily, there is a better way. Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to create a YAML configuration file where you configure your application’s services and define all the steps necessary to build images, spin up containers, and link them together. Finally, once all this is done, you just set it all in motion with a single command.

Let’s take a look at how this works. This time, we will create a LEMP stack, which will consist of Linux, PHP 7, Nginx, and MySQL. It is generally recommended to have one process or microservice per container, so we will separate things here. We will create six containers and orchestrate them with Docker Compose.

As we already did in the previous section, we will again use official images and extend them with our Dockerfiles. First, let’s delete all containers and images, so we can start with a clean slate.

To list all containers:

To delete containers:

To list all images:

To delete images:

The -f option will force deletion even if the container is running or the image is in use.
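One way to perform the four steps above (the $(…) form deletes everything at once; you can also pass individual names or IDs):

```shell
docker ps -a                       # list all containers
docker rm -f $(docker ps -aq)      # delete all containers
docker images                      # list all images
docker rmi -f $(docker images -q)  # delete all images
```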

If by any chance you aren’t able to delete a container or image by its name, use its ID instead. This is actually the only viable alternative if you happen to have an image with no name (<none>).

List all containers with:

and all images with:
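For reference, the two listing commands:

```shell
docker ps -a
docker images
```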

All clear? Great! Let’s begin! Go to your Desktop and create a new folder called docker-nginx-php7.


Let’s start with a web server. Instead of Apache, we will use Nginx this time. First, we will check if there is any official image on Docker Hub. And sure enough, here it is:

We will choose the latest tag, so I hope you remember that the name of the image and the tag go together like this: nginx:latest

Now, create a new file in your docker-nginx-php7 directory and save it as docker-compose.yml

Inside, write this:
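A sketch of that first docker-compose.yml; the compose file version is my assumption, while the image, container name, and port mapping follow from the text:

```yaml
version: '2'

services:
  nginx:
    image: nginx:latest
    container_name: nginx-container
    ports:
      - "80:80"
```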

This should be somewhat familiar. Remember when we ran the mysql image? We used this command in the terminal: docker run -p 80:80 -d --name php-container php-image.

Now, instead of running this command, we will take the options and save them in a configuration file. Then, we will let Docker Compose run commands for us by following the instructions in this file. Save the file. This is what it should look like:

Go to the docker-nginx-php7 directory in your terminal and run this command:
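The command is:

```shell
docker-compose up -d
```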

-d option still means detached, nothing new here.

Docker Compose will pull Nginx image from Docker Hub, create a container and give it a name we specified. Then, it will start the container for us. Docker Compose will do all of these steps automatically.

I gave the container a specific name just for educational purposes here, so we can easily identify it. But it’s not a good practice in general, because container names must be unique. If you specify a custom name, you won’t be able to scale that service beyond one container, so it’s usually better to let Docker assign automatically generated names instead. But in our case, I want you to understand how things work.

Use the familiar docker ps command to see the list of running containers. Write down the IP address assigned to nginx-container and navigate to this address with your web browser. You don’t have to write the port number, since 80 is the default value.

You should see your Nginx web server running:

That was easy, right?


Let’s say that we want to add the PHP to the mix and we want it to be automatically downloaded, configured and started. We also want to modify our Nginx web server a bit. You know the drill. If you want to modify the official image and add your own changes, you need to use Dockerfile as we already did in the previous section.

Let’s do this again. First, we will create a new directory inside our docker-nginx-php7 folder and name it nginx. In this directory, we will save a new Dockerfile. Next, we will create a new index.php file, which will be saved in the www/html directory inside the docker-nginx-php7 folder, with this content:
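A sketch of index.php; the exact markup is my assumption, but it includes the <h1> tag that gets edited later in the article and a phpinfo() call to prove PHP is running:

```php
<!DOCTYPE html>
<html>
  <body>
    <h1>Hello from Nginx and PHP 7!</h1>
    <?php phpinfo(); ?>
  </body>
</html>
```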

This simple page will help us test if PHP is running.

If you use the Atom editor, you can create a new file and the whole directory structure at the same time! Just right-click the name of the docker-nginx-php7 folder in the left pane in Atom, choose New File, and instead of typing just the name of the file, type the whole path www/html/index.php. Atom will create the file and both directories for you!

Your folder structure should look like this now:
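Something like this (sketched from the steps above):

```
docker-nginx-php7/
├── docker-compose.yml
├── nginx/
│   └── Dockerfile
└── www/
    └── html/
        └── index.php
```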



To configure our Nginx web server, we will use default.conf, so create this file and save it in nginx folder. Now add this content inside default.conf and save it:
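A sketch of default.conf; the fastcgi_pass line is the one the next paragraph highlights, and the rest is a typical Nginx/PHP-FPM setup that I am assuming here:

```nginx
server {
    listen 80;

    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass php:9000;   # pass PHP requests to the php service
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```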

The one thing you want to note is the highlighted line. Nginx will pass requests to port 9000 of the php-container we are about to create.

Now back to Dockerfile for Nginx. Write these two lines in it and save the changes:
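Based on the explanation that follows, the two lines are:

```dockerfile
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
```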

This means that we will start with the default nginx image (nginx:latest), but then we will use our own configuration we have just saved in default.conf and copy it over the original configuration. Now we need to tell Docker to use our own Dockerfile instead of downloading the original image, and since the Dockerfile is inside the nginx directory, we need to point to that directory. So instead of using image: nginx:latest, we will use build: ./nginx/

We will also create volumes so that both the Nginx web server and PHP can see the content of the www/html/ directory we created earlier, namely our index.php file, which sits inside. This content will be in sync with the container’s directory /var/www/html/ and, more importantly, it will persist even when we decide to destroy the containers.

Next, we will create a new php-container using the original PHP image, this time the PHP 7 FPM version. We need to expose port 9000, which we set in the default.conf file, because the original image doesn’t expose it by default. And finally, we need to link our nginx-container to php-container. After implementing all those changes, our modified docker-compose.yml will look like this:
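A sketch of the updated file, assuming the version-2 compose syntax and the service name php that the Nginx configuration passes requests to:

```yaml
version: '2'

services:
  nginx:
    build: ./nginx/
    container_name: nginx-container
    ports:
      - "80:80"
    links:
      - php
    volumes:
      - ./www/html/:/var/www/html/

  php:
    image: php:7.0-fpm
    container_name: php-container
    expose:
      - "9000"
    volumes:
      - ./www/html/:/var/www/html/
```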

You might wonder what the difference is between ports and expose. Exposed ports are accessible only by the containers to which they were exposed; in our case, php-container will expose port 9000 only to the linked container, which happens to be nginx-container. Ports defined as ports are accessible by the host machine, so in my case, that would be my MacBook, or rather the web browser I will use to access those ports.

Even though our nginx-container is still running, we can run this command:
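The same command as before:

```shell
docker-compose up -d
```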

This time, Docker will pull php:7.0-fpm image from Docker Hub and create a new image based on the instructions in our Dockerfile.

As you can see, Docker is telling us that it built the image for the nginx service only because it didn’t exist. This means that if this image had already existed, Docker wouldn’t have built it and would have used the existing image instead. This is very important, because even if you change the Dockerfile in the future, Docker will ignore those changes unless you specifically say that you want to rebuild the existing image by using the command docker-compose build.

Go ahead and take a look at the list of all images:
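That is:

```shell
docker image ls
```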

You should see the official nginx image, the official php image that has just been pulled and, finally, our modified version of the official nginx image, whose name is dockernginxphp7_nginx. This name is based on the name of the directory where our docker-compose.yml file is saved. The last part of its name comes from the name of the service from which our image is derived, in our case _nginx.

docker ps will show you two containers running:

If your nginx-container is not running, use the docker logs nginx-container command to see what the problem is. Most probably, it will be some kind of typo in the default.conf file.

Even though we didn’t stop the original nginx-container based on the official nginx image, it’s not only stopped, it’s completely gone. Instead, we have our new modified nginx-container running, but this one is spun up from dockernginxphp7_nginx image. If you go back to your web browser and refresh the page, you should see this:

Let’s see if the mounted directory works as expected. Go to your index.php file and write AMAZING! inside <h1> tag like this:

When you refresh the page, AMAZING! will appear:

One last thing before we move on to the database. As you might have noticed, we have mounted the same directory, www/html/, to both our containers, nginx-container and php-container. While this is perfectly legit, it is a common practice to have a special data container for this purpose. A data container holds data, and all other containers are connected or linked to it.

In order to set this up, we need to change our docker-compose.yml file once again:
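A sketch of that change; the command: "true" line is my assumption for making the data container exit immediately, which matches the text below saying it doesn’t need to run:

```yaml
version: '2'

services:
  nginx:
    build: ./nginx/
    container_name: nginx-container
    ports:
      - "80:80"
    links:
      - php
    volumes_from:
      - app-data

  php:
    image: php:7.0-fpm
    container_name: php-container
    expose:
      - "9000"
    volumes_from:
      - app-data

  app-data:
    image: php:7.0-fpm
    container_name: app-data-container
    volumes:
      - ./www/html/:/var/www/html/
    command: "true"
```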

As you can see, we added a new container, app-data-container, which uses the same volumes parameters we used for php-container and nginx-container so far. This data container will hold the application code, so it doesn’t need to run. It only needs to exist to be accessible, but since it won’t serve any other purpose, there is no need to keep it running and thus waste resources.

We use the same official image we already pulled previously. Again, this is to save some disk space. We don’t need to pull any new image for this purpose; the php image will work just fine. Also, we told Docker to mount volumes from app-data for nginx-container and php-container, so we won’t need the volumes options for those anymore and we can delete them.

Finally, we say that both nginx-container and php-container will use volumes from app-data-container. Run docker-compose up -d once again. As you can see in the terminal, Docker has just created a new app-data-container and re-created php-container and nginx-container.

Now, let’s see the list of containers, but this time, let’s display all containers, not just those that are running:
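That is:

```shell
docker ps -a
```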

As you can see, the app-data-container has been created but it’s not running because there is no reason for it to run. It only holds data. And it has been created from the same image as php-container, so we saved hundreds of megabytes we would otherwise need if we pulled data-only container.


We need to modify our php image, because we need to install the extension that will allow PHP to connect to MySQL. To do so, we will create a new folder named php, and inside it we will create a new Dockerfile with this content:
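A sketch of that Dockerfile; the article only says “the extension that will allow php to connect to mysql”, so mysqli is my assumption (it matches the connection code used further below):

```dockerfile
FROM php:7.0-fpm
RUN docker-php-ext-install mysqli
```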

Your folder structure should look like this now:
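Something like this (sketched from the steps above):

```
docker-nginx-php7/
├── docker-compose.yml
├── nginx/
│   ├── Dockerfile
│   └── default.conf
├── php/
│   └── Dockerfile
└── www/
    └── html/
        └── index.php
```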

Next, we need to change our docker-compose.yml file again. We will change the way the php-container is built, then we will add mysql-container and mysql-data-container, and finally, we will link php-container to mysql-container.

We will also define some environment variables for mysql-container. The MYSQL_ROOT_PASSWORD and MYSQL_DATABASE variables will be applied only if the volume doesn’t contain any data; otherwise, they will be ignored. It makes sense, because otherwise we would create a new database with the same name and root password each time we spun up a container, thus overwriting our database content. Not the behavior we want. I will name my database zavrel_db, but go ahead and change the name if you feel like it!

As with app-data-container, mysql-data-container will just hold the data, this time not our application code, though, but database data like tables with rows and their content. Since we won’t access this data directly, we don’t really care where they will be located on our host machine, so we don’t need to mount them to our directory structure.
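The mysql-related services in docker-compose.yml might then look like this (the root password and database name come from this article; the regular user’s credentials and the exact service names are my assumptions; the php service additionally gets links: - mysql and build: ./php/):

```yaml
  mysql:
    image: mysql:latest
    container_name: mysql-container
    volumes_from:
      - mysql-data
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: zavrel_db
      MYSQL_USER: zavrel        # assumed regular user
      MYSQL_PASSWORD: password  # assumed password

  mysql-data:
    image: mysql:latest
    container_name: mysql-data-container
    volumes:
      - /var/lib/mysql
    command: "true"
```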

To test our MySQL setup, we will modify our index.php as well, so we can try to access our database:
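A sketch of the updated index.php; the hostname and the regular user’s credentials are my assumptions (they must mirror the mysql-container environment), while the query and the “no tables” message follow the description below:

```php
<?php
// Credentials mirror the environment variables of mysql-container;
// the hostname "mysql" is the compose service name (an assumption).
$host     = 'mysql';
$user     = 'zavrel';     // assumed regular user
$password = 'password';   // assumed password
$database = 'zavrel_db';

$mysqli = new mysqli($host, $user, $password, $database);
if ($mysqli->connect_error) {
    die('Connection failed: ' . $mysqli->connect_error);
}

// Ask INFORMATION_SCHEMA for real tables (views are excluded)
$result = $mysqli->query(
    "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'"
);

if ($result->num_rows === 0) {
    echo 'There are no tables in database "' . $database . '".';
} else {
    while ($row = $result->fetch_assoc()) {
        echo $row['TABLE_NAME'] . '<br>';
    }
}
```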

This new script will take the values we defined for the database, user, and password (notice that these are the same as the environment values we set for our mysql-container) and try to establish a database connection. Once the connection is established, the script will try to select all tables from INFORMATION_SCHEMA where the table type is BASE TABLE. Now, if you’re not familiar with MySQL, this might be a bit confusing for you.

Basically, every MySQL instance has a special database that stores information about all the other databases that the MySQL server maintains. This special database is called INFORMATION_SCHEMA, and it contains several read-only tables. They are actually views, not real tables. Real tables are of the BASE TABLE type.

So when we select tables of type BASE TABLE, we are actually looking only for real tables, and those we have yet to create. If it’s too much for you, don’t worry, it will all make sense soon.

Anyway, once you have the Dockerfile and index.php updated, run docker-compose up -d again. Docker will pull the mysql image and then download and install the PHP extension for connecting to the database.

Finally, it will start app-data-container, create mysql-data-container and mysql-container and recreate php-container and nginx-container.

Check with docker ps -a that you now have five containers, two of them exited (mysql-data-container and app-data-container).

Refresh index.php in your web browser. You should see this line at the bottom: There are no tables in database “zavrel_db”.

Which is perfectly fine, because we haven’t created any tables in our database yet. However, there are already some tables, but those are not visible to a regular user. If you want to see them, change $user to “root” and $password to “secret” in index.php. This way, you will get access to everything!

Refresh the browser once more:

What a list! Right? Ok, let’s put back our regular user who can see only what he should see:

Deep down the rabbit hole

So far, containers have been like black boxes for us. We ran them, we listed them, but we never saw what is inside. That’s about to change now. I will show you how to get right inside mysql-container and work with the MySQL server from within.

Run this command from your terminal:
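A sketch of the standard docker exec invocation, assuming your container is named mysql-container as in our docker-compose.yml:

```shell
# Start an interactive bash session inside the running container;
# -i keeps STDIN open, -t allocates a terminal so you get a prompt:
docker exec -it mysql-container /bin/bash
```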

Now you are inside the container! You can tell by the new prompt in your terminal:

It now consists of root@ followed by the ID of the mysql-container. The ID will be different in your case, but it is the same ID your mysql-container has been assigned. Want to check? List all running containers with docker ps and look for CONTAINER ID in the list; it’s the first column.

You can now take a look around as you would in any other Linux system:

  • ls command will show you the list of files and directories,
  • pwd command will print the current directory, which is the root directory (/),
  • uname -or command will show you the kernel release and confirm that this is actually a Linux operating system.
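These exploration commands behave the same inside the container as on any other Linux system:

```shell
ls /        # list files and directories in the root directory
pwd         # print the current working directory
uname -or   # print the kernel release and the operating system name
```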

Remember how we defined the volume for mysql-container in the docker-compose.yml file?

Let’s take a look at this directory:

ls command will show you its content:

All right, let’s end this quick trip by going back to the root directory:

Now, we will run the mysql command line interface (MySQL CLI) inside our mysql-container, which will allow us to work with the database server.

I want you to stop for a while now to let this sink in and appreciate it. You are working on your physical computer. This computer is running an operating system, Windows or macOS (if you’re on Linux, it’s a bit different). Inside your operating system, you are running a Docker container, which is basically a Linux machine.
Now, we will go even deeper and run another command line interface to work with the database server. Can you see how we go deeper and deeper, layer after layer, down the rabbit hole? 🙂

Ok, let’s go back to work! To get access to the MySQL CLI, we need a username and a password. Luckily for us, we already created both when we set up the environment variables for our mysql-container in the docker-compose.yml file. I hope you noticed that we also set up the root password as an environment variable. Remember this line?

You might ask: how do we know that there is a user named root? Well, this user always exists. That’s why we were able to set a password for it with the MYSQL_ROOT_PASSWORD variable without even questioning its existence.

To sign in to the MySQL server, though, we won’t use root access, because that would give us too many results, as root can see everything.

Sign in with a regular user instead: mysql -uuser -ppassword

-uuser means user is “user”

-ppassword means password is “password”

Run the command and you will be taken deeper, inside the world of the MySQL server. Again, you can tell that we are somewhere else by the prompt, which has changed from root@ followed by the container ID to mysql>.

Inside mysql, there are different rules and different commands. Start with the command show databases; Don’t forget the semicolon! I told you, there are different rules in this world.

You will see a nice table with the list of all available databases. One of them is our own zavrel_db. Remember when we created it? Again, we defined it while preparing our mysql-container in the docker-compose.yml file: MYSQL_DATABASE: zavrel_db

Let’s create a new table in our database. First, we need to select it, so mysql knows which database we want to work with:

You will get the information that the database has been changed. Now, we can create a new table:
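At the mysql> prompt, the two steps could look like this. The users table is the one we will meet again later in phpMyAdmin, but its column definitions here are my own illustrative choice:

```sql
-- Select our database so subsequent statements apply to it
-- (mysql answers with "Database changed"):
USE zavrel_db;

-- Create an example table; only the name "users" matters for this
-- tutorial, the columns are illustrative:
CREATE TABLE users (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  email VARCHAR(100) NOT NULL
);
```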

Go to your web browser and refresh the page, you will see this table in the list:

Ok. We are done here, let’s get all the way back to the familiar terminal of our computer. First, we need to leave the MySQL CLI. This can be done with the command \q

Go ahead and run it! MySQL will say Bye and you are back inside your mysql-container. Again, you can tell by the root@ prompt with the container ID. Let’s go one layer up. To leave mysql-container, just use the shortcut CTRL + D or type exit and hit Enter. See? We are finally back in our computer’s terminal! How was it? Did you like the trip? I hope you did!

I wanted to show you this rather complicated way of working with databases and tables so you can truly appreciate the web client we will learn about in a minute. But first, I want to go back to volumes once again, because we need to address a few more things about them.

Inspecting containers

Remember how I told you that we don’t really care where Docker stores the volumes of mysql-data-container on our computer (the host machine), because we won’t access them directly anyway? Well, if you are curious where they are nevertheless, there is a way to find out.

Run this command:

Look for the Mounts section in the output. Next to the Source attribute is the location of the database data on the host machine. It should be something like /var/lib/docker/volumes/ and so on. Keep in mind that if you are on a Mac, this path lives inside the Linux VM Docker runs in, not directly in your macOS filesystem.
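The command in question is docker inspect; a sketch, with an optional --format filter to jump straight to the mounts:

```shell
# Dump all the metadata Docker keeps about the container:
docker inspect mysql-data-container

# Or print just the Mounts section as JSON:
docker inspect --format '{{ json .Mounts }}' mysql-data-container
```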

Dangling volumes

When you create a container with mounted volumes and later destroy the container, mounted volumes won’t be destroyed with it unless you specifically say you want to destroy them as well. Such orphan volumes are called dangling volumes.

So far we have used the command docker rm container-name -f to remove containers, but if you want to destroy volumes as well, you need to add another option, -v. So it will look like this: docker rm -v container-name -f.

But what about containers we already destroyed so far without destroying their volumes as well? Let’s check out if there are any such volumes. First, let’s list all the volumes we have created so far:

Now let’s narrow our list by adding the filter for dangling volumes only:

-q stands for quiet, which displays only volume names

-f stands for filter
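Listing volumes and narrowing the list down can be sketched as:

```shell
# List every volume Docker knows about:
docker volume ls

# List only dangling (orphaned) volumes:
docker volume ls -f dangling=true

# The same filter, printing just the volume names:
docker volume ls -qf dangling=true
```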

It seems like we have some:

To delete them, we will combine two commands here:

This will remove all dangling volumes for us. Since Docker 1.13 you can use an easier command instead:
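The two-command combination feeds the quiet, filtered volume list into docker volume rm; the Docker 1.13+ shortcut is docker volume prune:

```shell
# Remove every dangling volume via command substitution:
docker volume rm $(docker volume ls -qf dangling=true)

# Docker 1.13+: the built-in shortcut (asks for confirmation):
docker volume prune
```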

This will remove all volumes not used by at least one container. Now if you check volumes again, you should have only one volume left:

We reclaimed almost 500 MB of space!


Ok, let’s move on and spin up our last container. phpMyAdmin is a great tool for managing MySQL databases directly from the web browser. No one will force you to stop your trips deep into the MySQL CLI if that’s what you like, but a web interface is way more convenient in my opinion. Add the following lines at the end of your docker-compose.yml file:

By now, everything should be fairly clear. We start with the official Docker image and publish the container’s port 80 to port 8080 of our host machine, so we can access phpMyAdmin from the web browser. We need to use a different port, though, because port 80 is already taken by the Nginx web server. Finally, we link this container to our mysql-container and set an environment variable.
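A sketch of what those lines could look like, assuming the official phpmyadmin/phpmyadmin image; the service name and link alias are my assumptions, and PMA_HOST must match whatever hostname the MySQL container is reachable under:

```yaml
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  ports:
    - "8080:80"          # host port 8080 -> container port 80
  links:
    - mysql-container:mysql
  environment:
    PMA_HOST: mysql      # tells phpMyAdmin which MySQL host to use
```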

Go ahead and run this command once again:

Docker will pull phpMyAdmin image and create phpmyadmin-container.

Go to your web browser and add :8080 behind the IP address your Nginx is running on; localhost:8080 works as well.

You should be presented with this login screen:

Now, log in as a regular user (user / password). You’re in the MySQL server! Check the list of databases in the left pane and click on zavrel_db. Can you see the users table we recently created inside the MySQL CLI?

Give yourself a little break, maybe a cup of coffee, and let it all digest a bit. We will continue with more exciting stuff. You have learned a lot by now, so pat yourself on the back!

GitHub Volume

Mounting a local directory to make it accessible to nginx-container and php-container is fine until you want to deploy your application to some remote VPS (virtual private server). In such a case, it would be great to have your code copied to a remote volume automatically. In this section, I will show you how to use GitHub for this.

Let’s make a copy of our docker-compose.yml file and save it as docker-compose-github.yml. We will make some changes to our app-data-container so it won’t mount a local directory but will instead get a repository from GitHub. If you have your code in a public repository on GitHub, this will make it very easy to spin up your development environment on a remote server with the code cloned from your repository.

First, we need to create a Dockerfile for app-data image. Create a new folder called app-data and save the Dockerfile there with this content:

Your folder structure should look like this now:

Again, we are using the already pulled official php image, but on top of that, we will update the underlying debian:jessie Linux distro and then install git. Next, we will clone my public repository, which I have created for this purpose, into the /var/www/html directory inside the container. Finally, we will create a volume from this directory, so other containers, namely nginx-container and php-container, can access it.
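Putting those steps together, a sketch of what the app-data Dockerfile could look like; the php:7-fpm tag is my assumption, and the repository URL is a placeholder for the author’s demo repo:

```dockerfile
# Official PHP image (the tag is an assumption):
FROM php:7-fpm

# Update the underlying debian:jessie packages and install git:
RUN apt-get update && apt-get install -y git

# Clone the public repository into the web root (placeholder URL):
RUN git clone https://github.com/<user>/<repo>.git /var/www/html

# Expose the web root as a volume so nginx-container and
# php-container can mount it via volumes_from:
VOLUME /var/www/html
```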

Now, we need to change app-data image instructions in our docker-compose-github.yml file like this:

Ok, let’s clean up everything, so we can start with a clean slate.

Stop all containers created with a docker-compose command:

Remove all those stopped containers including volumes that were attached to them:

Clean dangling volumes:
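Assuming the subcommands match the descriptions above, the three cleanup steps can be sketched as:

```shell
# Stop all containers created from docker-compose.yml:
docker-compose stop

# Remove the stopped containers together with their attached volumes:
docker-compose rm -v

# Remove any dangling volumes that are left over:
docker volume rm $(docker volume ls -qf dangling=true)
```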

In order to use our new docker-compose-github.yml file, we need to tell docker-compose about it, otherwise, it would use the default docker-compose.yml as always.

Rebuild the images with the new configuration file:

and spin up containers again:
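Pointing docker-compose at the alternative file is done with the -f option; a sketch:

```shell
# Rebuild the images using the GitHub-based configuration:
docker-compose -f docker-compose-github.yml build

# Spin up the containers in detached mode:
docker-compose -f docker-compose-github.yml up -d
```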

Navigate to your page in the web browser and you should see this:

Digital Ocean

Let’s provision our development environment to a remote server. Digital Ocean is a great service. If you don’t have an account yet, sign up with my referral link and you will get $10 in credit!

Once you’re in, create a new Droplet:

and choose Docker from One-click apps:

Pick the smallest size available, it’s more than enough for our purposes:

Since I want you to use SSH for the remote access to your Droplet, you need to set it up, unless you already have it. The whole process is quite easy. Open new terminal window and type:
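The key-generation command is likely a plain ssh-keygen invocation; RSA is my assumed key type:

```shell
# Generate a new SSH key pair; accept the default path with Enter
# and choose a passphrase when prompted:
ssh-keygen -t rsa
```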

When you’re asked where to save the key, just hit Enter.

If some other key is already there, it will be overwritten.

Enter the password for the newly generated key (twice).

Once you see this, your key is ready:

Run this command to display the public key, select it and use CMD + C shortcut to copy it to the clipboard:
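Assuming the key was saved at the default location, displaying and copying it could look like this:

```shell
# Print the public key so you can select and copy it:
cat ~/.ssh/id_rsa.pub

# On macOS, you can pipe it straight to the clipboard instead:
pbcopy < ~/.ssh/id_rsa.pub
```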

Go back to Droplet setup and hit New SSH Key button:

Paste your copied public key into the form and fill in the name of your computer:

Make sure your computer is selected for SSH access and choose a hostname. Finally, hit that green Create button.

Once your Droplet is created, write down its IP address.

Transferring the project folder

If you have followed me step by step, you should have your docker-nginx-php7 folder on your Desktop.

We will copy this folder to our Droplet so we can run Docker Compose with our YML configuration file remotely from the Droplet.

To copy the folder, we will use the rsync command. Make sure you type it exactly as it is, using the actual IP address of your Droplet instead of IP. We want to transfer the directory itself, not just the content inside it, so we need to omit the trailing slash:
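A sketch of that command; the -av flags (archive mode plus verbose output) are my choice:

```shell
# Copy the project folder (no trailing slash!) into /root on the
# Droplet; replace IP with your Droplet's actual address:
rsync -av ~/Desktop/docker-nginx-php7 root@IP:/root
```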

This command will ask for your SSH key password and then create a copy of the docker-nginx-php7 folder inside the home folder of the root user (/root).

Now, let’s check if everything has been transferred. SSH into your remote server (your actual IP address instead of IP):
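The login is a plain SSH session as root, assuming the key you added during Droplet setup:

```shell
# Log in to the Droplet; replace IP with your Droplet's address:
ssh root@IP
```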

Can you see your familiar directory structure including two configuration files?

Nice! Everything seems to be in place!

There’s no Docker Compose on this particular Droplet, but it’s fairly easy to install it. First, we need to install python-pip:

Next, we can install Docker Compose via pip:
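Both installation steps can be sketched as follows; the apt-get flags are my assumption for a Debian/Ubuntu-based Droplet:

```shell
# Install pip on the Droplet:
apt-get update && apt-get install -y python-pip

# Install Docker Compose via pip:
pip install docker-compose
```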

We are ready now to let Docker Compose do its magic. Let’s run our familiar command that will automate the whole process of pulling and building images, getting the code from GitHub and spinning up all containers. Since there are no images to rebuild, we can use the up command directly:

Once everything is done and all containers are running, you can navigate to the IP address of your Droplet. Octocat should be waiting for you:

And if you add port 8080 behind the IP address, you will get phpMyAdmin welcome screen:

Go ahead and log in with user / password or root / secret; both will work. Make sure our zavrel_db database is there:

One last thing. Once you’re done with Digital Ocean, make sure to destroy your running Droplet so you won’t be billed. Or, in case you used my referral link and received those $10 in credit, so you don’t waste it on a Droplet you no longer need after finishing this tutorial.

Alright! That’s all. I hope you have learned something useful today.

Have a great day!

