ASP.NET Core Apps in Docker Swarm Deployed to Azure

Note: Please read the previous post on Docker Swarm before continuing with this one.

Problem

How to deploy a cluster of ASP.NET Core applications using Docker Swarm mode in Azure.

Solution

In the previous post I deployed a Docker Swarm on VMs set up on my local PC, which is fine for testing. In this post I’ll deploy the same services on a Docker Swarm hosted in Azure using “Docker EE for Azure”. Let’s start.

Create Public/Private Key

First we’ll create a public/private key pair. Open Git Bash and run the following command from the directory where you want the key files to be generated, providing a filename when prompted:
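For example, using ssh-keygen (the key size and the filename azureswarmkey are just illustrative choices):

```shell
# Generate a 4096-bit RSA key pair; you'll be prompted for an
# optional passphrase. "azureswarmkey" is an example filename.
ssh-keygen -t rsa -b 4096 -f azureswarmkey
```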

Note: you may already have Git installed on your PC; if not, you can install it from here.

This will generate two files. The file with the .pub extension is the public key; we’ll need its contents when setting up Docker for Azure. The private key will be used when connecting to Azure via SSH:

Create Azure AD Service Principal

In order to set up resources on Azure via the scripts Docker provides, we need to set up an application in Azure AD and use its service principal. Docker provides the required scripts in an image; pull the image and run the scripts inside it:
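The invocation looks something like the following (docker4x/create-sp-azure is the image Docker documented for this purpose at the time of writing; the arguments are the app name, resource group and region used in this post):

```shell
# Pull and run Docker's service-principal creation script.
# Arguments: <app-name> <resource-group> <region>
docker run -ti docker4x/create-sp-azure fiverswarmapp fiverswarmrg northeurope
```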

Here fiverswarmapp is the name of the application created in Azure AD, fiverswarmrg is the resource group under which I want all the resources deployed and northeurope is the region. This will prompt you to authorise access to Azure; follow the steps and select a subscription:

Once the script finishes executing, you’ll be given your access credentials. Make a note of the App ID and App Secret; we’ll need these when setting up Docker for Azure.

Add Docker for Azure (Basic)

Login to Azure Portal and add a new service, search for “docker” and select “Docker EE for Azure (Basic)”:

Configure the basic details using the App ID, App Secret and public key created earlier. Select the resource group specified when creating the Azure AD Service Principal:

Configure the number of manager and worker nodes:

Review the summary and create the resource. It may take several minutes to set up the resources. Once done, go to your resource group:

Go to “externalSSHLoadBalancer” > Inbound NAT Rules and make a note of the public IP address and TCP port; we’ll need these to access the master node (to manipulate Docker):

Go to “externalLoadBalancer” and make a note of the public IP address; we’ll need this to access the deployed application:

Access Master Node via SSH

Open Git Bash and SSH into the master node via the load balancer:
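The command takes the private key file, the NAT port noted earlier and the load balancer’s public IP. The key filename and port below are examples (Docker for Azure typically maps manager SSH to ports starting at 50000); substitute the values from your own deployment:

```shell
# SSH into a manager node through the SSH load balancer.
# Replace the key file, port and IP with your own values.
ssh -i azureswarmkey -p 50000 docker@52.169.94.178
```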

Here -i specifies the file with the private key, -p the TCP port on the load balancer, and docker@52.169.94.178 the username/hostname:

Note: the prompt has changed to “swarm-manager”, verifying that we’re now connected to Azure.

Run the docker node ls command to list all the nodes:
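On the manager node:

```shell
# List all nodes in the swarm; manager nodes show a value in the
# MANAGER STATUS column (e.g. Leader or Reachable).
docker node ls
```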

Tunnel into Master Node via SSH

We want to tunnel into the master node so that we can use our local PC as the context for Docker commands. Open Git Bash and run:
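A sketch of the tunnelling command, reusing the example key file, port and IP from the SSH step (2374 is an arbitrary choice of local port):

```shell
# Forward local port 2374 to the Docker socket on the remote manager,
# running SSH in the background.
ssh -i azureswarmkey -p 50000 -fNL localhost:2374:/var/run/docker.sock docker@52.169.94.178
```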

Here -f runs SSH in the background, -N tells it not to execute a remote command, and -L forwards a local port to the Docker socket on the remote host.

We can now run Docker commands with the -H option, which will run our commands on the remote host while using the context of our local PC. For instance, run the command below to list nodes on the remote host:
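For example, assuming the tunnel was set up to listen on local port 2374:

```shell
# Executes against the remote swarm via the SSH tunnel,
# while running from the local PC.
docker -H localhost:2374 node ls
```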

Note: to stop tunnelling, find the process ID via the ps command and then use the kill command to terminate the process.
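A one-liner for this, assuming the example key filename used earlier (the [s] in the grep pattern stops grep from matching its own process):

```shell
# Find the background SSH tunnel's PID and terminate it.
kill $(ps -ef | grep "[s]sh -i azureswarmkey" | awk '{print $2}')
```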

Deploy Application to Swarm in Azure

Deploying an application in Docker Swarm using the stack file is exactly the same as discussed in the previous post, with the additional -H option to point to the remote swarm in Azure:
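A sketch of the deploy command; the stack file name docker-stack.yml and the stack name fiverapp are placeholders here, so use whichever file and name you used in the previous post:

```shell
# Deploy the stack to the swarm in Azure via the SSH tunnel.
docker -H localhost:2374 stack deploy -c docker-stack.yml fiverapp
```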

Browse to your application using the load balancer IP address:

That’s it. This is really cool.

NOTE: Azure Container Service currently doesn’t support Swarm mode, but rather the older standalone Docker Swarm. Hopefully this will change in the future, because the process of setting up Azure Container Service is even simpler. A lot of the steps are the same, e.g. generating keys, accessing via SSH and tunnelling!
