Nginx Reverse Proxy to ASP.NET Core – Separate Containers

February 27, 2017

The previous blog post showed how to set up a reverse proxy between Nginx and an ASP.NET Core application. In that example, both Nginx and the Kestrel process ran on the same box.

As alluded to, there is another (preferable) option. This time, we'll create two separate containers: one for the application and one for the reverse proxy. Then we'll use Docker Compose to bring them up together and handle the network bridge between them.

Directory setup

This example will use a slightly different directory structure than the previous examples. The final directory structure should look something like this:

|- app
|- |- Controllers
|- |- Models
|- |- ...
|- |- Dockerfile
|- nginx
|- |- Dockerfile
|- |- nginx.conf
|- docker-compose.yml

To get started, let’s create the basic directory structure, generate the application, and build it:

mkdir example
mkdir example\app
mkdir example\nginx
cd example\app
dotnet new -t web
dotnet restore
dotnet build
dotnet publish

Docker configuration

The Docker setup this time will involve two separate Docker containers. A Docker Compose configuration will join the two together.

Docker Compose

Docker Compose is used to run multi-container Docker applications. Through a docker-compose.yml file you configure multiple application containers. Then you can build and run the composite collection of containers (similar to how you would an individual container).

For this example, create a docker-compose.yml file in the root directory. Place this configuration inside of the file:

version: '2'

services:
  app:
    build:
      context: ./app
      dockerfile: Dockerfile
    expose:
      - "5000"

  proxy:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "80:80"
    links:
      - app

That configuration defines two services: app and proxy.

For each service, the build section tells Docker Compose how to build that service's image when the entire collection is built.

The expose and ports sections control the way the services will interact with the network bridge and the host (see the “Network” section below).

Lastly, the proxy service is linked to the app service. That means that when the proxy service is brought up it will start the app service (if it is not already running).
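Before moving on, it can be worth checking that the Compose file parses correctly. As a quick sketch (run from the example directory that contains docker-compose.yml):

```shell
# Validate the Compose file and print the fully resolved configuration.
# Any YAML or schema errors are reported instead.
docker-compose config
```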

Application container

Create a Dockerfile in the app directory with the following content:

FROM microsoft/aspnetcore:1.0
COPY bin/Debug/netcoreapp1.0/publish .
ENV ASPNETCORE_URLS http://+:5000
ENTRYPOINT ["dotnet", "app.dll"]

That configuration will bring up the ASP.NET Core application using Kestrel. The Kestrel process will listen on port 5000 (which will be exposed to the other containers on the bridge network, not to the host).
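The application image can also be sanity-checked on its own before wiring up the proxy. This is a sketch: the example_app tag is an arbitrary choice, and -p temporarily publishes port 5000 to the host (something the Compose file deliberately does not do):

```shell
# Build the application image by itself and run it standalone,
# publishing port 5000 to the host just for this test.
docker build -t example_app ./app
docker run --rm -p 5000:5000 example_app
# In another terminal: curl http://localhost:5000
```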

Nginx container

Create a Dockerfile in the nginx directory. The Nginx image will use the base nginx image and copy a custom nginx.conf file into it:

FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

The nginx.conf file is fairly similar to the previous example. Its contents are:

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    sendfile on;

    upstream app_servers {
        server app:5000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

The only change from the previous example is in the upstream block. The previous example pointed the upstream at the local Kestrel process; this time it specifies app:5000. Here, app is a named resource that was specified in the docker-compose.yml file.

The name used to define the upstream resources must match the name used for the service in the docker-compose.yml file.
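If the names drift out of sync, Nginx fails at startup with a "host not found in upstream" error. One way to confirm that the app name resolves inside the proxy container is a quick lookup over the bridge network (a sketch, assuming the collection is already running and the nginx image's Debian userland provides getent):

```shell
# From inside the running proxy container, resolve the "app"
# service name on the Compose bridge network.
docker-compose exec proxy getent hosts app
```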

Building the collection

To build the collection, we'll use docker-compose. The build command will build each of the services and then tag each image with the project name and the service name.

docker-compose build

Running the collection

To run the collection of containers:

docker-compose up

For this example, the application will be available at http://localhost:80. Accessing the application there will make a request to the Nginx service, which will be proxied to the application service.
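A quick way to confirm the proxy is doing the forwarding is to request the site with the response headers shown; the Server header should come from Nginx rather than Kestrel (a sketch, assuming the collection is up):

```shell
# Request the site through the Nginx proxy on port 80;
# -i prints the response headers along with the body.
curl -i http://localhost/
```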

While running, the output for all containers is aggregated and intermingled. The following example output shows messages from both the app_1 and proxy_1 containers:

app_1 | info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
app_1 | Request finished in 3342.2438ms 200 text/html; charset=utf-8
proxy_1 | - - [24/Feb/2017:17:45:26 +0000] "GET / HTTP/1.1" 200 2490 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"

Networking with the proxy

Lastly, let’s briefly explore how the networking of this collection functions. When the collection is brought up, a shared network bridge is created. This bridge allows the services within the collection to communicate with one another.

There are two ways to expose network connections from a service to the bridge:

  • ports – This setting exposes the named ports within the bridge network. It also publishes the ports to the host machine.
  • expose – This setting exposes the named ports within the bridge network. Unlike ports, however, the named ports are not published to the host machine.

For example, the app service in the configuration specifies 5000 through the expose setting. That exposes port 5000 of the app service to the bridge network only.

The proxy service, on the other hand, specifies 80:80 for the ports setting. That exposes port 80 of the proxy service to the bridge network and publishes port 80 to the host machine.
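The difference is easy to verify from the host machine: the published proxy port answers, while the merely-exposed app port refuses the connection (a sketch, assuming the collection is up):

```shell
# Published via "ports": reachable from the host.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/

# Only exposed to the bridge network: connection refused from the host.
curl http://localhost:5000/
```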

Running docker network ls shows the created bridge:

NETWORK ID    NAME             DRIVER  SCOPE
e942ad5d12ab  bridge           bridge  local
b2fa8b3109c   host             host    local
7655823d3523  none             null    local
45a51b60efbd  example_default  bridge  local

Examining the containers through docker ps will show the effects:

CONTAINER ID  IMAGE          COMMAND                 CREATED         STATUS          PORTS                             NAMES
dddf78732098  example_proxy  "nginx -g 'daemon ..."  15 seconds ago  Up 14 seconds>80/tcp, 443/tcp  example_proxy_1
1e52b4a3c63c  example_app    "dotnet app.dll"        16 seconds ago  Up 15 seconds   5000/tcp                          example_app_1

That means that the proxy service will be accessible at http://localhost:80, but the app service will not be directly reachable at http://localhost:5000. So, the only way to access the application is through the proxy service.

Next time?

In the next post, I’ll show how to setup basic load balancing using Nginx and Docker Compose. Stay tuned…
