
A Guide on Using Multiple Dockerfiles

The widespread use of Docker has been a revolutionary step in how software is developed and delivered. But did you know there are some advanced usages you might not be benefiting from? One of these is using multiple Dockerfiles in a project. In this article, we guide you through the hows and whys of using multiple Dockerfiles in your applications.

Joel Burch

COO

In software development and delivery, the Docker platform has simplified the creation and distribution of applications. It provides a streamlined, containerized approach that has had a significant impact on developer productivity. While it’s relatively simple to get started with Docker, there are some advanced features that are often underutilized. Among these is the ability to employ multiple Dockerfiles within a single project. This article explores how to leverage multiple Dockerfiles and the benefits of doing so, and provides technical examples to highlight some of the key use cases.

Understanding Dockerfiles

Firstly, let’s brush up on some basics. A Dockerfile is a script composed of various commands and instructions used to create a Docker container image. This file includes a set of directives, each serving a specific purpose in the image creation process. Below is a basic sample Dockerfile, with comments explaining the role of each directive:

# Official Python image
FROM python:3.11-slim

# Set the working directory
WORKDIR /app

# Copy the local working directory contents into the container at /app
COPY . /app

# Install dependencies 
RUN pip install --no-cache-dir -r requirements.txt

# Listening port 
EXPOSE 8080

# Define an environment variable
ENV NAME=pythonapp

# Run app.py when the container launches
CMD ["python", "app.py"]


* FROM initializes the build stage and defines the base image to build on.
* WORKDIR sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD directives that follow in the Dockerfile.
* COPY copies new files or directories into the Docker image.
* RUN executes commands on top of the current image as a new layer and commits the results.
* EXPOSE documents the network port the container listens on at runtime, in this case port 8080. Note that it does not actually publish the port; that happens when the container is run.
* ENV sets environment variables.
* CMD provides the default execution command when the container is run.
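
To see the Dockerfile in action, you build an image from it and then start a container from that image. A minimal sketch, assuming the Dockerfile sits alongside app.py and requirements.txt in the current directory (the pythonapp tag name is just an illustrative choice):

# Build the image from the Dockerfile in the current directory and tag it
docker build -t pythonapp .

# Run a container, publishing the exposed port 8080 to the host
docker run -p 8080:8080 pythonapp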

When Docker builds an image from a Dockerfile, it executes these instructions sequentially, creating a layered file system. Each instruction creates a new layer in the image, with changes from the previous layer. This layered approach is crucial for understanding caching and build speed. Docker caches the result of each layer, so subsequent builds are faster if those layers haven't changed. However, any change in a layer invalidates the cache for all subsequent layers. Thus, structuring a Dockerfile efficiently, with an understanding of how caching works, can significantly improve image build speed.
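
To make the caching behaviour concrete, the sample Dockerfile above could be reordered so that only the dependency manifest is copied before pip install. That way, editing app.py invalidates only the final COPY layer, and the dependency-installation layer is reused from cache on rebuilds (a sketch of the idea, not a required change):

FROM python:3.11-slim
WORKDIR /app

# Copy only the dependency manifest first; this layer changes rarely
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source; edits here no longer invalidate the install layer
COPY . /app

EXPOSE 8080
ENV NAME=pythonapp
CMD ["python", "app.py"]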

Can You Use Multiple Dockerfiles?

The short answer is yes, you can use multiple Dockerfiles within a single project. This approach becomes advantageous, or even necessary, in certain scenarios. Each is driven by specific project requirements or architectural decisions. Understanding these scenarios can help developers and teams make informed choices about when and how to implement multiple Dockerfiles. Here are some key situations where multiple Dockerfiles are particularly useful:

  • Different Development and Production Environments 

It's common for development and production environments to have different requirements. For instance, a development environment might include additional debugging tools and configurations not needed in production. Using separate Dockerfiles for each environment allows for more controlled and efficient setups that are tailored to the specific needs of each context.

  • Multiple Services or Microservices 

In projects structured around microservices or a service-oriented architecture, each service might have its own unique dependencies and configuration requirements. Here, using a separate Dockerfile for each service facilitates a more modular and scalable approach. This method is often integrated with Docker Compose, which can orchestrate multiple containers, each built from its own Dockerfile.

  • Building Applications for Different Platforms 

When developing applications intended to run on different operating systems, such as Windows and Linux, separate Dockerfiles become essential. Each platform may have distinct base images and dependencies, necessitating a unique Dockerfile to address these differences effectively (see the short sketch after this list).

  • Multi-Stage Builds 

Multi-stage builds in Docker allow you to use multiple build stages with separate targets within a single Dockerfile. However, in complex scenarios, it might be beneficial to split these stages into separate Dockerfiles. This separation can enhance readability and maintainability, especially in large projects where different teams may be responsible for different stages of the build process.
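
To make the platform scenario above concrete: a project might keep a Dockerfile.linux that builds on a Linux base image (say, python:3.11-slim) and a Dockerfile.windows that builds on a Windows base image (such as mcr.microsoft.com/windows/servercore:ltsc2022, which can only be built and run on a Windows host). The file names here are hypothetical; the point is that each variant is selected explicitly at build time:

# Build the Linux and Windows variants from their respective Dockerfiles
docker build -f Dockerfile.linux -t myapp:linux .
docker build -f Dockerfile.windows -t myapp:windows .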

As you can see from the list above, employing multiple Dockerfiles can bring a range of benefits. It can significantly improve the flexibility, clarity, and efficiency of your Docker container setup, and it enables a more nuanced approach to containerization, addressing the diverse needs of different environments, services, and platforms within any given software project.

Example Multiple Dockerfiles

So we’ve seen the whys of using multiple Dockerfiles. Now let’s take a look at the hows. Multiple Dockerfiles in a project need to be managed effectively. This can be done by employing a naming convention that clearly differentiates each Dockerfile's purpose. A common approach is to use dot notation, such as Dockerfile.dev for development environments and Dockerfile.prod for production environments. This method not only helps in maintaining clarity but also streamlines the build process by explicitly specifying which Dockerfile to use for a given context.

For reference, here are example Dockerfiles for both development and production environments:

Dockerfile.dev
# Base image
FROM node:20

# Set the working directory
WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose port 3000 for the application
EXPOSE 3000

# Command to run the application
CMD ["npm", "start"]

In this development Dockerfile, the base is the full Node.js image, which includes all packages and tools as part of the standard image distribution. The dependencies are installed before the application code is copied, allowing Docker to cache the dependencies layer. This speeds up the build process in subsequent builds.

Dockerfile.prod

# Use a smaller, more secure base image for production
FROM node:20-slim

# Set the working directory
WORKDIR /app

# Only copy the package.json and package-lock.json initially
COPY package*.json ./

# Install production dependencies
RUN npm install --omit=dev

# Copy the rest of the application code
COPY . .

# Expose port 3000 for the application
EXPOSE 3000

# Command to run the application
CMD ["node", "app.js"]

In this example, the production Dockerfile uses a node:20-slim image, which is smaller and more secure by virtue of having a much smaller dependency chain. It installs only the production dependencies, reducing the size and potential security vulnerabilities of the final image.

To build images from these Dockerfiles, you use commands that specify the target file with the -f flag. For example:

  • For development: docker build -f Dockerfile.dev .

  • For production: docker build -f Dockerfile.prod .
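
In practice you would usually also tag the images so they can be run, pushed, and referenced by name. A minimal sketch, using a hypothetical myapp image name and the port 3000 exposed above:

# Build and tag the development and production images
docker build -f Dockerfile.dev -t myapp:dev .
docker build -f Dockerfile.prod -t myapp:prod .

# Run the production image, publishing port 3000 to the host
docker run -p 3000:3000 myapp:prod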

This example illustrates how using multiple Dockerfiles with clear naming conventions can tailor the build process to different environments, ensuring that each has only what it needs and nothing more. The result is more efficient and secure Docker images, each suited to its specific use case.

Docker Compose and Multiple Dockerfiles

Let’s now explore implementing multiple Dockerfiles with a container orchestration tool, in this case Docker Compose. Fully-featured orchestration tools like Kubernetes tend to be what gets deployed in more complex, larger-scale environments, but Compose is much simpler to get started with. This makes it particularly suitable for emulating a microservices architecture in a local development environment. By using Docker Compose, you can easily link multiple services (each possibly with its own Dockerfile) and manage them as a cohesive unit.

Let's consider an example where we have a web application and a database service, each with its own Dockerfile.

Dockerfile.web
# Base image 
FROM node:20

# Set working directory
WORKDIR /app

# Install dependencies
COPY web/package*.json ./
RUN npm install

# Bundle app source
COPY web/ .

# Expose port 3000
EXPOSE 3000

# Start the application
CMD ["npm", "start"]

Dockerfile.db

# Use an official PostgreSQL image as the base
FROM postgres:16

# Set environment variables for the database
ENV POSTGRES_DB=appdb
ENV POSTGRES_USER=appuser
ENV POSTGRES_PASSWORD=12345password

# Expose the default postgres port
EXPOSE 5432

Now, we can define a docker-compose.yml file to orchestrate these services:

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.web
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      DATABASE_HOST: db

  db:
    build:
      context: .
      dockerfile: Dockerfile.db
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

In this configuration:

  • The web service is built from Dockerfile.web and exposes port 3000.

  • The db service is built from Dockerfile.db, exposes port 5432, and uses a named volume db-data for persistent storage.

  • The depends_on attribute in the web service ensures that the db container is started before the web container, although it does not wait for the database to be ready to accept connections.

  • The environment section in the web service defines the environment variables necessary for connecting to the database.

To launch the entire stack, you would run docker-compose up, which builds and starts both the web and db services based on the configurations provided. It’s important to highlight that while this is a great setup for local development of a frontend web application and datastore, this configuration would not be suitable for production; in particular, the sensitive database credentials are exposed in plain text. In a live environment, those values would be injected from an encrypted key-value store at or near runtime only.
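
One common middle ground for local work is to keep secrets out of the committed files and let Compose substitute them from the shell environment or from an uncommitted .env file in the project directory. A minimal sketch of the idea, reusing the variable names from above (the hard-coded ENV lines could then be dropped from Dockerfile.db, since values set here override them at runtime):

  db:
    build:
      context: .
      dockerfile: Dockerfile.db
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

A matching .env file (kept out of version control) would then contain lines such as POSTGRES_PASSWORD=some-locally-generated-secret.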

Multi-stage Builds and Multiple Dockerfiles

In complex applications where builds involve intricate dependency chains, a single Dockerfile may not suffice. This is particularly true for applications that need to go through several build and test stages, each with distinct requirements. In such cases, employing multiple Dockerfiles in conjunction with Docker's multi-stage build features can help reduce complexity and avoid one large monolithic file. Buildx, from Docker, is a plugin that extends the capabilities of Docker Build with features like building for multiple architectures and supplying additional named build contexts, which makes it possible to share build output between separate Dockerfiles.

It's important to note that referencing one build's output from another Dockerfile in this way relies on named build contexts, which require at least Dockerfile syntax version 1.4 and Docker Buildx version 0.8. You can find more information and the latest updates on Docker Buildx at the official GitHub repository.
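
You can check which plugin version is installed locally with docker buildx version. If the default BuildKit frontend on your machine is older, a syntax directive on the very first line of a Dockerfile opts it in to the newer features; a minimal sketch:

# syntax=docker/dockerfile:1.4
FROM node:20 as builder
# ...the rest of the file is unchanged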

Let's look at an example of a multi-stage build using multiple Dockerfiles for an application that requires a build stage, a test stage, and a production stage.

Dockerfile.build

# Build stage
FROM node:20 as builder

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .
RUN npm run build

This Dockerfile (Dockerfile.build) is focused on compiling or building the application. It starts from a Node.js image, installs dependencies, and runs the build script.

Dockerfile.test

# Test stage
FROM node:20 as tester

WORKDIR /app

COPY --from=builder /app ./
RUN npm run test

Here, Dockerfile.test is used for running tests. Because the builder stage lives in a separate file, it copies the application from the image produced by Dockerfile.build, referenced via the --from=builder directive and supplied as a named build context at build time, and then runs the test scripts.

Dockerfile.prod

# Production stage
FROM nginx:alpine

COPY --from=builder /app/dist /usr/share/nginx/html

Finally, Dockerfile.prod prepares the production image. This Dockerfile starts from an Nginx image and copies the built application from the same builder context into the Nginx web root.

To build these Dockerfiles with Docker Buildx, you build the builder image first, then supply it to the later builds as a named build context so that their COPY --from=builder instructions can resolve it. For example (the myapp image tags are illustrative):

  • For the build stage: docker buildx build -f Dockerfile.build -t myapp:build --load .

  • For the test stage: docker buildx build -f Dockerfile.test --build-context builder=docker-image://myapp:build -t myapp:test --load .

  • For the production stage: docker buildx build -f Dockerfile.prod --build-context builder=docker-image://myapp:build -t myapp:prod --load .
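
Once the production image is built, it runs like any other Nginx-based image, publishing the container's port 80 to a port on the host (the myapp:prod tag follows on from the commands above):

# Serve the built static files from the production image
docker run -d -p 8080:80 myapp:prod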

By separating concerns into different Dockerfiles, the process becomes more manageable and maintainable, especially in large and complex projects.

Maximizing Docker's Potential

Docker is an indispensable tool for software developers. It streamlines the process of building, shipping, and running applications. However, beyond its initial ease of use, Docker has advanced features and capabilities that often go untapped. 

Developers and engineering teams can use multiple Dockerfiles to optimize and fine-tune their software build chains. They can differentiate between development and production environments, emulate a microservices setup locally, and potentially support multiple compute architectures.

Embracing these advanced Dockerfile practices can lead to more efficient, maintainable, and scalable application development. As the complexity of projects grows, the ability to leverage these features becomes increasingly valuable. Ultimately, understanding and utilizing multiple Dockerfiles is not just about tapping into Docker's full potential; it's about enhancing the overall quality and efficiency of software development.

Divio users deploy a wide range of Dockerized applications onto our platform. Using some of the strategies described in this article can help streamline application development and deployment to our platform. Please check out our documentation section to get started.
