How to Write a Dockerfile for Node.js Apps (Step-by-Step)
Think of a Dockerfile like a precise blueprint for your app’s environment. It turns your code into a portable, consistent container that runs anywhere, without the “it works on my machine” headaches. But skimping on details can lead to bloated, vulnerable images.
In this tutorial, we’ll build a simple Recipe App with Node.js, Express, and SQLite, then craft a Dockerfile from basic to battle-tested.
Building the Recipe App (Anchor Example)
We’ll use Node.js, Express, and SQLite. The app has three endpoints:
- POST /api/recipes → Create a recipe (with validation)
- GET /api/recipes → List all recipes
- GET /api/recipes/:id → Get a recipe by ID
Step 1: Project Setup
mkdir recipe-app
cd recipe-app
npm init -y
npm install express sqlite3 body-parser
Add a “start” script to package.json for convenience:
"scripts": {
"start": "node server.js"
}
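After these steps, your package.json should look roughly like this (the version numbers below are illustrative; yours will reflect whatever npm installed):

```json
{
  "name": "recipe-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "body-parser": "^1.20.0",
    "express": "^4.19.0",
    "sqlite3": "^5.1.0"
  }
}
```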
Step 2: Create server.js
const express = require("express");
const sqlite3 = require("sqlite3").verbose();
const bodyParser = require("body-parser");
const app = express();
const PORT = 3000;
app.use(bodyParser.json());
// Database setup
const db = new sqlite3.Database(":memory:");
db.serialize(() => {
db.run(`CREATE TABLE recipes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
ingredients TEXT NOT NULL
)`);
});
// Validation
function validateRecipe(data) {
if (!data.title || typeof data.title !== "string") {
return "Title is required and must be a string";
}
if (!Array.isArray(data.ingredients) || data.ingredients.length === 0) {
return "Ingredients must be a non-empty array";
}
return null;
}
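Before wiring the validator into the endpoints, it helps to see exactly what it returns for good and bad input. This standalone sketch duplicates the function so you can run it on its own with `node`:

```javascript
// Standalone check of the validation rules described above.
function validateRecipe(data) {
  if (!data.title || typeof data.title !== "string") {
    return "Title is required and must be a string";
  }
  if (!Array.isArray(data.ingredients) || data.ingredients.length === 0) {
    return "Ingredients must be a non-empty array";
  }
  return null;
}

console.log(validateRecipe({}));                                           // missing title -> error string
console.log(validateRecipe({ title: "Pasta", ingredients: [] }));          // empty array -> error string
console.log(validateRecipe({ title: "Pasta", ingredients: ["noodles"] })); // valid -> null
```

A `null` return means the request is valid; any string is the error message sent back with a 400 status.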
// Endpoints
app.post("/api/recipes", (req, res) => {
const error = validateRecipe(req.body);
if (error) return res.status(400).json({error});
const {title, ingredients} = req.body;
db.run(
`INSERT INTO recipes (title, ingredients) VALUES (?, ?)`,
[title, JSON.stringify(ingredients)],
function (err) {
if (err) return res.status(500).json({error: "Database error"});
res.status(201).json({id: this.lastID, title, ingredients});
},
);
});
app.get("/api/recipes", (req, res) => {
db.all(`SELECT * FROM recipes`, [], (err, rows) => {
if (err) return res.status(500).json({error: "Database error"});
const recipes = rows.map((r) => ({
id: r.id,
title: r.title,
ingredients: JSON.parse(r.ingredients),
}));
res.json(recipes);
});
});
app.get("/api/recipes/:id", (req, res) => {
db.get(`SELECT * FROM recipes WHERE id = ?`, [req.params.id], (err, row) => {
if (err) return res.status(500).json({error: "Database error"});
if (!row) return res.status(404).json({error: "Recipe not found"});
res.json({
id: row.id,
title: row.title,
ingredients: JSON.parse(row.ingredients),
});
});
});
app.listen(PORT, () => {
console.log(`Recipe app running on port ${PORT}`);
});
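One detail worth calling out in the server code: SQLite has no array type, so the ingredients array is serialized to a JSON string for the TEXT column on insert and parsed back on read. A minimal round-trip sketch:

```javascript
// The ingredients array is stored as a JSON string in SQLite's TEXT column,
// then parsed back when the API reads the row.
const ingredients = ["noodles", "tomato sauce"];
const stored = JSON.stringify(ingredients); // what INSERT writes to the TEXT column
const restored = JSON.parse(stored);        // what GET returns to the client

console.log(stored);   // '["noodles","tomato sauce"]'
console.log(restored); // [ 'noodles', 'tomato sauce' ]
```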
Step 3: Test the App
npm start
Try:
curl -X POST localhost:3000/api/recipes \
-H "Content-Type: application/json" \
-d '{"title":"Pasta","ingredients":["noodles","tomato sauce"]}'
curl localhost:3000/api/recipes
curl localhost:3000/api/recipes/1
If you see JSON output, the app works. Now we containerize it.
The First Dockerfile (Simple Version)
Here’s the simplest Dockerfile we can write:
FROM node:24-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Understanding the Simple Dockerfile
Let’s go line by line.
1. FROM node:24-slim
- Definition: Every Dockerfile starts from a base image. A base image is an existing image that provides an operating system and optionally language runtimes or libraries.
- Here: We use the official Node.js image, version 24, with the slim variant.
- Why slim? It contains only the essentials: fewer packages, smaller size, fewer vulnerabilities.
- Trade-off: Debugging tools (like curl and bash) may be missing. You may need to install them temporarily when troubleshooting.
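If you do need a tool like curl while troubleshooting a slim (Debian-based) image, one option is a temporary line in the Dockerfile; this is a debugging aid only and should be removed before shipping:

```dockerfile
# Temporary debugging aid for Debian-based slim images (remove before release).
# Cleaning the apt lists afterwards keeps the layer small.
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```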
2. WORKDIR /app
- Definition: Sets the working directory inside the container. All subsequent commands run here.
- Why: Prevents commands from running in unpredictable locations. Think of it as “changing directory” once, so everything is organized.
3. COPY package*.json ./
- Definition: Copies files from your host into the image.
- Why copy package files first? Docker builds images in layers. If dependencies don’t change, Docker can reuse the cached npm install layer. This makes rebuilds faster.
- Consequence if misordered: If you copy all files first, any code change invalidates the cache, forcing a full reinstall.
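To make the misordering concrete, here is a sketch of the anti-pattern. Because `COPY . .` comes before the install, editing any source file invalidates that layer and every layer after it, so `npm install` re-runs on every build:

```dockerfile
# Anti-pattern: code is copied before dependencies are installed,
# so ANY source change busts the cache for the npm install layer below.
COPY . .
RUN npm install
```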
4. RUN npm install
- Definition: Executes a command inside the image at build time.
- Here: Installs dependencies.
- Warning: By default, this installs dev dependencies too, which bloats the image.
5. COPY . .
- Definition: Copies the rest of the application code.
- Risk: Without a .dockerignore, you may copy secrets, logs, or local node_modules.
6. EXPOSE 3000
- Definition: Documents the port the app listens on.
- Important: This does not publish the port. You still need -p when running the container.
- Why: Helps other tools (like Docker Compose or Kubernetes) know which port to map.
7. CMD ["npm", "start"]
- Definition: Defines the default command when the container starts.
- Here: Runs the app.
- Warning: If omitted, the container runs but does nothing.
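One practical caveat, not shown in the Dockerfile above: when the container's main process is npm rather than node, stop signals like SIGTERM may not reach the Node process promptly, which can delay graceful shutdown. A common alternative is to run node directly:

```dockerfile
# Alternative CMD: run node directly so stop signals reach the app process.
CMD ["node", "server.js"]
```

Either form works for this tutorial; the difference only matters when you care about clean shutdown behavior.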
The Improved Dockerfile (Production-Ready)
Now let’s make it secure and efficient:
FROM node:24-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
RUN adduser -D appuser && chown -R appuser:appuser /app
USER appuser
EXPOSE 3000
CMD ["npm", "start"]
Explaining the Improvements
1. npm install --omit=dev
- Installs only production dependencies.
- Benefit: Smaller image, fewer vulnerabilities, faster builds.
- Consequence if ignored: Dev tools (like test frameworks) end up in production, increasing attack surface.
2. Non-root user
RUN adduser -D appuser && chown -R appuser:appuser /app
USER appuser
- Problem: Containers run as root by default. If compromised, the attacker has root inside the container.
- Solution: Create a dedicated user with limited privileges using adduser -D. Alpine uses BusyBox's adduser (there is no useradd), and -D creates the user without assigning a password, so it cannot log in interactively.
- Benefit: Even if the app is exploited, damage is contained.
- Trade-off: Some operations (like binding to privileged ports <1024) won’t work without extra configuration.
3. Alpine base image
- We use the alpine base image because it is smaller than the slim base image.
- Benefit: Smaller attack surface, faster pulls.
- Trade-off: Less tooling inside the container.
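As with the slim image, missing tools can be installed temporarily when you need to debug, this time with Alpine's apk package manager (again, remove this before shipping):

```dockerfile
# Temporary debugging aid for Alpine images.
# --no-cache avoids storing the package index in the layer.
RUN apk add --no-cache curl
```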
Image size before changing to alpine (see the DISK USAGE column):

IMAGE               ID             DISK USAGE   CONTENT SIZE
recipe-app:v1.0.0   d7efc6fdec94   260MB        0B

Image size after changing to alpine:

IMAGE               ID             DISK USAGE   CONTENT SIZE
recipe-app:v1.0.0   be8bd8c4c41a   196MB        0B
.dockerignore
Create a .dockerignore file:
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
- Why: Prevents unnecessary files from being copied into the image.
- Consequence if ignored: Images become larger, slower, and may accidentally include secrets or local build artifacts.
Build and Run
docker build -t recipe-app:v1.0.0 .
docker run -p 3000:3000 recipe-app:v1.0.0
- -t tags the image.
- -p maps host port 3000 to container port 3000.
- Without -p, the app runs but is unreachable from outside.
Test the app again following the steps we used previously.
Debugging
If the container doesn’t respond:
- Run in the foreground: docker run -p 3000:3000 recipe-app:v1.0.0
- Check the logs: is npm install failing?
- Ensure Express binds to 0.0.0.0 with app.listen(PORT, "0.0.0.0"); otherwise, it may only listen inside the container.
What You Learned
- A Dockerfile is a recipe for building images.
- Each instruction creates a layer, and order matters for caching.
- Security requires non-root users and a .dockerignore.
- Ports must be documented (EXPOSE) and published (-p).
- Production images should omit dev dependencies.
Reflection
A Dockerfile is not just a checklist. Each instruction defines boundaries:
- FROM decides your foundation.
- COPY decides what enters the image.
- RUN changes the environment.
- USER defines privilege boundaries.
- CMD defines how the container behaves.
Misusing them can lead to bloated, insecure, or fragile images. Using them responsibly makes your app portable, efficient, and safe.
👉 Practical takeaway: Always ask: What does this instruction add? What risks does it carry? How will it behave when rebuilt tomorrow? That’s how you move from “it works” to “it works responsibly.”