Docker Best Practice, Multi-Stage Build
This is related to a longer post, Kubernetes, Helm, Laravel, PHP-FPM, Nginx, GitLab the DevOps Way; let's go deeper into the Dockerfile. First, this is what we are talking about.
FROM node:lts-alpine as node_build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY webpack.mix.js ./
COPY resources/ ./resources/
COPY public/ ./public/
# otherwise the purge step fails
RUN mkdir -p /public/css
RUN touch /public/css/app.css
RUN mkdir -p /public/js
RUN touch /public/js/app.js
RUN npm run prod
FROM composer:2.1.9 as composer_build
# consider the .lock file as well
COPY ./composer.json /app/
RUN composer install --no-dev --no-autoloader --no-scripts
COPY . /app
RUN composer install --no-dev --optimize-autoloader
FROM php:8.0-fpm-alpine
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install pdo pdo_mysql
COPY devops/docker/php/*.conf /usr/local/etc/php-fpm.d/
COPY --from=composer_build /app/ /var/www/html/
COPY --from=node_build /app/public/ /var/www/html/public/
RUN php artisan view:cache
We need to understand that every step is cached in a layer as long as the code it depends on has not changed; this means it is often better to do something in two steps if one of them can be cached. There are two main concepts here:
- Build your assets in a separate stage (multi-stage), and copy them into the final stage.
- Install dependencies in a separate step so that Docker's caching system can operate.
Multi-stage build #
In this example, you have a Node build stage and a PHP (Composer) build stage. This allows you to start from base images that already contain all the binaries needed to build and install dependencies, and to throw those images away in the final stage.
This means:
FROM node:lts-alpine as node_build
# do something
FROM composer:2.1.9 as composer_build
# do something
FROM php:8.0-fpm-alpine
# Copy the files built in the previous stages into the final Docker image
The final image will contain neither Node's npm nor PHP's composer; there is no need to remove them manually, multi-stage builds do it for you.
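Concretely, the final stage pulls artifacts out of the earlier stages by name with the `--from` flag of `COPY`. A minimal sketch, reusing the stage names from the Dockerfile above:

```dockerfile
# Final stage: runtime image only, no npm or composer binaries inside
FROM php:8.0-fpm-alpine

# Pull the PHP code and vendor/ directory from the composer stage
COPY --from=composer_build /app/ /var/www/html/

# Pull the compiled assets from the node stage
COPY --from=node_build /app/public/ /var/www/html/public/
```

Everything that lived only in the `node_build` and `composer_build` stages is simply left behind.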
Separate step for vendors and build #
We all know that npm install and composer install can take some time; you do not want those installations to occur on every build. Let's install dependencies only when you add a new dependency, that is, only when package.json or composer.json has changed.
About the .lock files of both the PHP and Node vendors: you might want a fixed/locked version of your dependencies, so it could be a good idea to copy the .lock file as well.
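As a sketch of the locked variant of the Node stage, assuming a package-lock.json is committed to the repository (npm ci installs exactly the versions pinned in the lockfile):

```dockerfile
FROM node:lts-alpine as node_build
WORKDIR /app

# Copy manifest and lockfile; this layer is invalidated only when they change
COPY package.json package-lock.json ./

# npm ci installs the exact versions pinned in package-lock.json
RUN npm ci

COPY webpack.mix.js ./
COPY resources/ ./resources/
COPY public/ ./public/
RUN npm run prod
```

The same idea applies on the PHP side with composer.lock.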
If you copy everything at once, every time a change occurs in your code you will reinstall those dependencies. Frustrating! This means the following steps:
- Copy package.json (and/or the .lock file)
- Install dependencies
- Copy the JavaScript code
- Build the assets
It can be tricky and must be adapted to your needs. Let's simplify the Dockerfile; this gives you:
FROM node:lts-alpine as node_build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY webpack.mix.js ./
COPY resources/ ./resources/
COPY public/ ./public/
RUN npm run prod
In this case, npm install is run only when package.json has changed.
This is a three-part series about Docker best practices:
- Docker Best Practice latest tag
- Docker Best Practice, Multi-Stage Build
- How Could I Miss Docker BuildKit
Let's take a look at the PHP version.
FROM composer:2.1.9 as composer_build
COPY ./composer.json /app/
RUN composer install --no-dev --no-autoloader --no-scripts
COPY . /app
RUN composer install --no-dev --optimize-autoloader
The idea is the same:
- Copy composer.json (and/or the .lock file)
- Install dependencies
- Copy the PHP code
- Generate the optimized autoloader
In some cases, a dump-autoload could be enough in the last step. The idea is to not generate the autoloader before the PHP code has been copied; it would fail.
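As a sketch, the last two steps of the composer stage could then become the following; composer dump-autoload only regenerates the class map, which can be faster than a second full composer install:

```dockerfile
# Copy the application code first, then (re)generate the optimized autoloader
COPY . /app
RUN composer dump-autoload --no-dev --optimize
```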
Merge everything in a minimalist final build #
Collect the build results, and merge them into a light Docker image. In my case it is php:8.0-fpm-alpine, but php:alpine is fine.
FROM php:8.0-fpm-alpine
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install pdo pdo_mysql
COPY devops/docker/php/*.conf /usr/local/etc/php-fpm.d/
COPY --from=composer_build /app/ /var/www/html/
COPY --from=node_build /app/public/ /var/www/html/public/
RUN php artisan view:cache
The idea is to collect the builds, configure the container as needed, apply some optimisations, and you are done with a lightweight, optimized Docker image.
Docker BuildKit #
I only found this one lately; it is very useful for fast builds with multi-stage. I just wrote How Could I Miss Docker BuildKit, you might want to check this easy trick.
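As a teaser, BuildKit also brings cache mounts, which can keep the npm download cache between builds even when the install layer itself is invalidated. A minimal sketch, assuming BuildKit is enabled and the dockerfile:1 syntax is used:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:lts-alpine as node_build
WORKDIR /app
COPY package.json ./

# The npm cache at /root/.npm persists across builds, even when this layer is rebuilt
RUN --mount=type=cache,target=/root/.npm npm install
```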