When you need to set up a cron job, you can do it using Docker. A great fit for this is an Alpine Linux image, since its base image is only about 5 MB.
Don't know what Docker is? You can read more about it here.
In this post I expect you to already have Docker and docker-compose installed on your OS.
Creating the image
Cron reads the crontab (cron tables) for predefined commands and scripts. The crontab is the method you use to create, edit, install, uninstall, and list cron jobs, and the command for creating and editing them is simple. What's even cooler is that you don't need to restart cron after creating new files or editing existing ones. By using a specific syntax, you can configure a cron job to schedule scripts or other commands to run automatically. This guide shows you how to set up a cron job in Linux, with examples.
In the case below I needed an Alpine Linux container with Node.js inside, as I wanted cron to run a Node.js script.
One gotcha with Alpine: crond is not running by default, so if you only install your cron jobs and nothing ever starts the daemon, they will never run. That is why our image starts crond itself, in the foreground.
First we create a file called Dockerfile with the following content (yes, with no file extension):
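A sketch of what such a Dockerfile could look like, assuming the official node:alpine base image; the maintainer address and the tzdata-based timezone setup are placeholders to adapt:

```dockerfile
# Alpine-based Node.js image that runs cron in the foreground.
FROM node:alpine
MAINTAINER you@example.com

# One RUN command = one layer: install tzdata and set the timezone to Prague.
RUN apk add --no-cache tzdata && \
    cp /usr/share/zoneinfo/Europe/Prague /etc/localtime && \
    echo "Europe/Prague" > /etc/timezone

# Run crond in the foreground so the container keeps running.
CMD ["crond", "-f"]
```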
Let’s save this file inside a directory called cron.
The FROM command tells Docker which image we want to use for building our image.

The MAINTAINER command tells Docker who is maintaining this Dockerfile.
The RUN command tells Docker which commands to run on image creation.
It's very important for your Dockerfile to have as few commands as possible, since each additional RUN command, for example, creates another layer in the resulting image. That's why we concatenate the commands with && instead of writing a separate RUN command for each of them. Simplicity should be the goal.
In the example, I set the timezone to Prague (my current timezone), but you can change it to yours instead, of course.
The CMD command tells Docker what will run inside the container that will be created from this image, which is cron in this case. Exactly what we need.
Configuring cron
Now we need to configure what this image will run using cron.
I created this silly example below just to illustrate what could be done.
Save the content below to a file called root in the same directory where you saved the Dockerfile (the cron directory you created before).
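A sketch of what the root file could contain, assuming the script is mounted at /home/node/hello.js and the log is written to /crontest (both paths match the volumes configured later with docker-compose):

```
# minute  hour  day  month  weekday  command
# Run hello.js every day at 7:00 and 19:00, appending the output to hello.log
0 7,19 * * * node /home/node/hello.js >> /crontest/hello.log 2>&1
```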
Now create a directory called scripts on the same level of the cron directory, not inside of it.
Inside this scripts directory, create our hello.js script with the following content:
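A minimal sketch of what hello.js could look like; the message itself is just illustrative:

```javascript
// hello.js — a trivial script for cron to run.
function greeting(date) {
  return `Hello from the cron container at ${date.toISOString()}`;
}

// When cron runs this file, the greeting is printed and cron's
// redirection appends it to hello.log.
console.log(greeting(new Date()));

module.exports = { greeting };
```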
In this example I told cron to execute a JavaScript file using the Node.js executable that comes inside the container and append the output to a hello.log file. This cron entry, as you can see in the explanation in the comments, will run every day at 7:00 and 19:00.
Ok, so how to add this to our container?
Docker-compose
I like using docker-compose as it enables me to orchestrate many containers in a simple way, using only one docker-compose.yml file.
With docker-compose you can for example create a file with all the containers you need and then run docker-compose up for it to start all of them at once, instead of dealing with docker run and passing all parameters every time, etc.
Let’s create a file called docker-compose.yml with the following content in same level as our scripts and cron directories:
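A sketch of what the docker-compose.yml could look like; the service name and the exact host paths are assumptions matching the volumes described in the following paragraphs:

```yaml
version: "3"
services:
  cron:
    build: ./cron
    volumes:
      - /crontest:/crontest
      - ./cron/root:/etc/crontabs/root
      - ./scripts:/home/node
```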
Spacing is very important here. Visual Studio Code can help you with that, as it detects YAML files.
The cron directory is specified in the build attribute there, so docker-compose will build the image using the Dockerfile we created inside the cron directory.
In "volumes", the crontest directory on your host OS is mounted into the container at the same path, so when our script writes to hello.log as configured above, the log is easily accessible from the host OS, instead of having to enter the cron container to read it. If this directory doesn't exist on your host OS, docker-compose will create it.
In the next line configured inside "volumes", the root file we created inside the cron directory is mounted into the container at the /etc/crontabs/root path. On the third line, everything from the scripts directory is mounted into /home/node inside the container, as this is the home directory this Node.js container comes with.
Instead of mounting these files into the container using docker-compose, you could have added them at image creation using the ADD command, but then you'd have to rebuild the image whenever you needed to change any of these files, and the resulting image would also have more layers. Because of that, I prefer to mount them from the host OS, so if I need to modify any of these files, they are not built into the image in any way.
After saving the docker-compose.yml file, you can run docker-compose up to build our Docker image and start our container from it.
This way all the output will show up on your terminal and you won't be able to use this terminal for anything else. If you add -d at the end of the command, it runs as a daemon and your terminal will be free, but you won't see any output about the image build process or while it's running.
If you run docker-compose without the daemon option, you just need to press Ctrl+C when you want to stop the containers specified in your docker-compose.yml file. If you run it as a daemon, you can run docker-compose stop to stop the containers, and then docker-compose rm if you want to remove them.
To list the current containers, run docker ps. And to list the images, docker images.
Running on startup
After creating the image and checking that everything works as expected, we need to create a startup script so if our OS gets restarted the cron container runs automatically.
The servers I configure are usually running Debian Linux, my personal notebook runs Arch Linux and at work I run Manjaro. All these Linux distributions come with systemd, so I’ll show an example of how the startup could be configured for that.
Let’s save a file called docker-infra.service inside /etc/systemd/system/ with the following content:
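A sketch of what docker-infra.service could contain; the WorkingDirectory and the docker-compose binary path are assumptions you need to adapt to your server:

```ini
[Unit]
Description=Start docker-compose infrastructure on boot
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Directory containing your docker-compose.yml — change this path.
WorkingDirectory=/home/user/project
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose stop

[Install]
WantedBy=multi-user.target
```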
You need to change the path to your docker-compose.yml file to the one where you saved yours inside your server, of course.
Then let's reload the system daemons with sudo systemctl daemon-reload, and enable our newly created daemon to run on system startup with sudo systemctl enable docker-infra.service.
If you need to stop or check the status of the daemon that controls our Docker container, just change enable in the systemctl command above to stop or status, and that's it.
Conclusion
In this post we saw how to create a Docker image to use as cron to run Node.js scripts. Then we saw how to use docker-compose and create a daemon for it to run on system startup.
If you have any suggestions or questions, let me know in the comments below :)
You'd like a Docker container that runs cron jobs, with the output of those cron jobs going to stdout so they are accessible as docker logs. How do you do that?
Install cron, and set it up to run in the foreground
In your Dockerfile, apt-get -y install cron (use apk or your distribution's package manager if you are running on Alpine or another distribution). Then set the command or entrypoint in your Dockerfile to run cron in the foreground.
Create the crontab with redirection
Copy the crontab with your jobs in it into /etc/crontab. Each job should redirect its stdout and stderr to file handle 1 of PID 1 (the cron process). Here's a simple example crontab:
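A sketch of such an entry; note that /etc/crontab, unlike per-user crontabs, includes a user field:

```
# m h dom mon dow user  command
* * * * * root echo "HELLO" > /proc/1/fd/1 2>&1
```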
This just prints 'HELLO' every minute, redirecting the echo output to process 1's stdout (file handle 1). This line also redirects stderr, although that is not really necessary in the case of a simple echo.
This works because docker always treats the stdout from process 1 as the docker log stream.
An even simpler way
... to run a single job on a regular basis is to use date and sleep. This makes for a simpler container (no need for cron) if you only need a command or commands to run at a single interval (say, every morning at 3:00 AM). Details in this gist.
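A minimal sketch of that approach, assuming GNU date (as found in most Linux images); the 03:00 target and the job itself are placeholders:

```shell
#!/bin/sh
# Print the number of seconds from now until the next occurrence
# of an HH:MM target.
seconds_until() {
  now=$(date +%s)
  target=$(date -d "today $1" +%s)
  # If the target time has already passed today, aim for tomorrow.
  if [ "$target" -le "$now" ]; then
    target=$((target + 86400))
  fi
  echo $((target - now))
}

# In the container you would loop: sleep until 03:00, run the job, repeat.
echo "next run in $(seconds_until 03:00) seconds"
# sleep "$(seconds_until 03:00)" && node /home/node/hello.js
```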