
Zero downtime deployments with Envoy

Envoy is Laravel’s official task runner. Using a Blade-style syntax, you can define tasks that run both locally and remotely. At Spatie, we’ve been using Envoy for quite some time to deploy code to production servers. In summary, our trusty Envoy script did the following: bring the application down, pull new code onto the server via the installed git repo, run composer install, run the migrations, and finally bring the application back up. The script had a big downside: the application would be down for close to a minute. This week I took the time to solve that issue.
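For context, the old flow looked roughly like the sketch below. The server address, paths, and task name are placeholders for illustration; only the overall sequence matches what our script did:

```blade
{{-- Hypothetical names throughout; only the flow matches the script described above. --}}
@servers(['remote' => '[email protected]'])

@task('deploy', ['on' => 'remote'])
    cd /var/www/app
    php artisan down
    git pull origin master
    composer install --no-dev --prefer-dist
    php artisan migrate --force
    php artisan up
@endtask
```

Everything between `php artisan down` and `php artisan up` is the window in which the application was unavailable.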

Back in February, Chris Fidao, of Servers for Hackers fame, created an Envoy script to deploy code with zero downtime. Read his entire post to learn how it works. In short, it comes down to this:

  • a new version of the entire application is prepared in a directory on the server
  • when it’s fully ready, the script creates a symlink so the webserver serves the sources from the prepared directory
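The two steps above can be sketched in plain shell. The directory names below are assumptions for illustration, not what any of the mentioned scripts generate for you:

```shell
# Hypothetical release directory; in practice this would be timestamped per deploy.
release="releases/20150101120000"

# 1. Prepare the new version in its own directory, out of the webserver's sight.
mkdir -p "$release"
echo "new code" > "$release/index.php"

# 2. Atomically point the webserver's docroot at the finished release.
#    -n treats an existing "current" symlink as a plain file so it gets replaced,
#    instead of creating the link inside the old release directory.
ln -sfn "$release" current
```

Because swapping a symlink is a single filesystem operation, requests hit either the old release or the new one, never a half-deployed mix.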

Michael Dyrynda describes the process in more detail on his blog. Some big name deployment tools, like Capistrano, use this strategy too.

I’ve used Michael’s code as a basis for our updated Envoy script. The basic flow is the same but I’ll highlight some changes.

Loading the .env file

Envoy has the ability to send messages to Slack. In order to do this, a webhook must be specified. It’s best practice to store all sensitive configuration in the .env file, but by default Envoy won’t read that file. Adding this code to the setup makes all values specified in the .env file available:

@setup
require __DIR__.'/vendor/autoload.php';
(new \Dotenv\Dotenv(__DIR__, '.env'))->load();
@endsetup

@slack(env('SLACK_ENDPOINT'), "#deployments", "{$task} done")

Enabling fast code deployments

A full deployment encompasses generating assets, running composer install and generating Laravel’s compiled.php file. These tasks can take quite some time; more often than not the entire deployment process lasts over a minute. For a simple bugfix in a PHP file we shouldn’t need to run through all those steps. It would be great if we could just do a git pull on the server. Michael’s script downloads a tarball to the server. That’s great for preserving disk space, but it makes a simple git pull impossible. We swapped out downloading the tarball in favor of doing a git clone:

git clone --depth 1 [email protected]:{{ $repository }} {{ $newReleaseName }}

The --depth 1 flag creates a shallow clone, so the repository’s history doesn’t need to be downloaded.

If you’ve read Chris’ and Michael’s posts, then you know that the storage directories in our project are removed and symlinked to a persistent storage directory. If we just performed a git pull on the repo on the server, these symlinks would be overwritten by the files in the repo. This can be prevented by using git’s sparse checkout feature:

git config core.sparsecheckout true
echo "*" > .git/info/sparse-checkout
echo "!storage" >> .git/info/sparse-checkout
echo "!public/build" >> .git/info/sparse-checkout
git read-tree -mu HEAD

Using sparse checkout prevents certain files and directories from being checked out, so our symlinks remain in place. Now we can use the task below to deploy some code in a few seconds, without having to jump through all the hoops of a full deploy:

@task('deployOnlyCode', ['on' => 'remote'])
cd {{ $currentDir }}
git pull origin master
@endtask

Refreshing PHP’s OPCache

When I was testing the Envoy script I noticed that sometimes new changes weren’t visible when browsing to the application. My first thought was that something was wrong with the symlinks or the webserver configuration. After a bit of Googling I stumbled on this article by Mattias Geniar. It contained the reason why the latest changes were not visible:

In short: if the “path” to your document root or PHP doesn’t actually change, PHP’s OPCache will not reload your PHP Code.

So to clear the OPCache the PHP process needs to be restarted after every deploy:

sudo service php5-fpm restart

If you have any questions regarding our new deployment process or have suggestions to improve it, let me know in the comments below.

Freek Van der Herten is a partner and developer at Spatie, an Antwerp-based company that specializes in creating web apps with Laravel. After hours he writes about modern PHP and Laravel on this blog. When not coding he’s probably rehearsing with his krautrock band.
  • I’m absolutely humbled to get a mention on your blog and happy that I was able to give you a start with your own deploy script 🙂

    I hadn’t even thought of hooking Envoy into dotenv; that is cool!

  • Micael Gustafsson

    Thanks for this! I like the way you build gulp assets locally and then scp them to the server.

    Do you have any good way to quit the deployment process with Envoy when something goes wrong?
    Let’s say that you don’t want to run the migrations if some other task broke before. Or maybe you just want to delete the newReleaseDir if something went wrong.

    Only way I’ve found so far is to use the @error directive to listen for failed tasks and then from there stop the process…

    • I haven’t found a good way to stop the deploy process automatically. That @error directive isn’t mentioned in the docs. Does it stop on every error?

      • Micael Gustafsson

        There are a few undocumented directives, I found it while looking through the “compiler” here: https://github.com/laravel/envoy/blob/master/src/Compiler.php#L39

        It *seems* to work similar to the @after callback, called after each error. I don’t know if all errors are caught though… It does not stop the rest of the tasks, unless you do something like “die;” within the callback.

  • legshooter

    Doesn’t the php5-fpm service restart undermine the zero downtime part? How can a request come through during the restart, surely it isn’t an atomic process?

    • You’re right, it’s not a true zero downtime deployment, but it comes very, very close.

      • legshooter

        Check this out, it might help you make it 100% zero downtime:

        Btw, I almost never see this discussed, as if all deploys are purely code related and databases don’t even exist, but is there really zero downtime deployment when the database needs to be updated? The database update is atomic all by itself, and the symlink is also atomic all by itself, but there’s always this window between them, when you can have requests slip in with the newly modified database and old code.