Easily convert webpages to images using PHP

Browsershot is a package that can easily convert any webpage into an image. Under the hood the conversion is made possible by the new headless mode introduced in Chrome 59. In this post I'd like to show you how you can use Browsershot v2.


About three years ago I started working on my first package, called Browsershot. It can convert any webpage to an image. Under the hood PhantomJS was used to perform the conversion. Since its release it has done its job pretty well, but unfortunately PhantomJS has recently been abandoned. A few weeks ago Chrome 59 was released. That version introduced some pretty cool features. One of them is the ability to take screenshots of a webpage. A recent version of Chrome is probably better at rendering a page correctly than an abandoned version of PhantomJS. That's why I decided to make a new major release of Browsershot. In this post I'd like to show you how you can use Browsershot v2.

Installing Chrome 59

In order to make use of Browsershot you must make sure that Chrome 59 or higher is installed on your system. On a Forge provisioned Ubuntu 16.04 server you can install the latest stable version of Chrome like this:
sudo wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get update
sudo apt-get -f install
sudo apt-get install google-chrome-stable
Browsershot has been tested on macOS and Ubuntu 16.04. If you use another OS your mileage may vary.

Using Browsershot

Here's the easiest way to create an image of a webpage:
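Assuming you've pulled the package in via Composer, the call boils down to this (the URL and output path are placeholders):

```php
use Spatie\Browsershot\Browsershot;

Browsershot::url('https://example.com')
    ->save('example.png');
```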
Browsershot will make an educated guess as to where Google Chrome is located. If Chrome cannot be found on your system you can manually hint its location:
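A sketch of that hint; the path below is an assumption for a typical Linux install:

```php
use Spatie\Browsershot\Browsershot;

Browsershot::url('https://example.com')
    ->setChromePath('/usr/bin/google-chrome')
    ->save('example.png');
```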
By default the screenshot will be a png and its size will match the resolution you use for your desktop. Want another size of screenshot? No problem!
Browsershot::url('https://example.com')
    ->windowSize(640, 480)
    ->save('example.png');
You can also set the size of the output image independently of the size of the window. Here's how to take a screenshot with a resolution of 1920x1080 and scale it down to something that fits inside 200x200.
Browsershot::url('https://example.com')
    ->windowSize(1920, 1080)
    ->fit(Manipulations::FIT_CONTAIN, 200, 200)
    ->save('example.png');
In fact, you can use all the methods spatie/image provides. Here's an example where a greyscale image is created:
Browsershot::url('https://example.com')
    ->windowSize(640, 480)
    ->greyscale()
    ->save('example.png');
If, for some reason, you want to set the user agent Google Chrome should use when taking the screenshot you can do so:
Browsershot::url('https://example.com')
    ->userAgent('My Special Snowflake Browser 1.0')
    ->save('example.png');

In closing

The screenshot capabilities of Chrome are already quite good, but there's still a lot of room for improvement. Right now there's no way to specify where Chrome should save the screenshot (a problem that the package solves for you behind the scenes). In the DevTools there's an option to take a screenshot of the whole length of the page. Unfortunately this isn't possible with the current command line options. I'm pretty sure the headless and screenshotting options will improve in future versions of Chrome. I intend to update Browsershot as soon as new features become available in Chrome. Want to get started using Browsershot? Then head over to GitHub. Also take a look at the other packages our team has previously made.

A straightforward Vue component to filter and sort tables

Today we released our newest Vue component called vue-table-component. It aims to be a very easy to use component to make tables filterable and sortable. In this post I'd like to tell you all about it.

Why create yet another table component?

Let's first touch upon why we created it. To make lists of models sortable and filterable we used to rely on the venerable DataTables.net component. It works great, but comes with a few caveats. First, it's built on jQuery. Nowadays we avoid using jQuery in our projects and use Vue specific components instead. Secondly, there's a lot of stuff in there that we don't need; we're probably only using 10% of its features. So for our use cases it's a bit bloated. And lastly, because it contains so many features, using it can be a little unwieldy. When creating our own table component we had only our specific use case in mind: a small set of data that is passed via JSON.

Using vue-table-component

Here's an example of how you can use the component:
<table-component
     :data="[
         { firstName: 'John', lastName: 'Lennon', instrument: 'Guitar', editUrl: '<a href=\'#john\'>Edit</a>', birthday: '04/10/1940', songs: 72 },
         { firstName: 'Paul', lastName: 'McCartney', instrument: 'Bass', editUrl: '<a href=\'#paul\'>Edit</a>', birthday: '18/06/1942', songs: 70 },
         { firstName: 'George', lastName: 'Harrison', instrument: 'Guitar', editUrl: '<a href=\'#george\'>Edit</a>', birthday: '25/02/1943', songs: 22 },
         { firstName: 'Ringo', lastName: 'Starr', instrument: 'Drums', editUrl: '<a href=\'#ringo\'>Edit</a>', birthday: '07/07/1940', songs: 2 },
     ]"
     sort-by="songs"
     sort-order="asc"
>
     <table-column show="firstName" label="First name"></table-column>
     <table-column show="lastName" label="Last name"></table-column>
     <table-column show="instrument" label="Instrument"></table-column>
     <table-column show="songs" label="Songs" data-type="numeric"></table-column>
     <table-column show="birthday" label="Birthday" data-type="date:DD/MM/YYYY"></table-column>
     <table-column show="editUrl" label="" :sortable="false" :filterable="false"></table-column>
</table-component>
Let's take a look at how our component renders that piece of html. The component doesn't come with any styling, so you'll need to provide your own css. Head over to the demo page to play with the rendered output. You've probably noticed that a filter field is added to the top. By default all data will be used when filtering. If any columns contain html it will be stripped out before filtering. So even though that last column contains links, filtering on href won't yield any results. If you don't want a column of data to be filterable just add :filterable="false" to table-column. As expected, clicking a column header will sort the data. If the data contains a column with numerical values or dates you must set a data-type prop on table-column. Don't want a column to be sortable? No problem! Just pass a sortable prop set to false to table-column. The component will also remember its state for 15 minutes. So when you reload the page, the filter and sorting you used previously will still be applied. Head over to the readme on GitHub to learn about all the props that can be passed.

In closing

As mentioned before, our table component is very much targeted at our use case. Our intent is to keep this component very clean and simple. If you need more features, take a look at the numerous other Vue components out there that can render table data. Because we are using Vue more and more in our client projects, you can expect some more open source Vue components from us soon. If you haven't already done so, be sure to take a look at this list of things we open sourced previously.

A package to enable short class names in an Artisan tinker session

Tinker is probably one of my most used Artisan commands. A minor annoyance is that it can be quite bothersome to type the fully qualified class name to do something simple. Today we released a new package called laravel-tinker-tools. When fully installed it lets you use short class names in a tinker session. This magic does not only work with models but with every class in your Laravel app.

Installing the package

There are a few non-standard steps you need to take in order to install the package. First, pull in the package like you normally would:
composer require spatie/laravel-tinker-tools
Next, create a file named .psysh.php in the root of your Laravel app with this content:
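The contents should look something like this (a sketch; check the package readme for the exact snippet):

```php
<?php

\Spatie\TinkerTools\ShortClassNames::register();
```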

Finally, dump the optimized version of the autoloader so autoload_classmap.php gets created.
composer dump-autoload -o
And with all of that out of the way you can use short class names in your tinker session.

A peek behind the curtains

When you use a class that hasn't been loaded yet, PHP will call the registered autoloader functions. Such autoloader functions are responsible for loading up the requested class. In a typical project Composer will register an autoloader function that can include the file the class is stored in. Composer has a few ways to locate the right files. In most cases it will convert the fully qualified class name to a path. For example, when using a class App\Models\NewsItem Composer will load the file app/Models/NewsItem.php. It's a bit more complicated behind the scenes, but that's the gist of it. To make the process of finding a class fast, Composer caches all the fully qualified class names and their paths in the generated autoload_classmap.php, which can be found in vendor/composer. Now, to make this package work, \Spatie\TinkerTools\ShortClassNames will read Composer's autoload_classmap.php and convert the fully qualified class names to short class names. The result is a collection that's kept in the $classes property. Our class will also register an autoloader. When you use NewsItem in your code, PHP will first call Composer's autoloader. But of course that autoloader can't find the class. So the autoloader from this package comes next. Our autoloader will use the aforementioned $classes collection to find the fully qualified class name. It will then use class_alias to alias NewsItem to App\Models\NewsItem.
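The technique can be boiled down to a few lines. This is a hypothetical, simplified sketch (not the package's actual code): map short class names to fully qualified ones, and register an autoloader that aliases them on demand.

```php
<?php

// Stand-in for a class that Composer would normally autoload.
eval('namespace App\Models; class NewsItem {}');

// Short name => fully qualified class name, as derived from the classmap.
$shortNames = ['NewsItem' => 'App\Models\NewsItem'];

spl_autoload_register(function (string $class) use ($shortNames) {
    if (isset($shortNames[$class])) {
        class_alias($shortNames[$class], $class);
    }
});

// Composer's autoloader runs first and fails; ours kicks in next.
$item = new NewsItem();

var_dump($item instanceof \App\Models\NewsItem); // bool(true)
```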

What happens if there are multiple classes with same name?

Now you might wonder what'll happen if there are multiple classes with the same name in different namespaces, e.g. App\Models\NewsItem and Vendor\PackageName\NewsItem. Well, autoload_classmap.php is sorted alphabetically on the fully qualified class name. So App\Models\NewsItem will be used and not Vendor\PackageName\NewsItem. Because App starts with an "A" there's a high chance that, in case of a collision, a class inside your application will get picked. Currently there is no way to alter this. I'd accept PRs that make this behaviour customizable.
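A toy illustration of that collision rule (not the package's code): with the classmap already sorted alphabetically, the first fully qualified name that matches the short name wins.

```php
<?php

// Resolve a short class name against an alphabetically sorted classmap.
function resolveShortName(string $shortName, array $classMap): ?string
{
    foreach (array_keys($classMap) as $fqcn) {
        if (substr($fqcn, strrpos($fqcn, '\\') + 1) === $shortName) {
            return $fqcn; // first hit wins
        }
    }

    return null;
}

$classMap = [
    'App\Models\NewsItem' => '/app/Models/NewsItem.php',
    'Vendor\PackageName\NewsItem' => '/vendor/package-name/src/NewsItem.php',
];

echo resolveShortName('NewsItem', $classMap); // App\Models\NewsItem
```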

Need more tinker magic?

There are a lot of other options that can be set in .psysh.php. Learn about all the options by reading the official psysh configuration documentation. Caleb Porzio's blogpost "Supercharge Your Laravel Tinker Workflow" is an excellent read as well. This package was inspired by that blogpost.

Maybe you don't need tinker...

If you want to run a single line of code, tinker can be a bit of overkill. You must start up tinker, type the code, press enter, and quit tinker. Our laravel-artisan-dd package contains an Artisan command that can dump anything from the command line. No need to start and quit tinker anymore. We've updated the package so you can make use of short class names too. Here's how you can dump a model using a minimal amount of keystrokes:
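Something along these lines, where NewsItem stands in for a hypothetical model in your app:

```shell
php artisan dd "NewsItem::first()"
```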

In closing

laravel-tinker-tools and laravel-artisan-dd aren't the only packages we've made that make developing Laravel apps easier. Be sure to take a look at those as well. Need even more stuff? The open source section on our company website lists everything we've released previously: https://spatie.be/en/opensource

Quickly dd anything from the commandline

Laravel's tinker command allows you to run any code you want as if you were inside your Laravel app. But if you want to run a single line of code it can be a bit bothersome. You must start up tinker, type the code, press enter, and quit tinker. Our new spatie/laravel-artisan-dd package contains an Artisan command to dd anything from the command line. No need to start and quit tinker anymore. You can dd anything you want, even an Eloquent model, and of course the output will be pretty printed. Multiple pieces of code can be dumped in one go:
php artisan dd "bcrypt('secret')" "bcrypt('another-secret')"
The dd Artisan command uses PHP's eval to run arbitrary code. Be aware that this can be potentially dangerous. By default the command will only run in a local environment. You can make it run in other environments by setting an ALLOW_DD_COMMAND environment variable to true. If you like this package go check it out on GitHub. This isn't the first package our team has made. Go check out this big list of Laravel packages on our company website to learn if we've made anything that could be of use to you.

A conversation on laravel-html

Hi, how are you? I'm fine thanks! How are you? I saw you released another package last week. Yup yup, you mean laravel-html, right? It was actually coded up by my colleague Sebastian, who did an awesome job. But why put out yet another html generator? Html generation is a solved problem, right? Yes, it is, but... I mean, we already have laravelCollective/html... Yes, but... Why not just type html, ... come on, do we really need a package for this? It's easy to just type html. Let me say first that if you're happy with your current solution, just stick with it. There's no right or wrong here. Because everybody already has their own favourite solution, html generation is a pretty divisive subject. I believe that's why html generation was kicked out of the Laravel core. Stop rambling, just tell me why you've created this. Well, like probably most fellow artisans we had been using the laravel collective package for quite some time and we were quite happy with it. But since html generation got kicked out of the Laravel core and was put in the laravel collective package, it has not evolved much. Take for instance this piece of code:
Form::email('element-name', null, ['class' => 'my-class']);
This seems like quite a simple solution, but a few things bother me. That null parameter when there is no value, for instance. And the fact that you need to pass an array as the third parameter. In our package all html can be built up with a fluent interface. So you can do this:
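The fluent equivalent would look something like this (a sketch based on the package's fluent attribute methods):

```php
html()->email('element-name')->class('my-class');
```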
Yeah ok, that's certainly nice, but do you plan to code all of your html like this? No, certainly not. For simple things I prefer just writing the html. But when it comes to generating forms, I prefer using our package. Because when typing forms manually there are so many things you need to take care of: the method field, the csrf token field, etc. It's very error prone. Here's how you can create a form with laravel-html:
html()->form('PUT', '/post')->open();
html()->email('email')->placeholder('Your e-mail address');
html()->submit('Process this');
That'll output:
<form method="POST" action="/post">
    <input type="hidden" name="_method" id="_method" value="PUT">
    <input type="hidden" name="_token" id="_token" value="csrf_token_will_be_here">
    <input type="email" name="email" id="email" placeholder="Your e-mail address">
    <input type="submit" value="Process this">
As you can see those hidden _method and _token fields were added automatically. Cool! You can also use models, or really anything that implements ArrayAccess, to prefill the values of the form.
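For instance (a sketch, assuming $user is a model whose name attribute is "John"):

```php
html()->model($user);

echo html()->text('name');
```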

So if name is set on user, the form element will receive its value:
<input type="text" name="name" id="name" value="John">
This all sounds interesting, please continue. Well, if you want to know more, you really should check out the documentation. It contains all you need to know. If you have a question about the package feel free to open up an issue on GitHub. Is there anything more that we should know about the package? It's PHP 7.1 only. Wut? Can you create a PHP 7.0 or PHP 5.6 version? Boring. Next question. But why did you make it PHP 7.1 only? My projects do not use that version yet. We mainly create packages to use in our own projects. All our new client projects use PHP 7.1, so it makes no sense for us to create packages for an older version of PHP. But PHP 7.0 is still an active version... I really don't care. You're of course free to fork the package and maintain your own copy that's compatible with an older PHP version. Yay for open source. Ok, I'll go check it out! Please do; as already mentioned, the docs are here. And be sure to take a look at the other packages we've created previously.

An easy to use server monitor written in PHP

We all dream of servers that need no maintenance at all. But unfortunately, in reality this is not the case. Disks can get full, processes can crash, the server can run out of memory... Last week our team released a server monitor package written in PHP that keeps an eye on the health of all your servers. When it detects a problem it can, amongst other things, notify you via Slack. In this post I'd like to give some background on why we created it and give you a run-through of what the package can do.

Why create another server health monitor?

The short answer: the available solutions were too complicated and / or too expensive for my company. If you want the long version, read on and otherwise skip to the introduction of laravel-server-monitor. In order to answer this question, let's first take a look at how my company has been doing things the last few years. We're what most people call a web agency. Our team is quite small: only a handful of developers with no dedicated operations team. We create a lot of web applications for clients. For the most part we also host all these applications. Up until a few years ago we created rather smallish sites and apps. We relied on traditional shared hosting. But as the complexity of our projects grew, shared hosting didn't cut it anymore. Thanks to excellent resources like serversforhackers.com and Laravel Forge we felt confident enough running our own servers. Each application is hosted on its own Digital Ocean server that was provisioned by Laravel Forge. Sure, running each application on a separate server is probably a bit more expensive than grouping some of them together on the same box. But using separate boxes has a lot of benefits:
  • for new projects you can just set up a new box with the latest versions of PHP, MySQL, etc.
  • when touching a project running on an older version of PHP we can very easily upgrade the PHP version on that server. When running multiple applications on the same server you don't get this freedom without testing all the applications running on it
  • when a server is in trouble it only impacts one application
  • an application that is misbehaving in terms of memory and cpu usage can't impact other applications
  • each application has a lot of diskspace to play with (minimum 20 GB)
  • when Digital Ocean loses your server (yes, this can happen), it only impacts one application
Even though we are very happy with how we do things in regard to hosting, we don't see a lot of other companies of our size using this strategy. Most of them use managed / shared hosting. So as a small company with a lot of servers we're probably in a niche. The problem with paid server monitoring is that most services assume that if you have a lot of servers you're probably a large company with a big budget for server monitoring. Pricing of paid plans is mostly per host. For a single host this is mostly cheap (Datadog, for example, has a plan at $15 per host per month), but multiplied by a hundred hosts this becomes too expensive. Also, most of these services offer much more than we need. We don't need graphs of historical data or a lot of checks. We simply want to have a notification on our Slack channel when disk space is running low or when a service like memcached or beanstalk is down. There are also a lot of free open source solutions, like Zabbix, Nagios and Icinga. As a developer, the problem with these tools is that they don't target developers but people in an operations department. For developers these tools are quite complex to set up. Take a look at this guide to install Nagios. Sure, it's doable, but if you don't have much experience setting these kinds of things up, it can be quite daunting. There must be a better way.

Introducing laravel-server-monitor

To monitor our servers we built a Laravel package called laravel-server-monitor. This package can perform health checks on all your servers. If you're familiar with Laravel I'm sure you can install the package in a couple of minutes. Not familiar with Laravel? No problem! We've also made a standalone version. More on that later. And to be clear: the package is able to monitor all kinds of servers, not only ones where Laravel is running. The package monitors your servers by ssh'ing into them and performing certain commands. It'll interpret the output returned by the command to determine if the check failed or not. Let's illustrate this with the memcached check provided out of the box. This verifies if Memcached is running. The check runs service memcached status on your server and if it outputs a string that contains memcached is running the check will succeed. If not, the check will fail. When a check fails, and on other events, the package can send you a notification. Notifications look like this in Slack. You can specify which channels will send notifications in the config file. By default the package has support for Slack and mail notifications. Because the package leverages Laravel's native notifications you can use any of the community supported drivers or write your own. Hosts and checks can be added via the add-host artisan command or by manually adding them in the hosts and checks table. This package comes with a few built-in checks. But it's laughably easy to add your own checks.

Defining checks

The package will run checks on hosts. But what does such a check look like? A check actually is a very simple class that extends Spatie\ServerMonitor\CheckDefinitions\CheckDefinition. Let's take a look at the code of the built-in diskspace check.
namespace Spatie\ServerMonitor\CheckDefinitions;

use Spatie\Regex\Regex;
use Symfony\Component\Process\Process;

final class Diskspace extends CheckDefinition
{
    public $command = 'df -P .';

    public function resolve(Process $process)
    {
        $percentage = $this->getDiskUsagePercentage($process->getOutput());

        $message = "usage at {$percentage}%";

        if ($percentage >= 90) {
            $this->check->fail($message);

            return;
        }

        if ($percentage >= 80) {
            $this->check->warn($message);

            return;
        }

        $this->check->succeed($message);
    }

    protected function getDiskUsagePercentage(string $commandOutput): int
    {
        return (int) Regex::match('/(\d?\d)%/', $commandOutput)->group(1);
    }
}
This check will perform df -P . on the server. That will generate output much like this:
Filesystem                1024-blocks     Used Available Capacity Mounted on
/dev/disk/by-label/DOROOT    20511356 12378568   7067832      64% /
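The extraction of the percentage from that output needs no package code at all; here's a standalone sketch using plain preg_match (the package itself uses spatie/regex, but the idea is identical):

```php
<?php

// Pull the usage percentage out of `df -P .` output.
function getDiskUsagePercentage(string $commandOutput): int
{
    preg_match('/(\d?\d)%/', $commandOutput, $matches);

    return (int) $matches[1];
}

$output = <<<'EOT'
Filesystem                1024-blocks     Used Available Capacity Mounted on
/dev/disk/by-label/DOROOT    20511356 12378568   7067832      64% /
EOT;

echo getDiskUsagePercentage($output); // 64
```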
With a little bit of regex we extract the percentage listed in the Capacity column. If it's higher than 90% we'll call fail. This will mark the check as failed and will send out a notification. If it's higher than 80% it'll issue a warning. If it's below 80% the check succeeds. It's as simple as that. Let's take a look at another example: the memcached check.
namespace Spatie\ServerMonitor\CheckDefinitions;

use Symfony\Component\Process\Process;

final class Memcached extends CheckDefinition
{
    public $command = 'service memcached status';

    public function resolve(Process $process)
    {
        if (str_contains($process->getOutput(), 'memcached is running')) {
            $this->check->succeed('is running');

            return;
        }

        $this->check->fail('is not running');
    }
}
This check will run the command service memcached status on the server. If that command outputs a string that contains memcached is running the check succeeds, otherwise it fails. Very simple.
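The matching itself is plain substring logic. Here's a standalone sketch (str_contains in the package code is Laravel's string helper; strpos does the same job here):

```php
<?php

// Decide whether memcached is up from the output of its status command.
function memcachedIsRunning(string $statusOutput): bool
{
    return strpos($statusOutput, 'memcached is running') !== false;
}

var_dump(memcachedIsRunning(' * memcached is running')); // bool(true)
var_dump(memcachedIsRunning('memcached stop/waiting'));  // bool(false)
```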

Adding your own checks

Writing your own checks is very easy. Let's create a check that'll verify if Nginx is running. First, let's take a look at how to manually verify that. The easiest way is to run systemctl is-active nginx. This command outputs active if Nginx is running. Let's create an automatic check using that command. The first thing you must do is create a class that extends Spatie\ServerMonitor\CheckDefinitions\CheckDefinition. Here's an example implementation.
namespace App\MyChecks;

use Spatie\ServerMonitor\CheckDefinitions\CheckDefinition;
use Symfony\Component\Process\Process;

class Nginx extends CheckDefinition
{
    public $command = 'systemctl is-active nginx';

    public function resolve(Process $process)
    {
        if (str_contains($process->getOutput(), 'active')) {
            $this->check->succeed('is running');

            return;
        }

        $this->check->fail('is not running');
    }
}
Let's go over this code in detail. The command to be executed on the server is specified in the $command property of the class. The resolve function accepts an instance of Symfony\Component\Process\Process. The output of that process can be inspected using $process->getOutput(). If the output contains active we'll call $this->check->succeed, which will mark the check successful. If it does not contain that string, $this->check->fail will be called and the check will be marked as failed. By default the package sends you a notification whenever a check fails. The string that is passed to $this->check->fail will be displayed in the notification. After creating this class you must register your class in the config file.
// config/server-monitor.php
'checks' => [
    // ...
    'nginx' => App\MyChecks\Nginx::class,
],
And with that, you're done. A check definition can actually do a few more things, like determining when it's supposed to run next and setting timeouts, and it has support for using custom properties. Take a look at the docs if you want to know more about this.

Using the stand alone version

If you're not familiar with Laravel, installing a package can be a bit daunting. That's why we also created a stand alone version called server-monitor-app. Under the hood it's simply a vanilla Laravel 5.4 application with the laravel-server-monitor package pre-installed into it. Using this app you can set up server monitoring in literally one minute. Here's a video that demonstrates the installation and using a check.

Under the hood

Let's take a look at a few cool pieces of source code. If you have a bunch of servers you can end up with a lot of checks that need to be run. Running all those checks one after the other can take a bit of time. That's why the package has support for running checks concurrently. In the config file you can configure how many ssh connections the package may use. This code is taken from CheckCollection, which is responsible for running all the checks.
 public function runAll()
 {
     while ($this->pendingChecks->isNotEmpty() || $this->runningChecks->isNotEmpty()) {
         if ($this->runningChecks->count() < config('server-monitor.concurrent_ssh_connections')) {
             // start the next pending check here

         }

         $this->handleFinishedChecks();
     }
 }
This loop will run as long as there are pending checks or running checks. Whenever fewer checks are running than the number configured in the config file, a new check is started. Let's take a look at what's happening inside the handleFinishedChecks function.
protected function handleFinishedChecks()
{
    [$this->runningChecks, $finishedChecks] = $this->runningChecks->partition(function (Check $check) {
        return $check->getProcess()->isRunning();
    });

    $finishedChecks->each(function (Check $check) {
        $this->handleFinishedProcess($check);
    });
}
This code leverages a lot of niceties offered by the latest versions of PHP and Laravel. It will filter out all the processes that are no longer running (and are thus finished) and put them in the $finishedChecks collection. After that handleFinishedProcess will be called on each finished process. handleFinishedProcess will eventually call the resolve function seen in the CheckDefinition examples listed above.
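The partition step can be sketched with plain PHP arrays (Laravel's partition() does the same thing on collections):

```php
<?php

// Split an array into items matching a predicate and items that don't,
// in a single pass.
function partition(array $items, callable $callback): array
{
    $matching = [];
    $notMatching = [];

    foreach ($items as $item) {
        if ($callback($item)) {
            $matching[] = $item;
        } else {
            $notMatching[] = $item;
        }
    }

    return [$matching, $notMatching];
}

$checks = [
    ['id' => 1, 'running' => true],
    ['id' => 2, 'running' => false],
    ['id' => 3, 'running' => true],
];

[$running, $finished] = partition($checks, function (array $check) {
    return $check['running'];
});

echo count($running), ' ', count($finished); // 2 1
```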

Testing the code

Like all other packages we previously made, laravel-server-monitor contains a good suite of tests. This allows us to improve the code and accept PRs without fear of breaking things. Because SSH connections are used in this package, testing all functionality provided some challenges. To easily test code that relies on SSH connections, the test suite contains a dummy SSH server written in JavaScript. When it runs it mimics all the functionality of an SSH server. The SSH server itself is provided by the mscdex/ssh2 package. My colleague Seb wrote an easy to use abstraction around it. Using that abstraction we can let it respond with whatever we want when a command is sent to the server. This makes testing the package end to end a breeze. Here's how we test a successful check.
 /** @test */
 public function it_can_run_a_successful_check()
 {
     // let the dummy ssh server report a disk usage of 40% and run the checks...

     $check = Check::where('host_id', $this->host->id)->where('type', 'diskspace')->first();

     $this->assertEquals('usage at 40%', $check->last_run_message);
     $this->assertEquals(CheckStatus::SUCCESS, $check->status);
 }
Let's take a look at another example. The package will fire a CheckRestored event if a check succeeds again after it has failed previously.
 /** @test */
 public function the_recovered_event_will_be_fired_when_an_check_succeeds_after_it_has_failed()
 {
     // make the check fail first, then let it succeed again...

     Event::assertDispatched(CheckRestored::class, function (CheckRestored $event) {
         return $event->check->id === $this->check->id;
     });
 }
In closing

Laravel Server Monitor was a fun project to work on. We're quite happy with the results and now use it to monitor all our servers. If you're interested in learning more about the package, head over to our documentation site or the package itself on GitHub. Keep in mind that this package was built specifically for teams without a dedicated ops member or department. So before using it, research the alternatives a bit yourself and make up your own mind about what is a good solution for you. Our server monitor determines health by checking stuff inside your server. We also built another package called laravel-uptime-monitor that monitors your server from the outside. It'll regularly send http requests to verify if your server is up. It can even verify that the used ssl certificate is still valid for a certain amount of days. Take a look at the uptime monitor docs to know more. Also take a look at the other framework agnostic and Laravel specific packages we've made before. Maybe we've made something that can be of use in your next project.

A Laravel package to impersonate users

A great feature of Laravel Spark is its ability to impersonate other users. As an admin you can view all screens as if you were logged in as another user. This allows you to easily spot a problem that a user might be reporting. Laravel-impersonate is a package, made by MarceauKa and Thibault Chazottes, that can add this behaviour to any Laravel app. Here are some code examples taken from the readme.
Auth::user()->impersonate($otherUser); // You're now logged as the $otherUser.

Auth::user()->leaveImpersonation(); // You're now logged as your original user.

$manager = app('impersonate');

// Find an user by its ID
$user = $manager->findUserById($id);

// TRUE if you are impersonating an user.
$manager->isImpersonating();

// Impersonate an user. Pass the original user and the user you want to impersonate
$manager->take($from, $to);

// Leave current impersonation
$manager->leave();

// Get the impersonator ID
$manager->getImpersonatorId();
It even includes some handy blade directives:
@canImpersonate
    <a href="{{ route('impersonate', $user->id) }}">Impersonate this user</a>
@endCanImpersonate

@impersonating
    <a href="{{ route('impersonate.leave') }}">Leave impersonation</a>
@endImpersonating
Want to know more? Take a look at the package on GitHub.

Packages that make developing Laravel apps easier

In this post I'd like to share some of the packages that make developing a Laravel app easier.

laravel-debugbar
This package really needs no introduction as it is one of the most popular packages around. It's made by Barry Vd. Heuvel and it's a real powerhouse. Once the package is installed it displays a debugbar in your browser that shows a lot of useful info such as the executed queries, which views are used, which controller built up the page, and much much more. https://github.com/barryvdh/laravel-debugbar

laravel-ide-helper
This package, also made by Barry, can generate some files that help an IDE provide improved autocompletion. Using this package PHPStorm can autocomplete methods on facades, classes resolved out of the container and model properties. https://github.com/barryvdh/laravel-ide-helper


Those who are already using Laravel for quite some time know that Laravel 4 provided an artisan command to tail the Laravel default log. Unfortunately in Laravel 5 that command was axed. This package brings it back. With the package installed you can just run php artisan tail to tail the log. https://github.com/spatie/laravel-tail


Laravel 5.3 introduced mailables. A mailable is a class that takes care of everything regarding sending a mail. One of the things that is not so easy to do in a vanilla Laravel app is testing such a mail. In order to, for example, let your app send an order confirmation mail, you have to go through the entire checkout process. The laravel-mailable-test package can make the mailable testing process a lot easier. It provides an artisan command that can send a mailable to a specific mail address. The package will provide a value for any typehinted argument of the constructor of the mailable. If an argument is an int, string or bool the package will generate a value using Faker. Any argument that typehints an Eloquent model will receive the first record of that model. If you want to learn more, read this introductory blog post about it. https://github.com/spatie/laravel-mailable-test


Made by Mohamed Said, this package can help you test a flow where mail is involved. Instead of sending the actual mail, the package will save the mail to the filesystem. In the browser it will display a little notice that a mail was sent. Clicking the link in that notice will display the saved mail right in your browser. Here's a short movie clip that shows how it can be used to test a password reset. https://github.com/themsaid/laravel-mail-preview


In my projects I have never run a down migration, so I don't bother with coding up the down steps. And without those down steps, running Laravel's migrate:refresh will result in errors. If you think skipping the down steps is lazy, read this comment Adam Wathan made on Reddit. The laravel-migrate-fresh package provides an artisan command that can knock out all your tables without using the down steps of a migration. https://github.com/spatie/laravel-migrate-fresh
Do you know some other indispensable package that makes developing a Laravel app easier? Let me know in the comments below.

An artisan command to easily test mailables

Most of the Laravel apps we create at Spatie will send mails. This can be a password reset mail, a welcome mail after registration, an order confirmation mail, ... One of the things we do is style such mails so they have the same look and feel as the site they were sent from. When testing such mails our designers had to request a password reset or go through the entire checkout flow just to receive an order confirmation mail. To make that testing process a lot easier we've created a package called laravel-mailable-test. This package provides an artisan command that can send a mailable to any email address. To send a mailable, issue this artisan command:
php artisan mail:send-test "App\Mail\MyMailable" [email protected]
This will send the given mailable to the given email address. The to-, cc- and bcc-addresses that may be set in the given mailable will be cleared. The mail will only be sent to the email address given in the artisan command. The package will provide a value for any typehinted argument of the constructor of the mailable. If an argument is an int, string or bool the package will generate a value using Faker. Any argument that typehints an Eloquent model will receive the first record of that model. Imagine the constructor of your mailable looks like this:
public function __construct(string $title, Order $order) 
That constructor will receive a string generated by the sentence method of Faker and the first Order in your database. The values that are passed to the constructor of the mailable can be customized using the values option of the command.
php artisan mail:send-test "App\Mail\MyMailable" [email protected] --values="title:My title,order:5"
Using this command, My title will be passed to $title and an Order with id 5 will be passed to $order. To learn more about the package head over to the readme on GitHub. Be sure to also take a look at this list of Laravel packages our team has previously made.
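As a rough illustration of how such automatic filling could work, here's a hypothetical sketch. This is not the package's actual code: the function and class names are made up, and where a real implementation would call Faker, this sketch hardcodes a few placeholder values.

```php
<?php
// Hypothetical sketch: inspect a constructor's typehints via reflection
// and generate a placeholder value per parameter.
function fakeValueForType(?string $type)
{
    return match ($type) {
        'int' => 42,
        'bool' => true,
        default => 'A fake sentence.', // a real implementation would use Faker here
    };
}

function buildConstructorArguments(string $class): array
{
    $constructor = (new ReflectionClass($class))->getConstructor();

    return array_map(function (ReflectionParameter $parameter) {
        $type = $parameter->getType();

        return fakeValueForType($type instanceof ReflectionNamedType ? $type->getName() : null);
    }, $constructor ? $constructor->getParameters() : []);
}

// A made-up mailable-like class to demonstrate the idea.
class OrderConfirmation
{
    public function __construct(public string $title, public int $orderId) {}
}

print_r(buildConstructorArguments(OrderConfirmation::class));
// => ['A fake sentence.', 42]
```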

Easily work with the Twitter Streaming API in PHP

Twitter provides a streaming API with which you can do interesting things such as listen for tweets that contain specific strings or actions a user might take (e.g. liking a tweet, following someone,...). In this post you'll learn an easy way to work with that API.


While researching how to work with the streaming API in PHP I stumbled upon Phirehose. This package can authenticate and set up a connection with the streaming API. Unfortunately the API of Phirehose itself is a bit unwieldy. That's why I created a wrapper around Phirehose that makes setting up a connection to Twitter's streaming API a breeze. I made a framework agnostic version of the package and a Laravel specific one.

Getting started

Let's review how to integrate the package in a Laravel app. First you need to retrieve some credentials from Twitter. Fortunately this is very easy. Head over to the Application management on Twitter to create an application. Once you've created your application, click on the Keys and access tokens tab to retrieve your consumer_key, consumer_secret, access_token and access_token_secret. Now that you've got the credentials, let's install the package in a Laravel app. Like a gazillion other packages, laravel-twitter-streaming-api can be installed via Composer.
composer require spatie/laravel-twitter-streaming-api
Next, install the service provider.
// config/app.php
'providers' => [
    // ...
    Spatie\LaravelTwitterStreamingApi\TwitterStreamingApiServiceProvider::class,
],
If you love working with facades, you'll be happy to know that there's one provided by the package.
// config/app.php
'aliases' => [
    // ...
    'TwitterStreamingApi' => Spatie\LaravelTwitterStreamingApi\TwitterStreamingApiFacade::class,
],
The config file must be published with this command:
php artisan vendor:publish --provider="Spatie\LaravelTwitterStreamingApi\TwitterStreamingApiServiceProvider" --tag="config"
It'll copy a file with this content to config/laravel-twitter-streaming-api.php:
return [

    /*
     * To work with Twitter's Streaming API you'll need some credentials.
     * If you don't have credentials yet, head over to https://apps.twitter.com/
     */

    'access_token' => env('TWITTER_ACCESS_TOKEN'),

    'access_token_secret' => env('TWITTER_ACCESS_TOKEN_SECRET'),

    'consumer_key' => env('TWITTER_CONSUMER_KEY'),

    'consumer_secret' => env('TWITTER_CONSUMER_SECRET'),
];
Create the TWITTER_ACCESS_TOKEN, TWITTER_ACCESS_TOKEN_SECRET, TWITTER_CONSUMER_KEY and TWITTER_CONSUMER_SECRET keys in your .env and set them to the values you retrieved on Twitter. Now you're ready to use the package.

Your first stream

Let's listen for all tweets that contain the string #laravel. I've created an example Laravel app with the package preinstalled. This artisan command is included to listen for incoming tweets:

namespace App\Console\Commands;

use Illuminate\Console\Command;
use TwitterStreamingApi;

class ListenForHashTags extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'twitter:listen-for-hash-tags';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'Listen for hashtags being used on Twitter';

    /**
     * Execute the console command.
     *
     * @return mixed
     */
    public function handle()
    {
        TwitterStreamingApi::publicStream()
            ->whenHears('#laravel', function (array $tweet) {
                dump("{$tweet['user']['screen_name']} tweeted {$tweet['text']}");
            })
            ->startListening();
    }
}
In case you haven't used it before: that dump function does the same as dd, but without the dying part. The call to startListening() starts the listening process. The function will run forever, so any code after it won't be executed. Let's go ahead and execute the command. In the output you'll see some logging done by the underlying Phirehose package. Now tweet something with #laravel in the tweet. In the console you'll see your own tweet appear immediately. Pretty cool! Be aware that this API works in realtime, so it could report a lot of activity. If you need to do some heavy work processing that activity, it's best to put that work in a queue to keep your listening process fast.

Another example

In the example above we used the public stream. You can use that one to listen for tweets with specific strings posted on Twitter. But there are also other kinds of streams. With the user stream you can listen for certain events that happen for the authenticated user. Here's an example that demonstrates how you can listen for people favouriting one of your tweets:
TwitterStreamingApi::userStream()
    ->onEvent(function (array $event) {
        if ($event['event'] === 'favorite') {
            echo "Our tweet {$event['target_object']['text']} got favourited by {$event['source']['screen_name']}";
        }
    })
    ->startListening();

In closing

The Twitter streaming APIs are pretty powerful. If you want to know more about them, take a look at the official docs. The easiest way to get started with them is by using our Laravel specific or framework agnostic package. If you don't like wrappers and want to use Phirehose directly, read this tutorial on Scotch.io. In any case, check out the other PHP and Laravel packages our team has made before. Have fun building a cool app using Twitter's streaming APIs!

A Laravel package to rebuild the database

Out of the box Laravel comes with a few commands to migrate the database. One of them is migrate:refresh. That one will first run the down-steps for all your migrations and then run all the up steps. After that process your database should have the same structure as specified in your migrations. But what if your migrations don't have a down-step? Because I seldom have to revert a migration in my recent projects, I haven't bothered with coding up the down steps. And without those down steps, running migrate:refresh will result in errors, or worse, tears. I've created a little package that contains a command to quickly nuke all the tables in the database, run all migrations and run all seeders. It will not use the down steps of the migrations but will simply use the output of the SHOW TABLES query to knock down all tables. Once the package is installed, this is how you can build up your database again:
php artisan migrate:fresh
Need to run the seeds as well? No problem!
php artisan migrate:fresh --seed
The package works on MySQL, PostgreSQL and SQLite databases. If you need to perform some extra steps right before or right after the tables are dropped, you can hook into these events. Does this all sound good to you? Then check out the package on GitHub. On our company site you'll find a list of Laravel packages we've previously made.
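To illustrate the "knock down all tables" idea, here's a hypothetical sketch (not the package's actual code) using an in-memory SQLite database. On MySQL the table names would come from SHOW TABLES instead of sqlite_master.

```php
<?php
// Hypothetical sketch: list all tables and drop them, without ever
// touching the down steps of a migration.
function dropAllTables(PDO $pdo): void
{
    $tables = $pdo->query("SELECT name FROM sqlite_master WHERE type = 'table'")
        ->fetchAll(PDO::FETCH_COLUMN);

    foreach ($tables as $table) {
        $pdo->exec("DROP TABLE \"{$table}\"");
    }
}

$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (id INTEGER)');
$pdo->exec('CREATE TABLE posts (id INTEGER)');

dropAllTables($pdo);

$tablesLeft = $pdo->query("SELECT name FROM sqlite_master WHERE type = 'table'")
    ->fetchAll(PDO::FETCH_COLUMN);

var_dump(count($tablesLeft)); // int(0)
```

After dropping everything, the real package simply re-runs all migrations (and optionally the seeders) to rebuild the schema from scratch.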

Using Varnish on a Laravel Forge provisioned server

For a project we're working on at Spatie we're expecting high traffic. That's why we spent some time researching how to improve the request speed of a Laravel application and the amount of requests a single server can handle. There are many strategies and services you can use to speed up a site. In our specific project one of the things we settled on is Varnish. In this post I'd like to share how to set up Varnish on a Forge provisioned server.

High level overview

First, let's discuss what Varnish does. Varnish calls itself a caching HTTP reverse proxy. Basically that means that instead of a webserver listening for requests, Varnish will listen for them. When a request comes in the first time, it will pass it on to the webserver. The webserver will do its work as usual and send a response back to Varnish. Varnish will cache that response and send it back to the visitor of the site. The next time the same request comes in, Varnish will just serve up its cached response. The webserver (and PHP/Laravel) won't even be started up. This results in a dramatic increase in performance. You might wonder how Varnish decides what should be cached and for how long. The answer: http headers. By default Varnish will look at the Cache-Control header in the response of the webserver. Let's take a look at an example of such a header. This header will let Varnish know that this response may be cached for 60 seconds.
Cache-Control: public, max-age=60
Varnish can be configured in detail using a VCL. This stands for "Varnish Configuration Language". It can be used to manipulate the request. You can do things such as ignoring certain headers or removing cookies before they get passed to the webserver. The response coming back from the webserver can be manipulated as well. Think of things like manipulating or adding headers to the response. More advanced features like Grace Mode and Health Checks can be configured as well. The remainder of this blogpost aims to set up a basic Varnish installation on a Forge provisioned server. Before you continue reading, I highly recommend watching this presentation Mattias Geniar gave at this year's Laracon first. He explains some core concepts in more detail. Prefer reading a blog post over watching a video? Then head over to this blogpost on Mattias' blog.
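As a small taste of what VCL looks like, here's a hypothetical fragment (the url and rule are made up for illustration) that strips cookies from every request except those hitting an admin area, so more responses become cacheable:

```
sub vcl_recv {
    # Remove all cookies unless the request is for the admin area.
    if (req.url !~ "^/admin") {
        unset req.http.Cookie;
    }
}
```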

Installing Varnish

Now that you know what Varnish is and what it can do, let's get our hands dirty and install Varnish. If you want to follow along, provision a fresh server using Forge and install your project (or a clean Laravel installation) on it. I'm going to use the varnishtest.spatie.be domain for the remainder of this post. Replace it with your own url. Instead of using a clean Laravel installation I'm going to use a copy of our homegrown Laravel application template called Blender. By using Blender I'll hopefully run into many gotchas when integrating Varnish with a real world app. To install Varnish, just run this command on your server:
sudo apt-get install varnish
This will pull in Varnish 4.x or Varnish 5.x. Both versions are fine.

Updating the default VCL

The VCL file - Varnish' configuration file - is located at /etc/varnish/default.vcl. Out of the box it contains some empty placeholders functions. To make Varnish behave in a sane way we should write out how we want Varnish to behave. But, once more, Mattias Geniar has got our backs. He open sourced his VCL configuration. He went the extra mile and put very helpful comments throughout the template. Go ahead and replace the contents of /etc/varnish/default.vcl with the content of Mattias' VCL. Restart Varnish to make sure the updated VCL is being used:
sudo service varnish restart

Opening a port for Varnish

By default Varnish will listen for requests on port 6081. So the page that is seen when surfing to http://varnishtest.spatie.be (replace that url with the url of your project) is still being served up by nginx / PHP. Pointing the browser to http://varnishtest.spatie.be:6081 will result in a connection error. That error is caused by the firewall, which by default blocks all requests to that port. In the Forge admin, underneath the "Network" tab, you can add firewall rules. Go ahead and open up port 6081 by adding a line to that section. Now surfing to http://varnishtest.spatie.be:6081 should display the same content as http://varnishtest.spatie.be:80. If, in your project, that page doesn't return a 200 response code, but a 302, you'll see an Error 503 Backend fetch failed error. In my multilingual projects "/" often gets redirected to "/nl", so I've encountered that error too. Let's fix that. Turns out Mattias' VCL template uses a probe to determine if the back-end (aka the webserver) is healthy. Just comment out those lines by adding a # in front of them. A visit to http://varnishtest.spatie.be:6081 should now render the same content as on port 80.

Checking the headers

Even though the content of the pages on ports 80 and 6081 is the same, the response headers are not. Here are the headers of my test application: Notice those X-Cache and X-Cache-Hits headers? They are being set because we told Varnish to do so in our VCL. X-Cache is set to MISS. And it stays at that value when performing some more requests. So unfortunately Varnish isn't caching anything. Why could that be? If you take another look at the headers you'll notice that a cookie called laravel_session is being set. By default Laravel opens up a new session for every unique visitor. It uses that cookie to match up the visitor with the session on the server. In most cases this is perfectly fine behaviour. Varnish however will, by default, not cache any pages where cookies are being set. A cookie being set tells Varnish that the response it received is for a specific visitor. That response should not be shown to another visitor. Sure, we could try to configure Laravel in such a way that it doesn't set cookies, but that becomes cumbersome very quickly. To make working with Varnish in Laravel as easy as possible I've created a package called laravel-varnish. Simply put, the package provides a middleware that sets an X-Cacheable header on the response of every route the middleware is applied to. In the Varnish configuration we're going to listen for that specific header. If a response with X-Cacheable is returned from the webserver, Varnish will ignore and remove any cookies that are being set. Keep in mind that pages where you actually plan on using the session - to, for example, display a flash message or use authentication - cannot easily be cached with Varnish. There is a feature called Edge Side Includes with which you can cache parts of a page, but that's out of scope for this blog post.
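The idea can be sketched in a few lines of plain PHP. This is a hypothetical illustration, not the package's real middleware: in the actual setup the package only adds the marker header, and it's Varnish (via the VCL) that strips the cookie headers. Here both steps are folded into one function so the effect is visible.

```php
<?php
// Hypothetical sketch: mark a response's headers as cacheable and strip
// the session cookie so a Varnish-like cache would store it.
function makeCacheable(array $headers, int $cacheTimeInMinutes = 5): array
{
    unset($headers['Set-Cookie']);

    $headers['X-Cacheable'] = '1';
    $headers['Cache-Control'] = 'public, max-age=' . $cacheTimeInMinutes * 60;

    return $headers;
}

$headers = makeCacheable(['Set-Cookie' => 'laravel_session=abc123']);

print_r($headers);
// => ['X-Cacheable' => '1', 'Cache-Control' => 'public, max-age=300']
```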

Making Varnish play nice with Laravel

As mentioned in the previous paragraph, I've made a Laravel package that makes working with Varnish very easy. To install it you can simply pull the package in via Composer:
composer require spatie/laravel-varnish
After composer is finished you should install the service provider:
// config/app.php

'providers' => [
    // ...
    Spatie\Varnish\VarnishServiceProvider::class,
],
Next you must publish the config-file with:
php artisan vendor:publish --provider="Spatie\Varnish\VarnishServiceProvider" --tag="config"
In the published laravel-varnish.php config file you should set the host key to the right value. Next the Spatie\Varnish\Middleware\CacheWithVarnish middleware should be added to the $routeMiddleware array:
// app/Http/Kernel.php

protected $routeMiddleware = [
   // ...
   'cacheable' => Spatie\Varnish\Middleware\CacheWithVarnish::class,
];
Finally, you should add these lines to the vcl_backend_response function in your VCL, located at /etc/varnish/default.vcl:
if (beresp.http.X-Cacheable ~ "1") {
    unset beresp.http.set-cookie;
}
Restart Varnish again to make sure the added lines will be used:
sudo service varnish restart
Now that the package is installed we can apply that cacheable middleware to some routes.
Route::group(['middleware' => 'cacheable'], function () {
   // all the routes that should be cached
});
I've gone ahead and cached some routes in my application. Let's test it out by making a request to the homepage of the site and inspecting the headers and response time. You can see the X-Cacheable header that was added by our package. The presence of this header tells Varnish that it's ok to remove all headers regarding the setting of cookies, to make this response cacheable. Varnish tells us through the X-Cache header that the response did not come from its cache. So the response was built up by PHP/Laravel. In the output you can also see that the total response time was a little under 200 ms. Let's fire off the same request again. Now the value of the X-Cache header is HIT. Varnish got this response from its cache. Nginx / Laravel and PHP were not involved in answering this request. The total response time is now 75 ms, an impressive improvement. To flush cached content you'd normally need to run a so-called ban command via the varnishadm tool. But with the package installed you can just use this artisan command on your server:
php artisan varnish:flush
You could use this command in your deployment script to flush the Varnish cache on each deploy. Running curl -I -w "Total time: %{time_total} ms\n" http://varnishtest.spatie.be:6081/nl again will result in a cache miss.

Measuring the performance difference

The one small test from the previous paragraph isn't really enough proof that Varnish is speeding up requests. Maybe we were just a bit lucky. Let's run a more thorough test. Blitz.io is an online service that can be used to perform load tests. Let's run it against our application. We'll run this test: It will run a bunch of get requests originating from Ireland. In a time span of 30 seconds it will ramp up the load from 1 to 1 000 concurrent users. This will result in about 26 000 requests in 30 seconds. The server my application is installed on is the smallest Digital Ocean droplet. It has one CPU and 512 MB of RAM. Let's first try running the test on port 80 where nginx is still listening for requests. O my... The test runs fine for 5 seconds. After that the response time quickly rises. After 10 seconds, when 200 concurrent requests per second are hitting the server, response times grow to over 2 seconds. After 15 seconds the first errors start to appear due to the fact that the webserver is swamped in work. It starts to send out 502 Bad Gateway errors. From there on the situation only gets worse. Nearly all requests result in errors or timeouts. Let's now try the same test but on port 6081 where Varnish is running. What. A. Rush. Besides the little hiccup at the end, the response time of all requests was around 80 ms. And even the slowest response was at a very acceptable 120 ms. There were no errors: all 26 000 requests got a response. Quite amazing. Let's try to find Varnish' breaking point and up the concurrent requests from 1 000 to 10 000. 15 seconds in, with 2 500 requests hitting our poor little server each second, the response time rises to over a second. But only after nearly 30 seconds, with a whopping 6 000 requests a second going on, does Varnish start returning errors.

Installing Varnish: final steps

As it stands, Varnish is still running on port 6081 and nginx on port 80. So all the visitors of our site still get served uncached pages. To start using Varnish for real, the easiest thing you can do is just swap the ports: make Varnish run on port 80 and nginx on port 6081. Here are the steps involved: 1) edit /etc/nginx/sites-enabled/<your domain>. Change
listen 80;
listen [::]:80;
to
listen 6081;
listen [::]:6081;
2) edit /etc/nginx/forge-conf/<your domain>/before/redirect.conf. Change
listen 80;
listen [::]:80;
to
listen 6081;
listen [::]:6081;
3) edit /etc/nginx/sites-available/catch-all. Change
listen 80;
to
listen 6081;
4) edit /etc/default/varnish. Change
DAEMON_OPTS="-a :6081 \
to
DAEMON_OPTS="-a :80 \
5) edit /etc/varnish/default.vcl. Change
  .port = "80";
to
  .port = "6081";
6) edit /lib/systemd/system/varnish.service. Change
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
to
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
I'm pretty sure there must be a more elegant way, but let's just fire off the bazooka and restart the server. When it comes back up you'll notice that the headers of responses on port 80 contain X-Cache headers (set by Varnish) and those on port 6081 (where nginx is running) do not. If you wish, you may remove the line that opens up port 6081 in the Firewall rules screen on Forge. And with that your Varnish installation should be up and running. Congratulations!

In closing

Varnish is an incredibly powerful tool that can speed up your application immensely. In this post I've barely scratched the surface of what you can do with it. If you want to learn more, check out the official documentation or the aforementioned blogpost by Mattias Geniar. Out of the box Varnish can't handle https connections, so you'll need to do some extra configuring to make that work. For our particular project we settled on Varnish, but there are plenty of alternatives to scale your app. Also keep in mind that you should only use Varnish if you expect high traffic. For smallish sites you probably shouldn't bother installing and configuring Varnish. If you do decide to use Varnish, be sure to take a look at our laravel-varnish package. If you like it, check out our other Laravel and PHP packages.

A developer friendly wrapper around Fractal

Fractal is an amazing package to transform data before using it in an API. Unfortunately working with Fractal can be a bit verbose. That's why we created a wrapper around it called Fractalistic, which makes working with Fractal a bit more developer friendly. It's framework agnostic so you can use it in any PHP project. Using the vanilla Fractal package, data can be transformed like this:
use League\Fractal\Manager;
use League\Fractal\Resource\Collection;

$books = [
   ['id' => 1, 'title' => 'Hogfather', 'characters' => [...]],
   ['id' => 2, 'title' => 'Game Of Kill Everyone', 'characters' => [...]]
];

$manager = new Manager();

$resource = new Collection($books, new BookTransformer());

$manager->createData($resource)->toArray();
Our Fractalistic wrapper package makes that process a tad easier:
Fractal::create()
   ->collection($books)
   ->transformWith(new BookTransformer())
   ->toArray();
There's also a very short syntax available to quickly transform data:
Fractal::create($books, new BookTransformer())->toArray();
If you want to use this package inside Laravel, it's recommended to use laravel-fractal instead. That package contains a few more bells and whistles specifically targeted at Laravel users. To learn about all the options Fractalistic has to offer, head over to the readme on GitHub. If you like it, take a look at our previous open source work as well. There's a list of framework agnostic packages we made on our company site.

Improving the performance of our PHP based crawler

Today a new major version of our homegrown crawler was released. The crawler is used to power our http-status-check, laravel-sitemap and laravel-link-checker packages. A major new feature is the greatly improved crawling speed. This was accomplished by leveraging multiple concurrent requests. Let's take a look at the performance improvements gained by using concurrent requests. In the video below, the crawler is started two times. On the left we have v1 of the crawler that just does one request and waits for the response before launching another request. On the right we have v2 that uses 10 concurrent requests. Both crawls will go over our entire company site https://spatie.be Even though I gave v1 a little head start, it really got blown away by v2. Where v1 is constantly waiting on a response from the server, v2 will just launch another request while it's waiting for responses to previous requests. To make requests, the crawler package uses Guzzle, the well known http client. A cool feature of Guzzle is that it provides support for concurrent requests out of the box. If you want to know more about that subject, read this excellent blogpost by Hannes Van de Vreken. Here's the relevant code in our package. Together with the release of crawler v2, these packages got new major versions that make use of crawler v2: - http-status-check: this command line tool can scan your entire site and report back the http status codes for each page. We use this tool whenever we launch a new site at Spatie to check if there are broken links. - laravel-sitemap: this Laravel package can generate a sitemap by crawling your entire site. - laravel-link-checker: this one can automatically notify you whenever a broken link is found on your site. Integrating the crawler into your own project or package is easy. You can set a CrawlProfile to determine which urls should be crawled. A CrawlReporter can be used to determine what should be done with the found urls. Want to know more?
Then head over to the crawler repo on GitHub. If you like the crawler, be sure to also take a look at the many other framework agnostic and Laravel specific packages our team has created.
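The heart of a crawl profile is a simple predicate over urls. Here's a hypothetical sketch of that idea (the function name is made up; the crawler package exposes this through a CrawlProfile class whose exact interface may differ): only urls on the site's own host should be crawled.

```php
<?php
// Hypothetical CrawlProfile-style check: crawl a url only when it lives
// on the site's own host, so the crawler never wanders off-site.
function shouldCrawl(string $url, string $ownHost): bool
{
    return parse_url($url, PHP_URL_HOST) === $ownHost;
}

var_dump(shouldCrawl('https://spatie.be/opensource', 'spatie.be')); // bool(true)
var_dump(shouldCrawl('https://example.com/page', 'spatie.be'));     // bool(false)
```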

An uptime and ssl certificate monitor written in PHP

Today we released our newest package: spatie/laravel-uptime-monitor. It's a powerful, easy to configure uptime monitor. It's written in PHP and distributed as a Laravel package. It will notify you when your site is down (and when it comes back up). You can also be notified a few days before an SSL certificate on one of your sites expires. Under the hood, the package leverages Laravel 5.3's notifications, so it's easy to use Slack, Telegram or one of the many available notification drivers. To make sure you can easily work with the package, we've written extensive documentation. Topics range from the basic installation setup to some more advanced settings. In this post I'd like to show you how to start using the package and discuss some of the problems we faced (and solved) while creating the code.

The basics

After you've set up the package you can use the monitor:create command to monitor a url. Here's how to add a monitor for https://example.com:
php artisan monitor:create https://example.com
You will be asked if the uptime check should look for a specific string in the response. This is handy if you know a few words that appear on the url you want to monitor. This is optional, but if you do choose to specify a string and the string does not appear in the response when checking the url, the package will consider that uptime check failed. Instead of using the monitor:create command you may also manually create a row in the monitors table. The documentation contains a description of all the fields in that table. Now, if all goes well, the package will check the uptime of https://example.com every five minutes. That number can be changed in the configuration. If the site becomes unreachable for any reason, the package will send you a notification of that event. Here's how that looks in Slack: (screenshot: monitor-failed) When an uptime check fails, the package goes into crunch mode and starts checking that site every single minute. As soon as connectivity to the site is restored you'll be notified. (screenshot: monitor-recovered) Out of the box the uptime check will be performed in a sane way. You can tweak some settings of the uptime check to your liking. As mentioned above, the package will also check the validity of the ssl certificate. By default this check is performed daily. If a certificate is expiring soon or is invalid, you'll get notified. Here's how such a notification looks: (screenshot: ssl-expiring-soon) You can view all configured monitors with the monitor:list command. You'll see some output not unlike this: (screenshot: monitor-list)

Making the uptime check fast

Like many other agencies, our company has to check the uptime of a lot of sites. That's why this process happens as fast as possible. Under the hood the uptime check uses Guzzle promises. So instead of making a request to a site and just waiting until there is a response, the package will perform multiple http requests at the same time. If you want to know more about Guzzle promises, read this blogpost by Hannes Van de Vreken. Take a look at the MonitorCollection class in the package to learn how promises are being leveraged in this package.

Testing the code

To make sure the package works correctly we've built an extensive test suite. It was a challenge to keep the test suite fast. Reaching out to real sites to test reachability is much too slow. A typical response from a site takes 200 ms (and that's a very optimistic number); multiply that by the ±50 tests and you'll get a suite that takes over 10 seconds to run. That's much too slow. Another problem we needed to solve is the timing of the tests. By default the package will check a reachable site only once every 5 minutes. Of course our test suite can't wait five minutes. To solve the problem of slow responses we built a little node server that mimics a site and included it in the test suite. The Express framework makes it really easy to do so. Here's the entire source code of the server:
"use strict";

let app = require('express')();

let bodyParser = require('body-parser');
app.use(bodyParser.json()); // support json encoded bodies
app.use(bodyParser.urlencoded({ extended: true })); // support encoded bodies

let serverResponse = {};

app.get('/', function (request, response) {
    const statusCode = serverResponse.statusCode;

    response.writeHead(statusCode || 200, { 'Content-Type': 'text/html' });
    response.end(serverResponse.body || "This is the testserver");
});

app.post('/setServerResponse', function (request, response) {
    serverResponse.statusCode = request.body.statusCode;
    serverResponse.body = request.body.body;

    response.send("Response set");
});

let server = app.listen(8080, function () {
    const host = 'localhost';
    const port = server.address().port;

    console.log('Testing server listening at http://%s:%s', host, port);
});
By default visiting http://localhost:8080/ returns an http response with content This is the testserver and status code 200. To change that response, a post request can be submitted to setServerResponse with the text and status code that a visit to / should return. Unlike a request to a site on the internet, this server is blazing fast because it runs locally. This is the PHP class in the test suite that communicates with the node server:
namespace Spatie\UptimeMonitor\Test;

use GuzzleHttp\Client;

class Server
{
    /** @var \GuzzleHttp\Client */
    protected $client;

    public function __construct()
    {
        $this->client = new Client();
    }

    public function setResponseBody(string $text, int $statusCode = 200)
    {
        $this->client->post('http://localhost:8080/setServerResponse', [
            'form_params' => [
                'statusCode' => $statusCode,
                'body' => $text,
            ],
        ]);
    }

    public function up()
    {
        $this->setResponseBody('Site is up', 200);
    }

    public function down()
    {
        $this->setResponseBody('Site is down', 503);
    }
}
The second problem - testing time based functionality - can be solved by just controlling time. Yeah, you read that right. The whole package makes use of Carbon instances to work with time, and Carbon has a method to just set the current time.
use Carbon\Carbon;

Carbon::setTestNow(Carbon::create(2016, 1, 1, 00, 00, 00));

// will return a Carbon instance with a datetime value 
// of 1st January 2016 no matter what the real
// current date or time is
To make time progress a couple of minutes we made this helper function:
public function progressMinutes(int $minutes)
{
    $newNow = Carbon::now()->addMinutes($minutes);

    Carbon::setTestNow($newNow);
}

Now let's take a look at a real test that uses both the test server and the time control. Our uptime check will only fire an UptimeCheckRecovered event after an UptimeCheckFailed event was sent first. The UptimeCheckRecovered event contains a datetime indicating when the UptimeCheckFailed event fired for the first time. Here's the test to make sure the UptimeCheckRecovered event gets fired at the right time and contains the right info:
/** @test */
public function the_recovered_event_will_be_fired_when_an_uptime_check_succeeds_after_it_has_failed()
{
    /**
     * Get all monitors whose uptime we should check.
     * In this test there is only one monitor configured.
     */
    $monitors = MonitorRepository::getForUptimeCheck();

    /** Bring the testserver down. */
    $this->server->down();

    /**
     * To avoid false positives the package will only raise an `UptimeCheckFailed`
     * event after 3 consecutive failures.
     */
    foreach (range(1, 3) as $index) {
        /** Perform the uptime check. */
        $monitors->checkUptime();
    }

    /** The `UptimeCheckFailed` event should have fired by now. */
    Event::assertFired(UptimeCheckFailed::class);

    /** Let's simulate a downtime of 10 minutes. */
    $downTimeLengthInMinutes = 10;
    $this->progressMinutes($downTimeLengthInMinutes);

    /** Bring the testserver up. */
    $this->server->up();

    /**
     * To be 100% certain let's assert that the `UptimeCheckRecovered` event
     * hasn't been fired yet.
     */
    Event::assertNotFired(UptimeCheckRecovered::class);

    /** Let's go ahead and check the uptime again. */
    $monitors->checkUptime();

    /**
     * And now the `UptimeCheckRecovered` event should have fired.
     * We'll also assert that `downtimePeriod` is correct. It should have a
     * startDateTime of 10 minutes ago.
     */
    Event::assertFired(UptimeCheckRecovered::class, function ($event) use ($downTimeLengthInMinutes) {
        if ($event->downtimePeriod->startDateTime->toDayDateTimeString() !== Carbon::now()->subMinutes($downTimeLengthInMinutes)->toDayDateTimeString()) {
            return false;
        }

        if ($event->monitor->id !== $this->monitor->id) {
            return false;
        }

        return true;
    });
}
Sure, there's a lot going on, but it all should be very readable.

Y U provide no GUI?

Because everybody's needs are a bit different, the package does not come with any views. If you need some screens to manage and view your configured monitors you should build them in your own project. There's only one table to manage - monitors - so that should not be overly difficult. We use semver, so we guarantee we'll make no breaking changes within a major version; the screens you build around the package should just keep working after upgrading it. If you're in a generous mood you could make your fellow developers happy by making a package out of your screens.

A word on recording uptime history

Some notifications, for example UptimeCheckRecovered, have a little bit of knowledge about how long a period of downtime was. Take a look at the notification:

[image: monitor-recovered]

But other than that the package records no history. If you want to, for example, calculate an uptime percentage or draw a graph of the uptime, you can leverage the various events that are fired. The documentation specifies which events get sent by the uptime check and the certificate check. A possible strategy would be to write all those events to a big table, a kind of event log, and use that event log to generate your desired reports. This strategy has a name: event sourcing. If you're interested in knowing more about event sourcing, watch this talk by Mitchell van Wijngaarden given at this year's Laracon.
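As a sketch of that strategy, here's a single listener that appends every package event it receives to an event log table. The table name, the listener class and the assumption that each event exposes its monitor are mine for illustration; none of this ships with the package:

```php
namespace App\Listeners;

use Carbon\Carbon;
use Illuminate\Support\Facades\DB;

class LogMonitorEvent
{
    /*
     * Append any event fired by the package to an event log table.
     * Register this listener for every uptime and certificate event
     * you want to report on later.
     */
    public function handle($event)
    {
        DB::table('event_log')->insert([
            'event_type' => get_class($event),
            'monitor_url' => (string) $event->monitor->url,
            'created_at' => Carbon::now(),
        ]);
    }
}
```

An uptime percentage or a downtime graph then becomes a query over the `event_log` table rather than something the package has to track itself.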

Closing notes

Sure, there are already a lot of excellent services out there to check uptime, both free and paid. We created this package to be in full control of how the uptime and ssl checks work and how notifications get sent. You can also choose from which location the checks should be performed: just install the package onto a server in that location. Although it certainly took some time to get it right, the core functionality was fairly easy to implement. Guzzle already had much of the functionality we needed to perform uptime checks quickly, and Laravel itself makes it a breeze to schedule uptime checks and comes with an excellent notification system out of the box. We put a lot of effort into polishing the code and making sure everything just works. At Spatie, we are already using our own monitor to check the uptime of all our sites. If you choose to use our package, we very much hope that you'll like it. Check out the extensive documentation we've written. If you have any remarks or questions, just open up an issue on the repo on GitHub. And don't forget to send us a postcard! :-) We've made a lot of other packages in the past; check out this list on our company site. Maybe we've made something that could be of use in your projects.

An opinionated tagging package for Laravel apps

There are a lot of quality tagging packages out there. Most of them offer the same things: creating tags, associating them with models and some functions to easily retrieve models with certain tags. But in our projects at Spatie we need more functionality. Last week we released our own - very opinionated - tagging package aptly called laravel-tags. It comes with batteries included: out of the box it has support for translating tags, multiple tag types and sorting capabilities. After the package is installed (be sure to check the requirements first) the only thing you have to do is add the HasTags trait to an Eloquent model to make it taggable.
use Illuminate\Database\Eloquent\Model;
use Spatie\Tags\HasTags;

class NewsItem extends Model
{
    use HasTags;
}
Here are some code examples of what that trait allows you to do:
//create a model with some tags
$newsItem = NewsItem::create([
   'name' => 'testModel',
   'tags' => ['tag', 'tag2'], //tags will be created if they don't exist
]);

//attaching tags
$newsItem->attachTags(['tag4', 'tag5']);

//detaching tags
$newsItem->detachTags(['tag4', 'tag5']);

//syncing tags
$newsItem->syncTags(['tag1', 'tag2']); // all other tags on this model will be detached

//retrieving models that have any of the given tags
NewsItem::withAnyTags(['tag1', 'tag2']);

//retrieve models that have all of the given tags
NewsItem::withAllTags(['tag1', 'tag2']);
Under the hood those tags will be converted to Spatie\Tags\Tag models.

Adding translations

If you're creating a multilingual app it's really easy to translate the tags. Here's a quick example.
$tag = Tag::findOrCreate('my tag'); //store in the current locale of your app

//let's add some translation for other languages
$tag->setTranslation('name', 'fr', 'mon tag');
$tag->setTranslation('name', 'nl', 'mijn tag');

//don't forget to save the model

$tag->getTranslation('name', 'fr'); // returns 'mon tag'

$tag->name; // returns the name of the tag in the current locale of your app
The translations of the tags are stored in the name column of the tags table, which is a json column. To find a tag with a specific translation you can just use Laravel's query builder, which has support for json columns:
Tag::where('name->fr', 'mon tag')->first();
Behind the scenes spatie/laravel-translatable is used. You can use any method provided by that package.
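To make the storage format concrete, here's roughly what ends up in that json name column (plain PHP mimicking the column's contents; the translations are the ones from the example above, with 'en' assumed as the current locale):

```php
// The `name` column holds a single json object
// with one value per locale.
$storedName = json_encode([
    'en' => 'my tag',
    'fr' => 'mon tag',
    'nl' => 'mijn tag',
]);

// Reading a translation boils down to decoding
// the column and picking a locale.
$translations = json_decode($storedName, true);

echo $translations['fr']; // mon tag
```

This is also why the `name->fr` query above works: MySQL's json path syntax reaches straight into that object.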

Using tag types

In your application you might want to have multiple collections of tags. For example: you might want one group of tags for your News model and another group of tags for your BlogPost model. To create separate collections of tags you can use tag types.
//creating a tag with a certain type
$tagWithType = Tag::create('headline', 'newsTag');
In addition to strings, all methods mentioned previously in this post can take instances of Spatie\Tags\Tag as well. For example, the withAnyTags and withAllTags scopes:
$tag = Tag::create('gossip', 'newsTag');
$tag2 = Tag::create('headline', 'newsTag');

NewsItem::withAnyTags([$tag, $tag2])->get();
To get all tags with a specific type use the getWithType method.
$tagA = Tag::findOrCreate('tagA', 'firstType');
$tagB = Tag::findOrCreate('tagB', 'firstType');
$tagC = Tag::findOrCreate('tagC', 'secondType');
$tagD = Tag::findOrCreate('tagD', 'secondType');

Tag::getWithType('firstType'); // returns a collection with $tagA and $tagB

//there's also a scoped version
Tag::withType('firstType')->get(); // returns the same result

Sorting tags

Whenever a tag is created its order_column will be set to the highest value in that column + 1.
$tag = Tag::findOrCreate('tagA');
$tag->order_column; // returns 1
$tag = Tag::findOrCreate('tagB');
$tag->order_column; // returns 2
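The rule behind that order_column value can be sketched in plain PHP. This is a toy illustration of "highest value + 1", not the package's implementation:

```php
// Determine the order_column for a newly created tag:
// take the current highest value and add one,
// or start at 1 when there are no tags yet.
function nextOrderValue(array $existingOrderValues): int
{
    if ($existingOrderValues === []) {
        return 1;
    }

    return max($existingOrderValues) + 1;
}

echo nextOrderValue([]);     // 1
echo nextOrderValue([1, 2]); // 3
```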
Under the hood spatie/eloquent-sortable is used, so you can use any method provided by that package. Here are some examples:
//get all tags sorted on `order_column`
$orderedTags = Tag::ordered()->get();

//set a new order entirely, using an array of tag ids
Tag::setNewOrder([3, 1, 2]);

//move the tag to the first or last position
$tag->moveToStart();
$tag->moveToEnd();
Of course you can also manually change the value of the order_column.
$tag->order_column = 10;
$tag->save();

If you want to know more about the package check out the documentation. Want to read the source code (or ⭐ the project 😙)? Head over to the repo on GitHub. This isn't the first Laravel package we've made; take a look at the list of Laravel packages on our company website to see if there's anything there that can be used in your projects.

Managing opening hours with PHP

For several different clients we needed to display a schedule of opening hours on their websites. They also wanted to display whether a department / store / ... is open at the moment you visit the site. My colleague Seb extracted all the functionality around opening hours into the newly released opening-hours package. You create an OpeningHours object that describes a business' opening hours. It can be queried to find out whether the business is open or closed on days or specific dates, or used to present the times per day. A set of opening hours is created by passing in a regular schedule and a list of exceptions.
$openingHours = OpeningHours::create([
    'monday' => ['09:00-12:00', '13:00-18:00'],
    'tuesday' => ['09:00-12:00', '13:00-18:00'],
    'wednesday' => ['09:00-12:00'],
    'thursday' => ['09:00-12:00', '13:00-18:00'],
    'friday' => ['09:00-12:00', '13:00-20:00'],
    'saturday' => ['09:00-12:00', '13:00-16:00'],
    'sunday' => [],
    'exceptions' => [
        '2016-11-11' => ['09:00-12:00'],
        '2016-12-25' => [],
    ],
]);
The object can be queried for a day in the week, which will return a result based on the regular schedule:
// Open on Mondays:
$openingHours->isOpenOn('monday'); // true

// Closed on Sundays:
$openingHours->isOpenOn('sunday'); // false
It can also be queried for a specific date and time:
// Closed because it's after hours:
$openingHours->isOpenAt(new DateTime('2016-09-26 19:00:00')); // false

// Closed because Christmas was set as an exception
$openingHours->isOpenAt(new DateTime('2016-12-25')); // false
It can also return arrays of opening hours for a week or a day:
// OpeningHoursForDay object for the regular schedule
$openingHours->forDay('monday');

// OpeningHoursForDay[] for the regular schedule, keyed by day name
$openingHours->forWeek();

// OpeningHoursForDay object for a specific day
$openingHours->forDate(new DateTime('2016-12-25'));

// OpeningHoursForDay[] of all exceptions, keyed by date
$openingHours->exceptions();
Take a look at the package on GitHub to learn the full API. Check out the list of PHP packages we've previously made to see if we've made something else that could be of use to you.

A Laravel package to store language lines in the database

In a vanilla Laravel installation you can use language files to localize your app. The Laravel documentation refers to any string in a language file as a language line. You can fetch the value of any language line easily with Laravel's handy trans-function.
trans('messages.welcome'); // fetches the string in the welcome-key of the messages-lang-file.
In our projects a client sometimes wants to change the value of such a language line. But as we don't want to let clients edit php files, we had to come up with another way of storing localized strings. Our new laravel-translation-loader package enables language lines to be stored in the database, so you can build a GUI around that to let the client manipulate the values. When using our package you can still use all the features of the trans function you know and love. Once the package is installed you'll have a language_lines table where all localized values can be stored. There's a LanguageLine model included. Here's how you can create a new LanguageLine:
use Spatie\TranslationLoader\LanguageLine;

LanguageLine::create([
   'group' => 'validation',
   'key' => 'required',
   'text' => ['en' => 'This is a required field', 'nl' => 'Dit is een verplicht veld'],
]);
The cool thing is that you can still keep using the default language files as well. If a requested translation is present in both the database and the language files, the database version will be returned by the trans function. While creating the package we also kept an eye on performance. When requesting a language line from a group we will retrieve all language lines of that group at once. In this way we avoid a separate query for each language line. We'll also cache the whole group to avoid querying the database for language lines in subsequent requests. Whenever a language line in a group is created, updated or deleted, we'll invalidate the cache of that group. The package is also easy to extend. Imagine you want to store your translations not in php files or the database, but in a yaml file or a csv file or... you can create your own translation loader. A translation loader can be any class that implements the Spatie\TranslationLoader\TranslationLoaders\TranslationLoader-interface. It contains only one method:
namespace Spatie\TranslationLoader\TranslationLoaders;

interface TranslationLoader
{
    /*
     * Returns all translations for the given locale and group.
     */
    public function loadTranslations(string $locale, string $group): array;
}
Translation loaders can be registered in the translation_loaders key of the config file. Take a look at the readme of the package on GitHub to learn all the options. Be sure to also check out the list of Laravel packages our team has previously made.
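As a sketch of such a custom loader, here's what a csv-backed one could look like. The class name and file format are made up for illustration, and in a real app the class would declare that it implements the package's TranslationLoader interface:

```php
class CsvTranslationLoader
{
    /** @var string */
    protected $path;

    public function __construct(string $path)
    {
        $this->path = $path;
    }

    /*
     * Returns all translations for the given locale and group,
     * reading csv lines of the form: locale,group,key,text
     */
    public function loadTranslations(string $locale, string $group): array
    {
        $translations = [];

        $lines = file($this->path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

        foreach (array_map('str_getcsv', $lines) as $row) {
            [$rowLocale, $rowGroup, $key, $text] = $row;

            if ($rowLocale === $locale && $rowGroup === $group) {
                $translations[$key] = $text;
            }
        }

        return $translations;
    }
}
```

Registering a class like this in the config would let the trans function pull its language lines from the csv file just like it does from the database.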

A little library to deal with color conversions

My colleague Seb needed to convert some color values in PHP. He looked around for some good packages, but there weren't any that fit the bill. So guess what he did? He just created a new package called spatie/color. Here's what it can do:
$rgb = Rgb::fromString('rgb(55,155,255)');

echo $rgb->red(); // 55
echo $rgb->green(); // 155
echo $rgb->blue(); // 255

echo $rgb; // rgb(55,155,255)

$rgba = $rgb->toRgba(); // `Spatie\Color\Rgba`
$rgba->alpha(); // 1
echo $rgba; // rgba(55,155,255,1)

$hex = $rgb->toHex(); // `Spatie\Color\Hex`
echo $hex; // #379bff
Currently the package only supports rgb, rgba and hex formats. If you need another format, submit a PR.
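The rgb to hex conversion shown above is just zero-padded hexadecimal formatting per channel. Here's a plain PHP illustration of that math (my own helper, not the package's internals):

```php
// Format each channel as a two-digit lowercase hex value:
// 55 => 37, 155 => 9b, 255 => ff.
function rgbToHexString(int $red, int $green, int $blue): string
{
    return sprintf('#%02x%02x%02x', $red, $green, $blue);
}

echo rgbToHexString(55, 155, 255); // #379bff
```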