Laravel/MySQL JSON documents faster lookup using generated columns

In an older post on his blog, Mohamed Said demonstrates how you can leverage virtual columns to speed up queries on data stored as JSON.
Laravel 5.3 is shipped with built-in support for updating and querying JSON type database fields, the support currently fully covers MySQL 5.7 JSON type fields updates and lookups, ... Let's see how we may create a generated column to store users favorite color for later indexing.
https://themsaid.com/laravel-mysql-json-colum-fast-lookup-20160709
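To give you an idea of the technique: in a Laravel 5.3+ migration you can add a virtual generated column that extracts a value from a JSON column so MySQL can index it. A minimal sketch, assuming a hypothetical users table with a colors JSON column:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::table('users', function (Blueprint $table) {
    // A virtual column computed from the JSON document...
    $table->string('favorite_color')->virtualAs('json_unquote(json_extract(colors, "$.favorite"))');

    // ...which can then be indexed for fast lookups.
    $table->index('favorite_color');
});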

Easily optimize images using PHP (and some binaries)

Our recently released image-optimizer package can shave some kilobytes off PNGs, JPGs, SVGs and GIFs by running them through a chain of various image optimization tools. In this blog post I'll tell you all about it. First, here's a quick example of how you can use it:
use Spatie\ImageOptimizer\OptimizerChainFactory;

$optimizerChain = OptimizerChainFactory::create();

$optimizerChain->optimize($pathToImage);
The image at $pathToImage will be overwritten by an optimized version which should be smaller. Here are a few images that were optimized by this package.

Why we built this

On nearly every website we make, images are displayed. In fact, in most cases images make up the bulk of the size of the entire page. So to provide a fast page load it's best to make those images as small as they can be: the fewer bytes the browser needs to download, the faster the page will be.

Now, to make images as small as they can be (without sacrificing a lot of detail) there are a lot of tools available. These tools, such as jpegtran, pngquant, and gifsicle, work more or less by applying a little bit of compression, removing metadata and reducing the number of colors. In most cases these tools can make your images considerably smaller without you noticing, unless you have a trained eye for that. These tools are free to use and can be easily installed on any system. So far so good.

What makes them a bit bothersome to use from PHP code is that you need to create a process to run them. You must make sure that you pass the right kind of image to the right optimizer. You also have to decide which optimization parameters you're going to use for each tool. None of this is rocket science, but I bet that the vast majority of small to medium sites don't bother writing this code or researching these optimizations. Our package aims to do all of this out of the box: it can find out which optimizers it should run, it can execute the binaries, and by default it uses a set of optimizers with a sane configuration.
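To give you an idea of the plumbing involved, here's a rough sketch of shelling out to one of those binaries yourself with Symfony's Process component (using jpegoptim and the flags discussed further down this post):

use Symfony\Component\Process\Process;

// Run jpegoptim against a single image...
$process = new Process('jpegoptim --strip-all --all-progressive '.escapeshellarg($pathToImage));
$process->run();

// ...and deal with a possible failure yourself.
if (! $process->isSuccessful()) {
    // e.g. log $process->getErrorOutput()
}

The package takes care of all of this for you.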

How to optimize images

Optimizing an image is very easy using our package.
use Spatie\ImageOptimizer\OptimizerChainFactory;

$optimizerChain = OptimizerChainFactory::create();

$optimizerChain->optimize($pathToImage);
The image at $pathToImage will be overwritten by an optimized version which should be smaller. The package will use the optimizers that are present on your system; the readme of the package includes instructions on how to install these on Ubuntu and MacOS.

Which tools will do what?

As already mentioned, the package will automatically pick the right tool for the right image.

JPGs

JPGs will be made smaller by running them through JpegOptim. These options are used:
- --strip-all: this strips out all text information such as comments and EXIF data
- --all-progressive: this will make sure the resulting image is a progressive one, meaning it can be downloaded in multiple passes of progressively higher detail.

PNGs

PNGs will be made smaller by running them through two tools. The first one is Pngquant 2, a lossy PNG compressor. We set no extra options, so its defaults are used. After that we run the image through a second tool: Optipng. These options are used:
- -i0: this will result in a non-interlaced, progressive scanned image
- -o2: this sets the optimization level to two (multiple IDAT compression trials)

Customizing the optimization

If you want to customize the chain of optimizers you can do so by manually adding Optimizers to an OptimizerChain. Here's an example where we only want jpegoptim and pngquant to be used:
use Spatie\ImageOptimizer\OptimizerChain;
use Spatie\ImageOptimizer\Optimizers\Jpegoptim;
use Spatie\ImageOptimizer\Optimizers\Pngquant;

$optimizerChain = (new OptimizerChain)
   ->addOptimizer(new Jpegoptim([
       '--strip-all',
       '--all-progressive',
   ]))

   ->addOptimizer(new Pngquant([
       '--force',
   ]));
If you want to use a tool the package doesn't support out of the box, you can easily write your own optimizer. An optimizer is any class that implements the Spatie\ImageOptimizer\Optimizers\Optimizer interface. If you want to view an example implementation, take a look at the existing optimizers that ship with the package.
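As an illustration, here's a rough sketch of a custom optimizer. It extends the package's BaseOptimizer and wraps a hypothetical my-optimizer binary; treat the exact method names as assumptions and check the package source for the real interface:

use Spatie\ImageOptimizer\Image;
use Spatie\ImageOptimizer\Optimizers\BaseOptimizer;

class MyOptimizer extends BaseOptimizer
{
    // The binary this optimizer will execute (hypothetical).
    public $binaryName = 'my-optimizer';

    // Only handle PNGs; the chain will skip this optimizer for other image types.
    public function canHandle(Image $image): bool
    {
        return $image->mime() === 'image/png';
    }
}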

Integration in other packages

Our image-optimizer is not the first package we've written that revolves around handling images. There's also our image package, which makes modifying images very easy, and laravel-medialibrary, which can associate all kinds of files (including images) with Eloquent models. And lastly we have Browsershot, which can turn any webpage into an image. Let's take a look at how image-optimizer has been integrated into all of those.

Image

Using the image package you can manipulate an image like this:
Image::load('example.jpg')
    ->sepia()
    ->blur(50)
    ->save();
This creates a sepia version that is blurred. The package has many other available manipulations. Here's how you can create an optimized version.
Image::load('example.jpg')
    ->sepia()
    ->blur(50)
    ->optimize()
    ->save();
Yeah, just add the optimize method to the chain and you're done. Easy, right?

laravel-medialibrary

In laravel-medialibrary this is just as easy. Using that package you can define conversion profiles on your models. Whenever you associate a file with that model, a derived file will be generated using that conversion profile. This is handy for creating thumbnails and such. Here's a quick example of such a conversion profile.
use Illuminate\Database\Eloquent\Model;
use Spatie\MediaLibrary\HasMedia\Interfaces\HasMediaConversions;
use Spatie\MediaLibrary\HasMedia\HasMediaTrait;

class NewsItem extends Model implements HasMediaConversions
{
    use HasMediaTrait;

    public function registerMediaConversions()
    {
        $this->addMediaConversion('thumb')
              ->width(368)
              ->height(232)
              ->sharpen(10);
    }
}
Let's add an image to the medialibrary:
$media = NewsItem::first()->addMedia($pathToImage)->toMediaCollection();
Besides storing the original item, the medialibrary will also create a derived image.
$media->getPath(); // the path to where the original image is stored
$media->getPath('thumb'); // the path to the converted image with dimensions 368x232

$media->getUrl(); // the url to where the original image is stored
$media->getUrl('thumb'); // the url to the converted image with dimensions 368x232
That's a crash course in using medialibrary. But wait, we didn't create an optimized version of that thumbnail. Let's do that now. Under the hood all conversions are done by the aforementioned image package, so you can just use the optimize method in your conversion profile.
    public function registerMediaConversions()
    {
        $this->addMediaConversion('thumb')
              ->width(368)
              ->height(232)
              ->sharpen(10)
              ->optimize();
    }
Boom, done. In the next major version of medialibrary we'll automatically call optimize behind the scenes for all image conversions, so you'll get optimized conversions by default. We'll add a nonOptimized method if you want to opt out of that. We haven't introduced that behaviour in the current version because it's a breaking change.

Browsershot

Browsershot is a package that leverages headless Chrome to turn any webpage into an image. Here's how to use it:
Browsershot::url('https://example.com')->save($pathToImage);
And here's how to save an optimized version:
Browsershot::url('https://example.com')->optimize()->save($pathToImage);

In closing

I should mention that our image-optimizer package is based upon another one by Piotr Śliwa. All the basic principles of how the package should work are inspired by Piotr's work. The reason why we rewrote it is that his package was not that easy to extend and did not use modern stuff such as Symfony's Process component or PSR-3 compatible logging.

In this post I've mainly mentioned tools you can install locally, but there actually are a lot of SaaS solutions as well, such as TinyPNG, Kraken.io, imgix.com and many many others. In this first release of our image-optimizer package I've mainly concentrated on supporting local optimizers. With remote optimizers you have to deal with the slowness of the network, API keys and such. But I do recognize the value of those remote services, so you'll probably see some remote optimizers being referenced or included in the package in the future. Here's an issue on the repo where the first thoughts on that were being exchanged.

The package contains a few more features not covered by this blogpost, so check out image-optimizer on GitHub. I hope our tool can make your images smaller and your pages faster. If you haven't already done so, check out our previous work in the open source space. Please send us a postcard if any of our stuff makes it into your production environment.

Make your app fly with PHP OPcache

Recently this button to optimize PHP's OPcache was added to Laravel Forge. If you were wondering what PHP OPcache is all about and what pressing this button does with your application, read this article Olav van Schie wrote on the subject a while ago.
Every time you execute a PHP script, the script needs to be compiled to byte code. OPcache leverages a cache for this bytecode, so the next time the same script is requested, it doesn’t have to recompile it. This can save some precious execution time, and thus make your app faster (and maybe save some server costs).
https://medium.com/appstract/make-your-laravel-app-fly-with-php-opcache-9948db2a5f93

Classes, complexity, and functional programming

In this article Kent C. Dodds clearly explains the downsides of using classes in JavaScript.
Classes (and prototypes) have their place in JavaScript. But they’re an optimization. They don’t make your code simpler, they make it more complex. It’s better to narrow your focus on things that are not only simple to learn but simple to understand: functions and objects.
https://medium.com/@kentcdodds/classes-complexity-and-functional-programming-a8dd86903747

A tool for making JavaScript code run faster

Even though I don't like Facebook as a user, their amazing contributions to open source are something to be very grateful for. Last week they presented their new work in progress: Prepack.
Prepack is a tool that optimizes JavaScript source code: Computations that can be done at compile-time instead of run-time get eliminated. Prepack replaces the global code of a JavaScript bundle with equivalent code that is a simple sequence of assignments. This gets rid of most intermediate computations and object allocations.
https://prepack.io/
It's still in development, so best not to use it in production environments yet.

Moving from PHP (Laravel) to Go

Danny Van Kooten did an interesting experiment: he completely rewrote a Laravel app in Go. In a post on his blog he shares some details about the project along with some benchmarks.
Earlier this year, I made an arguably bad business decision. I decided to rewrite the Laravel application powering Boxzilla in Go. No regrets though. Just a few weeks later I was deploying the Go application. Building it was the most fun I had in months, I learned a ton and the end result is a huge improvement over the old application. Better performance, easier deployments and higher test coverage.
https://dannyvankooten.com/laravel-to-golang/

Tweaking Eloquent relations – how to get latest related model?

Jarek Tkaczyk demonstrates how you can use a helper relation to eager load specific models.
Have you ever needed to show only single related model from the hasMany relationship on a set of parents? Being it latest, highest or just random, it’s not very clever to load whole collection using eager loading, just like running query per every parent. Of course you can do that better, and now let me show you how.
https://softonsofa.com/tweaking-eloquent-relations-how-to-get-latest-related-model/
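For context, the naive form of such a helper relation is a hasOne with an ordering; here's a minimal sketch (Post being a hypothetical related model). Jarek's post shows how to go beyond this so that eager loading a whole set of parents stays cheap:

// On the parent model: fetch only the newest related Post.
public function latestPost()
{
    return $this->hasOne(Post::class)->latest();
}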

Symfony Routing performance considerations

On his blog Frank De Jonge explains how he solved a performance problem in one of his projects.
Last week I took a deep dive into Symfony's Routing Component. A project I worked on suffered from a huge performance penalty caused by a routing mistake. This lead me on the path to discovering some interesting performance considerations. Some common practices align nicely with Symfony's optimisations, let's look into those.
https://blog.frankdejonge.nl/symfony-routing-performance-considerations/

Getting started with Varnish Cache

If you want to learn Varnish, Thijs Feryn wrote a book for you. It's free to download until 7th March 2017.
Getting Started with Varnish Cache is a technical book about the Varnish caching technology. Varnish is a so-called reverse caching proxy that acts as an intermediary between the browser and the webserver. Varnish stores HTTP responses and serves them to the browser, without accessing the backend for every request. This causes a massive speed increase.
https://blog.feryn.eu/my-varnish-book-is-now-available/

Using Varnish on a Laravel Forge provisioned server

For a project we're working on at Spatie we're expecting high traffic. That's why we spent some time researching how to improve the request speed of a Laravel application and the number of requests a single server can handle. There are many strategies and services you can use to speed up a site. For our specific project one of the things we settled on is Varnish. In this post I'd like to share how to set up Varnish on a Forge provisioned server.

High level overview

First, let's discuss what Varnish does. Varnish calls itself a caching HTTP reverse proxy. Basically that means that instead of a webserver listening for requests, Varnish will listen for them. When a request comes in the first time, it will pass it on to the webserver. The webserver will do its work as usual and send a response back to Varnish. Varnish will cache that response and send it back to the visitor of the site. The next time the same request comes in, Varnish will just serve up its cached response. The webserver (and PHP/Laravel) won't even be started up. This results in a dramatic increase in performance.

You might wonder how Varnish decides what should be cached and for how long. The answer: http headers. By default Varnish will look at the Cache-Control header in the response of the webserver. Let's take a look at an example of such a header. This header will let Varnish know that this response should be cached for 60 seconds.
Cache-Control: public, max-age=60
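In a Laravel app you could add such a header yourself; a minimal sketch, with a hypothetical route and view:

// Tell Varnish (and browsers) this response may be cached for 60 seconds.
Route::get('/articles', function () {
    return response()
        ->view('articles.index')
        ->header('Cache-Control', 'public, max-age=60');
});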
Varnish can be configured in detail using a VCL, which stands for "Varnish Configuration Language". It can be used to manipulate the request: you can do things such as ignoring certain headers or removing cookies before they get passed to the webserver. The response coming back from the webserver can be manipulated as well, think of things like manipulating or adding headers to the response. More advanced features like Grace Mode and Health Checks can be configured too.

The remainder of this blogpost aims to set up a basic Varnish installation on a Forge provisioned server. Before you continue reading I highly recommend first watching this presentation Mattias Geniar gave at this year's Laracon. He explains some core concepts in more detail. Prefer reading a blog post over watching a video? Then head over to this blogpost on Mattias' blog.

Installing Varnish

Now that you know what Varnish is and what it can do, let's get our hands dirty and install Varnish. If you want to follow along, provision a fresh server using Forge and install your project (or a clean Laravel installation) on it. I'm going to use the varnishtest.spatie.be domain for the remainder of this post; replace this with your own url. Instead of using a clean Laravel installation I'm going to use a copy of our homegrown Laravel application template called Blender. By using Blender I'll hopefully run into many gotchas when integrating Varnish with a real world app. To install Varnish just run this command on your server:
sudo apt-get install varnish
This will pull in Varnish 4.x or Varnish 5.x. Both versions are fine.

Updating the default VCL

The VCL file - Varnish' configuration file - is located at /etc/varnish/default.vcl. Out of the box it contains some empty placeholder functions. To make Varnish behave in a sane way we should write out how we want Varnish to behave. But, once more, Mattias Geniar has got our backs: he open sourced his VCL configuration. He went the extra mile and put very helpful comments throughout the template. Go ahead and replace the contents of /etc/varnish/default.vcl with the content of Mattias' VCL. Restart Varnish to make sure the updated VCL is being used:
sudo service varnish restart

Opening a port for Varnish

By default Varnish will listen for requests on port 6081. So the page that is seen when surfing to http://varnishtest.spatie.be (replace that url by the url of your project) is still being served up by nginx / PHP. Pointing the browser to http://varnishtest.spatie.be:6081 will result in a connection error. That error is caused by the firewall that by default blocks all requests to that port. In the Forge admin, underneath the "Network" tab, you can add firewall rules. Go ahead and open up port 6081 by adding a line to that section. Now surfing to http://varnishtest.spatie.be:6081 should display the same content as http://varnishtest.spatie.be:80.

If, in your project, that page doesn't return a 200 response code but a 302, you'll see an Error 503 Backend fetch failed error. In my multilingual projects "/" often gets redirected to "/nl", so I've encountered that error too. Let's fix that. Turns out Mattias' VCL template uses a probe to determine if the back-end (aka the webserver) is healthy. Just comment out the probe lines by adding a # in front of them. A visit to http://varnishtest.spatie.be:6081 should now render the same content as on port 80.

Checking the headers

Even though the content of the pages on ports 80 and 6081 is the same, the response headers are not. Here are the headers of my test application: Notice those X-Cache and X-Cache-Hits headers? They are being set because we told Varnish to do so in our VCL. X-Cache is set to MISS, and it stays at that value when performing some more requests. So unfortunately Varnish isn't caching anything. Why could that be?

If you take another look at the headers you'll notice that a cookie called laravel_session is being set. By default Laravel opens up a new session for every unique visitor. It uses that cookie to match up the visitor with the session on the server. In most cases this is perfectly fine behaviour. Varnish however will, by default, not cache any pages where cookies are being set. A cookie being set tells Varnish that the response it received is for a specific visitor, and that response should not be shown to another visitor.

Sure, we could try to configure Laravel in such a way that it doesn't set cookies, but that becomes cumbersome very quickly. To make working with Varnish in Laravel as easy as possible I've created a package called laravel-varnish. Simply put, the package provides a middleware that sets an X-Cacheable header on the response of every route the middleware is applied upon. In the Varnish configuration we're going to listen for that specific header: if a response with X-Cacheable is returned from the webserver, Varnish will ignore and remove any cookies that are being set.

Keep in mind that pages where you actually plan on using the session - to for example display a flash message or use authentication - cannot easily be cached with Varnish. There is a feature called Edge Side Includes with which you can cache parts of the page, but that's out of scope for this blog post.

Making Varnish play nice with Laravel

As mentioned in the previous paragraph I've made a Laravel package that makes working with Varnish very easy. To install it you can simply pull the package in via Composer:
composer require spatie/laravel-varnish
After composer is finished you should register the service provider:
// config/app.php

'providers' => [
    ...
    'Spatie\Varnish\VarnishServiceProvider',
];
Next you must publish the config file with:
php artisan vendor:publish --provider="Spatie\Varnish\VarnishServiceProvider" --tag="config"
In the published laravel-varnish.php config file you should set the host key to the right value.
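A minimal sketch of that config file (treat the exact keys as assumptions and check the published file for the full set of options):

// config/laravel-varnish.php
return [
    // The host your application is listening on; the flush command
    // needs it to ban the right pages from Varnish's cache.
    'host' => 'varnishtest.spatie.be',
];

Next, the Spatie\Varnish\Middleware\CacheWithVarnish middleware should be added to the $routeMiddleware array: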
// app/Http/Kernel.php

protected $routeMiddleware = [
...
   'cacheable' => Spatie\Varnish\Middleware\CacheWithVarnish::class,
];
Finally, you should add these lines to the vcl_backend_response function in your VCL, located at /etc/varnish/default.vcl:
if (beresp.http.X-Cacheable ~ "1") {
    unset beresp.http.set-cookie;
}
Restart Varnish again to make sure the added lines will be used:
sudo service varnish restart
Now that the package is installed we can apply that cacheable middleware to some routes.
Route::group(['middleware' => 'cacheable'], function () {
   //all the routes that should be cached
});
I've gone ahead and cached some routes in my application. Let's test it out by making a request to the homepage of the site and inspecting the headers and response time. You can see the X-Cacheable header that was added by our package. The presence of this header tells Varnish that it's ok to remove all headers regarding the setting of cookies, to make this response cacheable. Varnish tells us through the X-Cache header that the response did not come from its cache, so the response was built up by PHP/Laravel. In the output you can also see that the total response time was a little under 200 ms.

Let's fire off the same request again. Now the value of the X-Cache header is HIT: Varnish got this response from its cache. Nginx / Laravel and PHP were not involved in answering this request. The total response time is now 75 ms, an impressive improvement.

To flush cached content you'd normally need to run a so-called ban command via the varnishadm tool. But with the package installed you can just use this artisan command on your server:
php artisan varnish:flush
You could use this command in your deployment script to flush the Varnish cache on each deploy. Running curl -I -w "Total time: %{time_total} ms\n" http://varnishtest.spatie.be:6081/nl again will result in a cache miss.

Measuring the performance difference

The one small single test from the previous paragraph isn't really enough proof that Varnish is speeding up requests. Maybe we were just a bit lucky. Let's run a more thorough test. Blitz.io is an online service that can be used to perform load tests. Let's run it against our application. We'll run this test: it will run a bunch of GET requests originating from Ireland. In a time span of 30 seconds it will ramp up the load from 1 to 1 000 concurrent users. This will result in about 26 000 requests in 30 seconds. The server my application is installed upon is the smallest Digital Ocean droplet: it has one CPU and 512 MB of RAM.

Let's first try running the test on port 80, where nginx is still listening for requests. Oh my... The test runs fine for 5 seconds. After that the response time quickly rises. After 10 seconds, when 200 concurrent requests per second are hitting the server, response times grow to over 2 seconds. After 15 seconds the first errors start to appear because the webserver is swamped with work: it starts to send out 502 Bad Gateway errors. From there on the situation only gets worse. Nearly all requests result in errors or timeouts.

Let's now try the same test but on port 6081, where Varnish is running. What. A. Rush. Besides the little hiccup at the end, the response time of all requests was around 80 ms. Even the slowest response was at a very acceptable 120 ms. There were no errors: all 26 000 requests got a response. Quite amazing.

Let's try to find Varnish' breaking point and up the concurrent requests from 1 000 to 10 000. 15 seconds in, with 2 500 requests per second hitting our poor little server, the response time rises to over a second. But only after nearly 30 seconds, with a whopping 6 000 requests a second going on, does Varnish start returning errors.

Installing Varnish: final steps

As it stands, Varnish is still running on port 6081 and nginx on port 80. So all the visitors of our site still get served uncached pages. To start using Varnish for real, the easiest thing you can do is just swap the ports: let's make Varnish run on port 80 and nginx on port 6081. Here are the steps involved: 1) edit /etc/nginx/sites-enabled/<your domain>. Change
listen 80;
listen [::]:80;
to
listen 6081;
listen [::]:6081;
2) edit /etc/nginx/forge-conf/<your domain>/before/redirect.conf. Change
listen 80;
listen [::]:80;
to
listen 6081;
listen [::]:6081;
3) edit /etc/nginx/sites-available/catch-all. Change
listen 80;
to
listen 6081;
4) edit /etc/default/varnish. Change
DAEMON_OPTS="-a :6081 \
to
DAEMON_OPTS="-a :80 \
5) edit /etc/varnish/default.vcl. Change
  .port = "80";
to
  .port = "6081";
6) edit /lib/systemd/system/varnish.service. Change
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
to
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
I'm pretty sure there must be a more elegant way, but let's just fire off the bazooka and restart the server. When it comes back up you'll notice that the headers of responses on port 80 contain X-Cache headers (set by Varnish) and those on port 6081 (where nginx is running) do not. If you wish to do so, you may remove the line that opens up port 6081 in the Firewall rules screen on Forge. And with that your Varnish installation should be up and running. Congratulations!

In closing

Varnish is an incredibly powerful tool that can speed up your application immensely. In this post I've barely scratched the surface of what you can do with it. If you want to learn more about it, check out the official documentation or the aforementioned blogpost by Mattias Geniar. Out of the box Varnish can't handle https connections, so you'll need to do some extra configuring to make that work. For our particular project we settled on Varnish, but there are plenty of alternatives to scale your app. Also keep in mind that you should only use Varnish if you expect high traffic; for smallish sites you probably shouldn't bother installing and configuring it. If you do decide to use Varnish, be sure to take a look at our laravel-varnish package. If you like it, check out our other Laravel and PHP packages.

10 things I learned making the fastest site in the world

David Gilbertson made a lightning fast site and wrote a fantastic article about it.
Writing a fast website is like raising a puppy, it requires constancy and consistency (both over time and from everyone involved). You can do a great job keeping everything lean and mean, but if you get sloppy and use an 11 KB library to format a date and let the puppy shit in the bed just one time, you’ve undone a lot of hard work and have some cleaning up to do.
https://hackernoon.com/10-things-i-learned-making-the-fastest-site-in-the-world-18a0e1cdf4a7

Inside PHP 7’s performance improvements

On the Blackfire.io blog Julien Pauli peeks behind the curtains of PHP. In this five-part series he explains how you should write your code to make the best use of the internal optimizations present in PHP 7.
This blog series will show you what changed inside the Zend engine between PHP 5 and PHP 7 and will detail how you, as a developer, may effectively use the new internal optimizations. We are taking PHP 5.6 as a comparison basis. Often, it is just a matter of how things are written and presented to the engine. Performance must be taken care of when critical code is written. By changing some little things, you can make the engine perform much faster, often without losing other aspects such as readability or debugging control.
https://blog.blackfire.io/php-7-performance-improvements-packed-arrays.html
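To give a taste of the series: one of the PHP 7 optimizations covered is packed arrays. As I understand it, an array whose keys are ascending integers added in order is stored in a compact layout that skips hash lookups:

// Packed: consecutive integer keys starting at 0, inserted in order.
$packed = [10, 20, 30];

// Not packed: out-of-order or string keys force the regular hash-table
// layout, which uses more memory and iterates more slowly.
$hashed = [];
$hashed[2] = 'c';
$hashed[0] = 'a';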

Optimizing PHP performance by using fully-qualified function calls

A fully qualified function call is a little bit faster than a non-qualified one. Toon Verwerft explains it all in his latest blogpost.
Today, a little conversation on Twitter escalated rather quickly. Apparently PHP runs function calls differently depending on namespaced or non namespaced context. When calling functions in a namespaced context, additional actions are triggered in PHP which result in slower execution. In this article, I'll explain what happens and how you can speed up your application.
http://veewee.github.io/blog/optimizing-php-performance-by-fq-function-calls/
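The gist of the optimization, as a small sketch:

namespace App;

$value = 'hello';

// Unqualified: at runtime PHP first looks for a function App\strlen()
// and only then falls back to the global strlen().
$a = strlen($value);

// Fully qualified: resolved at compile time, so the fallback lookup is
// skipped and PHP can even use a specialized opcode for some built-ins.
$b = \strlen($value);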

Improving the performance of our PHP based crawler

Today a new major version of our homegrown crawler was released. The crawler is used to power our http-status-check, laravel-sitemap and laravel-link-checker packages. A major new feature is the greatly improved crawling speed, accomplished by leveraging multiple concurrent requests.

Let's take a look at the performance improvements gained by using concurrent requests. In the video below, the crawler is started two times. On the left we have v1 of the crawler, which does one request and waits for the response before launching another request. On the right we have v2, which uses 10 concurrent requests. Both crawls go over our entire company site https://spatie.be. Even though I gave v1 a little head start, it really got blown away by v2. Where v1 is constantly waiting on a response from the server, v2 will just launch another request while it's waiting for responses to previous requests.

To make requests, the crawler package uses Guzzle, the well-known http client. A cool feature of Guzzle is that it provides support for concurrent requests out of the box. If you want to know more about that subject, read this excellent blogpost by Hannes Van de Vreken. Here's the relevant code in our package.

Together with the release of crawler v2, these packages have new major versions that make use of it:
- http-status-check: this command line tool can scan your entire site and report back the http status codes for each page. We use this tool whenever we launch a new site at Spatie to check if there are broken links.
- laravel-sitemap: this Laravel package can generate a sitemap by crawling your entire site.
- laravel-link-checker: this one can automatically notify you whenever a broken link is found on your site.

Integrating the crawler into your own project or package is easy. You can set a CrawlProfile to determine which urls should be crawled, and a CrawlReporter to determine what should be done with the found urls. Want to know more? Then head over to the crawler repo on GitHub. If you like the crawler, be sure to also take a look at the many other framework agnostic and Laravel specific packages our team has created.
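Here's a rough sketch of pointing the crawler at a site with a custom profile; the class and method names follow the readme as I recall it, so double-check them against the repo:

use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlProfile;
use Spatie\Crawler\Url;

// A hypothetical profile that keeps the crawler on our own domain.
class InternalUrlsOnly implements CrawlProfile
{
    public function shouldCrawl(Url $url): bool
    {
        return $url->host === 'spatie.be';
    }
}

Crawler::create()
    ->setCrawlProfile(new InternalUrlsOnly())
    ->startCrawling('https://spatie.be');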

A full web request may be faster than a SPA

At this year's Chrome Dev Summit Jake Archibald gave an excellent talk on some new features that are coming to the service worker. In case you don't know, a service worker is a piece of JavaScript that sits between the network requests sent by the JavaScript in your browser and the browser itself. A common use case for a service worker is to display a custom page when there is no internet connection available, instead of showing the default error message in your browser. And of course you can use a service worker to have a high degree of control over how things are cached.

I really like Jake's presentation style in general; he always injects a lot of humour. This time he's presenting in a tuxedo, with no shoes on, and he uses a Wii Controller to control his slides. Go Jake! A very interesting part of the talk is when he touches on the time needed to display a page. Turns out a full web request can be faster than a fancy single page application. Watch that segment on YouTube by clicking here, or just watch the whole presentation in the video below.

Varnish explained

Varnish is a piece of software that, amongst other things, can make your website much faster. In a new post on his blog, Mattias Geniar tells you all about it.
Varnish can do a lot of things, but it's mostly known as a reverse HTTP proxy. It brands itself as an HTTP accelerator, making HTTP requests faster by caching them. ... Varnish is usually associated with performance, but it greatly increases your options to scale your infrastructure (load balancing, failover backends etc) and adds a security layer right out of the box: you can easily let Varnish protect you from the httpoxy vulnerability or slowloris type attacks.
https://ma.ttias.be/varnish-explained/
Be sure to also watch the excellent talk Mattias gave at this year's Laracon.

What’s in store for PHP performance?

Jani Tarvainen explains where PHP is heading performance-wise.
PHP 7.0 was a leap in performance that came with very easy adoption. Simply verify compatibility with the version and upgrade your server environment. Speeding up many older architecture apps like WordPress and Mediawiki by a factor of two is a testament to backwards compatibility. In 7.1, the language runtime will continue to make modest improvements, but bigger gains will have to wait. One of these opportunities for a bigger improvement is the JIT implementation that is now bound for PHP 8.0
https://www.symfony.fi/entry/whats-in-store-for-php-performance

Understanding generated columns

Generated columns were introduced in MySQL 5.7. In the latest post on her blog, Gabriela D'Ávila explains the feature.
There are two types of Generated Columns: Virtual and Stored. ... Consider using virtual columns for data where changes happens in a significant number of times. The cost of a Virtual Column comes from reading a table constantly and the server has to compute every time what that column value will be. ... You should consider using Stored Columns for when the data doesn’t change significantly or at all after creation,
https://blog.gabriela.io/2016/10/17/understanding-generated-columns/
If you like the post, be sure to check out Gabriela's excellent talk at Laracon EU as well.
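Laravel's schema builder (5.3 and up) exposes both flavors via virtualAs and storedAs; a minimal sketch with hypothetical price and quantity columns:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::table('order_lines', function (Blueprint $table) {
    // Virtual: computed on every read, nothing extra stored on disk.
    $table->decimal('total', 8, 2)->virtualAs('price * quantity');

    // Stored: computed on write and persisted, cheaper for frequent reads.
    $table->decimal('stored_total', 8, 2)->storedAs('price * quantity');
});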