Pagespeed “Scores” are for developers

I often hear staff / customers say our sites have “Pagespeed Issues”. It instantly makes me tilt my head, because “low scores” are not issues. In fact, they don’t even tell you anything is broken. Tests can reveal issues, but results are simply weighted suggestions for improvement – suggestions for developers to scope alternative solutions beyond what was tested, weigh them against cost, and then determine if the juice is worth the squeeze.

But for non-developers, scores are often taken entirely out of context – sometimes because competitors twist the truth – and read without much understanding of the cost involved to improve results. Results that may not necessarily fix or gain anything.


If you could buy an “A” for $5000, but at almost no ROI, would you buy it?

  • Everyday people see an “F” they don’t understand and instantly think there is an issue or something is broken.
  • I see a business decision Q4 made with a very minimal performance trade-off but large cost savings.


Wait, So “F” = Good? Maybe. Let’s Explain

Suggestions mean exactly what they say:

“F” — Serving appropriately-sized images MAY save many bytes of data and improve the performance of your webpage, especially on low-powered (e.g. mobile) devices.

That’s a great suggestion, not an issue.

  • That “F” does suggest there are many more efficient ways to do what we’re doing
  • That “F” doesn’t suggest what it would cost to be more efficient, or whether the return on that effort outweighs the cost

But let’s just say that for that test our site gets an “F”.

For Q4 to get an “A”, we would have to programmatically duplicate and scale every uploaded image for each device size from desktop to mobile. That’s many stored images for every 1 image uploaded.

  • i.e. 1 uploaded image = a dozen or more individually scaled copies to match all device widths (roughly what the sketch below shows)
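For the curious, here’s a rough sketch of what that duplication looks like in practice. It isn’t Q4’s actual pipeline – the breakpoint widths and the Pillow library are assumptions purely for illustration – but it shows how one upload fans out into a dozen stored files.

```python
# Hypothetical sketch: fanning one uploaded image out into a separate,
# pre-scaled copy for every device breakpoint. The breakpoint list and the
# Pillow library are illustrative assumptions, not Q4's actual pipeline.
from pathlib import Path

from PIL import Image  # pip install Pillow

# Assumed device-width breakpoints, desktop down to small mobile.
BREAKPOINT_WIDTHS = [1920, 1600, 1440, 1280, 1024, 960, 768, 640, 480, 414, 375, 320]


def generate_variants(upload_path: str, out_dir: str) -> list[Path]:
    """Create one scaled copy of the uploaded image per breakpoint width."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    src = Image.open(upload_path)
    variants = []
    for width in BREAKPOINT_WIDTHS:
        copy = src.copy()
        # thumbnail() keeps the aspect ratio and never upscales.
        copy.thumbnail((width, src.height))
        target = out / f"{Path(upload_path).stem}-{width}w{Path(upload_path).suffix}"
        copy.save(target)
        variants.append(target)
    return variants


# One upload becomes a dozen stored files, before any CDN or backup copies.
# generate_variants("hero.jpg", "variants/")
```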

Now multiply that by all the images Q4 uploads per day, per website.

For simple static websites, not a big deal. For very active marketing websites, very expensive.

Here’s a hypothetical example:

  • 150 websites x 6 images uploaded per week x 12 duplicates per image x 52 weeks (1 year) = 561,600 images

Gigabytes of storage per day with rapidly expanding hosting accounts and server costs.
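To put rough numbers on it, here’s the same back-of-the-envelope math as a short script. The image count comes straight from the example above; the average size per scaled copy is an assumption for illustration, so treat the storage figure as order-of-magnitude only.

```python
# Back-of-the-envelope math for the hypothetical above. The image count uses
# the figures from the example; the average size per scaled copy is an
# illustrative assumption, so the storage total is order-of-magnitude only.
SITES = 150
UPLOADS_PER_WEEK = 6
VARIANTS_PER_IMAGE = 12
WEEKS_PER_YEAR = 52
AVG_MB_PER_VARIANT = 0.5  # assumed average size of one scaled copy, in MB

images_per_year = SITES * UPLOADS_PER_WEEK * VARIANTS_PER_IMAGE * WEEKS_PER_YEAR
storage_gb_per_year = images_per_year * AVG_MB_PER_VARIANT / 1024

print(f"{images_per_year:,} images per year")  # 561,600
print(f"~{storage_gb_per_year:,.0f} GB of new storage per year at that average")
```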

There are also CPU, RAM, management, and development costs – but I think you get the picture.

In the end, even if that “performance score” were an “A”, we’d possibly see fractionally better performance (page speed / SEO / ROI). But we absolutely would see a very high cost for an ever-expanding storage system. A cost that would have to be passed on to the customer with little ROI for either partner.

So instead, we upload 1 reasonably sized image and scale it down to all smaller widths as needed. A solution with very little performance cost (bytes of data), but with tremendous storage savings.

That’s not to say these decisions won’t be re-evaluated in the future, but for now that is our conclusion for this “F”.

And that goes for all the scores you see. We’ve seen them, we’ve evaluated them long before anyone else, and we continually reevaluate them.

So to conclude: these performance testing tools are for developers to make cost/performance evaluations. Unfortunately, they also raise false flags for customers (and others) who read them out of context, without much understanding of their actual purpose.

I hope that all makes sense.
