How to make your blogs load at least 90% faster

Thoughts · 4 weeks ago

At b2cloud we were recently developing a website in ReactJS that connected to a Laravel API to fetch blog article content, and it was taking too long to load. Users expect data to be delivered almost instantaneously these days, yet our blog landing page was taking over four seconds to load. We wanted to get this down to under one second for a better user experience.

Twelve posts = 141 KB

The blog landing page listed twelve blog posts in a grid format, lazy-loading a further twelve posts from a paginated API endpoint whenever a scroll trigger point was reached. Our data was stored as individual records in Redis, adhering to our API model structure to provide a fast mechanism for transforming the data and returning the response efficiently.
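In a React client, that trigger-point logic might look something like the sketch below. The endpoint path, parameter names, and helper functions are assumptions for illustration, not the actual b2cloud API.

```javascript
// Illustrative lazy-loading pagination for the landing page grid.
const PER_PAGE = 12;

// Query parameters for the next page of posts (names are hypothetical).
function nextPageParams(page, perPage = PER_PAGE) {
  return { page, per_page: perPage };
}

// In the React component, an IntersectionObserver on a sentinel element
// near the bottom of the grid would trigger the next fetch:
//
//   let page = 1;
//   const observer = new IntersectionObserver(([entry]) => {
//     if (entry.isIntersecting) {
//       page += 1;
//       const { page: p, per_page } = nextPageParams(page);
//       fetch(`/api/v1.0/posts?page=${p}&per_page=${per_page}`)
//         .then(res => res.json())
//         .then(posts => appendToGrid(posts)); // appendToGrid is hypothetical
//     }
//   });
```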

However, it became apparent that the response time to fetch twelve records was far from optimal, because our API would fetch and return the twelve posts in full – up to fifty fields of information per post, including images, alternative text descriptions, heights, widths, and relational attributes such as author information. The combined size of the initial twelve blogs was 141 KB, and our second API request for the next twelve blogs was 90 KB. This was overkill for a landing page, which got us thinking: which fields are actually required for each post on this list page?

This is what we came up with as the essential fields for each blog summary on the landing page:

  • Title
  • Excerpt
  • Thumbnail URL
  • Thumbnail alternative text
  • Blog category title
  • Author name
  • Author thumbnail URL
  • Author thumbnail alternative text

Why not just create another API endpoint?

It’s very easy to fall into the trap of creating another API endpoint to filter the data accordingly. In the past we would’ve called /api/v1.0/posts and created a new endpoint such as /api/v1.0/post-landing. We would then have created a custom extractor that would limit the response to a subset of our original model.

While this method would have worked, it adds duplication and clutter to the API. What if mobile devices require different fields returned? What if our original model changes its data structure, or you need to add new fields?

In this scenario you could end up with a lot of hard-coded routes designed for individual cases, with limited flexibility for change in the future.

GraphQL to the rescue

After researching GraphQL, we spent a couple of hours implementing a prototype that connected to our existing API, so we didn’t have to modify the existing solution at all.

GraphQL is a query language for APIs with a server-side runtime that can sit in front of your existing API, or connect directly to your data source if you’d prefer to bypass the API altogether. GraphQL uses a type system, which requires some initial configuration to define schemas that map your data into types. This has the added benefit of making it easy to add new data fields and relations down the track.
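As an illustration, that initial type configuration might map the blog data into types along these lines (schema definition language held as a string; all type and field names are hypothetical):

```javascript
// Hypothetical GraphQL schema (SDL) for the blog data.
const typeDefs = `
  type Image {
    url: String!
    altText: String
    width: Int
    height: Int
  }

  type Author {
    name: String!
    thumbnail: Image
  }

  type Category {
    title: String!
  }

  type Post {
    title: String!
    excerpt: String
    thumbnail: Image
    category: Category
    author: Author
  }

  type Query {
    posts(page: Int!, perPage: Int!): [Post!]!
  }
`;
```

Adding a new field later is a one-line change to the relevant type, and existing queries keep working unchanged.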

When requesting data through GraphQL, you specify every field you’d like returned in your query. The same applies to relationships and child objects: GraphQL returns them exactly as you request them, and nothing more.

Following a quick walk-through on the official GraphQL website and some service configuration, we created a single new GraphQL endpoint of the form /graphql?query=[query params].
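Calling that endpoint from the client is then just a matter of URL-encoding the query. This is a minimal sketch of a GET-style call (most GraphQL servers also accept a POST to /graphql with the query in the body):

```javascript
// Build the request URL for a GET-style GraphQL call.
function buildGraphqlUrl(query, variables = {}) {
  const params = new URLSearchParams({
    query,
    variables: JSON.stringify(variables),
  });
  return `/graphql?${params.toString()}`;
}

// Usage in the React client (not executed here):
//
//   const res = await fetch(buildGraphqlUrl(landingPostsQuery, { page: 1, perPage: 12 }));
//   const { data, errors } = await res.json();
```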

When we refined the fields to the eight listed above that we wanted returned in our query, the results were outstanding.

Twelve posts = 5.35 KB

By utilising GraphQL, we reduced the size of our paginated blog listing responses by up to 96% per request.

Initially the data response size for the first twelve blogs was 141 KB, which GraphQL reduced to 5.35 KB. For the next twelve blogs our initial response size was 90 KB, while the GraphQL response was 6.7 KB – a data saving of 96% and 92% respectively.
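Those figures are easy to sanity-check with a quick calculation:

```javascript
// Percentage saved when a payload shrinks from `before` to `after` (KB).
const savings = (before, after) => (1 - after / before) * 100;

const firstPage = savings(141, 5.35); // ≈ 96.2%
const secondPage = savings(90, 6.7);  // ≈ 92.6%
```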

It’s worth noting that, behind the scenes, GraphQL is still fetching the original full API data. It then filters and shapes the response so that only the requested fields are returned.
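A resolver layer sitting in front of the existing API makes this behaviour concrete. The sketch below is illustrative only; `fetchFullPosts` is a hypothetical wrapper around the original REST endpoint, and the field names are assumptions.

```javascript
// Resolvers still pull the full ~50-field records from the existing API,
// but only the fields named in the incoming query are ever resolved and
// serialised into the response.
const resolvers = {
  Query: {
    posts: (_root, { page, perPage }) =>
      fetchFullPosts(page, perPage), // hypothetical call to the REST API
  },
  Post: {
    // Each field resolver picks one value out of the full record; fields
    // the query didn't ask for are simply never resolved.
    title: post => post.title,
    excerpt: post => post.excerpt,
    thumbnail: post => ({
      url: post.thumbnail_url,
      altText: post.thumbnail_alt,
    }),
  },
};
```

The saving is therefore in response size and serialisation, not in the work the origin API does – a useful property when you can’t change the origin, but worth remembering if backend load is also a concern.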

The future of Query

After completing our prototype, we were very impressed with GraphQL’s potential to really change how we interact with data in terms of payload size and response time. GraphQL also has a gem of a development tool called GraphiQL – an in-browser IDE that provides a GUI and an API reference guide for querying your GraphQL API in real time. It’s one of the most useful tools I’ve seen in the development community for as long as I can remember.

Taking our iOS & Android developers on a walk-through of GraphiQL with our existing Laravel API was a ‘light bulb’ moment that has them asking when we can start implementing this with our other APIs.

We see GraphQL not as a replacement for our existing API platform, but as an exciting option that must be investigated for all our new projects to speed up delivery and processing time, especially on mobile, where data size is paramount.

If you haven’t had a chance to try GraphQL out yet, I highly recommend making the time to set up a playground and just ‘play’.

Have you had any positive or negative experiences you’d like to share around similar challenges in reducing payload sizes and response times?