I do not reach for GraphQL by default.
Most of the time, it is too much ceremony. There is a schema to maintain, tooling to wire, errors that do not look like normal HTTP failures, and a strong temptation to pretend every frontend problem is a query-language problem.
For a blog, REST sounds like the obvious choice. Fetch articles. Fetch categories. Fetch one article by slug. Done.
That was my starting bias too.
But this project pushed me into a more uncomfortable place: the public blog was not really asking for resources. It was asking for page-shaped data. And once I admitted that, GraphQL stopped feeling like architecture astronautics and started feeling like a boring, practical boundary.
Not because GraphQL is better than REST.
Because in this one part of the system, the page shape mattered more than the resource shape.
Real Situation
The blog is backed by Strapi, rendered by Next.js 16, and optimized around a static-first publishing model.
That last part matters. The public route should not behave like a chatty CMS client. It should work from a clean published snapshot whenever possible, and it should avoid dragging CMS concerns into the browser.
The current snapshot loader is intentionally boring:
const loadPublishedBlogSnapshot = cache(
  async (): Promise<{
    articles: StrapiArticle[];
    categories: StrapiCategory[];
  }> => {
    const { articles, categories } = await getBlogListingDataOptimized({
      pageSize: PUBLISHED_BLOG_SNAPSHOT_PAGE_SIZE,
    });
    return {
      articles,
      categories: mergeCategories(categories, articles),
    };
  }
);
That snippet is from the actual repo. The public blog does not need a GraphQL client in the browser. It needs one server-side data boundary that can load the content shape cleanly, cache it, and hand the rest of the app a stable snapshot.
At first glance, articles and categories look like REST resources.
But the page does not only need an article.
It needs:
- article fields for the card or detail page
- author data for attribution
- category data for routing and grouping
- cover image formats from Strapi
- related article candidates
- all categories for navigation and filters
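That list is really one composed contract. Written down as a type, it looks something like this; the field names are illustrative stand-ins, not the repo's actual types:

```typescript
// Hypothetical page-level contract: everything the detail route needs,
// composed in one place. Names are illustrative, not the repo's.
interface BlogDetailPageData {
  article: { slug: string; title: string; coverUrl: string | null };
  author: { name: string; slug: string };
  category: { name: string; slug: string };
  related: Array<{ slug: string; title: string }>;
  allCategories: Array<{ name: string; slug: string }>;
}

// A page-shaped value crosses several "resources" at once:
const example: BlogDetailPageData = {
  article: { slug: 'graphql-boundary', title: 'GraphQL at the Boundary', coverUrl: null },
  author: { name: 'Author', slug: 'author' },
  category: { name: 'Engineering', slug: 'engineering' },
  related: [],
  allCategories: [{ name: 'Engineering', slug: 'engineering' }],
};
```

No single REST resource returns that value; something has to compose it.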
The article shape alone already crosses resource boundaries:
const ARTICLE_GRAPHQL_FIELDS = `
  documentId
  title
  description
  slug
  originalPublishedAt
  publishedAt
  createdAt
  updatedAt
  content
  readingTime
  tags
  cover {
    url
    name
    alternativeText
    formats
  }
  author {
    documentId
    name
    email
    bio
    slug
  }
  category {
    documentId
    name
    slug
  }
`;
With REST, I kept coming back to the same awkward choice:
Do I keep the API resource-oriented and stitch data together in application code?
Or do I create page-specific REST endpoints and quietly rebuild GraphQL with a worse vocabulary?
That was the tension.
REST was simpler in isolation. GraphQL was simpler at the page boundary.
What Went Wrong
My first instinct was to treat GraphQL as something I had to justify with performance claims.
That is a trap.
If the argument is only "GraphQL is faster", it becomes hand-wavy very quickly. Faster than what? Under which cache state? With which payload size? Against which CMS latency? On which route?
I did not want this article to become that kind of engineering theater.
The real problem was not a benchmark. The real problem was ownership of composition.
With plain REST, the shape naturally drifted into multiple calls:
- load the article by slug
- load the article list for related posts
- load categories
- normalize Strapi's nested response shape
- dedupe and sort
- remember which route needed which combination
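Put together, the REST version tends to look like this. This is a hypothetical sketch, not the repo's code; the endpoint paths follow Strapi's REST conventions but are assumptions:

```typescript
// Hypothetical REST composition: the page contract is rebuilt at the call
// site from several resource fetches. Paths and shapes are illustrative.
interface RestPageData {
  article: unknown;
  pool: unknown[];
  categories: unknown[];
}

async function loadDetailPageViaRest(slug: string, baseUrl: string): Promise<RestPageData> {
  const [articleRes, poolRes, categoryRes] = await Promise.all([
    fetch(`${baseUrl}/api/articles?filters[slug][$eq]=${encodeURIComponent(slug)}&populate=*`),
    fetch(`${baseUrl}/api/articles?pagination[pageSize]=50&populate=*`),
    fetch(`${baseUrl}/api/categories`),
  ]);
  const [article, pool, categories] = await Promise.all([
    articleRes.json(),
    poolRes.json(),
    categoryRes.json(),
  ]);
  // Dedupe, sort, and relate now live here, outside any declared contract.
  return {
    article: article.data?.[0] ?? null,
    pool: pool.data ?? [],
    categories: categories.data ?? [],
  };
}
```

Nothing here is wrong. But every route that needs a different combination writes its own version of this function.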
None of that is impossible. It is not even hard.
But it spreads the page contract across call sites. After a while, the route is no longer asking, "give me the data for this page." It is asking, "give me these resources and I will reconstruct the page contract myself."
That is where the design started to smell.
Tension
GraphQL was still not free.
The repo has explicit fallback controls because the GraphQL path is not something I wanted to trust blindly:
const STRICT_GRAPHQL_MODE = process.env.STRAPI_STRICT_GRAPHQL === 'true';
const ALLOW_REST_FALLBACK = process.env.STRAPI_ALLOW_REST_FALLBACK === 'true';

function shouldUseRestFallback() {
  return ALLOW_REST_FALLBACK && !STRICT_GRAPHQL_MODE;
}
That little function says a lot about the actual posture.
This is not "GraphQL everywhere."
It is "GraphQL where it makes the page boundary cleaner, REST where it still gives us a safer escape hatch."
There were also schema edges. The code has to treat originalPublishedAt as an optional field, because not every Strapi environment exposes it:
if (
  result.errors &&
  query.includes(OPTIONAL_ARTICLE_PUBLISH_FIELD) &&
  hasUnsupportedOptionalFieldError(result.errors)
) {
  const fallbackQuery = stripUnsupportedOptionalFields(query);
  if (fallbackQuery !== query) {
    result = await executeGraphQLRequestWithRetry(fallbackQuery);
  }
}
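The repo's helpers are not shown above, but one plausible shape for the strip step, assuming the optional field sits on its own line in the query string, is:

```typescript
// Plausible sketch of the strip helper; the real implementation may differ.
const OPTIONAL_ARTICLE_PUBLISH_FIELD = 'originalPublishedAt';

function stripUnsupportedOptionalFields(query: string): string {
  // Drop the optional field's line so the retry uses a schema-safe query.
  return query
    .split('\n')
    .filter((line) => line.trim() !== OPTIONAL_ARTICLE_PUBLISH_FIELD)
    .join('\n');
}
```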
That is the part people skip when they pitch GraphQL too cleanly.
Schemas drift. Environments differ. CMS plugins change. The failure mode is not always a neat 404 or 500.
So yes, GraphQL solved one problem. It introduced a few of its own.
Mistake
The mistake was almost defending GraphQL as a general architectural upgrade.
It is not.
For simple CRUD, I would still choose REST without much debate. If all I need is GET /articles, GET /articles/:slug, and a couple of admin mutations, REST is easier to inspect, easier to cache at the edge, easier to debug with curl, and easier for another engineer to reason about at 11 PM.
The choice only became defensible once the unit of design changed from "resource" to "rendered page."
That distinction is everything.
Insight
The useful question was not:
"Should this app use REST or GraphQL?"
The useful question was:
"Where does data composition belong?"
For this blog, I wanted composition at the server boundary, close to the CMS, before the content became a published snapshot.
That made the GraphQL query feel less like a frontend convenience and more like a page contract:
const GET_BLOG_DETAIL_MEGA_QUERY = `
  query GetBlogDetailMegaQuery($slug: String!, $pagination: PaginationArg, $sort: [String]) {
    articles(filters: { slug: { eq: $slug } }) {
      ${ARTICLE_GRAPHQL_FIELDS}
    }
    allArticles: articles(pagination: $pagination, sort: $sort) {
      ${ARTICLE_GRAPHQL_FIELDS}
    }
    categories {
      ${CATEGORY_GRAPHQL_FIELDS}
    }
  }
`;
That query is not pretty for its own sake. It says what the page needs:
- the current article
- the article pool used for related content
- the categories used around the page
The listing path follows the same idea:
const GET_BLOG_LISTING_MEGA_QUERY = `
  query GetBlogListingMegaQuery($pagination: PaginationArg, $sort: [String]) {
    articles(pagination: $pagination, sort: $sort) {
      ${ARTICLE_GRAPHQL_FIELDS}
    }
    categories {
      ${CATEGORY_GRAPHQL_FIELDS}
    }
  }
`;
Then the server code turns that response into the app's internal shape:
export async function getBlogListingDataOptimized(options?: { pageSize?: number }): Promise<{
  articles: StrapiArticle[];
  categories: StrapiCategory[];
}> {
  const pageSize = options?.pageSize ?? 50;
  const cacheKey = `blog-listing-mega-query-${pageSize}`;
  const cachedData = serverCache.get<{
    articles: StrapiArticle[];
    categories: StrapiCategory[];
  }>(cacheKey);
  if (cachedData) {
    return cachedData;
  }
  try {
    const result = await fetchGraphQL(GET_BLOG_LISTING_MEGA_QUERY, {
      pagination: { page: 1, pageSize },
      sort: ['publishedAt:desc'],
    });
That is the part I care about.
The route does not need to know how Strapi wants nested relations expressed. It should not care whether the author came from /authors, a populated REST relation, or a GraphQL selection set. It should receive article data in the shape the app uses.
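A trimmed sketch of that mapping step follows. `AppArticle` stands in for the repo's `StrapiArticle`, the field list mirrors the GraphQL selection above, and the mapper is illustrative rather than actual repo code:

```typescript
// Hypothetical normalization at the boundary: GraphQL response in,
// app-shaped article out. Field names follow the query shown earlier.
interface AppArticle {
  documentId: string;
  title: string;
  slug: string;
  authorName: string | null;
  categorySlug: string | null;
}

function mapArticle(raw: any): AppArticle {
  return {
    documentId: raw.documentId,
    title: raw.title,
    slug: raw.slug,
    // Nested relations are flattened here, once, at the boundary.
    authorName: raw.author?.name ?? null,
    categorySlug: raw.category?.slug ?? null,
  };
}
```

Everything downstream of this function can be ignorant of Strapi.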
GraphQL earned its keep because it made that boundary explicit.
Surprise
The surprising part was that the best GraphQL design here was not client-heavy.
No Apollo cache strategy. No browser query orchestration. No clever normalized client store.
The repo does include GraphQL tooling:
generates:
  lib/generated/graphql.ts:
    plugins:
      - 'typescript'
      - 'typescript-operations'
      - 'typescript-react-apollo'
    config:
      withHOC: false
      withComponent: false
      withHooks: true
      apolloReactCommonImportFrom: '@apollo/client'
      apolloReactHooksImportFrom: '@apollo/client'
      scalars:
        JSON: Record<string, unknown>
      nonOptionalTypename: true
      avoidOptionals:
        field: true
But the important runtime path is server-side and deliberately small:
const response = await fetch(url, {
  method: 'POST',
  headers,
  body: JSON.stringify({
    query: gqlQuery,
    variables,
  }),
  next: {
    tags: ['blog-posts', 'categories'],
  },
  signal: controller.signal,
});
That next.tags line is the quiet win. The data boundary still fits the Next.js caching model. GraphQL did not replace the framework. It became the shape of one server fetch inside the framework.
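The payoff of those tags is that publishing can invalidate the snapshot without touching the GraphQL layer: in a Next.js route handler, each tag would be handed to `revalidateTag` from 'next/cache'. The event-to-tag mapping below is an assumption about how a Strapi webhook might be wired, not repo code:

```typescript
// Hypothetical mapping from CMS webhook events to cache tags. The tags match
// the `next.tags` values on the fetch above; the events follow Strapi's
// webhook naming, but this wiring is an assumption.
const TAGS_BY_EVENT: Record<string, string[]> = {
  'entry.publish': ['blog-posts', 'categories'],
  'entry.unpublish': ['blog-posts'],
};

function tagsToRevalidate(event: string): string[] {
  return TAGS_BY_EVENT[event] ?? [];
}

// In the webhook route: tagsToRevalidate(event).forEach(revalidateTag);
```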
That is the only version of GraphQL I was comfortable defending.
Learning Moment
GraphQL did not remove complexity.
It moved complexity to a place where I could name it, test it, and contain it.
The tests reflect that. They do not assert "GraphQL is better." They assert things that matter operationally:
- optimized listing performs a single mega query
- optimized detail performs a single mega query and filters related posts
- repeated optimized calls use cache
- REST fallback still maps Strapi entities when GraphQL fails
- failed optimized calls return safe defaults instead of breaking the public route
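The "single mega query" assertion needs no benchmark, only a counting transport. The names below are illustrative; the repo's tests use their own harness around fetchGraphQL:

```typescript
// Sketch: inject a counting transport and assert the call count.
type Transport = (query: string) => Promise<{ articles: unknown[]; categories: unknown[] }>;

async function loadListing(transport: Transport) {
  // One composed request instead of separate articles + categories round trips.
  return transport('query GetBlogListingMegaQuery { articles { slug } categories { slug } }');
}

let graphqlCalls = 0;
const countingTransport: Transport = async () => {
  graphqlCalls += 1;
  return { articles: [], categories: [] };
};

await loadListing(countingTransport);
// graphqlCalls is now 1: the listing path made exactly one request.
```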
That is a healthier defense than a benchmark slide.
The point is not to prove GraphQL wins every comparison.
The point is to prove this boundary fails predictably.
Principle
My rule after this work is simple:
Default to REST until the page shape starts fighting the resource shape.
REST is still the better default for straightforward CRUD, simple admin screens, public endpoints, cache-friendly resource access, and teams that do not want schema/tooling overhead.
GraphQL becomes worth defending when all of these are true:
- the response is a composed view, not a single resource
- nested relations are part of the normal page contract
- the server owns the query boundary
- the app can cache the composed result
- schema drift and fallback behavior are tested
- GraphQL does not leak into places that only need plain data
That last one matters most.
I do not want GraphQL to become the personality of the codebase. I want it to be a sharp tool behind one well-owned boundary.
In this project, the boundary is Strapi-to-Next.js content loading. That is where the page-shaped query earns its place. Outside that boundary, the rest of the app should stay boring.
That is the defense.
Not "GraphQL is modern."
Not "REST is old."
Just this: when your rendered page is the product contract, and your CMS resources are only the raw material, GraphQL can be the least messy way to say exactly what the page needs.