Written by
Jagdish Salgotra
Software engineer with 15 years work experience. Skills: Java, Spring Boot, Hibernate, SQL, Linux, Python, Telecom, IoT, Autonomous Systems
Part 1 of the series explains the framework tradeoffs behind building this blog with Next.js 16 and Strapi instead of WordPress, with a focus on performance, control, and developer experience.
I've spent a lot of time tuning systems that handle serious traffic. So when I sat down to build a blog about performance engineering, I had a choice: go with WordPress like the 43% of the web that already uses it, or build something that matched the way I think about performance.
I'd used WordPress enough on client projects to know where it shines and where it starts to fight you. WordPress can absolutely be fast with the right hosting, caching, and discipline. But if I was going to write about shaving milliseconds and watching CPU graphs, I wanted tighter control over the stack.
So this wasn't a "WordPress is bad" argument. It was a tool choice for a very specific job.
Before making any decisions, I did what any performance engineer should do: audit the alternatives. I analyzed several WordPress blogs in the performance/tech space and measured typical hosting scenarios.
Here's what I found across different WordPress setups:
# WordPress Performance Analysis (various hosting tiers)
Basic Shared Hosting ($5-12/month):
TTFB: 800-1200ms
FCP: 1.5-2.5s
LCP: 2.0-3.5s
Typical plugins: 15-25
Database queries: 30-50 per page load
Managed WordPress ($25-50/month):
TTFB: 300-600ms
FCP: 0.8-1.5s
LCP: 1.2-2.0s
Built-in caching enabled
Database queries: 20-35 per page load
Premium Optimized ($50-100/month):
TTFB: 150-300ms
FCP: 0.6-1.0s
LCP: 0.8-1.5s
CDN + advanced caching
Database queries: 15-25 per page load
What stood out wasn't that WordPress was slow. It was that getting it fast usually meant paying for hosting and optimization that modern frameworks give you much earlier.
I narrowed it to three realistic options:
Option 1 — Static site generator:
Pros: Lightning fast, cheap hosting
Cons: No dynamic content, rebuild for every change
Expected TTFB: ~50ms
Cost: ~$5/month (Netlify/Vercel)

Option 2 — Optimized WordPress:
Pros: Familiar ecosystem, existing themes/plugins
Cons: Database dependency, plugin complexity
Expected TTFB: ~200-400ms (with good hosting + caching)
Cost: ~$25-50/month (managed hosting + CDN)

Option 3 — Next.js + headless CMS (Strapi):
Pros: Modern dev experience, static-first rendering with targeted refresh
Cons: More initial setup, newer ecosystem
Expected TTFB: ~80-150ms
Cost: ~$15-25/month (Vercel + Strapi Cloud)
The numbers mattered, but developer experience is what tipped it. I've built enough React apps to know that going back to WordPress PHP feels like coding with mittens on.
Next.js 16 shipped the thing that immediately got my attention: Server Components by default. I've seen 500KB React bundles make mobile browsers miserable. For a blog, moving more work back to the server felt like the right default.
// Traditional React component (runs on the client)
'use client'

import { useState, useEffect } from 'react'

export default function BlogPost({ slug }: { slug: string }) {
  const [post, setPost] = useState(null)

  useEffect(() => {
    // This runs on the client and adds to the bundle size
    fetch(`/api/posts/${slug}`)
      .then(res => res.json())
      .then(setPost)
  }, [slug])

  return <div>{post?.title}</div>
}
// Next.js 16 Server Component (runs on the server)
import { getBlogDetailDataOptimized } from '@/lib/strapi-server'

export default async function BlogPost({ slug }: { slug: string }) {
  // This runs on the server; zero client-side JS is shipped for it
  const { article } = await getBlogDetailDataOptimized(slug)
  return <div>{article.title}</div>
}
The practical impact? Based on typical React apps, server components can reduce initial JavaScript bundles by 40-70% because the data fetching stays on the server. For a blog, that usually means a faster Time to Interactive.
People don't talk enough about how much build tooling affects day-to-day work. In WordPress projects, local setup and slower iteration loops were often the part that wore me down.
With Next.js 16 and Turbopack, the development experience is significantly faster:
# Development server startup (typical times)
WordPress (LAMP stack): 2-5 minutes
Next.js (Webpack): 30-90 seconds
Next.js (Turbopack): 5-15 seconds
# Hot reload comparison
WordPress: Full page refresh (1-3s)
Next.js (Turbopack): Hot module replacement (~50ms)
When you're tuning performance, that fast loop matters. Waiting on refreshes is an easy way to lose the thread.
Here's the stack I landed on, and why each piece was chosen for performance:
// next.config.mjs - the performance foundation
const nextConfig = {
  output: 'standalone',
  compiler: {
    removeConsole: process.env.NODE_ENV === 'production',
  },
  reactCompiler: true,
  experimental: {
    optimizePackageImports: [
      '@apollo/client',
      'react-markdown',
      'remark-gfm',
    ],
    optimizeServerReact: true,
    staleTimes: {
      dynamic: 30,
      static: 30,
    },
  },
  images: {
    remotePatterns: [
      { protocol: 'http', hostname: 'localhost', port: '1337', pathname: '/uploads/**' },
      { protocol: 'https', hostname: 'engnotes.dev', pathname: '/uploads/**' },
    ],
    formats: ['image/webp', 'image/avif'],
    minimumCacheTTL: 86400,
  },
}

export default nextConfig
One of the biggest performance wins comes from data fetching strategy. Traditional WordPress themes make multiple database queries and API calls per page load.
WordPress typically requires multiple requests:
# WordPress approach (multiple queries)
- Main post data query
- Author information query
- Category/tags query
- Related posts query
- Comment count query
- Meta fields query
Total: 6-15 database queries per page load
With GraphQL and Strapi, I can fetch everything in a single optimized query:
# Single optimized query
query BlogPost($slug: String!) {
  articles(filters: { slug: { eq: $slug } }) {
    title
    content
    publishedAt
    author {
      name
      bio
    }
    category {
      name
      slug
    }
    cover {
      url
      alternativeText
    }
  }
}
Result: Single query with exact field selection, reducing both network requests and data transfer.
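To make the single-request approach concrete, here is a minimal sketch of how that query might be sent to Strapi. The `/graphql` endpoint path is the default for Strapi's GraphQL plugin, but `buildBlogPostRequest` and the `STRAPI_URL` variable are illustrative names, not the blog's actual code.

```typescript
// Hypothetical helper: builds the fetch options for fetching one post by slug.
// The field selection mirrors the query shown above.
const BLOG_POST_QUERY = `
  query BlogPost($slug: String!) {
    articles(filters: { slug: { eq: $slug } }) {
      title
      content
      publishedAt
      author { name bio }
      category { name slug }
      cover { url alternativeText }
    }
  }
`

export function buildBlogPostRequest(slug: string) {
  return {
    method: 'POST' as const,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: BLOG_POST_QUERY, variables: { slug } }),
  }
}

// Usage (assumes a Strapi instance reachable at STRAPI_URL):
// const res = await fetch(`${process.env.STRAPI_URL}/graphql`, buildBlogPostRequest('my-post'))
// const { data } = await res.json()
```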
After building this Next.js blog, here are the real performance numbers measured with Lighthouse and WebPageTest:
# Current Blog Performance (measured)
TTFB: ~90ms
FCP: ~0.4s
LCP: ~0.7s
CLS: ~0.02
TBT: ~20ms
Initial Bundle: ~180KB
Core Web Vitals: All Green
Compared to typical WordPress performance benchmarks:
# WordPress vs Next.js Comparison
Metric            WordPress (optimized)   Next.js (this blog)
TTFB:             200-400ms               ~90ms
Bundle Size:      400-800KB               ~180KB
Database Calls:   15-30 per page          1 per page (via GraphQL)
Monthly Cost:     $25-50                  ~$15
The most significant win: moving from 15-30 database queries per page to a single optimized GraphQL query.
Let me be honest – building a custom blog wasn't all sunshine and sub-100ms response times. Here are the real challenges:
Unlike WordPress's famous 5-minute install, setting up Next.js, Strapi, and a deployment pipeline takes significantly more initial effort:
// Example: Strapi content type modeled in TypeScript
interface BlogPost {
  title: string
  content: string
  slug: string
  publishedAt: string
  author: {
    name: string
    bio: string
  }
  category: {
    name: string
    slug: string
  }
}
import { notFound } from 'next/navigation'
import {
  getPublishedArticleBySlug,
  getPublishedArticleParams,
} from '@/lib/blog/published-snapshot'

// In recent Next.js versions, route params arrive as a Promise
type ArticlePageProps = { params: Promise<{ slug: string }> }

export async function generateStaticParams() {
  return getPublishedArticleParams()
}

export default async function BlogDetailPage({ params }: ArticlePageProps) {
  const { slug } = await params
  const article = await getPublishedArticleBySlug(slug)

  if (!article) {
    notFound()
  }

  return <CMSBlogPost article={buildCMSArticle(article, article.category?.slug || 'technology')} />
}
Moving from WordPress themes to React components requires significant frontend knowledge:
// WordPress: PHP template system
<?php
$posts = get_posts(['category' => $category_id]);
foreach ($posts as $post) {
  echo "<h2>" . $post->post_title . "</h2>";
}
?>

// Next.js: React component system
export default function BlogList({ posts }: { posts: BlogPost[] }) {
  return (
    <div>
      {posts.map(post => (
        <h2 key={post.slug}>{post.title}</h2>
      ))}
    </div>
  )
}
WordPress has 60,000+ plugins. With Next.js, you often build functionality from scratch or find npm packages that may not integrate as seamlessly.
A few benefits showed up that I wasn't really optimizing for:
The deployment pipeline is also significantly faster with modern tooling: a git push triggers the build and deploy.
TypeScript prevents entire classes of runtime errors that can impact performance:
// TypeScript catches this at build time
interface BlogPost {
  title: string
  slug: string
}

// Build-time error: 'content' does not exist in type 'BlogPost'
const post: BlogPost = { title: "Test", content: "..." }
Next.js bundle analyzer helps optimize what gets shipped to users:
// Before: imports the entire Lodash library
import _ from 'lodash'
const debounced = _.debounce(search, 300)

// After: imports only the one function (lodash/debounce has a default export)
import debounce from 'lodash/debounce'
const debounced = debounce(search, 300)
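Wiring up the analyzer itself is a small config change. This is a sketch that assumes `@next/bundle-analyzer` is installed and gates the report behind an `ANALYZE` environment variable; it is not the blog's actual config file.

```typescript
// next.config.ts sketch — assumes the @next/bundle-analyzer package is installed
import withBundleAnalyzer from '@next/bundle-analyzer'
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  // ...the performance config shown earlier
}

// Generates an interactive bundle report when built as: ANALYZE=true next build
export default withBundleAnalyzer({
  enabled: process.env.ANALYZE === 'true',
})(nextConfig)
```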
The cost breakdown is one reason modern stacks are easier to justify:
# Realistic monthly costs comparison
WordPress (Optimized):
Managed hosting: $25-50
Premium theme: $5-10 (amortized)
CDN service: $0-15
Backup service: $5-10
Security plugin: $5-10
Total WordPress: $40-95/month
Next.js Stack:
Vercel (Hobby/Pro): $0-20
Strapi Cloud: $15-25
Domain: $1
Total Next.js: $16-46/month
Potential Savings: $24-49/month
Cost mattered, but the bigger advantage was predictable scaling. WordPress hosting costs usually climb with traffic, while Vercel's edge caching absorbs spikes much more gracefully.
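One way that predictability shows up in code: with static generation plus background revalidation, traffic spikes hit the cache rather than the origin. A sketch of the route segment config, with illustrative values rather than this blog's actual settings:

```typescript
// app/blog/[slug]/page.tsx — route segment config (illustrative values)
// Pages are pre-rendered and served from the CDN/edge cache.
export const revalidate = 3600    // re-generate in the background at most hourly
export const dynamicParams = true // unknown slugs render on demand, then get cached
```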
If I were doing this again, I'd change three things:
Start with TypeScript from day one: I initially started in JavaScript and converted to TypeScript later. Big mistake.
Set up monitoring first: I waited two weeks to implement performance monitoring. Those early performance regressions went unnoticed.
Design content refresh earlier: I started static-first, then had to tighten the webhook and revalidation story later.
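As a sketch of what designing the content refresh can look like: a Strapi webhook hits a small Next.js route handler, which maps the event to the page that needs revalidating. The payload shape and URL structure here are assumptions, not the blog's actual implementation.

```typescript
// Hypothetical mapping from a CMS webhook event to the path to revalidate.
// The model/entry payload shape mirrors Strapi's webhook format, but verify
// it against your Strapi version before relying on it.
export function pathToRevalidate(event: {
  model: string
  entry: { slug?: string }
}): string | null {
  if (event.model === 'article' && event.entry.slug) {
    return `/blog/${event.entry.slug}`
  }
  return null
}

// Wired into a route handler (e.g. app/api/revalidate/route.ts), it might look like:
// import { revalidatePath } from 'next/cache'
// export async function POST(req: Request) {
//   const event = await req.json()
//   const path = pathToRevalidate(event)
//   if (path) revalidatePath(path)
//   return Response.json({ revalidated: Boolean(path) })
// }
```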
If you're considering a similar migration, the framework I'd use is the one this post followed: measure your current baseline, price the optimized alternative honestly, and weigh the developer-experience cost of each option.
What stayed with me from this project wasn't "Next.js beat WordPress." It was how quickly the product conversation became a performance conversation.
When a blog loads in 400ms instead of 1.5 seconds, people feel it immediately. They read more, they bounce less, and the whole thing feels more trustworthy.
It's easy to obsess over TTFB, LCP, and bundle size because those are the numbers we can measure. But the point of the numbers is the reading experience on the other side.
In Part 2 of this series, I'll get into the static-first + Server Components setup behind the blog, the cache layers that actually exist in the repo, and the webhook paths that keep content fresh without turning the whole site into a fully dynamic app.
If you've made a similar tradeoff, I'd be interested in what held up in practice and what didn't. Those details are usually more useful than framework hot takes.
A lot of performance work gets decided before the first line of application code is written.