Written by
Jagdish Salgotra
Software engineer with 15 years work experience. Skills: Java, Spring Boot, Hibernate, SQL, Linux, Python, Telecom, IoT, Autonomous Systems
Part 2 explains the static-first rendering approach in this repo, including cache layers, route revalidation, and deploy-driven freshness paths.
This is Part 2 of the "Building a High-Performance Blog" series. In Part 1, we covered why I chose Next.js 16 over WordPress. Now we get into the static-first rendering strategy that makes this blog fast without making content updates painful.
What this project really taught me about frontend caching is that high cache hit rates and fresh content are not opposites. You can have both, but you have to stop thinking in the usual static-vs-dynamic split.
With the current setup here, the goal is straightforward: keep public pages static by default, then refresh them deliberately when content changes. That gets you most of the caching upside without pretending every page needs to be dynamic all the time.
In backend systems, caching is table stakes. I've seen a single uncached query turn into an outage, and I've worked on distributed caches that took endpoints from seconds down to double-digit milliseconds. Frontend caching took me longer to think about correctly.
Static Site Generation (SSG): extremely fast, until one content change means rebuilding hundreds of pages and tying up your pipeline.
Server-Side Rendering (SSR): always fresh, but every request has a bill attached to it. Under traffic, that starts to feel like a regular app wearing a static-site costume.
Client-Side Rendering (CSR): flexible, but rough on SEO and initial load time. It also means shipping a lot more application code to the browser than a content site usually needs.
The idea that finally clicked for me was simple: most content is static most of the time, and only occasionally needs to act dynamic.
What the repo actually does today is simpler than the draft version I started with. The blog precomputes known routes, serves them as static output, and refreshes them through revalidation paths and deploy hooks when content changes.
// app/blog/[category]/[slug]/page.tsx
import { notFound } from 'next/navigation';

export async function generateStaticParams(): Promise<{ category: string; slug: string }[]> {
  return getPublishedArticleParams();
}

export default async function BlogDetailPage({ params }: ArticlePageProps) {
  const { category, slug } = await params;
  const article = await getPublishedArticleBySlug(slug);
  if (!article) {
    notFound();
  }

  const canonicalCategory = article.category?.slug || category;
  const [{ categories }, relatedArticles] = await Promise.all([
    getPublishedBlogSnapshot(),
    getPublishedRelatedArticles(slug, 3),
  ]);

  const finalArticle = buildCMSArticle(article, canonicalCategory);
  const transformedRelatedArticles = relatedArticles.map((relatedArticle) =>
    buildCMSArticle(relatedArticle, relatedArticle.category?.slug || 'technology')
  );

  return (
    <CMSBlogPost
      article={finalArticle}
      relatedArticles={transformedRelatedArticles}
      categories={categories}
    />
  );
}

export const dynamic = 'force-static';
export const dynamicParams = false;
The important part is the shape of it: generateStaticParams precomputes every known route, force-static plus dynamicParams = false pins the pages to static output, and there is no revalidate export, so nothing refreshes on a timer.
Static rendering helped, but it wasn't the whole story. The bigger shift was Server Components and how much client code they let me stop shipping.
In a traditional React app, the browser gets the data fetching logic, the GraphQL client, and the transformation layer even when the page is basically static content.
Server Components eliminate this entirely:
// Traditional Client Component
'use client'
import { useQuery } from '@apollo/client'
import { GET_BLOG_POSTS } from '@/graphql/queries'

export default function BlogList() {
  // Ships GraphQL client + query logic to browser
  const { data, loading, error } = useQuery(GET_BLOG_POSTS)

  if (loading) return <div>Loading...</div>

  return (
    <div>
      {data?.articles.map(article => (
        <ArticleCard key={article.slug} article={article} />
      ))}
    </div>
  )
}
// Server Component (This blog's approach)
import { getBlogListingDataOptimized } from '@/lib/strapi-server'

export default async function BlogList() {
  // Runs on server, zero client JavaScript
  const { articles } = await getBlogListingDataOptimized()

  return (
    <div>
      {articles.map(article => (
        <ArticleCard key={article.slug} article={article} />
      ))}
    </div>
  )
}
The impact: Server Components eliminate data fetching logic from the client bundle entirely. For this blog, that means the initial JavaScript bundle is around 180KB instead of the 400-600KB typical for React apps with client-side data fetching.
The cache setup that has held up best here has three layers:
// next.config.mjs
const nextConfig = {
  async headers() {
    return [
      {
        source: '/_next/static/(.*)',
        headers: [
          {
            key: 'Cache-Control',
            value: 'public, max-age=31536000, immutable'
          }
        ]
      },
      {
        source: '/_next/image(.*)',
        headers: [
          {
            key: 'Cache-Control',
            value: 'public, max-age=86400, s-maxage=86400'
          }
        ]
      }
    ]
  }
}
// lib/blog/published-snapshot.ts
import { cache } from 'react'
import { getBlogListingDataOptimized } from '@/lib/strapi-server'

const loadPublishedBlogSnapshot = cache(async () => {
  const { articles, categories } = await getBlogListingDataOptimized({
    pageSize: 1000,
  })

  return {
    articles,
    categories: mergeCategories(categories, articles),
  }
})
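React's cache wrapper means every component in a single render pass can ask for the snapshot without repeating the Strapi query. The behavior can be sketched in plain TypeScript (a simplified model of per-request memoization, not React's actual implementation):

```typescript
// Simplified model: the first call runs the loader, every later call
// during the same "request" reuses the same in-flight promise.
function memoizePerRequest<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) {
      cached = loader();
    }
    return cached;
  };
}

// Hypothetical loader standing in for getBlogListingDataOptimized.
let fetchCount = 0;
const loadSnapshot = memoizePerRequest(async () => {
  fetchCount += 1;
  return { articles: ['post-a', 'post-b'], categories: ['technology'] };
});

async function demo(): Promise<void> {
  // Two components asking for the same snapshot trigger one fetch.
  await Promise.all([loadSnapshot(), loadSnapshot()]);
  console.log(fetchCount); // 1
}

demo();
```

In the real app the memo lives for one server render, so a layout, a page, and a sidebar can all call the loader freely.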
// lib/strapi-server.ts
const serverCache = new Map<string, any>()
const requestCache = new Map<string, any>()
const failedRequestCache = new Map<string, { error: Error; timestamp: number }>()

export function clearServerCache(): void {
  serverCache.clear();
  requestCache.clear();
  failedRequestCache.clear();
}
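The failedRequestCache implies that failed Strapi fetches are remembered so they aren't retried on every request. The repo's exact policy isn't reproduced here; a minimal sketch of the pattern, with a hypothetical FAILURE_TTL_MS backoff window, might look like:

```typescript
// Sketch of a failure-backoff cache; the TTL value is an assumption.
const failedRequestCache = new Map<string, { error: Error; timestamp: number }>();
const FAILURE_TTL_MS = 30_000; // hypothetical backoff window

// Returns true if this key failed recently and should not be retried yet.
function isInFailureWindow(key: string, now: number = Date.now()): boolean {
  const failure = failedRequestCache.get(key);
  if (!failure) return false;
  if (now - failure.timestamp > FAILURE_TTL_MS) {
    failedRequestCache.delete(key); // window expired, allow a retry
    return false;
  }
  return true;
}

function recordFailure(key: string, error: Error, now: number = Date.now()): void {
  failedRequestCache.set(key, { error, timestamp: now });
}
```

Wrapping the CMS fetch with a check like this keeps a flapping Strapi instance from being hammered with requests that are going to fail anyway.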
The most satisfying part is still the webhook path. The repo has two different refresh flows: one route for direct path/tag revalidation, and another that can trigger a Vercel deploy hook in production.
// app/api/revalidate-blog/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { revalidatePath, revalidateTag } from 'next/cache'

export async function POST(request: NextRequest) {
  const parsedBody = RevalidateBlogRequestSchema.safeParse(await request.json())
  if (!parsedBody.success) {
    return NextResponse.json({ message: 'Invalid payload' }, { status: 400 })
  }
  const body = parsedBody.data

  switch (body.model) {
    case 'blog-post':
      revalidatePath('/blog')
      revalidatePath('/')
      if (body.entry?.category?.slug) {
        revalidatePath(`/blog/${body.entry.category.slug}`)
      }
      if (body.entry?.slug && body.entry?.category?.slug) {
        revalidatePath(`/blog/${body.entry.category.slug}/${body.entry.slug}`)
      }
      revalidateTag('blog-posts', 'max')
      break
  }

  return NextResponse.json({ revalidated: true })
}
The checked-in handler also validates a secret before any of this runs. app/api/revalidate/route.ts is the other half of the story: it validates Strapi webhooks, deduplicates events, and can trigger VERCEL_DEPLOY_HOOK_URL in production. So the freshness model here is a mix of targeted revalidation and deploy-driven regeneration, not just timed ISR.
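The deduplication step isn't reproduced above. Its general shape (a sketch with a hypothetical key format and window, not the repo's exact code) is to fingerprint each webhook event and drop repeat deliveries inside a short window:

```typescript
// Sketch of webhook deduplication; key format and window are assumptions.
const seenEvents = new Map<string, number>();
const DEDUP_WINDOW_MS = 10_000; // hypothetical dedup window

// Fingerprint built from fields a Strapi webhook payload typically carries.
function eventKey(model: string, entryId: number, event: string): string {
  return `${model}:${entryId}:${event}`;
}

// Returns true only the first time an event is seen inside the window,
// so a re-delivered webhook doesn't trigger a second deploy.
function shouldProcess(key: string, now: number = Date.now()): boolean {
  const lastSeen = seenEvents.get(key);
  if (lastSeen !== undefined && now - lastSeen < DEDUP_WINDOW_MS) {
    return false; // duplicate delivery, skip revalidation / deploy hook
  }
  seenEvents.set(key, now);
  return true;
}
```

This matters most on the deploy-hook path: Strapi can fire several events for one editorial save, and each deploy costs real build minutes.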
Here are the performance numbers this architecture realistically achieves:
# Blog Performance Metrics
TTFB: ~90ms (average)
FCP: ~0.4s
LCP: ~0.7s
Build Time: <2 minutes for full site
Bundle Size: ~180KB initial load
Core Web Vitals: All green in Lighthouse
Nothing exotic here. This is what a static-first App Router setup with Server Components looks like when it settles down.
I get asked some version of "when do I keep it on the server and when do I move it to the client?" often enough that I ended up with a simple rule of thumb:
// ✅ Server Component - Data fetching and static content
import { getPublishedArticleBySlug } from '@/lib/blog/published-snapshot'

export default async function BlogPost({ slug }: { slug: string }) {
  const article = await getPublishedArticleBySlug(slug)

  return (
    <article>
      <h1>{article.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: article.content }} />
    </article>
  )
}
// ✅ Client Component - Interactivity and state
'use client'
import { useState } from 'react'

export default function ShareButton({ title, url }: ShareButtonProps) {
  const [copied, setCopied] = useState(false)

  const handleShare = () => {
    navigator.clipboard.writeText(url)
    setCopied(true)
    setTimeout(() => setCopied(false), 2000)
  }

  return (
    <button onClick={handleShare}>
      {copied ? 'Copied!' : 'Share'}
    </button>
  )
}
The rule: Server Components for data, Client Components for interactivity. If it doesn't need state or event handlers, leave it on the server.
Server Components follow a simple rule: only ship JavaScript that provides user interactivity.
For this blog, that breakdown looks like:
# Current Bundle Analysis
Total Initial Bundle: ~180KB
- UI Components: ~80KB
- Client Interactions: ~35KB (share buttons, theme toggle)
- Next.js Runtime: ~45KB
- Utilities: ~20KB
# Compare to typical React blog:
Typical Bundle: 400-600KB
- GraphQL Client: ~120KB
- Apollo/React Query: ~80KB
- Data fetching logic: ~100KB
- UI + interactions: ~200KB+
That's the whole trade: the client stops carrying the data layer.
Let me share the hardest debugging session I've had with this architecture. Two weeks after launch, I noticed something odd: updating articles in Strapi wasn't reflecting on the live site.
The webhook was firing. One revalidation path was returning success. But the site still had moments where old content seemed to hang around longer than I expected.
After a few hours of tracing it through, the problem was simpler and more annoying: I was reasoning about one cache layer at a time, while the real system had several.
The part of the code that made this easier to reason about was at least straightforward:
// lib/strapi-server.ts
export function clearServerCache(): void {
  serverCache.clear();
  requestCache.clear();
  failedRequestCache.clear();
}
Clearing the explicit server caches helped, but it also made the broader lesson obvious: freshness here depends on the in-memory caches, the route revalidation, and the production deploy path lining up.
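The failure mode is easy to see in a toy version: a lookup that falls through request cache, then server cache, then origin. This is an illustration, not the repo's code, but it shows why clearing one layer isn't enough:

```typescript
// Two illustrative cache layers in front of an origin fetch.
const requestCache = new Map<string, string>();
const serverCache = new Map<string, string>();

// Falls through the layers; each miss populates the layer above it.
async function getCached(
  key: string,
  fetchOrigin: () => Promise<string>
): Promise<string> {
  const fromRequest = requestCache.get(key);
  if (fromRequest !== undefined) return fromRequest;

  const fromServer = serverCache.get(key);
  if (fromServer !== undefined) {
    requestCache.set(key, fromServer); // promote to the faster layer
    return fromServer;
  }

  const fresh = await fetchOrigin();
  serverCache.set(key, fresh);
  requestCache.set(key, fresh);
  return fresh;
}
```

Clear only serverCache after a CMS update and the stale value still sits in requestCache, which is exactly the one-layer-at-a-time trap from that debugging session.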
After a few rounds of trial and error, this is the rendering configuration the repo actually uses:
// app/blog/page.tsx
export const dynamic = 'force-static';
// app/blog/[category]/page.tsx
export const dynamic = 'force-static';
export const dynamicParams = false;
// app/blog/[category]/[slug]/page.tsx
export const dynamic = 'force-static';
export const dynamicParams = false;
The strategy: precompute known routes, serve them statically, and make refresh behavior explicit when the CMS changes.
This is the sort of monitoring that actually paid for itself:
// next.config.mjs
import withBundleAnalyzer from '@next/bundle-analyzer'

const bundleAnalyzer = withBundleAnalyzer({
  enabled: process.env.ANALYZE === 'true',
  openAnalyzer: true,
  analyzerMode: 'static',
  reportFilename: './analyze/report.html',
})

// Applied by wrapping the config at export time: export default bundleAnalyzer(nextConfig)
What I like about this setup is that the defaults are already pretty good:
# Measurable Architecture Benefits
First Contentful Paint: <0.5s consistently
Lighthouse Score: 95+ performance
Time to Interactive: <1s on 3G
Cumulative Layout Shift: <0.1
Static Routes: /blog, categories, and article pages pre-rendered
Client Bundle: ~180KB initial load
These aren't hero numbers. They're what this architecture gives you when the basics are set up properly.
If I were building this architecture again, here's what I'd change:
I waited too long to make the refresh path obvious. I would add better visibility around which webhook fired, which paths got revalidated, and when a deploy hook was triggered.
Right now the logic is there, but you still have to read the code to understand which route owns freshness in which environment.
Instead of revalidating entire sections, implement more granular cache tags for surgical updates.
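A sketch of what that could look like: derive a tag list per article, attach it when fetching from the CMS, and revalidate only those tags from the webhook. The helper and tag scheme below are hypothetical; fetch's next.tags option and revalidateTag are the real Next.js APIs they would plug into.

```typescript
// Hypothetical tag scheme: one tag per article, one per category,
// plus a broad tag for listing pages.
function tagsForArticle(categorySlug: string, articleSlug: string): string[] {
  return [
    'blog-posts',                      // every listing surface
    `blog-category:${categorySlug}`,   // the category index page
    `blog-post:${articleSlug}`,        // just this article
  ];
}

// On the data side (sketch):
//   fetch(url, { next: { tags: tagsForArticle(category, slug) } })
// In the webhook handler (sketch):
//   revalidateTag(`blog-post:${slug}`)
// instead of revalidatePath over whole sections.
```

The payoff is that an edit to one article no longer invalidates every page that happens to share a route prefix with it.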
If you're deciding between static rendering, ISR, and SSR, here's the framework I'd use: start static for anything whose content is known at build time; add explicit revalidation (or a deploy hook) when that content can change out from under you; reach for SSR only when the output genuinely depends on the request. Timed ISR sits in the middle, but I'd rather refresh on real events than on a clock.
In Part 3 of this series, I'll get into the TypeScript side of this: the kinds of runtime bugs strict mode catches early, the ESLint rules that have actually been worth keeping, and where type safety saves you from expensive checks later.
The combination of static rendering, Server Components, and sane invalidation turns performance into the default behavior instead of a cleanup project.
If you've had to untangle cache invalidation in a real app, you already know the interesting part is never the happy path.
The hard part isn't only making a site fast once. It's keeping it fast while the content and the system keep changing.