Written by
Jagdish Salgotra
Software engineer with 15 years work experience. Skills: Java, Spring Boot, Hibernate, SQL, Linux, Python, Telecom, IoT, Autonomous Systems
Part 3 explains how this repo uses TypeScript strict mode and runtime validation with Zod to make request handling, environment setup, and response contracts more predictable.
This is Part 3 of the "Building a High-Performance Blog" series. In Part 1, I wrote about choosing Next.js over WordPress. In Part 2, I got into the static-first rendering setup. This part is about a quieter layer of the stack: the type and validation guardrails that keep the blog from doing dumb things at runtime.
After the rendering and caching work settled down, the next thing that started to matter was not raw speed. It was confidence.
I didn't want to spend time chasing bugs caused by missing env vars, malformed request bodies, optional fields I treated like required ones, or response objects that quietly drifted out of shape. None of those are glamorous problems. They are just the kind that waste afternoons.
I also don't have a clean vanity metric here. I can't honestly say "strict mode caught 23 bugs" or "Zod reduced incidents by 41%." I distrust those numbers a little when I see them in posts like this anyway. What I can say is simpler: this repo is much calmer because TypeScript handles compile-time shape checks, and Zod handles the places where the outside world can still lie to you.
That split ended up mattering more than any one flag.
The tsconfig.json in this repo is not trying to win a purity contest. It is trying to make common mistakes loud:
{
  "compilerOptions": {
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "useUnknownInCatchVariables": true
    // "exactOptionalPropertyTypes": true
  }
}
strict: true is the obvious one, but the flags that keep paying rent are the narrower ones.
noUncheckedIndexedAccess is a good example. It forces you to admit that array access can fail. That sounds trivial until you remember how often code quietly assumes "there will always be at least one item." In this repo, there are a few places where the code reads like it has already been burned once:
const providers = await getAllCostStatus();

const totalSpend = providers.reduce((sum, provider) => sum + provider.currentSpend, 0);
const totalLimit = providers.reduce(
  (sum, provider) => (provider.limit > 0 ? sum + provider.limit : sum),
  0
);
const percentageUsed = totalLimit > 0 ? (totalSpend / totalLimit) * 100 : 0;
const period = providers[0]?.period || 'unknown';

return NextResponse.json({
  providers,
  totalSpend,
  totalLimit,
  percentageUsed,
  period,
  timestamp: new Date().toISOString(),
} satisfies CostStatusResponse);
That providers[0]?.period || 'unknown' is not exciting code. Good. Exciting code in admin status endpoints is usually a bad sign.
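If the flag's effect isn't obvious from that snippet, here is a minimal standalone sketch (my example, not repo code) of what noUncheckedIndexedAccess changes:

```typescript
// With "noUncheckedIndexedAccess": true, indexing an array yields
// T | undefined rather than T, so the compiler forces an explicit
// fallback or narrowing step before the value can be used as a T.
interface Provider {
  period: string;
}

function firstPeriod(providers: Provider[]): string {
  // providers[0] is typed Provider | undefined here, so optional
  // chaining plus a fallback is required to return a plain string.
  return providers[0]?.period ?? 'unknown';
}
```

Without the flag, `providers[0].period` typechecks and then throws at runtime on an empty array; with it, the compiler refuses to let that assumption go unstated.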
The satisfies CostStatusResponse at the end is another pattern I like a lot. It keeps the response shape honest without forcing weird casts. If I add or remove something carelessly, TypeScript complains immediately. That's exactly the kind of argument I want from the compiler.
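To make the difference from a cast concrete, here is a simplified stand-in (this StatusResponse shape is my toy example, not the repo's actual CostStatusResponse):

```typescript
interface StatusResponse {
  totalSpend: number;
  timestamp: string;
}

// `satisfies` checks the literal against the type without widening it:
// a typo'd or missing key is a compile error, while an `as StatusResponse`
// cast would silently accept the same mistake.
const ok = {
  totalSpend: 42,
  timestamp: new Date().toISOString(),
} satisfies StatusResponse;

// const bad = {
//   totalSpent: 42, // compile error: 'totalSpent' is not in StatusResponse
//   timestamp: new Date().toISOString(),
// } satisfies StatusResponse;
```

The other benefit over an annotation like `const ok: StatusResponse = …` is that `ok` keeps its precise literal type, which matters when later code wants to know more than the interface promises.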
This is the part people blur together when they talk about "type safety."
TypeScript can tell me whether my code is internally consistent. It cannot tell me whether process.env is valid, whether a webhook payload is shaped the way I expect, or whether a user actually sent the fields my handler assumes exist. At that point, the compiler is out of the room. You need runtime validation.
The environment setup in this repo is a straightforward example:
const parsed = serverEnvSchema.safeParse(process.env);

if (!parsed.success) {
  console.error('Invalid server environment variables:');
  console.error(parsed.error.flatten().fieldErrors);
  throw new Error('Invalid server environment variables');
}

return parsed.data;
I like this for two reasons.
First, it fails early. If the server boots with a bad AUTH_SECRET, malformed DATABASE_URL, or missing STRAPI_API_TOKEN, I want that to be a startup problem, not a mystery bug three requests later.
Second, it makes the boundary explicit. There is a server schema and a separate client schema. That matters because public and private config have very different blast radii. The helper below is blunt on purpose:
export function getStrapiToken() {
  if (!isServer) {
    throw new Error('STRAPI_API_TOKEN can only be accessed on server-side');
  }
  if (!serverEnv) {
    throw new Error('Server environment not initialized');
  }
  return serverEnv.STRAPI_API_TOKEN;
}
That is not fancy. It is just the code refusing to be vague about where secrets are allowed to live.
There are tests around this too, which matters more to me than having a nice config screenshot. tests/lib/env-errors.test.ts intentionally loads invalid env combinations and asserts that the module throws. That makes the guardrail real.
I didn't end up using Zod everywhere. I don't think that's a good goal.
The useful pattern in this repo is narrower: validate external inputs at the edges, then work with typed data after that. The comments API is a good example because it has the whole pipeline in one place:
const body = await request.json();
const validation = createCommentSchema.safeParse(body);

if (!validation.success) {
  return NextResponse.json(
    { error: 'Invalid input', details: validation.error },
    { status: 400 }
  );
}

const { articleSlug, content, parentId } = validation.data;

const rateLimit = await checkCommentRateLimit(session.user.id);
if (!rateLimit.allowed) {
  return NextResponse.json(
    {
      error: 'Rate limit exceeded',
      limit: rateLimit.limit,
      remaining: 0,
      reset: rateLimit.reset,
    },
    { status: 429 }
  );
}

const sanitizedContent = sanitizeHtml(content);
This is the kind of route I want in production. It does not pretend the request body is trustworthy. It validates first, narrows the type, then moves on to rate limiting, moderation, and sanitization in a predictable order.
The schema itself is small, which is another thing I like:
export const createCommentSchema = z.object({
  articleSlug: z.string().min(1).max(200),
  content: z.string().min(1).max(2000).trim(),
  parentId: z.string().uuid().optional(),
});
There is no ceremony here. Just the rules the route actually cares about.
And again, the tests make it less theoretical. tests/lib/validation-comments.test.ts checks that invalid UUIDs are rejected and that update content gets trimmed. That may sound minor. It is minor. That's the point. A lot of production stability is just taking small annoyances off the table before they stack up.
One of the more useful things Zod gave me in this repo was the ability to be strict in one place and intentionally looser in another.
For internal admin settings, the patch schema is strict:
const GlobalSettingsPatchSchema = z
  .object({
    quotaLimits: QuotaLimitsSchema.optional(),
    featureToggles: FeatureTogglesSchema.optional(),
    alerts: AlertsSchema.optional(),
    announcementBanner: AnnouncementBannerSchema.optional(),
    llmConfig: LlmConfigSchema.optional(),
    costCeilings: z
      .object({
        groq: CostCeilingSchema.optional(),
        claude: CostCeilingSchema.optional(),
        deepseek: CostCeilingSchema.optional(),
      })
      .partial()
      .optional(),
    rateLimiting: RateLimitingSchema.optional(),
    emailNotifications: EmailNotificationsSchema.optional(),
  })
  .strict();
That makes sense. If the admin UI sends extra keys, I want to know.
But the Strapi revalidation webhook takes a different stance:
const RevalidateBlogRequestSchema = z
  .object({
    model: z.string().optional(),
    entry: z
      .object({
        slug: z.string().trim().min(1).optional(),
        category: z
          .object({
            slug: z.string().trim().min(1).optional(),
          })
          .optional(),
      })
      .optional(),
  })
  .passthrough();
That .passthrough() is doing real work. Webhook payloads tend to pick up extra fields over time, and I don't need to reject a valid event just because Strapi included more metadata than this route cares about.
So the rule is not "always be strict." The rule is "be strict about the parts of the contract you actually own."
That distinction saved me from writing validation that looked impressive and aged badly.
I wouldn't oversell linting as a performance tool. It is mostly a codebase hygiene tool. But hygiene matters once the project stops being tiny.
The config here is doing a few practical things:
rules: {
  'prettier/prettier': 'error',
  '@typescript-eslint/no-unused-vars': [
    'warn',
    { vars: 'all', varsIgnorePattern: '^_', args: 'after-used', argsIgnorePattern: '^_' },
  ],
  'import/order': [
    'error',
    {
      groups: ['builtin', 'external', 'internal', ['parent', 'sibling'], 'index', 'object'],
      'newlines-between': 'always',
      alphabetize: {
        order: 'asc',
        caseInsensitive: true,
      },
    },
  ],
  'unused-imports/no-unused-imports': 'error',
}
This is not heroic, but it keeps dead imports, drifting file structure, and "I'll clean that up later" noise from accumulating. That's worth more than it sounds like once you have enough routes, utilities, and tests moving at once.
I also like that the config makes room for reality. Test files get looser rules where Jest patterns need them. Tailwind plugin rules are explicitly turned off until v4 support is stable. That's a better trade than pretending every rule should be equally enforced forever.
One reason I trust this setup more is that it doesn't pretend to be finished.
allowJs is still on. skipLibCheck is still on. exactOptionalPropertyTypes is still commented out. If I were trying to write a self-congratulatory post, I would hide that. But it is the honest state of the repo.
And honestly, I think that's fine.
The job was not to make the type system aesthetically pure. The job was to catch the expensive mistakes first: missing env vars, malformed request bodies, optional fields treated as required, and response shapes quietly drifting out of sync.
This stack does that already.
Would I like to tighten it more over time? Yes. Especially exactOptionalPropertyTypes. But I would still make that change the same way the current repo got here: one friction point at a time, not as a giant "strictness migration" that burns a week and leaves everybody annoyed.
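For reference, this is the kind of mistake exactOptionalPropertyTypes would surface. A standalone sketch (it compiles everywhere today; the second assignment only becomes an error once the flag is on):

```typescript
interface UpdatePayload {
  parentId?: string;
}

// Without exactOptionalPropertyTypes, both of these typecheck, even though
// `{}` and `{ parentId: undefined }` are observably different objects once
// they are spread into a database update or serialized.
const omitted: UpdatePayload = {};
const explicit: UpdatePayload = { parentId: undefined };

// With the flag on, the `explicit` assignment is a compile error unless the
// property is declared as `parentId?: string | undefined`.
```

That runtime difference is exactly why the flag exists: `'parentId' in omitted` is false while `'parentId' in explicit` is true, and code that checks key presence will treat the two payloads differently.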
If I were doing this part again from scratch, I'd keep the same broad rule:
TypeScript for internal consistency. Zod for external boundaries. Tests for the guardrails that would be painful to debug in production.
That mix has been much more useful than trying to force one tool to do all three jobs.
It also fits the kind of app this is. A content-heavy site with a handful of sensitive server routes does not need exotic type-level gymnastics. It needs boring contracts, clear failure modes, and enough friction that bad inputs don't casually reach the interesting parts of the system.
That is not glamorous engineering. It is still the kind that holds up.
In Part 4, I'll get into the styling side of this stack: Tailwind CSS v4, the zero-runtime tradeoffs, and the parts of frontend styling work that actually affect performance once the JavaScript and caching story are under control.