Error Handling in AI-Built Apps: The Patterns AI Never Generates
AI tools generate the happy path. Production is the unhappy path. Here are six error handling patterns that AI consistently skips, and the code to fix each one.
Your AI-built app works flawlessly. The demo is polished. The auth flow is smooth. The dashboard renders crisp data. Everything functions exactly as expected.
Then a user's internet drops mid-checkout. An API returns a 500 while they're loading their dashboard. A session token expires while they're halfway through a form. A third-party service rate-limits you on launch day.
AI coded the world where everything goes right. Your users live in the world where everything goes wrong.
Every one of those scenarios produces the same result in AI-generated code: a blank screen, a cryptic error, or a silent failure that leaves the user staring at a spinner that will never stop.
AI Codes the Happy Path
AI coding tools optimize for one question: "does it work?" Not "what happens when it breaks?"
This isn't a knock on the tools. It's a structural reality. When you prompt Cursor or Copilot to "build a dashboard that fetches user data," you get a clean component that fetches data and renders it. You don't get the loading skeleton. You don't get the empty state. You don't get the error boundary. You don't get the retry logic. You get the happy path, because that's what you asked for.
The data tells the story. A 2025 Pieces developer survey found that 63% of developers spent more time debugging AI-generated code than they would have spent writing it manually. A CodeRabbit analysis of millions of pull requests showed AI-generated code produces 1.7x more issues than human-written code. The majority of those issues aren't logic errors. They're missing error handling, unhandled edge cases, and absent defensive code.
The happy path problem compounds. Each AI-generated component that skips error handling creates a potential crash point. A typical AI-built app with 20 data-fetching components and zero error boundaries has 20 places where a single network hiccup can kill the entire page.
Production is the unhappy path. Here's what your users will encounter that your AI-generated code doesn't handle:
- Network failures. WiFi drops, cellular dead zones, flaky connections.
- Expired tokens. Sessions time out while the tab is open.
- Empty states. New users with zero data, filtered views with no results.
- Malformed data. Unexpected nulls, missing fields, type mismatches.
- Rate limits. Third-party APIs throttling you under load.
- Server errors. 500s, timeouts, partial responses.
Every one of these is a blank screen or a cryptic Unhandled Runtime Error in AI-generated code. Let's fix that.
Six Patterns AI Never Generates
Each of these patterns addresses a specific failure mode. Each includes a realistic before (what AI generates) and after (what production needs). All code is TypeScript targeting Next.js App Router.
1. API Error Boundaries
The problem: AI generates bare fetch calls with no error handling. When the API returns a non-200 response, the component either crashes or silently renders stale or undefined data.
Before (what AI generates):
// app/dashboard/page.tsx
export default async function DashboardPage() {
const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/projects`)
const data = await res.json()
return (
<div>
{data.projects.map((p: Project) => (
<ProjectCard key={p.id} project={p} />
))}
</div>
)
}

This works perfectly until res is a 401, a 500, or a network timeout. Then res.json() either throws or returns an error object, and data.projects.map blows up with Cannot read properties of undefined.
After (production-ready):
// lib/api/client.ts
type ApiResult<T> =
| { ok: true; data: T }
| { ok: false; error: string; status: number }
export async function api<T>(
path: string,
init?: RequestInit
): Promise<ApiResult<T>> {
try {
const res = await fetch(
`${process.env.NEXT_PUBLIC_API_URL}${path}`,
init
)
if (!res.ok) {
const message = await res.text().catch(() => 'Unknown error')
return { ok: false, error: message, status: res.status }
}
const data = (await res.json()) as T
return { ok: true, data }
} catch (err) {
// Network failure, DNS error, timeout
return {
ok: false,
error: err instanceof Error ? err.message : 'Network error',
status: 0,
}
}
}

// app/dashboard/page.tsx
import { api } from '@/lib/api/client'
import { redirect } from 'next/navigation'
export default async function DashboardPage() {
const result = await api<{ projects: Project[] }>('/api/projects')
if (!result.ok) {
if (result.status === 401) redirect('/login')
return <ErrorState message="Failed to load projects. Please try again." />
}
if (result.data.projects.length === 0) {
return <EmptyState message="No projects yet." cta="Connect a repo" />
}
return (
<div>
{result.data.projects.map((p) => (
<ProjectCard key={p.id} project={p} />
))}
</div>
)
}

The ApiResult type forces every consumer to handle the error case. You can't access .data without first checking .ok. This is the single highest-leverage pattern on this list.
2. React Error Boundaries
The problem: Without an error boundary, a single component crash kills the entire page. AI never generates error boundaries because the code it writes doesn't crash until it hits production data.
Before (what AI generates):
Nothing. AI generates components. It doesn't generate the safety net around them.
After (production-ready):
Next.js App Router has built-in error boundary support via error.tsx files. Add one at every route segment that matters.
// app/dashboard/error.tsx
'use client'
import { useEffect } from 'react'
export default function DashboardError({
error,
reset,
}: {
error: Error & { digest?: string }
reset: () => void
}) {
useEffect(() => {
// Send to your error reporting service
console.error('Dashboard error:', error.message)
}, [error])
return (
<div className="flex flex-col items-center justify-center min-h-[400px] gap-4">
<h2 className="text-xl font-semibold">Something went wrong</h2>
<p className="text-muted-foreground max-w-md text-center">
We hit an unexpected error loading your dashboard.
Your data is safe. This is a display issue.
</p>
<button
onClick={reset}
className="px-4 py-2 bg-primary text-primary-foreground rounded-md"
>
Try again
</button>
</div>
)
}

This catches any unhandled error in the /dashboard route tree and renders a recovery UI instead of a blank screen. The reset function re-renders the route segment, giving the user a one-click recovery path.
Add a global fallback too:
// app/global-error.tsx
'use client'
export default function GlobalError({
error,
reset,
}: {
error: Error & { digest?: string }
reset: () => void
}) {
return (
<html>
<body className="flex items-center justify-center min-h-screen">
<div className="text-center">
<h2 className="text-xl font-semibold mb-2">Something went wrong</h2>
<button onClick={reset} className="underline">
Try again
</button>
</div>
</body>
</html>
)
}

3. Loading and Empty States
The problem: AI-generated components render data or nothing. There's no loading state while data is being fetched, no empty state when there's no data, and no error state when the fetch fails. The user sees a flash of blank content or a layout shift.
Before (what AI generates):
// components/ProjectList.tsx
'use client'
import { useEffect, useState } from 'react'
export function ProjectList() {
const [projects, setProjects] = useState([])
useEffect(() => {
fetch('/api/projects')
.then((res) => res.json())
.then((data) => setProjects(data.projects))
}, [])
return (
<ul>
{projects.map((p: any) => (
<li key={p.id}>{p.name}</li>
))}
</ul>
)
}

Three problems: no loading indicator, no handling for an empty list, and a failed fetch silently produces an empty array (or crashes on undefined.map).
After (production-ready):
// components/ProjectList.tsx
'use client'
import { useEffect, useState } from 'react'
import { api } from '@/lib/api/client'
type Status = 'loading' | 'error' | 'empty' | 'ready'
export function ProjectList() {
const [projects, setProjects] = useState<Project[]>([])
const [status, setStatus] = useState<Status>('loading')
useEffect(() => {
async function load() {
const result = await api<{ projects: Project[] }>('/api/projects')
if (!result.ok) {
setStatus('error')
return
}
setProjects(result.data.projects)
setStatus(result.data.projects.length === 0 ? 'empty' : 'ready')
}
load()
}, [])
if (status === 'loading') {
return (
<ul className="space-y-3">
{Array.from({ length: 3 }).map((_, i) => (
<li key={i} className="h-12 bg-muted animate-pulse rounded-md" />
))}
</ul>
)
}
if (status === 'error') {
return (
<div className="text-center py-8 text-muted-foreground">
<p>Failed to load projects.</p>
<button
onClick={() => window.location.reload()}
className="underline mt-2"
>
Retry
</button>
</div>
)
}
if (status === 'empty') {
return (
<div className="text-center py-8">
<p className="text-muted-foreground">No projects yet.</p>
<a href="/new" className="text-primary underline mt-2 inline-block">
Connect your first repo
</a>
</div>
)
}
return (
<ul className="space-y-3">
{projects.map((p) => (
<li key={p.id} className="p-3 border rounded-md">
{p.name}
</li>
))}
</ul>
)
}

Four states, explicitly modeled. The loading skeleton matches the shape of the final content, preventing layout shift. The empty state gives the user a next step instead of a blank list. The error state offers recovery.
4. Auth Token Expiry
The problem: AI-generated auth code handles the initial login flow. It almost never handles token expiry. When a session expires (and sessions always expire) the user gets a blank screen, an infinite redirect loop, or a cryptic 401 that they don't know how to fix.
Before (what AI generates):
// middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
import { createServerClient } from '@supabase/ssr'
export async function middleware(request: NextRequest) {
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!,
{ cookies: { /* cookie config */ } }
)
const { data: { user } } = await supabase.auth.getUser()
if (!user && request.nextUrl.pathname.startsWith('/dashboard')) {
return NextResponse.redirect(new URL('/login', request.url))
}
}

This works on day one. But the cookie config never writes refreshed tokens back to the browser, so once the access token expires, supabase.auth.getUser() fails silently and the user gets bounced to /login without understanding why. Worse, if the login page also checks auth, you get an infinite redirect loop.
After (production-ready):
// middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
import { createServerClient } from '@supabase/ssr'
const PUBLIC_PATHS = ['/login', '/signup', '/auth/callback', '/']
export async function middleware(request: NextRequest) {
let response = NextResponse.next({ request })
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!,
{
cookies: {
getAll() {
return request.cookies.getAll()
},
setAll(cookiesToSet) {
// Write refreshed tokens back to the response
cookiesToSet.forEach(({ name, value, options }) => {
request.cookies.set(name, value)
response.cookies.set(name, value, options)
})
},
},
}
)
// getUser() triggers token refresh if the access token is expired
// The refreshed tokens are written back via setAll above
const { data: { user }, error } = await supabase.auth.getUser()
const { pathname } = request.nextUrl
// Exact match for '/', prefix match for the rest; a bare
// startsWith check would match every path against '/'
const isPublicPath = PUBLIC_PATHS.some((p) =>
p === '/' ? pathname === '/' : pathname === p || pathname.startsWith(`${p}/`)
)
if (!user && !isPublicPath) {
// Preserve the intended destination so login can redirect back
const returnTo = request.nextUrl.pathname + request.nextUrl.search
const loginUrl = new URL('/login', request.url)
loginUrl.searchParams.set('returnTo', returnTo)
return NextResponse.redirect(loginUrl)
}
return response
}
export const config = {
matcher: ['/((?!_next/static|_next/image|favicon.ico|api/webhooks).*)'],
}

The critical difference: the setAll callback writes refreshed tokens back to the response cookies. Without this, Supabase refreshes the token internally but the browser never receives the new cookies, and the next request fails again.
The returnTo parameter is small but critical. Without it, successful login dumps the user back to the dashboard root. With it, they land exactly where they were before the session expired. This matters especially for deep links and shared URLs.
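On the login side, don't redirect to returnTo blindly: an absolute URL in that parameter is an open-redirect vector. A minimal guard, sketched as a hypothetical helper (the name and fallback are mine, not part of the middleware above):

```typescript
// lib/auth/return-to.ts
// Only accept same-origin relative paths; anything else gets the fallback.
export function safeReturnTo(
  raw: string | null,
  fallback = '/dashboard'
): string {
  if (!raw) return fallback
  // Reject absolute URLs ("https://evil.com") and protocol-relative
  // URLs ("//evil.com"); accept only paths like "/projects/42?tab=settings".
  if (!raw.startsWith('/') || raw.startsWith('//')) return fallback
  return raw
}
```

After a successful login, redirect to safeReturnTo(searchParams.get('returnTo')) instead of the raw parameter.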
5. Network Failure Handling
The problem: AI generates fetch calls. When the network is down, fetch throws. The thrown error is unhandled. The component crashes.
Before (what AI generates):
async function saveProject(data: ProjectData) {
const res = await fetch('/api/projects', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(data),
})
return res.json()
}If the user's WiFi drops while this fires, the promise rejects and the error bubbles up as an unhandled rejection. No retry. No user feedback. Just a silent failure.
After (production-ready):
// lib/api/retry.ts
interface RetryOptions {
maxRetries?: number
baseDelay?: number
retryableStatuses?: number[]
}
export async function fetchWithRetry(
url: string,
init?: RequestInit,
options: RetryOptions = {}
): Promise<Response> {
const {
maxRetries = 3,
baseDelay = 1000,
retryableStatuses = [408, 429, 500, 502, 503, 504],
} = options
let lastError: Error | null = null
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
const res = await fetch(url, init)
if (!retryableStatuses.includes(res.status)) {
return res
}
// Retryable server error, fall through to retry logic
lastError = new Error(`HTTP ${res.status}`)
} catch (err) {
// Network failure (offline, DNS, timeout)
lastError = err instanceof Error ? err : new Error('Network error')
}
if (attempt < maxRetries) {
// Exponential backoff with jitter
const delay = baseDelay * Math.pow(2, attempt) + Math.random() * 500
await new Promise((resolve) => setTimeout(resolve, delay))
}
}
throw lastError!
}

// Usage in a component
async function saveProject(data: ProjectData) {
try {
const res = await fetchWithRetry('/api/projects', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(data),
})
if (!res.ok) {
const error = await res.text()
return { success: false, error }
}
return { success: true, data: await res.json() }
} catch (err) {
return {
success: false,
error: 'Unable to save. Check your connection and try again.',
}
}
}

Key details: exponential backoff prevents hammering a struggling server, jitter prevents a thundering herd when multiple clients retry simultaneously, and retries apply only to network errors and the retryable statuses above (timeouts, rate limits, 5xx), never to other 4xx client errors, which retrying won't fix. Mutations (POST/PUT/DELETE) should only be retried if the operation is idempotent.
6. Form Validation
The problem: AI-generated forms either have client-side-only validation (bypassable with curl) or no validation at all. When invalid data reaches your API, you get database errors, type mismatches, or corrupted records.
Before (what AI generates):
// app/api/projects/route.ts
export async function POST(request: Request) {
const body = await request.json()
const { data, error } = await supabase
.from('projects')
.insert({ name: body.name, url: body.url })
return Response.json(data)
}

No validation. body.name could be undefined, an empty string, or a 10,000-character injection payload. body.url could be anything.
After (production-ready):
// lib/validations/project.ts
import { z } from 'zod'
// Shared schema, used by BOTH client and server
export const createProjectSchema = z.object({
name: z
.string()
.min(1, 'Project name is required')
.max(100, 'Project name must be under 100 characters')
.trim(),
url: z
.string()
.url('Must be a valid URL')
.regex(
/^https:\/\/github\.com\//,
'Must be a GitHub repository URL'
),
})
export type CreateProjectInput = z.infer<typeof createProjectSchema>

// app/api/projects/route.ts
import { createProjectSchema } from '@/lib/validations/project'
export async function POST(request: Request) {
let body: unknown
try {
body = await request.json()
} catch {
return Response.json(
{ error: 'Invalid JSON body' },
{ status: 400 }
)
}
const parsed = createProjectSchema.safeParse(body)
if (!parsed.success) {
return Response.json(
{ error: 'Validation failed', details: parsed.error.flatten() },
{ status: 400 }
)
}
const { data, error } = await supabase
.from('projects')
.insert(parsed.data)
.select()
.single()
if (error) {
return Response.json(
{ error: 'Failed to create project' },
{ status: 500 }
)
}
return Response.json(data, { status: 201 })
}

// components/NewProjectForm.tsx (client-side uses the SAME schema)
'use client'
import { useForm } from 'react-hook-form'
import { zodResolver } from '@hookform/resolvers/zod'
import {
createProjectSchema,
type CreateProjectInput,
} from '@/lib/validations/project'
export function NewProjectForm() {
const {
register,
handleSubmit,
formState: { errors, isSubmitting },
} = useForm<CreateProjectInput>({
resolver: zodResolver(createProjectSchema),
})
async function onSubmit(data: CreateProjectInput) {
const res = await fetch('/api/projects', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(data),
})
if (!res.ok) {
const err = await res.json()
// Handle server validation errors
return
}
// Success: redirect or update UI
}
return (
<form onSubmit={handleSubmit(onSubmit)} className="space-y-4">
<div>
<label htmlFor="name" className="block text-sm font-medium">
Project name
</label>
<input
id="name"
{...register('name')}
className="mt-1 block w-full border rounded-md px-3 py-2"
/>
{errors.name && (
<p className="text-sm text-red-500 mt-1">{errors.name.message}</p>
)}
</div>
<div>
<label htmlFor="url" className="block text-sm font-medium">
GitHub repo URL
</label>
<input
id="url"
{...register('url')}
className="mt-1 block w-full border rounded-md px-3 py-2"
placeholder="https://github.com/you/your-repo"
/>
{errors.url && (
<p className="text-sm text-red-500 mt-1">{errors.url.message}</p>
)}
</div>
<button
type="submit"
disabled={isSubmitting}
className="px-4 py-2 bg-primary text-primary-foreground rounded-md disabled:opacity-50"
>
{isSubmitting ? 'Creating...' : 'Create project'}
</button>
</form>
)
}

The key insight is the shared schema. Define your Zod schema once in a shared file. Import it in both your API route and your form component. Client-side validation gives instant feedback. Server-side validation prevents bypass. Same schema, zero drift.
The Console.log Problem
Every AI-generated codebase has the same tell: console.log everywhere. During development, that's fine. In production, it's three problems at once.
No visibility. Console output in a serverless function disappears into the void unless you have log aggregation set up. When your app breaks at 2 AM, those console.log("user data:", userData) statements help exactly no one.
Security risk. AI-generated console.log statements routinely dump full request bodies, auth tokens, user data, and API responses to stdout. In any environment with log aggregation, that's sensitive data sitting in plaintext.
Noise. When every function logs every variable, finding the one log that matters is like finding a needle in a haystack of other needles.
Replace console.log with structured logging. You don't need a heavyweight library. A simple wrapper gives you levels, context, and the ability to turn off debug output in production without a find-and-replace across your entire codebase.
// lib/logger.ts
type LogLevel = 'debug' | 'info' | 'warn' | 'error'
const LEVELS: Record<LogLevel, number> = {
debug: 0,
info: 1,
warn: 2,
error: 3,
}
// Fall back to 'info' when LOG_LEVEL is unset or invalid
const MIN_LEVEL = LEVELS[process.env.LOG_LEVEL as LogLevel] ?? LEVELS.info
function log(level: LogLevel, message: string, meta?: Record<string, unknown>) {
if (LEVELS[level] < MIN_LEVEL) return
const entry = {
level,
message,
timestamp: new Date().toISOString(),
...meta,
}
if (level === 'error') {
console.error(JSON.stringify(entry))
} else {
console.log(JSON.stringify(entry))
}
}
export const logger = {
debug: (msg: string, meta?: Record<string, unknown>) => log('debug', msg, meta),
info: (msg: string, meta?: Record<string, unknown>) => log('info', msg, meta),
warn: (msg: string, meta?: Record<string, unknown>) => log('warn', msg, meta),
error: (msg: string, meta?: Record<string, unknown>) => log('error', msg, meta),
}

// Before (AI-generated)
console.log("fetching projects for user", userId)
console.log("got projects", projects)
console.log("error!", error)
// After (production-ready)
logger.info('Fetching projects', { userId })
logger.info('Projects loaded', { count: projects.length })
logger.error('Failed to load projects', { userId, error: error.message })

Structured JSON logs work with every log aggregation service out of the box. You get searchable, filterable, parseable logs instead of string concatenation.
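To close the security gap described above, it's worth adding a redaction pass for known-sensitive keys before metadata reaches the logger. A minimal sketch (the key list is illustrative; extend it for your own payload shapes):

```typescript
// lib/redact.ts
// Replace known-sensitive values in log metadata before they hit stdout.
const SENSITIVE_KEYS = new Set([
  'password',
  'token',
  'authorization',
  'apikey',
  'secret',
])

export function redact(
  meta: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(meta)) {
    if (SENSITIVE_KEYS.has(key.toLowerCase())) {
      out[key] = '[REDACTED]'
    } else if (value && typeof value === 'object' && !Array.isArray(value)) {
      // Recurse into nested objects so buried secrets are caught too
      out[key] = redact(value as Record<string, unknown>)
    } else {
      out[key] = value
    }
  }
  return out
}
```

Usage: logger.info('Login attempt', redact({ email, password })). Wiring redact into the logger itself (rather than at each call site) would make the protection automatic.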
Finding Error Handling Gaps Automatically
You shouldn't have to find these gaps by hand. A few ESLint rules catch the most common issues:
{
"rules": {
"no-console": ["warn", { "allow": ["warn", "error"] }],
"@typescript-eslint/no-floating-promises": "error",
"@typescript-eslint/no-misused-promises": "error"
}
}

no-floating-promises is the single most useful rule for AI-generated code. It catches every fetch() call without an await or .catch(), the exact pattern that leads to unhandled promise rejections in production.
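The pattern that rule flags looks harmless in review: a fire-and-forget call whose rejection nobody is listening for. A minimal illustration (track is a stand-in for any async call that can reject):

```typescript
async function track(event: string): Promise<void> {
  // Stand-in for a network call that can reject
  if (!event) throw new Error('empty event')
}

// Flagged by no-floating-promises: the rejection would be unhandled.
// track('')

// Fixed: the caller awaits the promise and handles failure explicitly.
async function trackSafely(event: string): Promise<boolean> {
  try {
    await track(event)
    return true
  } catch {
    return false
  }
}
```

The rule forces every promise to be awaited, returned, or explicitly handled with .catch(), so failures surface where you can see them instead of as unhandled rejections.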
Beyond linting, tools like FinishKit scan your entire codebase for unhandled async operations, missing error boundaries, console.log usage, and routes without validation. It prioritizes findings by impact: auth error handling gaps rank higher than cosmetic UI issues.
The prioritization matters. Not all error handling is equally urgent:
- Auth and session errors. Users locked out of your app.
- Data mutation errors. Lost writes, corrupted state.
- Data fetching errors. Blank screens, stale data.
- UI rendering errors. Broken layouts, missing states.
Fix them in that order. Auth errors affect every user on every visit. A broken loading skeleton is annoying but survivable.
Ship the Unhappy Path
Error handling isn't glamorous. Nobody screenshots your error boundary. Nobody tweets about your retry logic. But it's the difference between an app that works in a demo and an app that works in production.
The six patterns above (API error boundaries, React error boundaries, loading and empty states, auth token expiry, network failure handling, and form validation) cover roughly 90% of what AI misses. They're not complex. They're not clever. They're just the work that AI skips because nobody prompted it.
Add them, and your users will never see a blank screen again.
More on shipping AI-built apps:
- How to Ship Your AI-Built App: the full production readiness guide
- Testing AI-Generated Code: minimum viable test strategy
- Deploy Your Next.js App to Production: deployment checklist
- AI Code Security Vulnerabilities: the security gaps AI introduces