
Your Next.js App Does Not Have a Performance Problem. It Has a Data Fetching Problem.

Most Next.js performance issues trace back to one root cause: nobody decided how data fetching would work before the first page was built. Here is the strategy we define at the start of every project.

Velox Studio · 7 min read

Most Next.js performance problems are not performance problems.

They are data fetching problems that look like performance problems by the time someone notices them.

The page loads slowly. The user sees a flash of empty content. The dashboard feels sluggish even though the individual components are fast. The API is getting hammered with requests for data that has not changed in six hours.

Every one of these symptoms has the same root cause. Nobody decided how data fetching would work before the first page was built. So each developer made their own call. Some pages use server components. Some use useEffect. Some use React Query. Some call the API directly. Some call it through an abstraction layer. The result is an application that fetches data in four different ways with no consistent pattern and no shared caching layer.

This is completely avoidable. It just requires one conversation at the start of the project, before the first page exists.

Here is the data fetching strategy we define at Velox Studio before we write a single line of code.

The Decision That Everything Else Depends On

The first question is not which library to use. It is where the data fetching happens.

In Next.js, you have two fundamentally different options. Server-side fetching happens in Server Components, Server Actions, or (in the Pages Router) getServerSideProps. The data arrives with the page. The user never sees an empty state. The HTML is rendered with real content.

Client-side fetching happens in the browser after the component mounts. The user sees the page structure first, then the data fills in. This is where the loading spinners and skeleton screens come from.

Both are valid. Neither is universally correct. The mistake is mixing them without intention.

The rule we apply is straightforward. If the data is required for the page to be useful, it is fetched on the server. If the data is secondary, user-triggered, or frequently updated after the initial load, it is fetched on the client.

A product page needs the product details, the price, and the images on the first render. That is server-side. The live inventory count that updates every 30 seconds is client-side. A dashboard needs the summary metrics on load. That is server-side. The real-time feed that updates as users take actions is client-side.

Make this decision for every data type in the application before writing a single fetch call. Write it down. Put it in the project documentation. Enforce it in code review.
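That decision log can even live in code, where it is hard to ignore in review. A minimal sketch of the idea; every name and data type here is illustrative, not a prescribed API:

```typescript
// Illustrative sketch: record the server/client decision for each data type
// so it can be referenced in code review. All names are hypothetical.
type FetchSide = "server" | "client";

interface DataTypeDecision {
  name: string;
  requiredForFirstRender: boolean; // the page is not useful without it
  side: FetchSide;
  rationale: string;
}

// The rule from the text: required for first render → server;
// secondary, user-triggered, or frequently updated → client.
function decideSide(requiredForFirstRender: boolean): FetchSide {
  return requiredForFirstRender ? "server" : "client";
}

const productPageDecisions: DataTypeDecision[] = [
  {
    name: "product details",
    requiredForFirstRender: true,
    side: decideSide(true),
    rationale: "rendered into the initial HTML",
  },
  {
    name: "live inventory count",
    requiredForFirstRender: false,
    side: decideSide(false),
    rationale: "refreshes every 30 seconds after load",
  },
];
```

However the log is stored, the point is the same: the decision is written once, visible to everyone, and checkable in review.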

Server Components Are Not Always the Answer

Since version 13, Next.js has pushed hard on Server Components, and the community has followed. The result is that a lot of teams default to Server Components for everything without thinking about the tradeoffs.

Server Components are excellent for initial page loads. They eliminate client-side waterfalls, reduce JavaScript bundle size, and deliver content faster to the user. For pages where the data does not change based on user interaction, they are almost always the right choice.

But they have real limitations that matter in production applications.

They cannot access browser APIs. They cannot use state or effects. They cannot respond to user interactions directly. And critically, they have no client-side cache: data fetched in a Server Component is not held in the browser between navigations the way a library like React Query holds it.

If you build an application where every data fetch is a Server Component request, you will hit a point where a user action needs to refresh data, and you have no clean way to do it without a full page navigation. Or you will find that the same data is being fetched on every request because there is no client-side cache holding it between navigations.

The answer is not to abandon Server Components. It is to combine them correctly with client-side fetching for the right use cases.

The Client-Side Layer Needs One Tool, Used Consistently

For client-side data fetching, pick one tool and use it everywhere. Not three tools. One.

We use React Query on every project that requires client-side data fetching. It handles caching, background refetching, loading states, error states, and stale data invalidation out of the box. Building any of this manually is a waste of developer time and produces inconsistent results across the application.

The consistency matters as much as the tool choice. When every client-side fetch goes through React Query, the caching layer is unified. Data fetched on one page is available instantly on another page that needs the same data. Background refetching happens on a predictable schedule. Cache invalidation is explicit and controlled.

When different pages use different tools, there is no unified cache. The same user profile data gets fetched three times on three different pages because each page has its own data fetching logic and does not know about the others.

Define the tool at the start of the project. Document how it is configured. Set up the QueryClient with sensible defaults for stale time and garbage collection time (formerly cacheTime, renamed gcTime in React Query v5). And never let a second tool for client-side fetching enter the codebase.
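As a sketch, assuming @tanstack/react-query v5, a shared defaults object and a query-key helper might look like this. The values are illustrative, not recommendations:

```typescript
// Illustrative React Query defaults, assuming @tanstack/react-query v5.
// In the app shell this would be passed as:
//   new QueryClient({ defaultOptions: queryDefaults })
const queryDefaults = {
  queries: {
    staleTime: 60_000,           // treat data as fresh for 1 minute
    gcTime: 5 * 60_000,          // keep unused cache entries for 5 minutes
    retry: 1,                    // one retry before surfacing the error state
    refetchOnWindowFocus: false, // make refetching explicit, not incidental
  },
};

// Shared query-key helpers keep the cache unified: every page that needs the
// same user profile uses the same key, so the data is fetched once and reused.
const userProfileKey = (id: string) => ["user-profile", id] as const;
```

The key helper is the part teams most often skip. Without it, two pages spell the key slightly differently and silently maintain two cache entries for the same data.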

Waterfall Fetching Is the Most Common Performance Killer

A waterfall happens when one data fetch depends on the result of another. Component A fetches the user. Component B fetches the user's projects, but only after Component A finishes because it needs the user ID. Component C fetches the project details, but only after Component B finishes. Three sequential round trips to the server where one parallel request would have been enough.

Waterfalls are easy to miss in development because local API latency is near zero. They appear in production when API calls are taking 200 to 400 milliseconds each and suddenly a page that felt fine in development takes two seconds to load for a real user.

The fix is to identify data dependencies before building the components that need that data. Draw the dependency graph. If Component B needs data from Component A, ask whether that dependency is necessary. Can Component B receive the user ID as a prop instead of fetching the user itself? Can both fetches happen in parallel at the page level and the results be passed down?

In Next.js, Promise.all in a Server Component is the cleanest solution for parallel server-side fetches. For client-side fetches, React Query's parallel queries handle this correctly. The key is catching the waterfall at the design stage, not the debugging stage.
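The difference can be sketched with mocked fetchers standing in for real API calls; the function names and response shapes here are hypothetical:

```typescript
// Mocked fetchers standing in for real API calls; shapes are hypothetical.
async function getUser(id: string): Promise<{ id: string; name: string }> {
  return { id, name: "Ada" };
}
async function getProjects(userId: string): Promise<{ owner: string }[]> {
  return [{ owner: userId }];
}

// Waterfall: the second fetch waits on the first for no good reason —
// the user ID was already known (e.g. from the route or session).
async function loadDashboardWaterfall(userId: string) {
  const user = await getUser(userId);
  const projects = await getProjects(user.id); // blocked on the first round trip
  return { user, projects };
}

// Parallel: both fetches start immediately, so total latency is the slower
// of the two instead of the sum. This is the shape Promise.all gives you
// in a Server Component.
async function loadDashboardParallel(userId: string) {
  const [user, projects] = await Promise.all([
    getUser(userId),
    getProjects(userId),
  ]);
  return { user, projects };
}
```

With 300 millisecond API calls, the waterfall version costs 600 milliseconds and the parallel version 300. The data is identical either way; only the dependency shape changed.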

Caching Is a Strategy, Not an Afterthought

Every piece of data in your application has a staleness profile. Some data changes constantly. Some data changes hourly. Some data almost never changes.

User-generated content and real-time feeds need to be fresh. Product catalogue data, pricing, and configuration might be stale for minutes or hours without causing any real problem. Static content like blog posts and documentation can be cached for days.

Treating all data the same way is a performance mistake. If you refetch product catalogue data on every page navigation because that is the default behaviour, you are making unnecessary API calls that slow the application down and cost money at scale.

Define the staleness profile for each data type at the start of the project. Configure stale times in React Query accordingly. Use Next.js cache directives for server-side fetching. Make the caching behaviour explicit rather than relying on defaults.
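Written down as code, a staleness profile might look like this. The data types and numbers are illustrative; the point is that the values are explicit rather than inherited from library defaults:

```typescript
// Illustrative staleness profiles per data type, in milliseconds.
const staleTimeMs = {
  realtimeFeed: 0,               // always considered stale; refetch on use
  userContent: 30_000,           // fresh for 30 seconds
  productCatalogue: 60 * 60_000, // fresh for an hour
  blogPost: 24 * 60 * 60_000,    // fresh for a day
} as const;

// Client side, these values feed React Query:
//   useQuery({ queryKey: ["catalogue"], queryFn, staleTime: staleTimeMs.productCatalogue })
// Server side, the Next.js equivalent is a cache directive on fetch:
//   fetch(url, { next: { revalidate: 3600 } }) // revalidate hourly
```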

This is a thirty-minute conversation at the start of the project that saves hours of performance debugging later.

Error States Are Part of the Data Fetching Strategy

Most data fetching strategies cover the happy path completely and the error path not at all.

Every fetch can fail. The API can be down. The network can time out. The user can lose connectivity. The response can come back in an unexpected format. What happens in each of these cases?

If the answer is "the component throws an error and the whole page breaks," that is a data fetching strategy problem. Error boundaries exist for this reason. Every section of the application that fetches data independently should be wrapped in an error boundary that shows a meaningful fallback instead of crashing the page.

Define how errors are handled for each data type. Critical data that the page cannot function without should surface a clear error state and an option to retry. Secondary data that enhances but is not required for the page to be useful should fail silently or show a minimal fallback. Define this at the start of the project. Not after the first production incident.
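One way to make the policy explicit is a per-category map that lives next to the fetching code and drives what an error boundary's fallback renders. A sketch, with hypothetical category names:

```typescript
// Illustrative per-category error policy. Critical data blocks with a retry
// option; secondary data degrades quietly.
type ErrorPolicy = "block-with-retry" | "silent-fallback";

const errorPolicies: Record<string, ErrorPolicy> = {
  productDetails: "block-with-retry", // page cannot function without it
  recommendations: "silent-fallback", // enhances the page, not required
  liveInventory: "silent-fallback",
};

// Called from an error boundary's fallback to decide what to render.
function fallbackFor(dataType: string): string {
  const policy = errorPolicies[dataType] ?? "block-with-retry"; // fail safe
  return policy === "block-with-retry"
    ? "error message with a retry button"
    : "nothing, or a minimal placeholder";
}
```

Note the fail-safe default: a data type nobody classified is treated as critical, which surfaces the omission instead of hiding it.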

The Conversation That Saves the Most Time

All of this sounds like a lot. It is not. It is a one-hour conversation at the start of the project that answers five questions.

Where does each type of data get fetched, server or client? What tool handles client-side fetching? What are the staleness profiles for each data type? Where are the data dependencies and how do we avoid waterfalls? How do we handle errors for each category of data?

Write the answers down. Share them with everyone building the project. Reference them in code review.

That one conversation is the difference between a Next.js application that performs consistently as it grows and one that accumulates performance problems until someone budgets a week to fix them.

Data fetching is not a detail. It is the foundation that determines how the application behaves under real conditions, for real users, at real scale. Treat it like one.


Building a Next.js product that needs to perform at scale from day one? At Velox Studio, we define data architecture before writing the first component. Talk to us about your project.

