Data Fetching Patterns Every Developer Should Know (And When to Actually Use Them)

About a year ago, I was working on a payment app. Solid architecture, clean API design, decent frontend. On paper, everything looked good. But a few months after launch, the ratings started tanking. Users were complaining about slow loads, failed transactions, and the whole thing falling apart on spotty connections.

I spent three months debugging those performance issues, and the fix wasn’t some clever algorithm or a server upgrade. It was rethinking how we fetched data. That’s it. Same features, same infrastructure, same design, just smarter data fetching patterns. The app went from 3.2 stars to 4.7, and transaction volume jumped 30% within two months.

That experience a year ago changed how I think about data flow end-to-end. Most apps don’t have a “feature” problem; they have a “how we get data to the screen” problem. And the difference between a mediocre app and a great one often comes down to picking the right data fetching pattern for the right situation.

Here’s everything I learned and wish I’d known sooner.

The Basics: Request-Response

This is where everyone starts, and for good reason. You ask the server for something, you wait, you get it back. It’s the foundation of HTTP, and it handles the majority of use cases just fine.

const fetchUser = async (id: string) => {
  const response = await fetch(`/api/users/${id}`);
  return response.json();
};

Think of it like ordering at a counter: you place your order, you wait, you get your food. Simple and predictable.

This works great for standard CRUD operations: loading a user profile, submitting a form, fetching account details on page load. Where it falls apart is when you start chaining multiple requests together. If your page needs data from five endpoints and each sequential request takes 300ms, your user is staring at a spinner for 1.5 seconds. That adds up fast.
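The easiest fix for that waterfall is to fire independent requests in parallel instead of awaiting them one at a time. A minimal sketch (the endpoint paths in the usage comment are illustrative):

```typescript
type Fetcher<T> = () => Promise<T>;

// Total wait is roughly the slowest request, not the sum of all of them.
const fetchInParallel = async <T>(fetchers: Fetcher<T>[]): Promise<T[]> =>
  Promise.all(fetchers.map((f) => f()));

// Usage (hypothetical endpoints):
// const [user, posts, settings] = await fetchInParallel([
//   () => fetch('/api/user').then((r) => r.json()),
//   () => fetch('/api/posts').then((r) => r.json()),
//   () => fetch('/api/settings').then((r) => r.json()),
// ]);
```

This only helps when the requests don’t depend on each other; a true chain (request B needs data from request A) is a sign the API shape itself needs rethinking.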

The key is recognizing when request-response stops being enough, which brings us to everything else.

Polling: The “Are We There Yet?” Approach

Polling is exactly what it sounds like. You ask the server for updates on a regular interval. Every 5 seconds, every 30 seconds, whatever makes sense for your use case.

const pollForUpdates = () => {
  const intervalId = setInterval(async () => {
    try {
      const response = await fetch('/api/updates');
      const data = await response.json();
      updateUI(data);
    } catch (error) {
      console.error('Polling failed:', error);
    }
  }, 5000);

  // Don't forget cleanup
  return () => clearInterval(intervalId);
};

I’ve seen polling get a bad reputation, and honestly, sometimes it deserves it. Naive polling hammers your server with requests even when nothing has changed. On mobile, it eats battery life. And you’ll always have that gap between intervals where updates get missed.

But here’s the thing: polling is dead simple to implement, works everywhere, and for many use cases (dashboards refreshing every 30 seconds, checking job status on a build pipeline, order tracking) it’s perfectly fine. Not everything needs to be real-time. Sometimes “close enough” is the right engineering decision.

The smarter version is long polling, where the server holds the connection open until it actually has something to send back. It’s a nice middle ground before committing to WebSockets.
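A sketch of that long-polling loop, written generically so the transport is injectable. It assumes a server that holds each request open until it has data (or times out and returns an empty batch):

```typescript
type PollResult<T> = { updates: T[] };

async function longPoll<T>(
  // e.g. () => fetch('/api/poll').then((r) => r.json())
  request: () => Promise<PollResult<T>>,
  onUpdate: (update: T) => void,
  shouldContinue: () => boolean,
): Promise<void> {
  while (shouldContinue()) {
    try {
      // This await is where the "long" part happens: the server holds the
      // connection open until it actually has something to send back.
      const { updates } = await request();
      updates.forEach(onUpdate);
    } catch {
      // Back off briefly on errors so a struggling server isn't hammered
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
```

Each iteration immediately re-requests after a response, so updates arrive nearly as fast as with a push channel, without a persistent socket.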

WebSockets: When You Need Actual Real-Time

WebSockets maintain a persistent, two-way connection between the client and server. Unlike polling, neither side has to ask; data flows in both directions whenever either side has something to say.

const socket = new WebSocket('wss://example.com/socket');

socket.onopen = () => {
  console.log('Connected');
};

socket.onmessage = (event) => {
  const data = JSON.parse(event.data);
  updateUI(data);
};

socket.onclose = () => {
  // You'll want reconnection logic here; connections drop
  console.log('Disconnected');
};

This is what powers chat apps, multiplayer games, collaborative editors like Google Docs, and trading platforms where milliseconds matter. If your users need to see changes the moment they happen, and especially if they need to send data back frequently, WebSockets are the right call.

The tradeoff is complexity. You need to handle reconnections (connections will drop). You need to think about scaling: every connected user holds an open connection on your server. You need to deal with authentication differently than with regular HTTP. It’s not hard, but it’s more surface area than a simple fetch call.
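For the reconnection piece, the standard approach is exponential backoff with jitter: wait longer after each consecutive failure, and randomize the delay so a fleet of clients doesn’t reconnect in lockstep. A sketch of the delay calculation (the parameter values are just reasonable defaults, not gospel):

```typescript
// Delay grows exponentially with each failed attempt, capped at maxMs.
// Jitter spreads clients out so they don't all retry at the same instant.
const backoffDelay = (attempt: number, baseMs = 500, maxMs = 30000): number => {
  const exp = Math.min(baseMs * Math.pow(2, attempt), maxMs);
  return exp / 2 + Math.random() * (exp / 2);
};

// Wiring it into the onclose handler above (sketch):
// socket.onclose = () => {
//   setTimeout(() => reconnect(), backoffDelay(attempt++));
// };
// ...and reset attempt to 0 in onopen once the connection succeeds.
```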

My rule of thumb: if you’re polling more than once every 5 seconds, it’s probably time to consider WebSockets.

Server-Sent Events: Real-Time’s Simpler Cousin

SSE is the pattern I wish more developers knew about. It’s a one-way channel: the server pushes updates to the client over a long-lived HTTP connection. No polling, no WebSocket complexity.

const eventSource = new EventSource('/api/stream');

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  updateUI(data);
};

eventSource.onerror = () => {
  // The browser reconnects automatically, so there's usually nothing to do
  // here. Call eventSource.close() only when you're done listening for good.
};

See how much simpler that is compared to WebSockets? And you get automatic reconnection for free; the browser handles it.

SSE is perfect for notifications, live sports scores, progress bars for long-running tasks (think file processing or deployment pipelines), newsfeeds, and anything where the server is doing the talking and the client is just listening.

The limitation is right there in the name: server-sent. If your client needs to send data back frequently, SSE isn’t enough. But for a surprising number of “real-time” features, one-way is all you need.
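Part of why SSE is so approachable is that the wire format is just text: each message is a few `field: value` lines terminated by a blank line. A sketch of a server-side frame formatter (the Express-style usage in the comment is illustrative):

```typescript
// Builds one SSE frame. The optional id lets the browser resume from the
// last event it saw after a reconnect (via the Last-Event-ID header).
const formatSSE = (data: unknown, event?: string, id?: string): string => {
  let frame = '';
  if (id) frame += `id: ${id}\n`;
  if (event) frame += `event: ${event}\n`;
  frame += `data: ${JSON.stringify(data)}\n\n`; // blank line ends the frame
  return frame;
};

// On an Express-style server (illustrative):
// res.writeHead(200, { 'Content-Type': 'text/event-stream' });
// res.write(formatSSE({ progress: 42 }, 'progress'));
```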

Caching: Making Your App Feel Instant

Caching is less of a fetching pattern and more of a fetching strategy that you layer on top of other patterns. The idea is simple: store data you’ve already fetched so you don’t have to fetch it again.

import { useQuery } from '@tanstack/react-query';

const { data, isLoading, error } = useQuery({
  queryKey: ['user'],
  queryFn: () => fetch('/api/user').then((res) => res.json()),
  staleTime: 5 * 60 * 1000, // treat data as fresh for 5 minutes
  gcTime: 30 * 60 * 1000,   // drop unused cache entries after 30 minutes
});

Libraries like React Query and SWR have made caching dramatically easier. They handle stale-while-revalidate (show cached data immediately, then refresh in the background), cache invalidation, and deduplication of simultaneous requests.

The impact is hard to overstate. When a user navigates to a page they’ve already visited and the content appears immediately while a background refresh happens silently, that’s the kind of thing that makes an app feel native-quality.

The classic challenge is cache invalidation (there’s a reason Phil Karlton called it one of the two hard things in computer science). You have to decide: how long is cached data acceptable? What events should invalidate the cache? What happens when two tabs have different cached versions? These are solvable problems, but they require deliberate thinking.
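To make the stale-while-revalidate idea concrete, here’s a toy cache that serves cached data instantly and refreshes in the background once it goes stale. This is a sketch of the mechanism those libraries implement, not their actual code (the injectable clock is just there to make it testable):

```typescript
type Entry<T> = { data: T; fetchedAt: number };

class SWRCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(
    private fetcher: (key: string) => Promise<T>,
    private staleTimeMs: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  async get(key: string): Promise<T> {
    const entry = this.entries.get(key);
    if (!entry) {
      // Cache miss: the caller has to wait for the network this one time
      const data = await this.fetcher(key);
      this.entries.set(key, { data, fetchedAt: this.now() });
      return data;
    }
    if (this.now() - entry.fetchedAt > this.staleTimeMs) {
      // Stale: serve the old value now, revalidate in the background
      this.fetcher(key).then((data) =>
        this.entries.set(key, { data, fetchedAt: this.now() }),
      );
    }
    return entry.data;
  }
}
```

A real implementation also needs deduplication of concurrent refreshes and explicit invalidation hooks, which is exactly the surface area React Query and SWR cover for you.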

Lazy Loading: Don’t Fetch What You Don’t Need Yet

The fastest network request is the one you never make. Lazy loading defers fetching until the user actually needs the data, typically triggered by scrolling, clicking a tab, or navigating to a new section.

// Guard against firing duplicate requests while one is still in flight —
// scroll events arrive far faster than responses come back
let isLoadingMore = false;

const loadMore = async () => {
  if (!isNearBottom() || isLoadingMore) return;
  isLoadingMore = true;
  try {
    const response = await fetch(`/api/items?page=${nextPage}`);
    const newItems = await response.json();
    setItems(prev => [...prev, ...newItems]);
    setNextPage(prev => prev + 1);
  } finally {
    isLoadingMore = false;
  }
};

window.addEventListener('scroll', loadMore);

You see this everywhere: infinite scroll on social feeds, images loading as you scroll past them, tabs that only fetch their content when clicked. It makes initial page loads fast because you’re only loading what’s visible.

The gotchas are UX-related. Infinite scroll can make it impossible for users to reach the footer. Loading new content can cause layout shifts that make users lose their place. And for accessibility, you need to make sure screen readers can navigate lazy-loaded content properly.

For very large lists (thousands of items), pair lazy loading with virtualization: only render the DOM elements that are visible in the viewport. Libraries like react-window or TanStack Virtual make this manageable.
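Under the hood, virtualization is mostly arithmetic: from the scroll offset, work out which slice of the list is visible and render only that, plus a small overscan buffer so fast scrolling doesn’t flash blank rows. A sketch assuming fixed-height items:

```typescript
// Returns the half-open index range [start, end) of items to render.
const visibleRange = (
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  itemCount: number,
  overscan = 3, // extra rows rendered above/below the viewport
) => {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const end = Math.min(
    itemCount,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan,
  );
  return { start, end };
};
```

Variable-height items are where it gets genuinely hard (you need measured or estimated offsets), which is the main reason to reach for a library rather than rolling your own.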

Background Sync: Building for the Real World

This one is close to my heart because it solved the biggest pain point in that payment app I worked on last year. Background sync lets users take actions (send a message, submit a form, record a transaction) even when they’re offline. The operations get queued and processed automatically when connectivity returns.

// Service Worker
self.addEventListener('sync', (event) => {
  if (event.tag === 'sync-transactions') {
    event.waitUntil(processQueuedTransactions());
  }
});

// Application code
async function recordTransaction(transaction) {
  await saveToLocalQueue(transaction);

  // Show the transaction in the UI immediately
  updateUIOptimistically(transaction);

  // Background Sync isn't supported in every browser; when it's missing,
  // fall back to flushing the queue manually on reconnect
  if ('serviceWorker' in navigator && 'SyncManager' in window) {
    const registration = await navigator.serviceWorker.ready;
    await registration.sync.register('sync-transactions');
  }
}

This pattern is essential for mobile apps used in areas with unreliable connections: field service apps, delivery tracking, healthcare in rural areas, anything where you can’t assume a stable connection.

The complexity lives in conflict resolution. What happens if two offline users edit the same record? What if the server rejects a queued operation? You need clear strategies for these cases, and they’re not always straightforward. But the user experience improvement is massive. Going from “you can’t do anything without internet” to “everything just works, and syncs when it can” is a night-and-day difference.
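A sketch of the local queue that sits behind the sync handler. The key design decision is distinguishing a *rejection* (the server said no, surface it to the user) from a *network failure* (still offline, keep the ops and try again on the next sync event). The shape of `QueuedOp` and `send` here is an assumption, not a standard API:

```typescript
type QueuedOp = { id: string; payload: unknown; queuedAt: number };

class SyncQueue {
  private ops: QueuedOp[] = [];

  get pending(): number {
    return this.ops.length;
  }

  enqueue(op: QueuedOp) {
    this.ops.push(op);
  }

  // send resolves false when the server rejects the op (e.g. a conflict the
  // user must resolve) and throws when the network request itself fails.
  async drain(send: (op: QueuedOp) => Promise<boolean>): Promise<QueuedOp[]> {
    const rejected: QueuedOp[] = [];
    while (this.ops.length > 0) {
      try {
        const ok = await send(this.ops[0]);
        const op = this.ops.shift()!;
        if (!ok) rejected.push(op);
      } catch {
        break; // still offline: keep remaining ops for the next sync event
      }
    }
    return rejected;
  }
}
```

In production you’d persist the queue in IndexedDB rather than memory, since the service worker (and the page) can be torn down between sync events.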

Batch Fetching: One Trip Instead of Ten

If your page makes 8 separate API calls to render, something is probably wrong. Batch fetching combines multiple requests into a single network call.

// Instead of this:
const user = await fetch('/api/users/1');
const posts = await fetch('/api/users/1/posts');
const notifications = await fetch('/api/users/1/notifications');

// Do this:
const dashboard = await fetch('/api/dashboard?userId=1');
// Returns user, posts, and notifications in one response

The savings come from reducing HTTP overhead: connection setup, headers, TLS handshakes. On mobile networks with high latency, the difference between one request and ten is very noticeable.

The downside is coupling. When you batch things together, you can’t cache or invalidate them independently. If the notification data changes every 30 seconds but user profile data changes once a month, batching them means either over-fetching profile data or under-fetching notifications. You have to think about which data actually belongs together.
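You can also batch automatically on the client, DataLoader-style: collect all the individual lookups made in the same tick and send them as one request. A minimal sketch (the `batchFn` contract, taking ids and returning a Map, is an assumption of this sketch):

```typescript
// Wraps a batch endpoint so callers can keep making individual lookups;
// lookups issued in the same microtask window share one network call.
const createBatcher = <T>(
  batchFn: (ids: string[]) => Promise<Map<string, T>>,
) => {
  let pending: { id: string; resolve: (v: T | undefined) => void }[] = [];

  return (id: string): Promise<T | undefined> =>
    new Promise((resolve) => {
      pending.push({ id, resolve });
      if (pending.length === 1) {
        // First caller this tick schedules the flush for everyone
        queueMicrotask(async () => {
          const batch = pending;
          pending = [];
          const results = await batchFn(batch.map((p) => p.id));
          batch.forEach((p) => p.resolve(results.get(p.id)));
        });
      }
    });
};
```

This keeps the calling code decoupled (each component just asks for its own record) while still collapsing N requests into one, sidestepping some of the coupling problem above.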

GraphQL: Ask for Exactly What You Need

GraphQL flips the traditional REST model. Instead of the server deciding what data each endpoint returns, the client specifies exactly what it needs.

const query = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
      posts(last: 5) {
        title
        preview
      }
    }
  }
`;

const response = await fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: { id: '123' } }),
});

With REST, a mobile app and a desktop app hitting the same /api/user endpoint get the same response, even if the mobile app only needs the name and avatar while the desktop app needs the full profile. GraphQL eliminates that mismatch. Each client asks for exactly what it needs.

This matters most when you have multiple clients with different data requirements, deeply nested data relationships, or you’re tired of creating one-off REST endpoints for every new screen design.

The investment is real, though. You need a GraphQL server, a schema, resolvers, and your team needs to learn a new paradigm. Caching is trickier than REST because everything goes through a single endpoint. And poorly written queries can cause serious performance issues on the backend (the N+1 query problem is very real with GraphQL).

For smaller apps with a single client, REST with good API design is usually simpler and sufficient.

Federated Fetching: Unifying Microservices

In microservice architectures, the data a single page needs might live across five different services. Federated fetching, usually through a BFF (Backend-for-Frontend) layer or API gateway, aggregates that data so the client makes one clean request.

// BFF endpoint
app.get('/api/dashboard/:userId', async (req, res) => {
  const [user, accounts, activity] = await Promise.all([
    fetch(`http://user-service/users/${req.params.userId}`),
    fetch(`http://account-service/accounts?userId=${req.params.userId}`),
    fetch(`http://activity-service/recent?userId=${req.params.userId}`)
  ]);

  res.json({
    user: await user.json(),
    accounts: await accounts.json(),
    recentActivity: (await activity.json()).slice(0, 5)
  });
});

The BFF pattern is a lifesaver in complex systems. Instead of the frontend knowing about every microservice and making separate calls to each, it talks to one unified API that handles the orchestration. The frontend stays clean, and you can tailor responses to what each client actually needs.

The downside is obvious: you’re adding another service to build, deploy, and maintain. And if your BFF goes down, everything goes down. It’s a pattern that makes sense at a certain scale, but overkill for smaller applications.
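You can soften the failure story inside the BFF itself with Promise.allSettled: if one backing service is down, the others still render and the client shows a fallback for the missing section. A sketch of that aggregation (the `sources` shape is illustrative, not a framework API):

```typescript
// Aggregates named data sources; a failed source becomes null in the
// response instead of failing the whole dashboard.
const fetchDashboard = async (
  sources: Record<string, () => Promise<unknown>>,
): Promise<Record<string, unknown>> => {
  const names = Object.keys(sources);
  const settled = await Promise.allSettled(names.map((n) => sources[n]()));
  const result: Record<string, unknown> = {};
  settled.forEach((outcome, i) => {
    result[names[i]] = outcome.status === 'fulfilled' ? outcome.value : null;
  });
  return result;
};
```

Compare that with the Promise.all version above, where one rejected fetch rejects the whole response. Which behavior you want depends on whether a partial dashboard is better than an error page; for most dashboards, it is.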

Combining Patterns: Where It Gets Interesting

No real application uses just one pattern. The interesting decisions happen when you combine them. Here’s what that looks like in practice:

A messaging app might use WebSockets for incoming messages, background sync for sending messages in poor connectivity, caching for conversation history, and lazy loading for scrolling through older messages.

An e-commerce app might use request-response for search, caching for product pages, SSE for inventory availability, and batch fetching for the cart summary.

A trading platform might use WebSockets for live prices, polling as a fallback, GraphQL for portfolio data, and caching for historical charts.

The point is to match each data need to the pattern that best serves it. Not every piece of data on a screen has the same freshness requirements, the same access patterns, or the same tolerance for latency.

Quick Reference

Pattern             | Use When                                  | Complexity | Offline Support
--------------------|-------------------------------------------|------------|----------------
Request-Response    | Standard CRUD, simple pages               | Low        | No
Polling             | Periodic updates, status checks           | Low        | No
WebSockets          | Two-way real-time (chat, collaboration)   | High       | No
Server-Sent Events  | One-way real-time (notifications, feeds)  | Medium     | No
Caching             | Repeated data access, speed matters       | Medium     | Partial
Lazy Loading        | Large lists, heavy initial loads          | Low        | No
Background Sync     | Offline-first, unreliable connections     | High       | Yes
Batch Fetching      | Multiple related data needs               | Low        | No
GraphQL             | Complex/varied data requirements          | High       | No
Federated Fetching  | Microservices, unified APIs               | High       | No

Final Thoughts

A year ago, when that payment app went from 3.2 stars to 4.7, we didn’t add a single new feature. We just changed how existing features got their data. Caching made it feel instant. Background sync made it work offline. WebSockets made payments confirm in real-time. Batch fetching cut load times by 80%.

Looking back, that project taught me something I keep coming back to: users don’t care about your architecture. They care that things are fast, reliable, and don’t waste their time. Data fetching patterns on both the backend and the frontend are how you deliver on that promise.

Pick the right pattern for each situation. Combine them thoughtfully. And when your app ratings start climbing, you’ll know why.
