Core Web Vitals

Core Web Vitals (CWV) are a set of three specific metrics that Google considers essential to every web page's user experience. Since June 2021, these metrics have been part of Google's page experience ranking signals, meaning they directly influence where your site appears in search results. But the real reason to care about them is simpler: they measure what users actually feel when they visit your site.

The Three Core Web Vitals

Google distilled decades of user experience research into three numbers. Each metric captures a different dimension of the loading and interaction experience.

Largest Contentful Paint (LCP)

What it measures: LCP records the time it takes for the largest visible content element in the viewport to finish rendering. This is typically a hero image, a large heading, or a prominent block of text. It answers the question users subconsciously ask: "Has the main content loaded?"

Target: Less than 2.5 seconds. Pages that load their primary content in under 2.5 seconds are rated "good." Between 2.5 and 4.0 seconds is "needs improvement," and anything above 4.0 seconds is "poor."

Common causes of slow LCP include:

  • Slow server response times — the browser cannot render anything until it receives HTML from the server
  • Render-blocking CSS and JavaScript — the browser pauses rendering until it finishes downloading and parsing these resources
  • Unoptimized images — large hero images that haven't been compressed or served in modern formats
  • Client-side rendering — frameworks that ship empty HTML and build the page entirely in JavaScript

Tip: CodeFrog's page size test helps identify oversized pages that directly impact LCP. A page that transfers several megabytes of data will struggle to achieve a good LCP score, regardless of other optimizations.

Interaction to Next Paint (INP)

What it measures: INP replaced First Input Delay (FID) as a Core Web Vital in March 2024. While FID only measured the delay before processing the first interaction, INP measures the responsiveness of all interactions throughout the page's lifecycle. It records the latency of clicks, taps, and keyboard inputs, then reports a value that represents the worst-case interaction (technically, the 98th percentile).

Target: Less than 200 milliseconds. Interactions that complete their visual update within 200ms feel instant to users. Between 200ms and 500ms feels sluggish, and above 500ms the page feels broken.
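The "worst-case" selection can be sketched in a few lines. For pages with fewer than 50 interactions, INP is simply the slowest one; for busier pages, one extreme outlier is ignored per 50 interactions, which works out to roughly the 98th percentile. The function below is an illustrative approximation of that rule, not the browser's actual implementation:

```javascript
// Approximate INP from a list of interaction latencies (ms).
// Illustrative only: the browser reports the worst interaction,
// ignoring one outlier per 50 interactions (~98th percentile).
function estimateINP(latencies) {
  if (latencies.length === 0) return 0;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const outliersToIgnore = Math.min(
    Math.floor(latencies.length / 50),
    sorted.length - 1
  );
  return sorted[outliersToIgnore];
}

// Fewer than 50 interactions: the single worst one counts.
console.log(estimateINP([40, 60, 80, 350])); // 350
```

Note that a single slow interaction is enough to push a lightly-used page's INP into "poor" territory, which is why one janky click handler can dominate the metric.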

INP captures three phases of every interaction:

  1. Input delay — the time between the user's action and the browser starting to run event handlers, often caused by the main thread being busy with other tasks
  2. Processing time — the time spent executing the event handler code itself
  3. Presentation delay — the time for the browser to calculate layout changes and paint the visual update

Long JavaScript tasks are the most common culprit behind poor INP. If the main thread is busy parsing a large script bundle or running a computationally expensive function, user interactions queue up and feel unresponsive.

Cumulative Layout Shift (CLS)

What it measures: CLS quantifies how much the visible content of a page shifts unexpectedly during its lifecycle. It captures those frustrating moments when you're about to tap a button and suddenly the page rearranges itself, causing you to click the wrong thing. CLS is measured as a score, not a time — it considers the fraction of the viewport that shifted and the distance of the shift.

Target: Less than 0.1. A CLS score under 0.1 means the page is visually stable. Between 0.1 and 0.25 needs improvement, and above 0.25 is poor.
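The three sets of thresholds above can be collected into one small lookup. This helper simply encodes the "good" / "needs improvement" / "poor" boundaries quoted in this lesson (LCP and INP in milliseconds, CLS unitless; boundary handling at the exact threshold values is simplified):

```javascript
// "Good" / "poor" boundaries for each Core Web Vital,
// as quoted above (LCP and INP in ms, CLS unitless).
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

console.log(rate("LCP", 1800)); // "good"
console.log(rate("INP", 350));  // "needs improvement"
console.log(rate("CLS", 0.3));  // "poor"
```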

Common causes of layout shift include:

  • Images without dimensions — when width and height attributes are missing, the browser doesn't know how much space to reserve until the image loads
  • Ads and embeds — dynamically injected content that pushes existing content down the page
  • Web fonts — text that reflows when a custom font finishes loading and replaces the fallback font
  • Dynamic content — JavaScript that inserts elements above existing visible content without reserving space first
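The score behind these shifts is simple arithmetic: each layout shift contributes its impact fraction (the share of the viewport affected) multiplied by its distance fraction (how far content moved, relative to the viewport's largest dimension). A simplified sketch of a single shift's contribution (the real metric sums shifts within the worst "session window" of shifts close together in time, not across the whole page):

```javascript
// One layout shift's score: impact fraction * distance fraction.
// Simplified: real CLS sums the shifts in the worst session
// window, rather than scoring each shift in isolation.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Example: a late-loading ad pushes content covering 75% of the
// viewport down by 25% of the viewport height.
const shift = layoutShiftScore(0.75, 0.25);
console.log(shift.toFixed(4)); // "0.1875"
```

A single shift like this one already exceeds the 0.1 "good" threshold on its own, which is why one unreserved ad slot can fail a page's CLS.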

How Google Uses Core Web Vitals for Ranking

Core Web Vitals are part of Google's broader "page experience" signals, which also include mobile-friendliness, HTTPS, and the absence of intrusive interstitials. Google collects CWV data from real users through the Chrome User Experience Report (CrUX), which aggregates anonymized field data from Chrome users who have opted in.

The ranking impact works as a tiebreaker: when two pages have equally relevant content, the one with better Core Web Vitals gets a ranking advantage. CWV alone won't compensate for poor content, but among competitive results, performance can be the deciding factor.

Google evaluates CWV at the 75th percentile (p75) of user experiences. This means 75% of your visitors need to have a "good" experience for the page to pass. You cannot game this metric with a fast CDN that serves a handful of users quickly — the majority of your real audience must experience fast, stable, responsive pages.
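The p75 evaluation can be illustrated in a few lines: collect a metric's values from real visits, take the 75th percentile, and compare it to the "good" threshold. A simplified sketch using the nearest-rank method (real CrUX data is reported in aggregated buckets, not raw per-visit values):

```javascript
// 75th percentile of observed values (nearest-rank method).
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[index];
}

// LCP samples in ms from ten visits: most fast, a few slow.
const lcpSamples = [1200, 1400, 1500, 1700, 1900, 2100, 2300, 2600, 3800, 5200];
const value = p75(lcpSamples);
console.log(value, value <= 2500 ? "passes" : "fails"); // 2600 "fails"
```

Note the example's outcome: seven of ten visits were under 2.5 seconds, yet the page still fails, because fewer than 75% of visits had a good experience. The slowest quarter of your audience decides the result.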

Tools for Measuring Core Web Vitals

There are two categories of CWV measurement: lab tools that simulate user conditions in a controlled environment, and field tools that collect data from real users.

Lab Tools

  • Chrome DevTools Performance panel — provides detailed traces showing exactly where time is spent during page load and interaction. You can throttle the CPU and network to simulate slower devices.
  • Lighthouse — runs automated audits and reports LCP, CLS, and simulated interaction metrics with actionable recommendations. Available in Chrome DevTools, as a CLI, and as a CI integration.
  • WebPageTest — offers filmstrip views, waterfall charts, and the ability to test from different geographic locations and device types.

Field Tools

  • PageSpeed Insights — combines Lighthouse lab data with CrUX field data to show how real users experience your page. This is the closest view to what Google actually uses for ranking.
  • Chrome User Experience Report (CrUX) — the raw dataset behind PageSpeed Insights, available through BigQuery and the CrUX API for large-scale analysis.
  • web-vitals JavaScript library — a lightweight library (under 2KB) from Google that you can add to your site to measure CWV from your own users and send the data to your analytics platform.

Tip: Lab tools are great for debugging and development, but field data is what Google uses for ranking. Make a habit of checking PageSpeed Insights for your key pages to see how real users experience them, not just how they perform on your development machine.

A Practical Approach to Core Web Vitals

Improving CWV is not about chasing perfect scores — it is about systematically removing the bottlenecks that hurt real users. Start with the metric that is furthest from its target threshold, since that represents the biggest experience problem. Use lab tools to diagnose the issue, implement a fix, deploy it, and then monitor field data to confirm the improvement reaches real users.

The most impactful optimizations are often straightforward:

  • For LCP: optimize your largest above-the-fold image and reduce server response time
  • For INP: break up long JavaScript tasks and defer non-essential scripts
  • For CLS: add explicit width and height to images and reserve space for dynamic content

The following lessons in this topic dive deep into each of these optimization areas, starting with page speed and working through images, caching, and lazy loading.

Resources