Conversation

@iamakulov

Summary

This PR implements the first change discussed in #16539. When the real-world FCP and LCP timestamps are equal, we assume they originated from the same element (that’s roughly how it works inside Chromium), and we make the simulated LCP equal to the simulated FCP.

This leads to some sites getting better (and hopefully fairer) scores. How many sites are affected, and how much better are the scores? See https://observablehq.com/d/f62cde3eb2ec3e76 for a study across ~350 sites.
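
In rough pseudocode, the idea amounts to something like the sketch below (illustrative only; none of these names are the actual Lantern internals):

```ts
// A minimal sketch of the heuristic, not the actual Lantern code.
// `observedFcp`/`observedLcp` stand for the unthrottled timestamps from the
// trace; `simulatedFcp`/`simulatedLcpDefault` stand for Lantern's estimates.
interface MetricInputs {
  observedFcp: number;         // microseconds, from the unthrottled trace
  observedLcp: number;         // microseconds, from the unthrottled trace
  simulatedFcp: number;        // milliseconds, Lantern's FCP estimate
  simulatedLcpDefault: number; // milliseconds, Lantern's regular LCP estimate
}

function estimateLcp(inputs: MetricInputs): number {
  // If the real-world FCP and LCP landed on the same paint (same timestamp),
  // assume they came from the same element and reuse the simulated FCP.
  if (inputs.observedFcp === inputs.observedLcp) {
    return inputs.simulatedFcp;
  }
  return inputs.simulatedLcpDefault;
}
```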

Related Issues/PRs

#16539

@iamakulov requested a review from a team as a code owner November 8, 2025 00:15
@iamakulov requested review from connorjclark and removed request for a team November 8, 2025 00:15
@iamakulov changed the title from "Lantern LCP: Compute LCP as FCP when real-world FCP/LCP occur on the same frame" to "core(lantern): compute LCP as FCP when real-world FCP/LCP occur on the same frame" on Nov 8, 2025
@iamakulov changed the title from "core(lantern): compute LCP as FCP when real-world FCP/LCP occur on the same frame" to "core(lantern): compute LCP as FCP when they occur within the same frame" on Nov 8, 2025
@iamakulov (Author) commented Nov 8, 2025

I tried my best to measure the impact across the board, but I ran into one Lighthouse peculiarity that I couldn’t overcome. From the link above:

Can we trust this data?

Yes and no. I did my best collecting the data over a wide range of sites in a stable environment (t3.large), and filtering out sites that might be problematic (have warnings, etc). However, there’s still one peculiarity in Lighthouse traces that I can’t get rid of.

When you run a Lighthouse test from the Lighthouse CLI (as opposed to DevTools), the first paint often ends up significantly delayed. For example, here is one site from the dataset (https://platepost.io/wywhcoffee):

[Side-by-side screenshots: DevTools Lighthouse vs. CLI Lighthouse for the same page, with a link to each trace]

Even though it’s the same site, and the page is renderable as soon as the HTML/CSS loads, with CLI Lighthouse the first paint often happens 500-1000 ms later than I’d expect it to. This happens both with a “cold” browser (a browser that’s re-launched for every Lighthouse test; the default Lighthouse behavior) and with a “warm” one (a single Chrome instance shared among all Lighthouse tests).

This likely skews the test results, making simulated FCP/LCP worse.

Would appreciate some tips if you know what’s causing this / how to solve it! (Pre-warming the browser, i.e. opening it ahead of time and loading google.com, doesn’t seem to help.)

Ah, and, yes, let me know what the next best steps are / how I can help to get this merged :)

@iamakulov (Author) commented Nov 12, 2025

@brendankenny mentions (quoting my earlier point first):
  1. If we associate a nodeId with FCP, it would always be the same node as LCP. Look at the algo again: first, Chromium looks at each paint and determines whether it’s FCP/LCP. Then, it uses ImagePaintTimingDetector and TextPaintTimingDetector to determine what image or text rendered within that paint. And then, it associates that image or text with LCP.
    If we were to implement node detection for FCP, we’d use the same ImagePaintTimingDetector and TextPaintTimingDetector. And for the same paint, they would always return the same nodeIds.

I feel like I'm getting tripped up on this point. Having the same timestamp seems necessary but not sufficient, because many nodes can have very different critical paths but still be presented at the same time (since all the paint timing trace events set their ts to the paint time).

For example, a page with text included inline in the html document as the typical FCP but an LCP image requiring a fetch. Depending on the machine, the connection, and the rest of the page (e.g. render-blocking resources, if the LCP image is identifiable by the preload scanner, etc), the LCP image might be ready to paint in the same frame as the text. On a slower device and connection, though, the LCP image might not be ready by the time the browser can make that first contentful paint.

Am I missing something, though?

I do agree it will be difficult to impossible to annotate the FCP trace event with the node that was painted since flagging contentful paints is done in so many places, and they all only record the timestamp for use in PaintTiming::MarkPaintTimingInternal because that's all that's ever been needed. There are some other layout and painting trace events with somewhat extensive node info recorded, but it would likely be a lot of work and still not enough to establish equivalence. The effort to only evaluate nodes for timing/tracing on frames means this aliasing might be a fundamental problem for simulated throttling regardless, and that improvements will have to come entirely from the simulation side.

Having the same timestamp seems necessary but not sufficient, because many nodes can have very different critical paths but still be presented at the same time

Yeah, you’re right! TBH I was going off @paulirish’s “we should be fine”. But FCP and LCP having the same timestamp on an unthrottled connection isn’t a strong enough signal on its own.

Ultimately, what I’m trying to solve here is cases like this one or this one, where (upon a manual trace) it’s clear FCP and LCP refer to the same element and should be the same no matter what. Looking at all the pages I collected for the tests, it looks like this mainly affects text-first pages. So I wonder if we can make the heuristic more sound if we focus on text only? E.g.:

  • Make simulated LCP = simulated FCP (roughly sketched after this list) when

    • unthrottled LCP = unthrottled FCP, and
    • the LCP element is text, and
    • the text has font-display: swap or fallback or optional (→ excludes the font loading delay)
    • the text was present in the initial HTML (→ excludes late-inserted LCP elements)
  • Same as above, but instead of making LCP = FCP, only make LCP optimistic waterfall = FCP optimistic waterfall

    • this is a bit more conservative
  • ??? (v happy to brainstorm)
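
Very roughly, the first option could look like the sketch below (illustrative only; fields like isText, fontDisplay, and wasInInitialHtml are hypothetical stand-ins for data that would have to be derived from the trace and page artifacts):

```ts
// Sketch of the narrower, text-only heuristic from the first bullet above.
// All fields here are hypothetical, not existing Lighthouse artifacts.
interface LcpElementInfo {
  isText: boolean;
  fontDisplay: 'auto' | 'block' | 'swap' | 'fallback' | 'optional' | null;
  wasInInitialHtml: boolean;
}

function shouldPinLcpToFcp(
  observedFcp: number,
  observedLcp: number,
  lcpElement: LcpElementInfo
): boolean {
  const samePaint = observedFcp === observedLcp;
  // font-display values that don't add a font-loading delay to first render
  const fontDoesNotBlockPaint =
    lcpElement.fontDisplay === 'swap' ||
    lcpElement.fontDisplay === 'fallback' ||
    lcpElement.fontDisplay === 'optional';
  return (
    samePaint &&
    lcpElement.isText &&
    fontDoesNotBlockPaint &&
    lcpElement.wasInInitialHtml
  );
}
```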

For context, I think it’s unlikely we’ll get to “100% accurate” with these heuristics – but my thinking here is “we already aren’t accurate today, can we make this more accurate even if it’s not perfect”? That said, if none of these look sound, happy to close this and move to the next improvement! Curious what you think :)

@iamakulov (Author)

@brendankenny kind follow-up re: the above q! curious about your thoughts (not necessarily the final answer). also happy to gchat (ivan at 3perf dot com) if it’s easier. thank you :)

@connorjclark (Collaborator) commented Dec 15, 2025

Catching up here. This seems like a sensible change, given we account for more than just "same ts" (for the reasons Brendan pointed out).

Same time + same node ID seems sufficient to me, and has the benefit of being simple to understand. Would that still resolve the majority of cases you tested against?

@iamakulov (Author) commented Dec 16, 2025

Thank you for catching up! I think checking the node ID won’t help, for two reasons:

It doesn’t address what Brendan pointed out

Brendan describes the following scenario:

  • You have a page like this:
    <link rel="stylesheet" href="/tiny-stylesheet.css">
    <p>Hello!</p>
    <img src="/1mb-image.png">
  • On an unthrottled connection, both the stylesheet and the image take 200 ms to download (because the bottleneck is latency). The stylesheet is render-blocking, so it delays FCP by these 200 ms. When the stylesheet is downloaded, FCP happens with both text and image already rendered. Because the image is the largest page element, LCP also happens at the same paint timestamp as FCP.
  • On a throttled connection, the stylesheet takes 200 ms to download, but the image takes 1s. When FCP happens, only the text is visible. LCP happens 800 ms later.

This is a trivial example that breaks the heuristic implemented in this PR. The code here will see that the unthrottled LCP equals the unthrottled FCP, and it will make the simulated (throttled) LCP equal to the simulated FCP, even though that’s not right.
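
To put toy numbers on the scenario (illustrative only, not real Lantern output):

```ts
// Unthrottled trace: stylesheet and image both finish around 200 ms,
// so FCP and LCP land on the same paint.
const observed = {fcp: 200, lcp: 200};

// What actually happens under throttling: the stylesheet still gates FCP,
// but the 1 MB image now takes ~1 s, so the true LCP is ~800 ms after FCP.
const actualThrottled = {fcp: 200, lcp: 1000};

// Pretend values for what Lantern would estimate for the throttled run.
const simulatedFcp = 200;
const simulatedLcpDefault = 1000;

// The heuristic in this PR only looks at the unthrottled timestamps...
const assumesSameElement = observed.fcp === observed.lcp; // true

// ...so it pins simulated LCP to simulated FCP, even though the true
// throttled LCP is ~1000 ms.
const simulatedLcp = assumesSameElement ? simulatedFcp : simulatedLcpDefault;
console.log(simulatedLcp);                       // 200, wrong for this page
console.log(actualThrottled.lcp - simulatedLcp); // 800 ms of underestimation
```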

Comparing node IDs doesn’t actually help

The other issue (as I realized earlier) is that comparing node IDs is useless.

For LCP, there’s a clear way to link it to a node ID: look at the paint → find the biggest text or image node → take its id. That’s what Chrome does.

But how do you do it for FCP?

  • Do you look at the FCP paint and also find the biggest text or image node? This is useless: whenever FCP and LCP happen during the same paint, the biggest node in that paint will always be the same! As a result, if you check fcp.paintId === lcp.paintId¹ && fcp.nodeId === lcp.nodeId, the second condition will always be true when the first one is true, making it unnecessary.
  • Or do you look at the FCP paint and find the first text or image node? This is misleading: the first text or image isn’t necessarily the largest, so you’ll have false negatives.
  • Or do you collect all node IDs rendered during FCP? That’s also useless: if FCP and LCP happened within the same paint, it’s guaranteed that the LCP element will be among the FCP elements. So if you check fcp.paintId === lcp.paintId¹ && fcp.nodeIds.includes(lcp.nodeId), the second condition will also always be true when the first one is true.

So, adding the fcp.nodeId === lcp.nodeId check doesn’t help in any way.

¹ Or, in practice, fcp.timestamp === lcp.timestamp, which is what this PR checks. The timestamp has microsecond precision (AFAIR), so if two timestamps match, we can assume they belong to the same paint.
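
In other words, the combined check collapses to the timestamp check alone. A tiny hypothetical illustration (the PaintInfo shape is made up purely for this argument):

```ts
// Hypothetical shape, purely to illustrate the redundancy argument above.
interface PaintInfo {
  paintId: number;
  nodeId: number; // defined as "the biggest text/image node in this paint"
}

function sameElementCheck(fcp: PaintInfo, lcp: PaintInfo): boolean {
  // If both nodeIds are defined as "the biggest node in that paint", then
  // whenever the paint IDs match, the node IDs match by construction,
  // so this is equivalent to just `fcp.paintId === lcp.paintId`.
  return fcp.paintId === lcp.paintId && fcp.nodeId === lcp.nodeId;
}
```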


Hence, we have three routes:

  1. Accept Brendan’s case as a tradeoff, and merge this regardless
  2. Re-do the PR to narrow the heuristic. I personally like bullet point 1 from here – I think it addresses most cases I’ve seen, but I need your input on this, mostly re: any edge cases that pop up in your mind (white curtains? something else?)
  3. Close this, if those heuristics still don’t feel sound

(If this is still confusing, lmk! It’s 4 am here :D so maybe I’m not explaining this well.)

@iamakulov (Author)

To make the issue really concrete, the clearest example is Wikipedia. You can see how the LCP element gets rendered in the very first paint, and there’s no network or CPU delay (it’s plain HTML with a system font), and yet the simulated LCP still ends up much higher than the simulated FCP:

[Screenshot: Lighthouse report for Wikipedia, where the simulated LCP is much higher than the simulated FCP]
