Why Core Web Vitals Matter Not Only for SEO but Also for Leads

In many companies, Core Web Vitals sit on the SEO side of the table. Teams discuss red PageSpeed numbers, ship a technical fix, see a better score, and close the issue.

At the same time, the sales team complains about weak enquiry volume, marketing says paid traffic keeps getting more expensive, and the owner watches CPL creep upward. These stories are directly connected.

This article explains why CWV is not just an SEO concern, how LCP, INP, and CLS change lead economics, and how to prioritise performance work as a commercial decision rather than a technical afterthought.

Glossary
CWV
Core Web Vitals: A set of key Google metrics that describe page load, responsiveness, and visual stability.
LCP
Largest Contentful Paint: The time it takes for the largest content element on the first screen to render.
INP
Interaction to Next Paint: Interface latency metric after clicking, tapping or typing, especially important for forms and buttons.
CLS
Cumulative Layout Shift: A measure of how much page elements shift while loading and interfere with the user's experience.
CR
Conversion Rate: the percentage of visitors who completed a target action, such as submitting an application.
CPL
Cost per Lead: advertising spend divided by the number of leads received.
RUM
Real User Monitoring: collecting field data on the speed and behavior of real users, rather than laboratory measurements.
CrUX
Chrome User Experience Report: Aggregated Chrome field data on user experience quality on real devices.
Lighthouse
Laboratory performance and page quality audit tool; useful for diagnostics, but does not replace real user data.

Key takeaways

Key takeaways in 30 seconds

  • Improving LCP by 31% gave Vodafone Italia +8% sales, +15% lead-to-visit and +11% cart-to-visit on the same traffic - direct proof that CWV drives applications, not just SEO[1]
  • In an A/B test, Rakuten 24 gained +33.1% conversion and +53.4% revenue per visitor from Web Vitals optimization - speed optimization works as a standalone CRO tool[2]
  • Improving mobile site speed by 0.1 seconds increases retail conversion by 8.4% and travel conversion by 10.1%, and reduces lead-gen bounce rate by 8.3% - tenths of a second are worth measurable money[3]
  • In March 2024, Google replaced FID with INP, so slow interfaces (search, filters, forms, popups) now hurt not only UX but also the page experience signal directly[4]
  • According to Web Almanac 2024, only 43–48% of sites pass CWV entirely, and LCP remains the main bottleneck - the competitive advantage from speed is still lying there for the taking[5]
  • If you want to know in numbers how many applications your site loses to LCP, INP and CLS, order a Core Web Vitals audit combined with a funnel review from the Ontop team - we will show the losses in BYN and the priorities for development[7]

Why “CWV is about SEO” is long outdated

The thesis "Core Web Vitals is a ranking story" was true in 2020–2021, when Google had just announced the page experience update and the community was hotly debating whether it would be a serious ranking factor. Four years later the picture looks different: the documented ranking effects of CWV turned out to be modest - a weak tiebreaker for equally relevant content rather than a major factor. The effects on conversion and revenue, however, turned out to be substantial. With the same traffic, no campaign changes and no new offers, LCP optimization delivers a sales lift that SEO promotion would not deliver even in a quarter.[1]

Vodafone Italia ran a clean A/B test: the same page in two versions, the same paid traffic, the only difference being Web Vitals optimization. The version with a 31% better LCP showed 8% more sales, 15% better lead-to-visit and 11% better cart-to-visit. No changes to offers, design or funnel - only faster rendering of the main content, and plus 8% to revenue.[1] This is the right frame for discussing CWV: as metrics that convert directly into applications and revenue, not as a technical item on the SEO specialist's to-do list.

Rakuten 24 repeated this logic in e-commerce. In their A/B test, systematic Web Vitals work produced +33.1% conversion, +53.4% revenue per visitor, +15.2% average order value and −35.1% exit rate. Against these numbers, any debate about "whether a small B2B site even needs CWV" becomes pointless: if you run paid traffic, a slow and unstable page is the biggest hole in the funnel after a weak offer.[2]

A separate argument comes from the Web Almanac 2024 data. As of 2024, only 43% of sites pass all of CWV including INP, and LCP remains the main bottleneck - on most sites it exceeds 2.5 seconds for the median user. This means that more than half of the competitors in any niche are currently burning part of their advertising budget on slow landing pages - so CWV optimization still delivers a real competitive advantage, one that by 2027 will have become a basic hygiene requirement.[5]

+8%
Vodafone Italia sales increase with LCP improvement by 31% in A/B test, without changes in offers and campaign[1]
+33.1%
Rakuten 24 conversion increase after optimizing Web Vitals on A/B test[2]
43%
Share of sites passing all three CWV on mobile in 2024 - more than half of the web loses traffic to slow pages[5]

LCP, INP and CLS in application language, not Lighthouse

For CWV to stop being "the three colored numbers in PageSpeed," they need to be translated from the language of engineering metrics into the language of visitor behavior. Each of the three metrics covers a specific moment in the user's experience, and each moment has a direct relationship to the application. Since March 2024 the set has been finalized: LCP answers "did I manage to see the offer," INP answers "does the page react when I interact with it," and CLS answers "did the button slip out from under my finger a second before the click." INP replaced the old FID precisely because FID measured only the first interaction, while real enquiries are lost on all the interactions after it, including forms and filters.[4]

LCP (Largest Contentful Paint) is about the first impression and the decision to “stay or go.” If the largest significant block of a page—header, banner, main image, or product card—takes longer than 2.5 seconds to draw, some users simply don’t wait. This especially hits mobile paid traffic: the user came from an advertisement, has no invested interest, and easily closes the tab. On fast-loading product pages, e-commerce conversion can be 1.4–1.5 times higher than on slow-loading product pages, with equal traffic.[7]

INP (Interaction to Next Paint) is about honest interaction. When users click a button, open a drop-down list, tap a card or expand a FAQ, they expect a visual reaction. If a page "thinks" for 300–500 ms, a person feels it as lag; if it thinks for a second, it feels like "the site is broken." INP directly affects application forms: if the submit button does not respond instantly, some users click again, then again, and then close the tab.[4]

CLS (Cumulative Layout Shift) is about trust in the interface. When a user is reading a page and the text suddenly shifts, a banner "jumps," or a button appears in a new place, they get a feeling of instability and cheapness. On landing pages for high-stakes decisions (medicine, legal services, B2B, repairs), such an experience breaks trust within 2–3 seconds and causes a CR drop of 5–15%.[7]

LCP - about the offer and first impression

LCP answers the question "did the visitor manage to see what they came for before closing the tab." The target is <2.5 seconds at the 75th percentile of mobile traffic. Every extra 0.5 seconds here cuts paid clicks out of the funnel before the user even scrolls.[3]

INP - about the interface reaction under your finger

INP replaced FID in March 2024 and evaluates all interactions, not just the first one. The goal is <200 ms at the 75th percentile. This is critical for application forms, filters, drop-down menus, and any modal windows where the user decides to click a button.[4]

CLS - about layout stability and trust

CLS evaluates how much page elements "jump" during loading and interaction. The goal is <0.1. Sites with CLS > 0.25 lose credibility on cold traffic faster than any other UX issue because the user reads the layout movement as "something is wrong here."[5]

LCP: first screen and application cost in BYN

LCP is the most expensive metric in monetary terms because it operates at the narrowest part of the funnel: between "clicked on the ad" and "saw the offer". Nothing further down the funnel compensates for losses here - the user never even gets to the point of reading your headline, looking at the case studies and making a decision. Deloitte and Google, in the Milliseconds Make Millions study, showed that improving mobile site speed by just 0.1 seconds increases retail conversion by 8.4% and travel conversion by 10.1%. Tenths of a second on LCP translate into real revenue.[3]

To make this tangible, let's translate it into a funnel. Take a typical paid services campaign for Belarus: 1000 clicks per month, an average bid of 1.80 BYN, a budget of 1800 BYN. If LCP on the landing page is 4.2 seconds, we lose some users before they scroll. Say this loss is 25% (a typical value for a slow mobile LCP according to RUM measurements on cold traffic). Of 1000 clicks, only 750 people reach the content. The offer and trust blocks then work at, say, a 3% CR - that is 22–23 applications. CPL comes out at ~80 BYN. If LCP is compressed to 1.9 seconds, say 920 people reach the content; at the same 3% CR that is 27–28 applications and a CPL of about 64 BYN. One technical parameter shifted CPL by 20% with no changes to the offer or the campaign.[3]
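This back-of-the-envelope math can be checked in a few lines. The 75%/92% reach rates and the 3% CR are the article's illustrative assumptions, not measured values:

```javascript
// Hypothetical funnel math: CPL = budget / applications.
// reachRate models the share of clicks that survive the LCP wait.
function cpl(budgetBYN, clicks, reachRate, cr) {
  const applications = clicks * reachRate * cr;
  return budgetBYN / applications;
}

const cplSlow = cpl(1800, 1000, 0.75, 0.03); // LCP 4.2 s → 22.5 applications
const cplFast = cpl(1800, 1000, 0.92, 0.03); // LCP 1.9 s → 27.6 applications

console.log(cplSlow.toFixed(0)); // "80"
console.log(cplFast.toFixed(0)); // "65" (the article rounds to ~64)
```

The same four-parameter function makes it easy to test how sensitive CPL is to each assumption before committing a development budget.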

The most frustrating thing about LCP is that the marketer cannot see it directly. Google Analytics does not show that 25% of sessions were lost before the page had time to render the headline. These users register as bounces and get written off as "irrelevant traffic" or "a weak offer." In reality the traffic was relevant and the offer was fine - the landing page simply did not manage to show it before the tab was closed. This is a standard scenario the Ontop team sees in audits: a marketer spends years optimizing bids and creatives, not knowing that the main CRO reserve sits at the LCP stage.[7]

The third layer of the effect is cascading. LCP correlates with TTFB and slow CSS/JS, and if the landing is slow now, it will be slow in six months, because the reasons for LCP lie in the architecture (heavy template, render-blocking code, unoptimized images, lack of preconnect and preload). This means that without structural changes, the slow LCP burns up part of the budget every month, and over the course of a year, an amount accumulates that can be used to completely redo the landing frontend.[5]

1000 paid clicks per month with a budget of ~1800 BYN
→ 750 reach the content with LCP 4.2 s (25% lost before rendering)
→ 22 applications at CR 3% among those who reach the content
→ CPL of 80 BYN with the slow LCP
→ CPL of 64 BYN with LCP 1.9 s and 920 users reaching the content: a 20% drop with no campaign changes

The calculation shows that LCP is not a "technical" metric but a point of direct application loss. Deloitte recorded a structurally similar effect in real RUM data across 37 brands.[3]

INP: why a lagging form loses a ready client

INP is the most insidious of the CWV because it hits not the cold user but the warm one who has already decided to click something. On LCP we lose those who did not wait for the page to render. On INP we lose those who waited, read, got interested, clicked "Submit application" - and nothing happened. This is the most expensive class of loss: you paid for the click, the landing page did its job all the way, the person has already decided to submit a request, and at that moment the interface stays silent for 600 ms. Some of the audience simply closes the tab at this point, especially on mobile.[4]

In real projects, INP problems concentrate in a few common places. The first is the application form itself: a heavy JS validator, a synchronous analytics pixel, an extra call to an anti-bot service. The user taps "Submit", nothing on the page changes for 800 ms, the user assumes it did not work and clicks again - visible in the logs as "double" requests, or the tab simply gets closed. The second is filters and drop-down lists in catalogs and calculators: every checkbox recalculates a huge JS object, and the interface swallows clicks. The third is modal pop-ups and online chats, which are heavy in themselves and eat up the main thread.[6]

Unlike LCP, INP is not cured by optimizing images or adding a CDN. INP is treated with a sane client-side JavaScript architecture: splitting large handlers into parts, loading asynchronously whatever is not needed for the first interaction, eliminating unnecessary synchronous third-party scripts, and making competent use of web workers. This is a developer's job, not a layout designer's, and it cannot be hacked together quickly - but it is exactly what eliminates losses at the most important stage of the funnel.[4]

The business meaning of INP is easiest to demonstrate on the class of "expensive" sites: medicine, legal services, real estate, B2B services. Here a single application can cost 80–250 BYN, and losing 10–15% of form submissions to a laggy interface adds up to dozens of applications and thousands of BYN per month. The Ontop team regularly sees heavy client-side JS on a form eat up 20–30% of real submissions - they never reach the CRM, and the sales manager does not even know about them.[5]

| Scenario | Risk | Typical causes | Fix |
| --- | --- | --- | --- |
| Complex application form | High | Synchronous third-party scripts on click; a huge JS validator running on every input; an anti-bot check blocking the main thread | Asynchronous dispatch, split validation, lazy script initialization |
| Lagging catalog filters | High | Every checkbox recalculates the entire JS state; the whole list is redrawn instead of the changed cards; a synchronous server request on every click | List virtualization, query debounce, targeted redraws |
| Heavy pop-ups and chats | Medium | Third-party scripts loaded onto the main thread immediately; heavy modals rendered before any interaction; a chat widget blocking the main thread for 1–2 seconds | Lazy loading on intent, web worker for heavy logic |
| Onboarding and calculators | Medium | Every click recalculates a large object; animations done in JS instead of CSS transforms | CSS animations, memoization, splitting into small handlers |
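As an illustration of the memoization fix mentioned just above, here is a minimal sketch (function names are hypothetical): the expensive filter recomputation is cached, so a repeated click with the same state reuses the previous result instead of rebuilding a large object on the main thread.

```javascript
// Minimal memoization sketch: cache results keyed on the arguments so
// repeated interactions with the same state skip the expensive work.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

let computations = 0; // counts how often the expensive path actually runs
const filterProducts = memoize((products, inStockOnly) => {
  computations++; // the expensive recalculation would happen here
  return products.filter(p => !inStockOnly || p.inStock);
});

const products = [{ id: 1, inStock: true }, { id: 2, inStock: false }];
filterProducts(products, true);
filterProducts(products, true); // cache hit: computations stays at 1
```

JSON-stringifying arguments is the crudest possible cache key; in a real catalog you would key on the serialized filter state, but the principle - do the heavy work once per state, not once per click - is what brings INP down.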

CLS: How a jumping layout kills trust in 2 seconds

CLS is the most subtle of the CWV because its effect on conversion works not through speed but through trust. When the user is reading the first screen and a second later the font changes size, the banner jumps down, the "Order" button moves, and a cookie block appears, the person gets the feeling that "something here is unfinished." On sites where decisions carry high stakes, this feeling instantly converts into distrust of the company itself, especially in niches where the user is already cautious: services, medicine, B2B, expensive purchases.[7]

The worst scenario is the "bouncing button". The user sees a CTA on the first screen, aims, taps - and at the moment of the tap the button shifts down because a lazily loaded banner or ad appeared above it. The user hits the wrong element, lands somewhere unexpected, and loses confidence in the interface. On click maps this shows up as a cluster of "misses" in one area of the page. In conversion measurements it produces a 5–10% drop on cold traffic and up to 20% on mobile, where the screen is smaller and shifts are felt more acutely.[5]

A less obvious scenario is fonts and icons. When a page renders content in a fallback font and loads the main font a second later, line lengths change and the whole layout reflows. Technically this is not critical, but psychologically it reads as instability. The user gets a brief feeling that "the site is not ready yet," which reduces their willingness to enter personal data into the application form. CLS catches this problem and puts a number on it - but it is solved by a developer, not a marketer: a correct font-display strategy, reserved space for media, and no late JS injections into the DOM.[7]
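In practice, the developer-side CLS fixes mentioned above often look something like this (a sketch with illustrative selectors and values, not a drop-in stylesheet):

```css
/* Sketch of common CLS fixes; names and paths are placeholders. */
@font-face {
  font-family: "BrandFont";
  src: url("/fonts/brand.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately; pair with a
                         metric-compatible fallback to minimize the swap shift */
}

img, video {
  max-width: 100%;
  height: auto; /* with width/height attributes in HTML, the box is reserved */
}

.hero-banner {
  aspect-ratio: 16 / 9; /* reserve space before the image arrives */
}

.cookie-banner {
  position: fixed; /* overlay the content instead of pushing it down */
  bottom: 0;
}
```

The common thread: every late-arriving element gets its space reserved up front, so nothing the user is already reading moves when it loads.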

The third layer of CLS is ad units and cookie consent pop-ups. Technically these are "explainable" jumps, but the user does not distinguish explainable CLS from unexplainable - they just see the twitch. According to Web Almanac 2024, CLS has largely been fixed on most sites, but on projects that make heavy use of third-party scripts it remains a problem. And those projects are often exactly the lead generation sites with a large advertising load - the ones where CLS hits the weakest point: the application form.[5]

Without working with CLS

Jumping layout - distrust in 2 seconds

The CTA button moves down at the moment of the click because of a lazily loaded banner above it. The font changes size and the title "twitches." The cookie banner appears after 800 ms and shifts half of the first screen. The user reads this as "an unfinished site," and trust in the company is lost before the offer is even read. Mobile conversion drops by 10–20%.[7]

With normal CLS

Stable layout - the user is focused on the offer

Space is reserved for all media, banners and dynamic blocks. Fonts are loaded with a correct font-display strategy. The cookie banner does not shift the content. CLS < 0.1, and the user perceives the page as a finished, polished product. Trust is preserved and the application form gets its full share of conversions without hidden losses.[5]

SEO effect versus business effect: which weighs more?

Discussions about CWV almost always conflate two stories: how they affect rankings and how they affect conversions. These are two different effects with different mathematics, and they need to be compared separately. The SEO effect of CWV, according to the confirmed position of Google, exists, but it is a weak signal that works as a tiebreaker for equal content relevance. If your content is weak, a perfect CWV won't get you to the top. If you have strong content and average CWV, you will beat competitors with strong content and poor CWV, but modestly.[6]

The business effect of CWV works completely differently: it does not depend on your position in the search results, or on whether you rank at all. It applies to every user who reaches the site, from any channel. Paid traffic from Google Ads and Yandex.Direct, social media targeting, SEO, direct traffic from business cards and brochures, clicks from mailings - all these visitors equally face a slow LCP, a laggy INP and a jumping CLS, and equally lose some conversions. The economic weight of CWV is therefore several times greater than its SEO weight: the former applies to 100% of traffic, the latter only to the organic share that competes for positions against close competitors.[2]

The third difference is the speed of the payoff. The SEO effect of CWV, if there is one, shows up after weeks or months of re-indexing and signal recalculation. The conversion effect is visible 7–14 days after the optimizations are deployed - just enough time to accumulate a statistical base for comparing CR and CPL before and after. In payback terms, working on CWV for the sake of applications therefore pays back earlier and more reliably than working on CWV for the sake of SEO.[3]

The main practical conclusion: asking "do we need to do Core Web Vitals for SEO" means lowering expectations and budgeting the task wrongly from the start. The correct framing is "how many applications and how much money are we losing to bad CWV right now, and how much of that will we win back through optimization." Then the task falls under the responsibility of marketing and product instead of being left to a lone SEO specialist, and developer time gets allocated to it rather than a leftover hour in a sprint.[5]

| Criterion | SEO effect of CWV | Business effect on applications |
| --- | --- | --- |
| Coverage | Organic traffic only | All traffic: SEO, advertising, direct, mailings |
| Signal strength | Weak tiebreaker with equal content | Direct CR multiplier of 1.1–1.8x (Vodafone, Rakuten) |
| Speed of payoff | Weeks to months of reindexing | 7–14 days after deployment |
| Where the effect shows | Search Console, positions by query | Landing CR, CPL, revenue |
| Who measures it | SEO specialist | Marketing and product together |
| Budget priority | Often residual | Should be a business priority |

How to measure CWV in conjunction with CR and CPL

For CWV to stop being "laboratory numbers from PageSpeed" and become a manageable business metric, they need to be collected alongside real behavior and the funnel. A Lighthouse score of 92 by itself says nothing about applications: it shows how the page behaves in a synthetic measurement on a specific device and network. Real users come on different devices, different connections and with different browser extensions, and their CWV can differ completely from the developer's report. The first step is therefore the move from lab measurements to field data.[5]

Field data is what the Chrome User Experience Report (CrUX) and any RUM (Real User Monitoring) system collect. RUM records real LCP, INP and CLS for each session and lets you segment them by device, country, page and traffic channel. At this stage the first useful picture appears: what the LCP is for mobile users who came from Google Ads to a service landing page, and how it differs from desktop LCP on the home page. Next, this picture needs to be crossed with the funnel: a segment with bad LCP should be compared with a segment with good LCP on conversion and CPL.[4]

Once the segment picture is ready, a management decision emerges: where to point the developer. If CR in the segment with LCP > 4 s is 40% lower than in the segment with LCP < 2 s, and half of the paid traffic lands in that segment, then LCP optimization will have a measurable effect on CPL. If the difference is small, or segments with bad LCP are few, shift the priority to INP or CLS. This is normal product-analyst work on top of CWV: not "improve everything," but close the metrics where the main application losses sit.[5]
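A minimal sketch of that segmentation step, assuming RUM samples have already been exported (the values below are invented for illustration): compute p75 LCP per device segment, then compare CR between segments the same way.

```javascript
// p75: the value at the 75th percentile, the standard CWV aggregation.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Hypothetical RUM export: one LCP sample (seconds) per session.
const samples = [
  { device: "mobile", lcp: 4.8 }, { device: "mobile", lcp: 3.9 },
  { device: "mobile", lcp: 5.1 }, { device: "mobile", lcp: 2.2 },
  { device: "desktop", lcp: 1.4 }, { device: "desktop", lcp: 2.1 },
  { device: "desktop", lcp: 1.8 }, { device: "desktop", lcp: 1.2 },
];

// Group samples by segment and take p75 of each.
const bySegment = {};
for (const s of samples) (bySegment[s.device] ??= []).push(s.lcp);

const p75Mobile = p75(bySegment.mobile);   // 4.8 - well past the 2.5 s target
const p75Desktop = p75(bySegment.desktop); // 1.8 - within the target
```

In a real pipeline the grouping key would also include page template and traffic source, and the same per-segment aggregation would be joined with CR and CPL from the funnel data.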

The final stage is A/B testing of the optimizations. Vodafone and Rakuten built their well-known cases on A/B: half of the traffic to the old page, half to the optimized one, the only difference being Web Vitals, with sales and conversion measured. This is the gold standard that lets you say precisely "LCP optimization brought +8% sales" instead of the vague "we got faster, and sales seem to have grown." Without A/B, many changes are easily explained away by seasonality or other marketing factors.[1]

How to measure CWV in conjunction with CR and CPL

Stage 1

Collect field data via CrUX and RUM

Connect RUM (or at least analyze the CrUX dataset) and start collecting real LCP, INP, CLS by segment: device, country, page type, traffic source. Leave Lab measurements in PageSpeed as an auxiliary tool, not as the main one.[5]

Stage 2

Segment users by CWV

Divide traffic into “good CWV”, “needs improvement”, “bad CWV” for each metric separately. Compare how different segments behave in the funnel: bounce, viewing depth, CR, CPL. Find segments with high traffic volumes and bad CWVs - that's where the money lies.[3]

Stage 3

Link CWV to funnel and applications

Link CWV segments to real conversions in CRM. Calculate how many applications and money in BYN are lost on a slow LCP, lagging INP and jumping CLS. Turn technical metrics into financial statement lines that the owner understands.[7]

Stage 4

Run an A/B test of optimizations

Following the example of Vodafone and Rakuten - run the optimized and old version on 50/50 traffic, measure CR and revenue. This eliminates the “did it help or not” debate and provides accurate attribution of the effect of Web Vitals optimization.[1]

Why “optimization for Lighthouse” does not move applications

The most common mistake when working with CWV is "optimizing for the Lighthouse score." The team sees red numbers in PageSpeed and sets a goal to "get it to 90." The developer knocks out the most visible warnings: converts images to WebP, adds lazy loading, removes a few unnecessary scripts. The synthetic Lighthouse score rises from 60 to 90, the report gets its checkmark, and the topic is closed. A month later the marketer sees no change in CR, and the team concludes that "CWV doesn't work in our niche." The problem is not CWV - it is that what was optimized is not what was losing applications.[7]

Lighthouse is a synthetic estimate with fixed device and network parameters. Real users on a cold mobile LTE connection see the page completely differently, and their LCP may be not 1.8 seconds as in the report, but 4.5 seconds. A Lighthouse score of 92 easily coexists with a real p75 LCP of 4.2 seconds on mobile, and it is this real LCP that governs conversion. Work should therefore be driven by the field metric, not the lab score - and a company's first honest look at its RUM data usually feels like a cold shower.[5]

The second mistake is optimizing for the "green badge" rather than for applications. The team fixes what is easiest to fix, not what contributes most to CR. For example, it compresses CSS to raise the Speed Index but leaves untouched the heavy third-party chat script that kills INP on the form. The metrics improve; the applications do not. To prevent this, CWV optimization should be planned together with the marketer and the analyst, not inside the development team alone. For the Ontop team, this gap is the most common cause of the "failed" optimizations we see at the start of projects.[7]

The third mistake is one-off optimization with no follow-up. CWV is a living metric: it degrades with every new feature, new advertising script, new affiliate integration. A site that showed good CWV in January can slip into the red zone by May simply because a chat was added, a new analytics pixel installed and the header expanded. Without regular RUM monitoring and a policy of blocking releases that degrade CWV, any one-off improvements get rolled back. And then the illusion returns that "CWV doesn't work" - when in fact nobody was making sure it stayed at the level achieved.[4]

Main mistake

Getting the Lighthouse score to 90 is not the goal. The goal is to reduce p75 LCP, INP and CLS on real users from real devices and traffic channels. The score and the real metric diverge regularly, and it is the real metric that drives applications.[5]

If CWV optimization is carried out in synthetic points, and not in RUM metrics for traffic segments, it is almost guaranteed not to advance applications. Any work with Web Vitals should be planned from the funnel and from the channels, and not from the PageSpeed report.[7]

Where to start to get CWV back on your advertising budget?

If you are just starting to work on CWV with applications in mind, it is important not to try to "fix everything at once" but to follow a short sequence that quickly produces a visible effect. This sequence works the same for a service landing page, a B2B site, a corporate site and a mid-sized online store. It does not replace deep technical optimization, but it delivers a first pass with a measurable result and justifies a further development budget.[5]

First, fix the starting point. Collect CrUX or RUM data for the last 28 days, split by device and key page templates (home page, service landing page, application form, catalogue). Get p75 for LCP, INP and CLS in each segment - this is what you will be moving. At the same time, record the current CR and CPL for each segment; without them, any optimization is left without a business justification.[3] Next, choose a point of application - usually the 1–2 page templates that receive the most paid traffic and have the worst p75 LCP or INP. There is no reason to fix the home page first if 80% of the advertising traffic lands on service pages.[1]

The third step is to split the work into "quick" and "architectural". Quick wins include optimizing images and fonts, removing junk third-party scripts, lazy-loading off-screen content, and preconnecting to critical domains. These deliver 30–50% of the effect and take 2–3 weeks. Architectural work includes reworking heavy client-side JS, dropping render-blocking libraries, server-side rendering of critical parts, and moving to modern delivery formats. This is a month or two of development, but it is what fixes INP and the heaviest LCP scenarios.[4] The final step is to build CWV into the process: RUM monitoring, drawdown alerts, checking releases for regressions. Without this, everything you fixed slowly degrades back.[5]
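The "quick" bucket often boils down to a handful of HTML hints. A sketch with placeholder URLs, not a recipe for any specific stack:

```html
<!-- Sketch of common quick LCP wins; all URLs are placeholders. -->

<!-- Warm up the connection to a critical third-party origin early. -->
<link rel="preconnect" href="https://fonts.example-cdn.com" crossorigin>

<!-- Tell the browser the hero image is the LCP candidate. -->
<link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">

<!-- Explicit dimensions reserve the box and prevent layout shift. -->
<img src="/img/hero.webp" width="1200" height="600" alt="Main offer">

<!-- Below-the-fold media should not compete with the first screen. -->
<img src="/img/below-fold.webp" loading="lazy" width="800" height="400" alt="">

<!-- Third-party widgets stay off the critical rendering path. -->
<script src="/js/chat-widget.js" defer></script>
```

None of this touches application logic, which is why this layer usually fits into the 2–3 week window; the architectural JS work behind INP is a separate sprint.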

In parallel with the technical work, it is important to involve an analyst who, every two weeks, compiles a report on how CR and CPL have changed in the segments with improved CWV. This turns CWV from an IT task into a business task with a clear ROI and closes the very gap between teams that keeps many companies from finishing optimization for years. Once this cycle is closed, CWV stops being a one-off project and becomes a permanent channel for reducing CPL - just like work on offers and trust blocks.[7]

Where to start to get CWV back on your advertising budget?
  1. Take CrUX or RUM measurements over 28 days, by device and key page templates
  2. Record current CR and CPL for the same segments - this is the baseline for the business effect
  3. Select the 1–2 pages with the most paid traffic and the worst p75 LCP/INP - direct resources there
  4. Close the "quick" optimizations: images, fonts, junk third-party scripts, preconnect, lazy-load
  5. Launch architectural changes to client-side JS - the main weapon against bad INP on forms and filters
  6. Set up RUM monitoring and alerts so releases do not roll back the achieved level
  7. Every 14 days, produce a "CWV → CR → CPL" report by segment - turn the metrics into money
  8. For key changes, run an A/B test following the Vodafone and Rakuten scenario - attribute the effect precisely

How Ontop solves this

In Ontop projects, work on Core Web Vitals does not fall on one specialist - it sits at the intersection of SEO, development, analytics and performance marketing. We start not with a PageSpeed report but with a cross-section of real field data by traffic channel and page template. First we see where the main losses sit: on which template and device a bad LCP cuts out part of the paid traffic, which application form lags because of heavy client-side JS, and in which sections CLS jumps enough to hurt trust.

Next, we do not "optimize the site in general." We single out the 1–2 pages that carry the main flow of applications and paid traffic, and we prioritize the work by the expected effect on CR and CPL, not by the Lighthouse score. Some tasks close quickly - static assets, scripts, images, fonts. Others require an architectural rework of the frontend, and in those cases the Ontop development team runs it as a separate sprint with a measurable goal for p75 LCP and INP. Everything is then wrapped in RUM monitoring and release rules, so the achieved level does not slip two months later.

For a business owner, this means CWV stops being a checkbox in an SEO audit and becomes a distinct lever for reducing the cost of a lead. We show losses and effects in BYN, not points, and tie optimization to specific numbers in the funnel. This lets you plan the development budget as an investment with a clear payback window, not as yet another technical task done "because you are supposed to."

Order a Core Web Vitals audit tied to your funnel from the Ontop team: we will show what a slow LCP, a laggy INP and a jumpy CLS cost you in leads and BYN, and draw up a prioritized optimization plan.

Order an audit

Frequently asked questions

We have a PageSpeed score of 92, is everything good?

Not necessarily. PageSpeed reports synthetic measurements from one device and network profile, while real users can see completely different LCP and INP values. A correct assessment uses field data from CrUX or RUM at p75 for your key traffic segments. A lab score of 92 often coexists with p75 LCP above 4 s on a cold mobile connection - and it is that real-world figure that governs enquiries.
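The lab-vs-field gap is a distribution question: a tail of slow sessions can push the p75 past the threshold even when the median looks healthy. A toy illustration with made-up samples:

```python
# Toy illustration (made-up numbers): the median LCP looks "green",
# but p75 - the value the CWV assessment actually uses - fails 2.5 s.
from statistics import median, quantiles

# Hypothetical field samples: fast desktop sessions plus a slow mobile tail
lcp_samples = [1.2, 1.4, 1.5, 1.7, 1.9, 2.1, 3.8, 4.4, 4.9, 5.3]

med = median(lcp_samples)
p75_lcp = quantiles(lcp_samples, n=100)[74]

print(f"median LCP: {med:.2f} s")     # the typical session looks fine
print(f"p75 LCP:    {p75_lcp:.2f} s") # this is what the assessment uses
```

A lab run that resembles the fast half of the distribution will score well regardless of the tail, which is why only field data can settle the question.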

We run a small B2B site with only 200 clicks a month - do we even need this?

If those 200 clicks are paid and cost 1.5–2 BYN each, any hole in the funnel costs real money. With low traffic there is no point in large optimization projects, but basic work on the LCP of the enquiry form and the INP of the "Submit" button pays off even at 200 clicks - the effect will simply be tens, not hundreds, of BYN saved per month.

What is more important in 2024 - INP or LCP?

It depends on where your bottleneck is. If you have a heavy offer and slow first-screen loading, LCP is the priority. If you have complex forms, filters, calculators and interactive elements, INP is. On most projects LCP is fixed first (it controls whether the user enters the funnel at all), then INP (it controls conversion inside the funnel), with CLS usually handled along the way.

Is it possible to just enable the CDN and not do the rest?

A CDN removes some of the TTFB delay and helps LCP, but it does not cure heavy client-side JS, bad fonts or third-party scripts on the main thread - and for INP and CLS the causes usually lie there. A CDN is therefore a useful step, but not a sufficient one: on sites with a heavy frontend its effect is modest, and the main losses remain.

How long does it take to bring CWV to the green zone?

A basic cycle takes 1–3 months, depending on the state of the project. The first 2–3 weeks are diagnostics and quick fixes (images, scripts, fonts). Then comes architectural work on client-side JS, forms and heavy components. The final stage is RUM monitoring and release regulations. Without that last step, the gains roll back within about six months.

What about CWV on CMSs like WordPress, Bitrix or Tilda?

On average, boxed CMSs and site builders score worse on CWV than custom-built solutions, because their templates and plugins carry a lot of unnecessary JS and CSS. This can be cured, but it takes targeted work: disabling unused modules, reworking the theme, setting up caching correctly. Tilda and similar builders have a hard optimization ceiling - which is why projects with a serious advertising load usually migrate off them.

How to sell CWV optimization to an owner who does not understand the technology?

Through the funnel and BYN. You measure p75 LCP by traffic segment, segment the funnel by CWV, count how many leads are lost in the slow segments, and convert that into money. Then you show the Vodafone and Rakuten A/B tests as references for the realistic effect. This moves the conversation from "we need to speed up the site" to "we are losing X BYN per month on a slow LCP, and optimization will recover part of it."
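The loss estimate described above is back-of-the-envelope arithmetic; every input in this sketch is a hypothetical placeholder, not a measured figure:

```python
# Hypothetical loss estimate: split paid traffic into fast and slow
# LCP segments and price the conversion gap in BYN. All inputs made up.
visits_fast, cr_fast = 600, 0.040   # sessions with good LCP
visits_slow, cr_slow = 400, 0.022   # sessions with slow LCP
avg_cpc_byn = 1.8                   # cost per click in BYN

leads_now = visits_fast * cr_fast + visits_slow * cr_slow
# If the slow segment converted like the fast one:
leads_potential = (visits_fast + visits_slow) * cr_fast
lost_leads = leads_potential - leads_now

spend = (visits_fast + visits_slow) * avg_cpc_byn
cpl_now = spend / leads_now
cpl_potential = spend / leads_potential

print(f"lost leads per month: {lost_leads:.1f}")
print(f"CPL now: {cpl_now:.2f} BYN, potential: {cpl_potential:.2f} BYN")
```

The "X BYN per month" figure in the owner conversation is exactly `lost_leads` multiplied by what a lead is worth to the business, which is why the CR gap between fast and slow segments has to be measured first.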

Conclusion

Core Web Vitals deserve business attention because they change what percentage of paid and organic visitors ever reaches the conversion point.

Teams that measure CWV as part of lead economics make better prioritisation decisions than teams that treat it as isolated SEO hygiene.

Sources

  1. Vodafone — A 31% improvement in LCP increased sales by 8% - A/B test at Vodafone Italia: a 31% improvement in LCP gave +8% sales, +15% lead-to-visit and +11% cart-to-visit with identical traffic and offer. The main proof that CWV moves money, not just SEO: https://web.dev/case-studies/vodafone
  2. How Rakuten 24's investment in Core Web Vitals increased revenue per visitor by 53.37% and conversion rate by 33.13% - A/B test at Rakuten 24: Web Vitals optimization gave +33.1% CR, +53.4% revenue per visitor, +15.2% average order value and -35.1% exit rate. Confirms the CWV effect in e-commerce: https://web.dev/case-studies/rakuten
  3. Milliseconds make millions — How improvements in mobile site speed positively affect a brand's bottom line - Google and Deloitte research across 37 brands and 30+ million sessions: a 0.1 s speed improvement gives +8.4% retail conversion, +10.1% travel conversion and -8.3% bounce rate in lead gen: https://web.dev/case-studies/milliseconds-make-millions
  4. Interaction to Next Paint is officially a Core Web Vital - Official web.dev/Chrome announcement of FID being replaced by INP as a Core Web Vital in March 2024. Confirms that interactions (forms, filters, popups) now feed directly into page experience: https://web.dev/blog/inp-cwv-launch
  5. Performance - The 2024 Web Almanac by HTTP Archive - HTTP Archive industry data for 2024: 43–48% of sites pass all CWV, LCP remains the main bottleneck, plus INP and CLS trends on mobile and desktop: https://almanac.httparchive.org/en/2024/performance
  6. Introducing INP to Core Web Vitals - Google Search Central Blog - Google Search Central's explanation of INP, its thresholds (200 ms good, 500 ms poor) and the role of the page experience signal in search. The basis for reasoning about the SEO effect of CWV: https://developers.google.com/search/blog/2023/05/introducing-inp
  7. The business impact of Core Web Vitals - A web.dev summary of CWV business effects: CR growth on fast pages, typical loss scenarios with slow LCP and INP, and the difference between lab and field measurements: https://web.dev/case-studies/vitals-business-impact
Author:
Konstantin Klinchuk