Poor ad conversion becomes expensive because teams often argue about symptoms instead of isolating where the loss occurs.
Campaigns, landing pages, analytics, and CRM handoffs can all distort the same funnel, which is why “traffic does not convert” is not a diagnosis.
This article shows how to separate campaign-side failure from landing-page-side failure and how to prioritise fixes without guessing.
Key takeaways in 30 seconds
- "Advertising traffic does not convert" is not one problem but a gap at one or more of four points: keyword→ad, ad→page, page→form, form→CRM. There is no point in repairing everything at once: first find exactly where the leak is.
- Average landing page conversion across industries is about 4.3%; the median sits around 3–6% and only rarely exceeds 10% even for top performers. If you are at 0.3–0.8%, the problem is almost certainly the page, not the campaign. If warm keywords convert at 5–7% but applications are still scarce, dig into the campaign and attribution.[3]
- Campaign symptoms: high cost per click with low CTR, cheap impressions on "junk" queries, low Quality Score, leads coming from the wrong segment. Page symptoms: high bounce rate, short time on page, drop-off before the form, mobile converting much worse than desktop.
- Quality Score in Google Ads is a bridge between the two worlds: your rate depends not only on the campaign but also on the landing experience. A weak page raises CPC and, on the same budget, reduces impressions and clicks; in other words, it "spoils" your campaign even if the campaign itself is set up perfectly.[1]
- Page loading speed is a separate factor: as time to interactivity grows from 1 to 3 seconds, the probability of a bounce increases by 32%; from 1 to 5 seconds, by 90%; from 1 to 6 seconds, by 106%. On mobile it is even harsher: the traffic arrived, the page never opened in time, the budget was spent.[2]
- Want a dedicated report on "where your project really leaks: in advertising or on the website" and a prioritised action plan? The Ontop team runs end-to-end funnel diagnostics, from semantics and Quality Score to page speed, forms and CRM. Contact us through the form on the website.
- Where does "there is traffic, but no applications" come from?
- What counts as normal: conversion benchmarks by industry
- Funnel attribution: how to divide responsibility
- Seven symptoms that point to the campaign
- Seven symptoms that point to the landing page
- Where the funnel breaks down in practice
- Quality Score: a bridge between the campaign and the website
- Priority matrix: what to fix first
- A 7-day funnel diagnostic plan
- Frequently asked questions
- Conclusions: where to look for the main gap
Where does "there is traffic, but no applications" come from?
When an owner or marketer says "advertising doesn't work", it is almost always shorthand. At the factual level, "doesn't work" means one of four things:
- There are plenty of clicks in the account, but on "junk" queries: users arrive with expectations that have nothing to do with your product.
- The clicks are roughly right, but people do not see on the page what the ad promised them and leave within the first 5–10 seconds.
- People read the page and reach the form, but the form itself creates friction: it is long, breaks on mobile, requires a captcha or registration.
- The form works and the application is sent, but it never reaches the CRM, gets lost in email, or reaches a manager hours later, by which time the lead has gone cold.
Each of the four scenarios requires a different fix. Raising campaign bids while your mobile layout is falling apart means burning budget for the same result. Redesigning the landing page while your applications are lost in a spam filter means rebuilding something that already works. So the first thing to do before any optimisation is to establish exactly which of the four points is leaking, and at what scale.[5]
The most common diagnostic mistake is arguing about the culprit in "either/or" terms. In practice, two or three leak points almost always operate at once, but they differ in scale: one accounts for 60–70% of losses, another for 20–25%, and the third for the remaining tail. The task is to find the largest one and fix it first; only then does the optimisation budget produce a measurable result instead of being spread thin across minor issues.
Any claim that "advertising doesn't work", made without numbers at specific points in the funnel, is an emotion, not a diagnosis. Before changing the contractor, redesigning the website or stopping the campaign, you need to see in one table: how many impressions there were, how many clicks reached the content, reached the form, submitted applications, and landed in the CRM. Most decisions then become obvious.
What counts as normal: conversion benchmarks by industry
Without a reference point, any number means nothing. Is a 1.5% conversion rate bad or good? It depends on the industry. For complex B2B services, 1.5% is a decent result; for a B2C online clothing store, it signals a broken funnel. So before judging whether advertising works, look at the average and good values for your segment and compare them with yours.[4]
| Industry / landing type | Low CR | Average CR | Good CR | Top quartile |
|---|---|---|---|---|
| Professional services (law, audit, consulting) | < 1.5% | 3–5% | 7–9% | 10%+ |
| B2B services, complex products | < 1.0% | 2–4% | 5–7% | 8%+ |
| SaaS, IT products | < 1.5% | 3–5% | 6–8% | 10%+ |
| Construction, repair, finishing | < 1.5% | 3–5% | 6–9% | 10%+ |
| Medicine, dentistry, clinics | < 2.0% | 3–6% | 7–10% | 12%+ |
| Auto, real estate (long cycle) | < 1.0% | 2–3% | 4–6% | 7%+ |
| E-commerce (mass segment) | < 1.5% | 2–4% | 5–7% | 8%+ |
| Educational services, courses | < 1.5% | 3–5% | 7–9% | 11%+ |
| Local services (cleaning, tire fitting, etc.) | < 2.0% | 4–6% | 8–11% | 13%+ |
An important nuance: these benchmarks are for traffic that arrived via thematically targeted advertising on a purpose-built landing page. If your "landing page" is an ordinary site home page with no offer, no explicit CTA and no blocks answering objections, your normal range automatically shifts down, and you cannot compare it against landing page benchmarks. This is a typical mistake: traffic is sent to the home page, a landing page conversion is expected from it, and everyone is surprised when it does not appear.[3]
Find your industry, calculate your actual conversion rate (applications ÷ unique clicks from ads over the same period) and see which column you fall into. If it is "low", you need to look at both the campaign and the page at once: the problem is systemic. If it is "average/good", the focus shifts to finer points: lead quality, sales conversion, efficiency by time of day and by segment.
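The calculation from this tip can be sketched directly. The band boundaries below are illustrative values mirroring the "B2B services" row of the table, and `classify_cr` is our own helper, not a standard API:

```python
def classify_cr(applications, unique_clicks, bands):
    """Compute conversion rate (%) and place it in a benchmark band.

    bands: (low_ceiling, avg_ceiling, good_ceiling) in percent,
    e.g. (1.0, 4.0, 7.0) for B2B services from the table above.
    """
    if unique_clicks <= 0:
        raise ValueError("need at least one click")
    cr = 100.0 * applications / unique_clicks
    low, avg, good = bands
    if cr < low:
        label = "low"
    elif cr <= avg:
        label = "average"
    elif cr <= good:
        label = "good"
    else:
        label = "top quartile"
    return cr, label

# Example: 14 applications from 800 ad clicks on a B2B services page
cr, label = classify_cr(14, 800, bands=(1.0, 4.0, 7.0))
print(f"{cr:.2f}% -> {label}")  # prints "1.75% -> average"
```

A "low" verdict here is the signal from the tip above: look at the campaign and the page simultaneously.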
Funnel attribution: how to divide responsibility
Any advertising funnel is a chain of steps; at each one, the user either moves on or drops off. If you only see the final number (say, "5 applications per 1000 clicks = 0.5% conversion"), you see the result of the whole chain, but not the link where the collapse happens. Attribution means cutting the funnel into steps to see which step consumed the most money.
The minimum sufficient scheme for an advertising funnel looks like this: each link is measured with its own metric, and each can be compared against a norm.

An example of a healthy funnel in the B2B services segment: ~4.2% end-to-end conversion from click to application. Each step is measured: "reached the content" via a page_view event plus LCP, "scrolled to the offer" via scroll 50%, "opened the form" via form_focus, "submitted" via form_submit. If one of the transitions collapses, you see it immediately, and you see who has to fix it.
When the funnel is laid out step by step, the "advertising or website" debate closes in minutes. If you lose 350 people on the 1000→650 transition (half of the visitors did not wait for the page to load or left within the first seconds), that is the responsibility of the page-plus-speed combination. If you lose them on 650→220 (the page opened, but the person found nothing of interest), that is the offer and the ad→page link. If everything holds up to the form and the collapse happens at 90→42, it is the form and the CRM.
Most projects break down in the same place: analytics records only "clicks" and "form submissions", and everything in between stays a black box. If you do not measure page load, scroll depth, form opens, validation errors and actual lead arrival in the CRM, there is no point arguing about the culprit: you are comparing two versions of a guess, not two versions of reality.
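The step-by-step attribution described above is easy to mechanise. The stage names and counts below reuse the illustrative 1000→650→220→90→42 example, and `worst_transition` is our own helper, not a standard API:

```python
def worst_transition(stages):
    """stages: ordered list of (name, count) pairs from one period.
    Returns (users_lost, transition_label, retention) for the
    transition with the largest absolute user loss."""
    losses = []
    for (a_name, a), (b_name, b) in zip(stages, stages[1:]):
        retention = b / a if a else 0.0
        losses.append((a - b, f"{a_name} -> {b_name}", retention))
    losses.sort(reverse=True)   # biggest absolute loss first
    return losses[0]

funnel = [("clicks", 1000), ("reached content", 650),
          ("scrolled to offer", 220), ("opened form", 90),
          ("submitted", 42)]

loss, step, retention = worst_transition(funnel)
print(f"lost {loss} users at '{step}' ({retention:.0%} retained)")
```

On these numbers the biggest absolute loss sits at "reached content → scrolled to offer", which, per the section above, points to the offer and the ad→page link rather than the form or CRM.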
Seven symptoms that point to the campaign
Before redesigning the site, rule out basic problems in the ad account: advertising can systematically bring the wrong demand, at the wrong time, at the wrong price. The symptoms below group into four typical scenarios, which usually show that the failure starts before the click, in the traffic-buying setup itself:
- CPC
- Intent
- Audience
- Conversions
Additional campaign-side signals usually confirm the same picture: CTR is 2–3 times below the niche norm, conversion between two campaigns on the same page differs fivefold, and the bulk of the budget is spent in hours and segments where there are no sales. If several of these markers line up, repair the ad account first, not the first screen of the site.
Seven symptoms that point to the landing page

For the landing page, the logic is mirrored: if the ad traffic looks on-topic but after the click people do not reach the offer, the form or the CRM, look for the problem on the page. Here it is more useful to watch specific behavioural signals, visible in Yandex Metrica, GA4, scroll maps and session recordings, than a long list of "symptoms in general".
| Signal on the page | What it usually means | Where to repair |
|---|---|---|
| Bounce rate above 70% on relevant queries | The ad promise did not match the first screen, or the page did not open in time[2] | Offer, speed, first screen |
| Average time on page under 30 seconds | The user did not understand what is on offer or why to read on | First-screen structure and content |
| Mobile CR 2–4 times lower than desktop | Mobile layout, form or speed falls apart on 4G | Mobile UX and Web Vitals |
| 60–70% never reach the second screen | The first screen does not engage, or there is nothing convincing below the fold[6] | Screen composition and trust blocks |
| Large gap between form_focus and form_submit | The form creates friction: it is long, unclear, or breaks with errors | Form, validation, microcopy |
| LCP above 2.5–4 seconds | Part of the paid traffic never reaches the content | Hosting, images, frontend[2] |
| All campaigns convert equally poorly | The common denominator after the click, i.e. the page itself, is broken | Landing page as a system |
If several of these signals coincide, leave the advertising account alone for a week: first bring the landing page itself back to normal. Otherwise you will keep arguing about which campaign is bad while the problem is shared by all traffic sources.
Where the funnel breaks down in practice
After the symptoms, what you need is not a second long list in the same format but a map of root causes. In real projects we almost always see the same set of root failures: some live in the advertising, some in the landing page, and the most painful effects appear at the junction of the two layers. That is why it makes sense to combine the causes into one section rather than splitting them into two mirrored lists.
- Weak intent in the traffic. The campaign matches informational queries, broad match is uncontrolled, and the negative keyword list is not updated. There is traffic, but the commercial intent inside it is weak.[4]
- Ad-to-page mismatch. The most common gap at the junction of campaign and website: the ad promises a price, deadline or service format, but the first screen shows generic text about the company. Conversion drops before the details are even read.[5]
- No segmentation. One campaign runs identically for mobile, desktop, weekdays, weekends, hot and cold segments. In strong slots you under-bid; in weak slots you burn budget.
- Wrong optimisation target. A proxy event such as a scroll or a phone-number click is sent to Ads. The system brings people who are good at generating that event, not people who turn into sales.
- Unclear first screen. No clear H1; it is unclear who the service is for, what it costs, how long it takes, and what to do next. The user is under no obligation to decode the page; he simply closes it.[6]
- High-friction form. 7–10 fields, a required email, a captcha, complex validation, no error explanations. Even a motivated user reaches the form and gets stuck there.
- Slow mobile experience. On desktop everything looks tolerable, but on a real mobile connection the page takes 4–5 seconds to load, jumps around and breaks the CTA button. This is no longer a UX nuance but a direct budget drain.[2]
- Broken lead delivery. Even a good campaign and a neat form will not help if the lead never reaches the CRM, gets lost in email, or the manager calls back three hours later. The report says "the site converts", but the business sees no money.
The key practical mistake here is treating each item as a separate war between contractors. When the account, the page and the CRM are reviewed in one report, it quickly becomes clear where the primary gap is and where the secondary effects are: fewer duplicate lists, more of a coherent cause-and-effect picture.
Quality Score: a bridge between the campaign and the website

In the usual mental model, "advertising" and "website" are two independent systems. In Google Ads they are directly linked through the Quality Score: a number from 1 to 10 for each keyword, made up of three components:
- Expected CTR (campaign side)
- Ad relevance (campaign side)
- Landing page experience (website side)
So the campaign has one component, landing page experience, that depends entirely on the site. If the site is slow, irrelevant to the ad, or inconvenient on mobile, Google raises the CPC and reduces the number of impressions. From the outside it looks like "advertising got more expensive" or "advertising stopped working", and many people interpret it exactly that way.[1]
If your Quality Score is low, improving the site (speed, relevance of content to the ad, mobile layout) adds 1–3 QS points at once, which in turn can lower CPC by 20–40%. This is a rare case where work on the website improves the economics of advertising "for free": for the same budget you get more clicks and more applications without changing anything in the campaign itself.
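The click arithmetic behind that tip is simple at a fixed budget. All figures below are illustrative, and the assumed 30% CPC reduction sits inside the 20–40% range mentioned above:

```python
def clicks_at_budget(budget, cpc):
    """Clicks obtainable for a fixed budget at a given cost per click."""
    return budget / cpc

budget = 3000.0                        # monthly budget, any currency
cpc_before = 1.50
cpc_after = cpc_before * (1 - 0.30)    # assume a QS gain cuts CPC by 30%

before = clicks_at_budget(budget, cpc_before)   # 2000 clicks
after = clicks_at_budget(budget, cpc_after)     # ~2857 clicks

print(f"+{after / before - 1:.0%} clicks for the same budget")  # prints "+43%..."
```

A 30% CPC cut yields about 43% more clicks (1/0.7 ≈ 1.43), which is why site work feeds straight back into campaign volume.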
Priority matrix: what to fix first
Once the diagnosis is made and several simultaneous problems are visible, the next question is what to tackle first. A good principle: high impact × low cost = first candidate. Below is the typical matrix we use on Ontop projects: the problem, its impact on conversion, the estimated hours of work, and the priority.
| What's being repaired | Impact on CR | Effort | Side | Priority |
|---|---|---|---|---|
| Negative keywords and search query cleanup | +20–40% | 3–6 hours | Campaign | P0 |
| Matching the first screen to the ad's offer | +30–80% | 6–12 hours | Website | P0 |
| Bid adjustments by location/time/device | +15–35% | 3–5 hours | Campaign | P0 |
| Shortening the form (minimum fields, input masks) | +30–60% | 8–16 hours | Website | P1 |
| Feeding real CRM conversions back into Ads | +20–50% | 16–32 hours | Campaign | P1 |
| Core Web Vitals optimisation (LCP/INP/CLS) | +10–30% | 24–60 hours | Website | P1 |
| Reworking the bidding strategy | +10–25% | 2–3 weeks of learning | Campaign | P2 |
| Full landing page redesign (structure, trust) | +50–150% | 80–200 hours | Website | P2 |
| Launching additional channels (YAN, remarketing) | +15–40% | 40–80 hours | Campaign | P3 |
The logic of the priorities: P0 is "cheap and painful": a noticeable lift in applications within a week for a minimum of team hours. P1 is 2–4 weeks of work that runs in parallel with P0. P2 covers deep changes that only make sense after P0 and P1 are done; otherwise the result is hard to measure. P3 is expansion, to be done once the main funnel works correctly.
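One way to operationalise "cheap and painful first" is to rank fixes by expected lift per hour of work. The midpoint numbers below are taken from the ranges in the matrix, and `rank_fixes` is our own sketch, not a standard tool:

```python
def rank_fixes(fixes):
    """fixes: list of (name, midpoint impact %, midpoint hours).
    Sort by expected CR lift per hour of work, descending."""
    return sorted(fixes, key=lambda f: f[1] / f[2], reverse=True)

fixes = [
    ("negative keywords", 30, 4.5),       # +20-40% CR, 3-6 h -> midpoints
    ("first screen vs ad offer", 55, 9),  # +30-80% CR, 6-12 h
    ("bid adjustments", 25, 4),           # +15-35% CR, 3-5 h
    ("shorten the form", 45, 12),         # +30-60% CR, 8-16 h
    ("Core Web Vitals", 20, 42),          # +10-30% CR, 24-60 h
]

for name, impact, hours in rank_fixes(fixes):
    print(f"{name}: {impact / hours:.1f} %/hour")
```

The P0 items naturally surface at the top of such a ranking, and the expensive P1/P2 rebuilds sink to the bottom, which matches the priority logic above.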
The main prioritisation mistake is almost always the same: the project starts with expensive P2 work or "channel expansion", skipping the cheap and painful P0 tasks. Two months later there is a new landing page or a new media channel, but the end-to-end maths is almost unchanged. The correct sequence is more boring but more profitable: first close the cheap leaks, then measure the effect, and only then move on to major rebuilds.
A 7-day funnel diagnostic plan

A minimal but workable plan for an owner or marketer who wants, within a week, an answer to "where is my leak" and a list of specific actions. Each stage produces a result measurable in numbers, not a "general impression".
Days 1–2
We export reports on campaigns, search queries, devices, geo and time of day from Google Ads / Yandex Direct; page behaviour, scrolling and form events from GA4 / Yandex Metrica; real deals and their attribution from the CRM. We calculate end-to-end conversion and cost per lead.
Days 3–4
We build a single table: impressions → clicks → reached the content → scrolled to the offer → opened the form → submitted → reached the CRM. We compare each transition with the benchmarks and record the biggest losses: this is the main repair area for the coming weeks.
Days 5–6
We check Quality Score, negative keywords, how well the landing page matches the ad texts, speed via PageSpeed Insights / Lighthouse, mobile layout on 4–5 devices, form behaviour (form_focus / form_submit), and actual delivery of applications to the CRM (by sending test leads).
Day 7
We put everything into one document: where it leaks, by what percentage, what to repair first (P0), in 2–4 weeks (P1), and in 2–3 months (P2). We calculate the economics of the repairs in BYN: how much each fix costs and how many more applications the same budget will bring.
- Export the search query report for the last 30 days. A share of "junk" queries above 25% is a reason to clean up the negative keyword list.
- Review CTR / CPC / Quality Score by ad group. Groups with QS < 5 are candidates for rebuilding: split them into narrower groups, rewrite the headlines.
- Check the consistency of the ad texts and the first screen. Open the ad, open the page, scan for 10 seconds, and answer: was it the same thing or not?
- Measure landing speed with PageSpeed / Lighthouse. Check LCP, INP and CLS separately for the mobile and desktop versions.[2]
- Test the mobile layout on real devices. Not an emulator, but 3–5 smartphone models: layout, form, CTA button, pop-ups.
- Build a funnel from GA4 / Metrica events. page_view → scroll 50% → form_focus → form_submit. Every drop of more than 50% is a leak signal.
- Check applications on the website and in the CRM. Did all submitted forms arrive? Do any get lost along the way? What is the manager's response time?
- Review the scroll map. Hotjar / Yandex Webvisor / Microsoft Clarity. What share actually reaches the second and third screens?
- Check bid adjustments. By geo, time, devices, audiences. Compare the effectiveness of segments and bids.
- Check which conversions are sent to Ads. Is it a "click on the phone number" or a real deal? If the former, Smart Bidding is learning from the wrong signal.
- Check hosting and TTFB. If TTFB > 600 ms, speed is a bottleneck even with a perfect frontend.
- Send test requests. From different devices, at different times. Check notifications, arrival in the CRM, and the absence of duplicate leads.
- Collect industry benchmarks. Find the median CR / CTR / CPC for your niche and assess where you stand relative to the market.[3]
- Prepare a final report. One page: where it leaks, by how much, what to fix at P0/P1/P2, and the economics in BYN.
Without diagnostics: decisions "by intuition". The marketer insists the budget must be raised by 40%. The web studio offers to "completely redo the site for 8,000 BYN". The PPC contractor wants to add YAN and Performance Max. Everyone is right, no one is responsible for the outcome, and three months later the numbers still do not add up.

After diagnostics: decisions based on facts. It is visible that 20% is lost in the campaign (negative keywords, bid adjustments) and 55% on the page (ad match, form, speed). We repair P0 in two weeks for 30–40 hours of work, measure CR before and after, and then move to P1 and P2 with confirmed maths.
How Ontop solves this
Fundamentally, we do the work as one team, not as "a PPC specialist + a studio + an analyst, each separately". This closes the classic responsibility gap: between "my account is fine" and "my website is fine" there is no longer a grey area where the client's real budget is lost. All technical work, on the site, the campaign and the analytics alike, is carried out by one team with overall responsibility for the result.
The tool stack is chosen per project: Google Ads and Yandex Direct; GA4, Yandex Metrica, Microsoft Clarity and Hotjar for behavioural analytics; Lighthouse, PageSpeed Insights and WebPageTest for speed; dedicated infrastructure per project to rule out hosting and environment problems; Drupal / Bitrix / Tilda on the site side, depending on business requirements. What matters is not the set of services itself but a single diagnostic in which the traffic source, the landing page and the analytics are analysed as one system.
Have you launched advertising, is the budget being spent, yet applications are few or "wrong"? We will run an end-to-end diagnostic of the campaign and the website and show exactly where your problem is, within 5–7 working days, with a clear plan of corrective work.
Frequently asked questions
How many clicks does it take to understand whether a campaign or website is performing poorly?
The minimum sufficient sample is about 300–500 clicks per landing page or campaign. At lower volumes any conversion figure is noise: on 100 clicks, one or two applications can mean either 1% or 2%, and the difference is statistically insignificant. If your campaigns are very small, look at a 4–6-week horizon and compare segments rather than individual days.
Traffic homogeneity also matters: if those 500 clicks mix search, YAN and remarketing, you cannot judge by the total figure; split it by source.
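To see why 100 clicks is noise, a Wilson score interval makes the uncertainty explicit. The formula is standard; the sample figures are illustrative:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a conversion rate."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

lo, hi = wilson_ci(2, 100)     # 2 applications from 100 clicks
print(f"{lo:.1%} .. {hi:.1%}")   # about 0.6% .. 7.0% - pure noise

lo2, hi2 = wilson_ci(10, 500)  # same 2% CR, five times the sample
print(f"{lo2:.1%} .. {hi2:.1%}")  # about 1.1% .. 3.6% - usable
```

At 100 clicks the interval spans more than an order of magnitude of plausible conversion rates; at 500 clicks it narrows enough to compare against the benchmark table.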
Can you tell the site is at fault without setting up individual events in analytics?

Partially, yes. If GA4 / Metrica shows a sharply high bounce rate (above 70%), a very short average time on page (under 30 seconds), and at the same time different campaigns from different sources all give the same low conversion, that is a strong signal of a site-side problem. But pinpoint diagnosis (whether you are losing on the offer, on trust, on the form, or on speed) is impossible without events.

So the minimum sufficient set is basic scroll events (25/50/75/100), form focus, form submission, and validation errors. They take 4–8 hours to set up and completely change your diagnostic capabilities.
If we have an ordinary main site instead of a landing page, do we need a separate landing page?

In most cases, yes. The home page solves many tasks: introduce the company, give an overview of all services, present the team, news and so on. An advertising landing page solves one task: sell a specific offer from a specific ad. This is a conflict of goals; the home page physically cannot do both equally well at once.

In practice, switching from the "ordinary home page" to a dedicated landing page per direction most often yields a +50–150% increase in conversion at the same advertising cost. It is one of the most effective investments in the funnel, especially if you have 3–5 different directions, each needing its own offer.
What to do if the budget for fixes is limited: what to choose first?

First, the P0 items from the priority matrix: cleaning negative keywords, bid adjustments by geo/time, matching the first screen to the offer, and shortening the form. This closes 50–70% of typical leaks and is cheap per hour of work: in a typical project, 30–50 hours in total.

Deep landing page rework and serious speed work (Core Web Vitals) are already P1/P2; they make sense once you have measured the effect of P0 and can see it is real but insufficient. That way each subsequent investment rests on the numbers of the previous one, not on hope.
Why can't you just listen to the PPC contractor, since he sees the campaign?

The PPC contractor sees his half of the funnel: impressions, clicks, CPC, conversions in the account. He does not see what happens on the site after the click and is not responsible for it. So his honest answer to "why are there few applications" usually sounds like: "I don't know what happens after the click; perhaps there is a problem on the site." That is true, but it is not an answer for the owner who pays for the whole result.

To get an answer, you need to connect three sources: the advertising account, site analytics (events, scrolling, forms) and the CRM. Only their intersection gives the picture of an end-to-end funnel and allows you to say unambiguously where the leak is and who should fix it. Often this is the job of a single team responsible for the whole result, not just its part.
What is considered a "normal" end-to-end click-to-payment conversion for services?

For services with a long cycle (B2B, real estate, auto, expensive medicine), end-to-end click-to-contract conversion is usually 0.3–1.5%. For short-cycle services (minor repairs, local services, delivery) it is 1.5–4%. In other words, out of 1000 clicks only a handful or a few dozen reach payment. That is normal and does not mean the funnel is bad; it means the absolute numbers at each step and the cost of each stage matter.

The correct way to judge "whether advertising works" is through cost per lead (CPL) and cost per sale (CAC) against the average ticket and LTV. If CAC ≤ 25–30% of LTV, the funnel is healthy, even if it feels like there are "few applications".[4]
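The CAC ≤ 25–30% of LTV rule is easy to check directly. The function and the figures below are an illustrative sketch, not a standard API:

```python
def funnel_health(ad_spend, sales, ltv, max_cac_share=0.30):
    """CAC = ad spend per closed sale; 'healthy' if CAC stays
    within the given share of LTV (30% by default)."""
    if sales == 0:
        return float("inf"), False
    cac = ad_spend / sales
    return cac, cac <= max_cac_share * ltv

# 1000 clicks at 1.2 per click, 0.8% click-to-contract, LTV of 600
cac, healthy = funnel_health(ad_spend=1200.0, sales=8, ltv=600.0)
print(cac, healthy)  # prints "150.0 True"
```

Here a CAC of 150 against an LTV of 600 stays under the 30% ceiling (180), so the funnel is healthy despite only 8 contracts per 1000 clicks, which is the point of the answer above.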
What matters more: reworking the campaign or the website, if there is only resource for one?

It depends on where the worst leak is, but there is a general rule: equally poor conversion across different campaigns, audiences and sources almost always means the site. And vice versa: if one campaign converts at 5% and another at 0.3% on the same landing page, the site is not to blame; the difference between campaigns lies only in traffic quality.

In borderline cases where the difference is hard to tell, start with P0 on both sides: 6–8 hours on negative keywords and bid adjustments, plus 6–12 hours on first-screen match and form shortening. These are the cheapest jobs with the biggest lift, and they often settle the question before any serious investment.[6]
Conclusions: where to look for the main gap
Better diagnosis beats more random optimisation. Once the leak is located, the next steps usually become obvious.
The strongest teams treat ad conversion as a systems problem, not as a blame game between traffic and landing pages.
Sources
- Google Ads Help, the "Landing page experience" component of Quality Score: how landing page experience affects cost per click and number of impressions: support.google.com/google-ads/answer/6167123
- Think with Google, research on the relationship between mobile page load time and bounce probability: going from 1 to 5 seconds increases bounce probability by 90%: thinkwithgoogle.com/marketing-strategies/search/landing-page-load-times-mobile-bounce-rate
- Unbounce Conversion Benchmark Report: median and quartile landing page conversion rates by industry: unbounce.com/conversion-benchmark-report
- WordStream, "What Is a Good Conversion Rate": comparison of conversion rates by industry and traffic channel: wordstream.com/blog/ws/2014/03/17/what-is-a-good-conversion-rate
- CXL, practical guide to landing page best practices and ad-to-page match testing: cxl.com/blog/landing-page-best-practices
- Nielsen Norman Group, "The Page Fold Manifesto": how users perceive content above and below the fold and how it affects conversion: nngroup.com/articles/page-fold-manifesto