Speed is money, not a technical indicator
Marketing budgets of Belarusian businesses usually itemize the cost of advertising, SEO and content in detail - and almost never account for how much a slow website and "economy" hosting cost. Entrepreneurs tend to treat hosting as an expense line where the cheapest option will do, as long as the site opens, and loading speed as a developer's task that will somehow "sort itself out." In practice, the opposite is true: website speed and hosting quality are among the most undervalued investments in direct business revenue.
When a visitor clicks on your ad or a link from search results, an "attention window" of 2–3 seconds opens. If the page manages to render the essentials within that window, the visitor stays, reads the headline, evaluates the offer, and submits a request. If the site takes 5–7 seconds to load, he closes the tab and goes to a competitor who turns out to be faster. Meanwhile, the money you paid for that click - in Yandex Direct, Google Ads or for SEO traffic - has already been spent. You paid for a visitor you never saw.
That is why, in the modern model of investing in a website and its promotion, hosting and speed come not "after design and content" but alongside them - in the foundation that determines whether all other marketing efforts will work at all. In this article, we'll look at why this is so, what numbers stand behind the words "fast website," how speed turns into leads and revenue, and what makes up a professional website infrastructure that genuinely pays for itself.
Three numbers that show how much business a slow website loses
To understand the scale of the effect, let's look at data from studies that aggregated billions of user sessions.
It is important to understand that these numbers are not about "perfectionism" or about "looking good in the PageSpeed Insights report." They describe the real behavior of real people. The modern user is mobile and inattentive, with neighboring tabs and notifications open. He won't wait for your site - he simply won't see it, clicking the next search result or closing the ad instead. On the same traffic, the difference between a fast and a slow site is often 20–40% in the number of visitors who actually reach the page.
Google identifies three key Core Web Vitals metrics that describe the real user experience: LCP (main block rendering) - should be ≤ 2.5 s, INP (responsiveness to actions) - ≤ 200 ms, CLS (visual stability) - ≤ 0.1. A site is considered “fast” only if all three metrics fall into the “green” zone for 75% of visits - this is exactly the data collected in the Chrome UX Report and taken into account in the ranking.[1][2]
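The 75th-percentile rule described above can be expressed as a short sketch. The thresholds come from the article; the function names and the nearest-rank percentile choice are our own illustrative assumptions:

```python
import math

# Green-zone thresholds per Core Web Vitals: LCP <= 2.5 s, INP <= 200 ms, CLS <= 0.1
THRESHOLDS = {"lcp_s": 2.5, "inp_ms": 200, "cls": 0.1}

def p75(samples):
    """75th percentile (nearest-rank) - the aggregation used for field data."""
    ordered = sorted(samples)
    return ordered[max(0, math.ceil(0.75 * len(ordered)) - 1)]

def site_is_fast(lcp_samples, inp_samples, cls_samples):
    """'Fast' means all three metrics pass at the 75th percentile of visits."""
    return (p75(lcp_samples) <= THRESHOLDS["lcp_s"]
            and p75(inp_samples) <= THRESHOLDS["inp_ms"]
            and p75(cls_samples) <= THRESHOLDS["cls"])
```

The point of the percentile: one slow metric at the 75th percentile is enough to push the whole site out of the "green" zone, even if the other two are perfect.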
What is a “fast and reliable site” in numbers
An entrepreneur often checks a website like this: he opens it from his office computer, sees that "everything works," and concludes that "the site is fine." This is the most common illusion. The site on your work computer runs on a warm cache, fast office Internet, and a browser with pre-warmed connections. Most of your customers are in a different situation: mobile LTE in a car or on the street, a first visit with a cold cache, and bandwidth shared with background apps. It is in that reality that it is decided whether they will see your page or leave.
Therefore, it is correct to evaluate the speed of a site not subjectively, but by measurable engineering metrics. Here are the guidelines you should rely on:
- TTFB (Time To First Byte) - time to the first byte of the server response. On a high-quality infrastructure - 150–350 ms; on an overloaded shared hosting it easily takes 1.5–3 seconds. This is the foundation of everything else.
- LCP (Largest Contentful Paint)—the time before the main element (usually the first screen) is drawn. Normal - ≤ 2.5 s, bad - over 4 s. It is this metric that most strongly correlates with user behavior on the first screen.[1]
- INP (Interaction to Next Paint) - site responsiveness to actions (click, form entry). The norm is ≤ 200 ms. If INP ≥ 500 ms, the user feels the site as “stuck”, buttons “do not respond”, forms “do not work”.
- CLS (Cumulative Layout Shift) - how much the content “jumps” when loading. The norm is ≤ 0.1. High CLS is when the user wants to click on a button, but at the last moment it “moved” down from the banner that had loaded.
- Hosting Uptime - the proportion of time when the site is available. The norm for serious infrastructure is ≥ 99.9% per month, that is, no more than ~43 minutes of unavailability. Cheap shared hosting actually has 98-99% - that's up to 7 hours of downtime per month, often during peak visit times.
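The downtime arithmetic behind these uptime percentages is easy to verify; here is a minimal sketch (the 30-day month is an assumption for illustration):

```python
def max_downtime_minutes(uptime_pct, days=30):
    """Convert a monthly uptime guarantee into the downtime it actually allows."""
    return (1 - uptime_pct / 100) * days * 24 * 60

# 99.9% allows ~43 min of downtime per 30-day month; 99% already allows ~7.2 hours.
for pct in (99.9, 99.0, 98.0):
    print(f"{pct}% uptime -> {max_downtime_minutes(pct):.0f} min/month")
```

A difference of "only" one or two percentage points of uptime is a tenfold-plus difference in hours of unavailability.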
When at least one of these metrics fails, the "bottleneck effect" kicks in: all marketing has to reach the user through the thin sieve of slow infrastructure. Money is spent, visitors arrive, but they convert into leads many times worse than they could - and the reason is often impossible to discover without a targeted technical audit.
What happens to the user in loading seconds
It is convenient to look at user behavior on a time axis: the first seconds decide whether a visitor becomes your lead. Each interval is not just a "wait" but a psychological milestone with a measurable effect on conversion.
User behavior by seconds of site loading
0–1 s
Instant zone
The user perceives the site as modern and high-quality. Attention is focused on content, not anticipation. This is a feeling of premium, which is difficult to buy with design, but easy to lose with bad hosting.
1–3 s
Tolerance zone
The user is still waiting but is already evaluating: he has noticed the pause, and the brain begins to weigh "leave or stay" in the background. The probability of a bounce increases by ~32% compared to a 1-second load.[3]
3–5 s
Loss zone
Most of the mobile audience has already left. The increase in bounce probability relative to a 1-second site is about 90%. The advertising has been paid for, the visitor has arrived - but he is no longer on the page.
5 s+
Rejection zone
Almost everyone leaves. Conversion drops significantly, the site receives a “bad UX” signal from search algorithms, and the cost of advertising traffic increases. Further improvements in design and content no longer solve the problem.
Four reasons why a site slows down and loses leads
There are always several reasons for a website's slow operation, but in almost every project the problem comes down to one of four scenarios. Let's look at them separately so that an entrepreneur can recognize his own situation.
1. Cheap shared hosting (typical)
2. Nobody optimized the site (typical)
3. No CDN and geographic cache (frequent)
4. No monitoring and support (critical)
Saving 20–40 BYN per month on hosting can cost a business tens or hundreds of lost leads per year. Hosting is not an "IT department expense" but part of the sales infrastructure. On the same traffic and the same site, switching from "cheap" hosting to a properly engineered one often adds 20–35% to conversion purely from the difference in response speed.
Cheap shared hosting versus professional infrastructure
To make this concrete, let's compare two typical hosting options for a commercial project. In one column is "the minimum tariff for 10–15 BYN per month," as hosting providers sell it to the mass client. In the other, professional infrastructure managed by a web studio, with configuration for the specific site, CDN, caching and monitoring.
| Option | Cheap shared hosting | Professional infrastructure |
|---|---|---|
| Hosting type | Shared server with hundreds of sites | VPS / cloud, resources allocated to the project |
| TTFB (typical) | 600 ms - 3 s | 150–350 ms |
| Uptime | 98–99% (up to 7 hours of downtime per month) | ≥ 99.9% (≤ 43 min. per month) |
| Scaling for peaks | Hits tariff limits, the site goes down | Vertical and horizontal, without downtime |
| Caching | No or “out of the box” without configuration | Pages + objects + OPcache + CDN |
| CDN for statics | No | Cloudflare / other CDN, edge servers near the user |
| HTTPS certificate | Yes, but with a basic configuration | HTTPS + HTTP/2 / HTTP/3, HSTS, modern ciphers |
| Backups | Once a day or irregular | Daily + before each release, off-site storage |
| Uptime and metrics monitoring | No or only on the provider side | External + internal (Sentry, uptime-monitor, logs) |
| CMS and stack updates | On the owner’s conscience, often “never” | According to plan: CMS, modules, PHP, OS, security |
| Support | Tickets “server rebooted” | An engineer who understands your site, not just the server |
| What this means for business | Lost leads, low ad Quality Score, drops in search | Stable sales channel, low CPA, growing positions |
The key difference is not price or "processor power," but the fact that in the first case the site lives on its own among other people's projects, while in the second its infrastructure is designed as part of the selling system. The developer knows the site's traffic, which pages are heavy, and when the peaks come, and tunes the hosting for that specific task rather than to an average template.
Formula: how speed turns into money
Site speed is not an "abstract metric" but a multiplier in the funnel. Website revenue can be represented as the product of several indicators, each of which depends on speed.
An example in BYN. Suppose the site receives 6,000 visits per month (SEO + advertising); on slow infrastructure, 55% of visitors reach the page, site CR is 2%, sales CR is 25%, and the average check is 2,000 BYN. Total: 6,000 × 0.55 × 0.02 × 0.25 × 2,000 ≈ 33,000 BYN/month. After switching to fast infrastructure, reach rises to 85% and site CR to 2.5% (thanks to better INP and the absence of "sticky" forms): 6,000 × 0.85 × 0.025 × 0.25 × 2,000 ≈ 63,750 BYN/month. The difference is about +30,000 BYN per month on the same traffic, purely from infrastructure and speed optimization. That is ≈ +360,000 BYN per year - tens of times more than the cost of professional hosting and optimization work.
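The funnel model used in this example can be written as a few lines of code (the figures are the article's illustrative ones, not benchmarks):

```python
def monthly_revenue(visits, reach, site_cr, sales_cr, avg_check):
    """Revenue = visits x share reaching the page x site CR x sales CR x average check."""
    return visits * reach * site_cr * sales_cr * avg_check

slow = monthly_revenue(6000, 0.55, 0.020, 0.25, 2000)  # slow infrastructure
fast = monthly_revenue(6000, 0.85, 0.025, 0.25, 2000)  # fast infrastructure
print(f"slow: {slow:.0f} BYN, fast: {fast:.0f} BYN, delta: {fast - slow:.0f} BYN/month")
```

Because the indicators multiply, improving two of them at once (reach and site CR) almost doubles the result, even though each improvement on its own looks modest.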
Four directions for working on speed
When it becomes clear how much a slow website costs in money, a logical question arises: what exactly to do? It is convenient to divide the work to speed up a website into four areas - they complement each other, and skipping any one makes the others less effective.
1. Infrastructure (the foundation)
2. Site architecture (the base)
3. Front-end optimization (the content)
4. Monitoring and support (reliability)
Spot approach
“We’ll just install caching and optimize the images”
The most common mistake: the site owner hears from someone about WebP or a caching plugin and asks for that one "tweak." In reality, 3–5 speed-up plugins installed without an overall engineering approach conflict with each other, break the layout, fail to solve the main problem (infrastructure) and create an illusion of progress. The site becomes "sort of faster" in PageSpeed, but real users on mobile LTE see the same thing as before.
System approach
Infrastructure + architecture + frontend + monitoring
The site is treated as a single selling system. First we fix the foundation - hosting, stack, database, certificates. Then caching and the CMS architecture. Then front-end optimization for real users and their devices. Only after that is continuous monitoring switched on, so that the metrics do not degrade over time. Each improvement builds on the working previous one - which is why the effects stack instead of cancelling each other out.
Eight professional infrastructure modules
If you look "under the hood" of a site whose metrics stay consistently in the green zone with uptime ≥ 99.9%, you will always find the same eight modules. They do not depend on the CMS (Drupal, WordPress, Bitrix, or a custom build) - this is an infrastructure checklist for a professional project.
Dedicated server infrastructure
A VPS or cloud server with a known geography, a provider SLA of at least 99.9%, dedicated CPU/RAM/disk resources and a predictable billing forecast. The location is chosen for the real audience, not for wherever is cheapest.
Modern web stack
Nginx as a reverse proxy, modern PHP with OPcache/JIT, an optimized database (MariaDB or PostgreSQL), Redis/Memcached for the object cache. All of it tuned for the specific CMS, not left at default settings.
Layered caching
Full-page cache (Varnish / nginx fastcgi_cache), object cache, OPcache, HTTP headers for browsers, configured to avoid the typical "invalidate everything" mistake. The load on the database drops significantly, and TTFB reaches 150–250 ms.
CDN and optimization of static delivery
Images, fonts, CSS and JS are delivered via a CDN (Cloudflare or an equivalent) from edge servers closest to the user. HTTP/2 or HTTP/3, Brotli, correct Cache-Control, and immutable for resources that never change.
Front-end optimization for Core Web Vitals
WebP/AVIF with automatic size generation, lazy loading for non-critical content, critical CSS and deferred JS, removal of unnecessary third-party scripts. The LCP, INP and CLS metrics sit in the "green" zone for 75%+ of real sessions.[1]
Security and SSL
HTTPS with modern ciphers, HSTS, protection against typical attacks (SQL injections, XSS, brute force), WAF at the CDN level, regular CMS and stack updates. The site is not only fast, but also does not crash at the first suspicious traffic.
Backup and recovery
Daily backups of the database and files, plus a separate backup before each release, storage in a remote location, and regular restore tests. After any incident, the site comes back in hours, not weeks.
Monitoring and technical support
An external uptime monitor, collection of Core Web Vitals RUM metrics, error logging (Sentry or an equivalent), and alerts to a channel watched by an on-duty engineer. Not "it went down, we'll look tomorrow," but "it went down, we knew in 60 seconds, we'll fix it today."
Investing in site speed gives maximum effect in a certain order: first the infrastructure (hosting, stack, certificates), then caching and CMS architecture, then optimizing the frontend for real users, and only after that constant monitoring is turned on. An attempt to “optimize images and plugins” without a foundation gives a temporary improvement that goes away after 2-3 months under load.
Hosting “on its own”
Site with one provider, development with another, SEO with a third
When infrastructure, development and promotion are distributed among different contractors, no one is responsible for the final result. The developer believes the slowness comes from the hosting, the hosting provider blames the code, and the SEO specialist blames both for rankings dropping over Core Web Vitals. Time and money go into correspondence, while the site keeps losing leads every day.
Single ecosystem
Hosting, development and promotion in one hand
When the infrastructure, code and marketing are handled by one team, site speed becomes a common KPI, and not the responsibility of one developer. Hosting settings take into account SEO needs, front-end optimization takes into account advertising objectives, and monitoring takes into account actual user behavior. The result does not grow point by point, but at the level of the entire system.
Checklist: how to check the speed and hosting of your website
So that an entrepreneur can independently assess how ready his website is to work in modern conditions, below are eight practical points. They will not replace a full technical audit, but they will immediately show whether you have systemic problems with infrastructure and speed.
8 checks of your hosting and speed
- PageSpeed Insights for the mobile version. Paste the URL of the home page and two or three key pages into pagespeed.web.dev and check the Core Web Vitals section. If LCP > 2.5 s, INP > 200 ms or CLS > 0.1, you have a measurable speed problem that is already affecting ranking and conversion.[2]
- Search Console → Core Web Vitals report. If your site is added to Google Search Console, open the Core Web Vitals section. Google shows how many of your pages fall into the "good", "needs improvement" and "poor" zones - and these are exactly the statistics that affect search ranking.
- TTFB measured from an external service. Use any uptime monitor or online tool to check TTFB from several locations (Minsk, Moscow, Europe). If TTFB > 800 ms from even one location where you have clients, it is time to change the infrastructure.
- Site availability over the last 90 days. Request an uptime report from your hosting provider or SEO contractor. If you cannot get such a report at all, no one is monitoring availability, and the site may go down without your knowledge.
- Home page weight. Open the site in the browser developer tools (Network tab) and look at the total weight of resources downloaded on a first visit. For a modern website, 1–2 MB is normal; 5–10 MB is a reason for serious optimization.
- Landing Page Experience in Google Ads. If you run contextual advertising, open the landing page experience column in your account. Frequent "below average" ratings are not about the offer - they are about the speed and usability of the mobile version, which directly affect Quality Score and cost per click.[5]
- Site behavior at peak hours. Check how the site opens when an active advertising campaign or an SEO traffic peak is in progress. If it loads in 1.5 seconds during quiet hours and 6 seconds under load, the infrastructure does not scale, and advertising budgets are wasted exactly when they should work at full capacity.
- An action plan if the site goes down. Answer the question: "What happens if the site stops opening on Friday evening?" If the answer is "I don't know", "I'll file a ticket with the hosting provider" or "I'll call a freelancer", you effectively have no operational support. For a business, that risk is no less serious than having no accounting.
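The TTFB check from the list above can also be approximated with a few lines of standard-library Python. This is a rough sketch: the URL is a placeholder, the classification thresholds are the ones used in this article, and `urlopen` measures DNS + TLS + request together, so treat the result as an upper-bound signal, not a precise metric:

```python
import time
import urllib.request

def classify_ttfb(ms):
    """Thresholds from the article: <= 350 ms is healthy, > 800 ms means change the infrastructure."""
    if ms <= 350:
        return "good"
    if ms <= 800:
        return "borderline"
    return "change infrastructure"

def measure_ttfb(url, timeout=10):
    """Approximate time to first byte, in ms, for one cold request from this machine."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)  # wait for the first byte of the body
    return (time.perf_counter() - start) * 1000

# Example (requires network; substitute your own key pages):
#   ms = measure_ttfb("https://example.com/")
#   print(f"TTFB: {ms:.0f} ms -> {classify_ttfb(ms)}")
```

Note that this measures from your own machine and network; for the "different locations" part of the check you still need an external monitoring service.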
How we at ONTOP approach speed and hosting
At ONTOP we start from the premise that a website without strong infrastructure is a marketing asset standing on an unreliable foundation. However much is invested in a professional website with a well-thought-out structure, copywriting and SEO promotion, all of it loses effectiveness if pages take a long time to open or the server is unstable. That is why we treat hosting, the stack and speed as part of marketing, not as a "separate IT issue."
In practice this means several principles. First: a new site is launched not on "default shared hosting settings" but on professional infrastructure with correctly configured caching, CDN and monitoring from day one. Second: existing client sites first undergo a technical audit of speed and reliability, and only then do we connect contextual advertising and SEO - because pouring traffic onto a slow site means burning the budget faster than earning money. Third: hosting, development and promotion stay in the same hands, so that one team is responsible for site speed rather than three contractors shifting responsibility onto each other.
Want to know how many leads you are losing to site speed and "economy" hosting? We will run a technical audit of your infrastructure and show with numbers what can be improved.
Answers to frequently asked questions
How much does website speed affect rankings in Google?
Google has officially used Core Web Vitals metrics (LCP, INP, CLS) since 2021 as part of its page experience assessment, one of the ranking factors. It is not the main factor, but all other things being equal (relevance, content, links), a fast site will rank higher than a slow one.[4] On competitive queries, where the relevance of the top 10 is roughly the same, speed can be the deciding signal. Speed also acts indirectly through behavioral factors: a slow site collects more bounces, and that too worsens its positions.
Does site speed affect contextual advertising on Yandex Direct and Google Ads?
Yes, and directly through money. Google Ads has a Landing Page Experience metric that feeds into the Quality Score formula. If the page is slow and inconvenient on mobile, Quality Score drops, cost per click rises, and Ad Rank falls. Two advertisers in the same auction can pay amounts per click that differ severalfold - and often the difference is not in the bids but in landing page quality.[5] Yandex Direct applies similar logic: site quality is factored into the cost of clicks and impressions. A slow website means more expensive advertising for the same audience.
What is really important - the overall score in PageSpeed Insights or something else?
The final PageSpeed score (that 0–100 number) is an aggregate of several metrics and a convenient reference point for comparison, but you should focus not on it but on the three specific Core Web Vitals: LCP, INP and CLS. The score can jump from run to run, while Core Web Vitals are field data from real users in the Chrome UX Report - the data Google looks at when ranking.[1][2] Add to that two metrics not formally part of Core Web Vitals but critical for the user: TTFB and uptime.
I have a simple WordPress site, does it need professional hosting?
"Simple" and "cheap" are two different things. WordPress or any popular CMS on shared hosting starts to slow down under any serious load, because it shares resources with hundreds of other sites and generates each page from scratch without proper caching. If your site gets even 30–50 visits a day from advertising or SEO and brings in leads, it already pays to run it on proper infrastructure. The difference in cost is small, but the effect on conversion and traffic costs is visible immediately.
Is it possible to “speed up a site” with plugins, without changing hosting?
Partially, and not for long. Caching and image optimization plugins can remove 20–30% of the load and improve PageSpeed scores in synthetic tests. But if the foundation is shared hosting with high TTFB and no proper stack, a real user on mobile LTE will still see a slow site. Under load (advertising, an SEO peak, a promotion) the situation gets worse. The correct sequence is infrastructure first, then caching and front-end optimization, and only then fine-tuning.
How much does professional hosting cost for a commercial website in BYN?
For a small or medium-sized commercial website in Belarus, a realistic budget for professional infrastructure starts at 50–120 BYN per month for a VPS/cloud with proper configuration and monitoring; for large projects with tens of thousands of visits and high load - from several hundred BYN. Compare that with the revenue the site brings in: for a project earning at least 10–20 thousand BYN per month, saving 30–40 BYN on hosting is roughly 0.2–0.4% of revenue - a "saving" that a slow site easily wipes out by losing tens of percent of potential leads.
If I currently have a site on some kind of hosting, is it possible to move it without risk?
Yes - with proper planning, a move happens without downtime or loss of rankings. The sequence is standard: prepare and configure the new server, make a full copy of the database and files, do a test run on a temporary domain, set DNS to a low TTL, then final synchronization and switchover. During the move it is important to preserve all redirects, configure HTTPS correctly and keep the SEO setup intact (sitemap, robots, analytics tags). When this is done by a team that understands hosting, the website and SEO alike, the risk is minimal.
Bottom line: site speed is an investment that pays off immediately
A slow website and "economy" hosting are among the least obvious and most expensive items of business expense. They do not show up in accounting: there is no line for "losses from 3 extra seconds of loading." But they are there in the funnel - as bounces, expensive advertising clicks, low search positions and customers who go to competitors. Unlike many marketing investments, the effect of working on site speed is visible immediately: within the first week after a fast infrastructure launches, the bounce rate, time on page, form conversion and cost per click in advertising all change.
The key idea is simple: site speed and hosting quality cannot be considered separately from marketing. This is the same system, only its technical side. A website is a sales channel, hosting is its engine, and speed is the power with which this engine pulls everything else: design, content, SEO, advertising. Trying to push marketing on a slow infrastructure is the same as pouring expensive fuel into an engine with a clogged filter: most of the energy goes not into movement, but into losses.
If you face the choice today - save on hosting and speed or invest in professional infrastructure - the right question is not how much decent hosting costs per month. The right question is: how many leads, how many clients and how much revenue do you lose in that same month because of a slow website and unreliable hosting? In almost any commercial project, the answer shows that the investment in speed is returned within the first weeks - and keeps working for the business every day after that.
Sources
1. web.dev - Web Vitals - Google's official material on user experience quality metrics (LCP, INP, CLS), their threshold values and how they are collected from real users.
2. Google PageSpeed Insights - About - Google's documentation for the PageSpeed Insights tool, explaining the difference between synthetic (lab) and real-user (field) Core Web Vitals data.
3. Think with Google - Mobile Page Speed: New Industry Benchmarks - Google's industry research on the relationship between mobile page load speed and bounce probability: bounces rise by 32% when load time grows from 1 to 3 seconds, and by 90% at 5 seconds.
4. Google Search Central - Timing for Page Experience ranking signal - Google's official announcement that Core Web Vitals, along with other page experience signals, became a ranking factor in search results.
5. Google Ads Help - About landing page experience - Google Ads reference material on how landing page quality (including speed and mobile usability) affects Quality Score, Ad Rank and cost per click.
6. Nielsen Norman Group - Response Times: The 3 Important Limits - a classic NN/g piece on the three thresholds of speed perception (0.1 s, 1 s, 10 s), which remain guidelines for designing fast interfaces.