Enabling clients to deliver meaningful improvements to mobile experience
Written by Jonny Artis – Global Head of Experience
As the digital world becomes increasingly tilted towards mobile devices, a trend only accelerated by the effects of the Covid-19 pandemic, it is perhaps unsurprising that brands are placing greater emphasis on how consumers experience their presence on the small screen.
This raises the question: in these times of rapid change across the digital landscape, as new technologies and best practices emerge against a backdrop of rising consumer expectations, how can brands identify where improvements need to be made, and how can we, as agency practitioners, support them in turning roadmaps into reality?
The second part of that question is particularly salient. Knowing where weaknesses and opportunities exist, and how to address the situation accordingly, is only a part of the challenge. Quite often, it is the leap from theoretical solution to practical implementation that poses the greatest headache of all.
As crucial as the mobile web (and how brands represent themselves on it) is in this breathless, modern era, it does not exist in a vacuum. Brands face an array of challenges, digital and otherwise, all of which compete for the finite budget and resources available to tackle them.
The biggest question brands ask of their agency partners is therefore not “what do we do?”, but rather “how, and why, do we make it happen?”.
In this whitepaper we’re going to touch on 5 key steps designed not only to identify what brands need to do in order to accelerate the advancement of their mobile experience, but also crucially how to help them navigate the potential pitfalls and roadblocks which can prevent highly valuable initiatives from coming to fruition. These steps are as follows:
- Identify issues and opportunities
- Benchmark the competition
- Build a business case
- Define the priorities
- Implement, test, report & refine
Let’s begin by looking at the steps we can take to identify where issues and opportunities exist.
-
Identify issues and opportunities
The first step to helping clients deliver a truly first-class mobile experience is to identify precisely where room for improvement exists. Fortunately, a number of tools and frameworks exist to make our lives easier in this respect.
Google Mobile-Friendly Test
Google’s Mobile-Friendly Test assesses the fundamentals of mobile-friendly design: has the viewport been set? Is text large enough to read comfortably without zooming? Are buttons large enough, and far enough apart from one another? These are basic but crucial components of a best-practice mobile experience.
Passing the Mobile-Friendly Test alone is not necessarily sufficient to suggest a great mobile experience; failing it, however, is a sure sign that a page is likely to frustrate users on small-screen devices.
The tool can be found here: https://search.google.com/test/mobile-friendly
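As a crude illustration of one of these fundamentals, the minimal sketch below fetches a page and checks whether a responsive viewport meta tag is present. It assumes the Python requests library is available, and it is in no way a substitute for Google’s own test, which evaluates far more.

```python
import re
import requests

def has_viewport_meta(url: str) -> bool:
    """Crude, illustrative check for a viewport meta tag in a page's HTML.

    Google's Mobile-Friendly Test assesses much more than this (tap target
    sizing, font sizes, content width and so on); this is only a heuristic.
    """
    html = requests.get(url, timeout=10).text
    return bool(re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.IGNORECASE))

print(has_viewport_meta("https://example.com"))  # hypothetical URL
```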
Google PageSpeed Insights
One of the most fundamental components of mobile experience today is speed; if a user is unable to access the information they need or complete an action quickly, they are likely to go elsewhere. And although the technologies which transfer data over the web are faster than ever before, flaws in the technical composition of web pages themselves can nevertheless impact performance.
At a high level, Google PageSpeed Insights provides a simple 0–100 score for both mobile and desktop performance. Typically, anything below 50 is considered poor, 50 to 89 moderate, and 90 or above the optimal range (though of course, a perfect score should be the ultimate aspiration).
The tool provides a wealth of practical information and recommendations designed to improve page speed, be it minifying CSS, minimizing unused JavaScript or reducing initial server response time, all helping to inform a highly actionable to-do list of improvements.
The tool can be found at https://developers.google.com/speed/pagespeed/insights/
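For repeat or multi-URL audits, the score can also be pulled programmatically. The sketch below assumes the PageSpeed Insights v5 API, the Python requests library and a valid API key; treat it as a starting point rather than a definitive integration, as the response structure can change.

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def mobile_performance_score(url: str, api_key: str) -> float:
    """Return the 0-100 Lighthouse performance score for a URL on mobile."""
    params = {"url": url, "strategy": "mobile", "key": api_key}
    data = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()
    # Lighthouse reports the category score on a 0-1 scale.
    return data["lighthouseResult"]["categories"]["performance"]["score"] * 100

print(mobile_performance_score("https://example.com", "YOUR_API_KEY"))  # hypothetical values
```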
Core Web Vitals
Core Web Vitals are a series of user-centric metrics designed to measure how smooth and efficient the on-page experience is. Introduced by Google in 2020, they are relevant to both mobile and desktop devices, and performing well across all three metrics is pivotal to a great user experience.
The three metrics are as follows:
- Loading: Largest Contentful Paint (LCP) – in effect, how long does it take for the page’s primary content to load? The sooner it does, the sooner the user is able to interact with the page.
- Interactivity: First Input Delay (FID) – again, this metric primarily relates to speed; in this instance, how long it takes for the page to respond when the user interacts with an element, for instance clicking a button.
- Visual Stability: Cumulative Layout Shift (CLS) – this is a measure of how stable a page’s layout is as its component parts begin to load. The optimal experience dictates that elements should load and remain in their intended position on the page, rather than shifting about or jerking as things like adverts and plug-ins appear on screen.
Data relating to a given site’s performance across the three Core Web Vitals can be found within Google Search Console. The report will provide a detailed breakdown of URLs which perform well, those which need improvement and those which perform poorly, along with aggregated performance data relating to each metric for each URL.
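For reference, the minimal sketch below buckets 75th-percentile field values in the same way, using Google’s published thresholds for each metric; the example values passed in are hypothetical.

```python
# Google's published Core Web Vitals thresholds (good / needs improvement),
# assessed at the 75th percentile of page loads.
THRESHOLDS = {
    "LCP": (2.5, 4.0),    # seconds
    "FID": (100, 300),    # milliseconds
    "CLS": (0.10, 0.25),  # unitless layout-shift score
}

def classify(metric: str, p75_value: float) -> str:
    """Bucket a 75th-percentile value into good / needs improvement / poor."""
    good, needs_improvement = THRESHOLDS[metric]
    if p75_value <= good:
        return "good"
    if p75_value <= needs_improvement:
        return "needs improvement"
    return "poor"

print(classify("LCP", 3.1))   # hypothetical value -> "needs improvement"
print(classify("CLS", 0.05))  # hypothetical value -> "good"
```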
In addition, all of Google’s popular tools for developers now support measurement of Core Web Vitals. You can read more at: https://web.dev/vitals-tools/.
Analytics Data (e.g. Google Analytics)
The tools and frameworks above provide a highly structured and quantifiable way to measure the health and quality of a client’s mobile experience; however, there are subtleties and nuances beyond these data points which are worthy of ongoing assessment.
There will always be an element of subjectivity to what each user defines as a great mobile experience. By and large, though, the common signs and symptoms of weakness will be identifiable via analytics platforms such as Google Analytics. Signs worth investigating include pages which:
- Elicit high bounce or exit rates
- Elicit low average page view times
- Record a high level of drop-outs during the conversion funnel
It is worth noting that the reasons behind any such underwhelming performance could be related to the fundamental aspects of mobile experience design already touched upon, or broader considerations such as:
- Design and layout of the page
- Intuitiveness of navigation
- Quality of page content
- The page’s ability to service the most common audience needs
- E-A-T: does the page exude Expertise, Authoritativeness and Trustworthiness?
For potentially problematic pages, a deep analysis of these elements is likely to prove worthwhile. Above and beyond that, leveraging A/B or multivariate testing via platforms such as Google Optimize will provide a wealth of quantifiable data to help inform decisions and reduce subjectivity.
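To make that triage repeatable, the minimal sketch below assumes a page-level export from an analytics platform (the column names and thresholds are hypothetical and should be tuned to the client’s own baselines) and flags pages showing the warning signs listed above.

```python
import pandas as pd

# Hypothetical page-level export from an analytics platform.
df = pd.DataFrame({
    "page": ["/home", "/pricing", "/blog/post-1"],
    "bounce_rate": [0.42, 0.78, 0.61],   # share of single-page sessions
    "avg_time_on_page": [95, 18, 40],    # seconds
})

# Illustrative thresholds; adjust to the client's own baselines.
flagged = df[(df["bounce_rate"] > 0.70) | (df["avg_time_on_page"] < 20)]
print(flagged)
```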
Usability testing
Beyond these various data points, observing your users navigating and interacting with your mobile site can provide highly valuable, human insights into the site’s usability and any potential weak spots.
There are a wide range of methods which enable you to do this, from physical UX testing labs to software-based solutions. Many of these solutions enable you to not only see what users do while on-site, but also to communicate with them and observe their expressions and body language in real time.
Such initiatives can be incredibly helpful in solving more subtle UX conundrums that data alone cannot easily identify. To learn more and to explore potential providers, search Google for “usability testing”.
-
Benchmark the competition
Working through the steps just outlined will deliver a wealth of quantifiable data points against which we can measure different aspects of mobile experience, identify weaknesses and propose appropriate solutions.
However, it is worthwhile noting that those ultimately signing off on any such solutions client-side may not necessarily be technical experts. For instance, a report which outlines that 75% of pages score poorly across the LCP and FID components of Core Web Vitals is valuable analytical data for the technical practitioner, but in and of itself may not appear overwhelmingly compelling to a senior member of the client team with a broad remit and a limited technical understanding of what such a statistic means in practice.
By benchmarking these metrics against the client’s main competitors, we can begin to give the situation greater context. It also enables us to talk in a more natural language. For instance, we might be able to say:
“According to data provided by Google, our mobile experience is poorer than five out of six competitor brands we assessed. In particular, one piece of analysis showed that our mobile pages were on average more than 50% slower than Competitor X. Since more than three quarters of our traffic comes from users on mobile devices, this could mean we’re losing out on sales to our rivals.”
Whilst only a basic example, by limiting technical terms, focusing on metrics that are easily understood and referring to commercial significance (e.g. impact on sales), we create a more effective conversation starter than rushing in with a stream of technical data. Crucially though, this is indeed merely a conversation starter; in many cases, to get the level of buy-in we need, we’re going to need to build a business case.
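For completeness, the sketch below shows the kind of arithmetic that sits behind a “more than 50% slower” claim, comparing a client’s 75th-percentile LCP against a set of competitors. Every figure is a hypothetical placeholder.

```python
# Hypothetical 75th-percentile LCP values, in seconds, for the client and
# a set of competitor brands (gathered from field data such as CrUX).
lcp_p75 = {
    "Client": 4.2,
    "Competitor A": 2.4,
    "Competitor B": 2.8,
    "Competitor C": 3.9,
}

client_lcp = lcp_p75["Client"]
for brand, value in lcp_p75.items():
    if brand == "Client":
        continue
    slower_by = (client_lcp - value) / value * 100
    print(f"Client is {slower_by:.0f}% slower than {brand}")
```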
-
Build a business case
At this stage, we know where the potential issues lie, we know what we need to do to address them and we know how the client fares versus their key rivals. As we touched on earlier, knowing where the problems are and how to fix them is only a part of the challenge; now we need to get the work done. And for that to happen, we need buy-in from our client decision makers.
It’s important to remember that a great mobile experience, in and of itself, isn’t commercially significant. At least not in the most direct sense; whilst there are many benefits to an enhanced small screen presence, it is the means to the end, rather than the end itself. The metrics we have discussed thus far do not appear in annual company reports, and they do not directly affect a client’s share price. They are, in effect, soft metrics.
However, in an increasingly mobile-first world, mobile best practice does directly impact metrics which are of great commercial significance. An improved mobile experience can boost organic rankings, which in turn drives additional traffic. Traffic from all channels is likely to convert at a higher rate, driving increased sales and, subsequently, revenue. These are hard metrics, and the ones that we want at the heart of our business case.
So, how do we take these soft metrics, and turn them into the kind of compelling hard metric which may be necessary to achieve sign-off on the recommended course of action?
To begin to do this, we can split the metrics we hope to improve into three distinct segments: input, conversion and output. The input stage covers anything which creates more throughput, such as traffic; the conversion stage covers anything affected between the first click and the point of sale; and the output is anything affected by either or both of the preceding stages. Let’s illustrate the point with an example projection:
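The projection itself will always be specific to the client, but the purely hypothetical sketch below illustrates the mechanics: every figure, from the baseline traffic to the assumed uplifts, is a placeholder.

```python
# Hypothetical baseline, split across the input and conversion stages.
monthly_sessions = 500_000     # input: traffic
conversion_rate = 0.020        # conversion: share of sessions that purchase
average_order_value = 80.0     # conversion: value per sale (currency assumed)

baseline_revenue = monthly_sessions * conversion_rate * average_order_value

# Conservative, fractional improvements attributed to a better mobile experience.
uplift_sessions = 1.05         # +5% traffic (e.g. improved organic visibility)
uplift_conversion = 1.10       # +10% conversion rate (faster, smoother UX)

projected_revenue = (monthly_sessions * uplift_sessions) \
    * (conversion_rate * uplift_conversion) * average_order_value

growth = (projected_revenue / baseline_revenue - 1) * 100
print(f"Baseline output:  {baseline_revenue:,.0f} per month")
print(f"Projected output: {projected_revenue:,.0f} per month (+{growth:.1f}%)")
```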
As you can see, the improvements forecast at the input and conversion stages are relatively modest, with only conservative, fractional growth projected. However, because the input and conversion metrics multiply together, the impact on the bottom-line output is far more pronounced in both monetary and percentage terms.
Whilst it is naturally difficult to forecast the precise impact an investment in mobile experience can make, growth and efficiencies akin to the example above are eminently achievable when taking a client from a sub-par mobile experience towards something approaching best practice.
Ultimately, by expressing the benefits of mobile experience enhancement not in technical terminology but rather in terms of traffic, sales and revenue, we are far more likely to win the client-side hearts and minds that matter most when it comes to making key decisions.
-
Define the priorities
Having reached the point of understanding where improvements and optimizations are to be made, how the client stacks up versus the competition and what the long-term commercial value might look like, we must now seek to establish where our priorities lie.
Invariably, even with the most compelling of business cases, it is rare that all items on the mobile UX to-do list can be acted on with immediate effect. As such, a diligent and pragmatic approach is required in order to identify the requests which should be front and center of the implementation queue.
It’s perhaps natural to gravitate towards actions deemed likely to have the greatest commercial impact; after all, that is the ultimate litmus test upon which our business case is based. However, prioritizing initiatives with substantial commercial upsides but equally substantial lead times, owing to their complexity, can throttle progress and dampen the appetite to implement tasks lower down the list.
By taking a more blended view of the potential upside, implementation complexity and the number of dependencies (such as third-party partner agencies or platform providers), we can start to make an impact sooner, building goodwill within the client organization which will be invaluable in seeing through the proper execution of subsequent requirements (particularly those likely to be lengthier and more expensive to deliver).
To aid with this, the scorecard below provides a simple scoring system to gauge where initiatives should sit on the priority list.
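As a minimal sketch of how such a blended score might be calculated, the example below rewards impact and penalizes complexity and dependencies; the initiative names, 1–5 ratings and weights are all hypothetical and should be adapted to each engagement.

```python
# Hypothetical initiatives rated 1-5 on each dimension
# (higher = more impact, more complexity, more dependencies).
initiatives = [
    {"name": "Compress hero images",        "impact": 3, "complexity": 1, "dependencies": 1},
    {"name": "Replatform checkout journey", "impact": 5, "complexity": 5, "dependencies": 4},
    {"name": "Defer third-party scripts",   "impact": 4, "complexity": 2, "dependencies": 2},
]

# Illustrative weights: reward impact, penalize complexity and dependencies.
WEIGHTS = {"impact": 2.0, "complexity": -1.0, "dependencies": -0.5}

def priority_score(item: dict) -> float:
    return sum(WEIGHTS[key] * item[key] for key in WEIGHTS)

for item in sorted(initiatives, key=priority_score, reverse=True):
    print(f"{priority_score(item):5.1f}  {item['name']}")
```

Note how the highest-impact initiative falls to the bottom of the queue once complexity and dependencies are taken into account – exactly the blended view described above.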
-
Implement, test, report & refine
With client buy-in established and a list of prioritized actions in hand serving as a clear roadmap for implementation, many of the major hurdles on the road to mobile excellence have been cleared. That said, it is important to note at this juncture that this exercise should not be considered a linear journey but rather a cyclical one: room for improvement inevitably remains, and new potential is created as technologies, tools and frameworks evolve.
Walking the client through the implementation process with the aid of carefully structured, unambiguous briefs, underpinned by clear and tangible expectations as to what success looks like, is of course one of the single most important components of the process to get right. However, the questions we ask post-implementation are equally important:
- Test – Has the brief been carried out correctly? Does it meet our predetermined success criteria?
- Report – What impact has implementation had on performance, both in terms of soft and hard business metrics? How does performance compare to our projections?
- Refine – Were there any areas where the work carried out did not deliver the anticipated performance benefits? Where does room for improvement still exist? Are there any new opportunities to exploit?
This component should be considered ongoing from the point of the first implementation.
Testing, both pre- and post-production, is essential to ensuring that efforts are having the desired effect on soft metrics. Reporting back to the client serves as a key reminder that the work so far has been worthwhile – this may be crucial in ensuring that larger, more complex requests later in the pipeline go ahead on time.
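A lightweight way to keep that reporting loop honest is to capture the same soft metrics before and after each release and calculate the change, as in the sketch below; the figures shown are hypothetical.

```python
# Hypothetical 75th-percentile field metrics captured before and after a release.
before = {"LCP_s": 3.4, "FID_ms": 140, "CLS": 0.21}
after = {"LCP_s": 2.3, "FID_ms": 90, "CLS": 0.08}

for metric, old in before.items():
    new = after[metric]
    change = (new - old) / old * 100
    print(f"{metric}: {old} -> {new} ({change:+.0f}%)")
```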
Ongoing refinement is essential to ensuring that the client not only leads the way in mobile experience today, but continues to do so for the foreseeable future. And given the scale of the potential benefits we explored earlier, that should be our number one goal.