Tag Archive | "rate"

Ask MarketingSherpa: Maturity of conversion rate optimization (CRO) industry

Marketers and experts weigh in on where CRO is in the adoption lifecycle.
MarketingSherpa Blog



Google doc rekindles myth that click-through rate affects rankings

Google has said they do not use click data for search ranking purposes, but here is a document from Google that has triggered confusion around the topic again.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


When Bounce Rate, Browse Rate (PPV), and Time-on-Site Are Useful Metrics… and When They Aren’t – Whiteboard Friday

Posted by randfish

When is it right to use metrics like bounce rate, pages per visit, and time on site? When are you better off ignoring them? There are endless opinions on whether these kinds of metrics are valuable or not, and as you might suspect, the answer is found in the shades of grey. Learn what Rand has to say about the great metrics debate in today’s episode of Whiteboard Friday.

When bounce rate, browse rate (PPV), and time on site are useful metrics and when they suck

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about times at which bounce rate, browse rate, which is pages per visit, and time on site are terrible metrics and when they’re actually quite useful metrics.

This happens quite a bit. I see in the digital marketing world people talking about these metrics as though they are either dirty-scum, bottom-of-the-barrel metrics that no one should pay any attention to, or that they are these lofty, perfect metrics that are what we should be optimizing for. Neither of those is really accurate. As is often the case, the truth usually lies somewhere in between.

So, first off, some credit to Wil Reynolds, who brought this up during a discussion that I had with him at Siege Media’s offices, an interview that Ross Hudgens put together with us, and Sayf Sharif from Seer Interactive, their Director of Analytics, who left an awesome comment about this discussion on the LinkedIn post of that video. We’ll link to those in this Whiteboard Friday.

So Sayf and Wil were both basically arguing that these are kind of crap metrics. We don’t trust them. We don’t use them a lot. I think, a lot of the time, that makes sense.

Instances when these metrics aren’t useful

Here’s when these metrics (bounce rate, pages per visit, and time on site) kind of suck.

1. When they’re used instead of conversion actions to represent “success”

So they suck when you use them instead of conversion actions. A conversion is when someone took an action that I wanted on my website. They filled in a form. They purchased a product. They put in their credit card. Whatever it is, they got to a page that I wanted them to get to.

Bounce rate is basically the average percent of people who landed on a page and then left your website, not to continue on any other page on that site after visiting that page.

Pages per visit is essentially exactly what it sounds like, the average number of pages per visit for people who landed on that particular page. So people who came in through one of these pages, how many pages did they visit on my site.

Then time on site is essentially a very raw and rough metric. If I leave my computer to use the restroom or I basically switch to another tab or close my browser, it’s not necessarily the case that time on site ends right then. So this metric has a lot of imperfections. Now, averaged over time, it can still be directionally interesting.
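To make those three definitions concrete, here is a toy JavaScript sketch (invented numbers, and simplified compared to how analytics tools actually compute these figures) showing how the metrics fall out of raw session data for visitors who landed on one page:

// Three visits that landed on the same page (made-up data).
var sessions = [
  { pagesViewed: 1, secondsOnSite: 0 },   // a bounce: a one-page visit can't be timed
  { pagesViewed: 3, secondsOnSite: 120 },
  { pagesViewed: 6, secondsOnSite: 300 }
];

// Bounce rate: share of sessions that saw only the landing page.
var bounces = sessions.filter(function (s) { return s.pagesViewed === 1; }).length;
var bounceRate = bounces / sessions.length;        // 1/3, i.e. ~33%

// Pages per visit: average pages viewed across these sessions.
var pagesPerVisit = sessions.reduce(function (sum, s) {
  return sum + s.pagesViewed;
}, 0) / sessions.length;                           // 10/3, i.e. ~3.3

// Time on site: average of the (rough) recorded durations.
var avgTimeOnSite = sessions.reduce(function (sum, s) {
  return sum + s.secondsOnSite;
}, 0) / sessions.length;                           // 420/3 = 140 seconds

Note how the bounced visit contributes zero seconds, which is one of the reasons the time-based numbers are so rough.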

But when you use these instead of conversion actions, which is what we all should be optimizing for ultimately, you can definitely get into some suckage with these metrics.

2. When they’re compared against non-relevant “competitors” and other sites

When you compare them against non-relevant competitors, so when you compare, for example, a product-focused, purchase-focused site against a media-focused site, you’re going to get big differences. First off, if your pages per visit look like a media site’s pages per visit and you’re product-focused, that is crazy. Either the media site is terrible or you’re doing something absolutely amazing in terms of keeping people’s attention and energy.

Time on site is a little bit misleading in this case too, because if you look at the time on site, again, of a media property or a news-focused, content-focused site versus one that’s very e-commerce focused, you’re going to get vastly different things. Amazon probably wants your time on site to be pretty small. Dell wants your time on site to be pretty small. Get through the purchase process, find the computer you want, buy it, get out of here. If you’re taking 10 minutes to do that or 20 minutes to do that instead of 5, we’ve failed. We haven’t provided a good enough experience to get you quickly through the purchase funnel. That can certainly be the case. So there can be warring priorities inside even one of these metrics.

3. When they’re not considered over time or with traffic sources factored in

Third, you get some suckage when they are not considered over time or against the traffic sources that brought them in. For example, if someone visits a web page via a Twitter link, chances are really good, really, really good, especially on mobile, that they’re going to have a high bounce rate, a low number of pages per visit, and a low time on site. That’s just how Twitter behavior is. Facebook is quite similar.

Now, if they’ve come via a Google search, an informational Google search and they’ve clicked on an organic listing, you should see just the reverse. You should see a relatively good bounce rate. You should see a relatively good pages per visit, well, a relatively higher pages per visit, a relatively higher time on site.

Instances when these metrics are useful

1. When they’re used as diagnostics for the conversion funnel

So there’s complexity inside these metrics for sure. Where these metrics are truly useful is when they’re used as a diagnostic. So when you look at a conversion funnel and you see, okay, our conversion funnel looks like this: people come in through the homepage or through our blog or news sections, and they eventually, we hope, make it to our product page, our pricing page, and our conversion page.

We have these metrics for all of these. When we make changes to some of these, significant changes, minor changes, we don’t just look at how conversion performs. We also look at whether things like time on site shrank or whether people had fewer pages per visit or whether they had a higher bounce rate from some of these sections.

So perhaps, for example, we changed our pricing and we actually saw that people spent less time on the pricing page and had about the same number of pages per visit and about the same bounce rate from the pricing page. At the same time, we saw conversions dip a little bit.

Should we intuit that pricing negatively affected our conversion rate? Well, perhaps not. Perhaps we should look and see if there were other changes made or if our traffic sources were in there, because it looks like, given that bounce rate didn’t increase, given that pages per visit didn’t really change, given that time on site actually went down a little bit, it seems like people are making it just fine through the pricing page. They’re making it just fine from this pricing page to the conversion page, so let’s look at something else.

This is the type of diagnostics that you can do when you have metrics at these levels. If you’ve seen a dip in conversions or a rise, this is exactly the kind of dig into the data that smart, savvy digital marketers should and can be doing, and I think it’s a powerful, useful tool to be able to form hypotheses based on what happens.

So again, another example, did we change this product page? We saw pages per visit shrink and time on site shrink. Did it affect conversion rate? If it didn’t, but then we see that we’re getting fewer engaged visitors, and so now we can’t do as much retargeting and we’re losing email signups, maybe this did have a negative effect and we should go back to the other one, even if conversion rate itself didn’t seem to take a particular hit in this case.

2. When they’re compared over time to see if internal changes or external forces shifted behavior

Second useful way to apply these metrics is compared over time to see if your internal changes or some external forces shifted behavior. For example, we can look at the engagement rate on the blog. The blog is tough to generate as a conversion event. We could maybe look at subscriptions, but in general, pages per visit is a nice one for the blog. It tells us whether people make it past the page they landed on and into deeper sections, stick around our site, check out what we do.

So if we see that it had a dramatic fall down here in April and that was when we installed a new author and now they’re sort of recovering, we can say, “Oh, yeah, you know what? That takes a little while for a new blog author to kind of come up to speed. We’re going to give them time,” or, “Hey, we should interject here. We need to jump in and try and fix whatever is going on.”

3. When they’re benchmarked versus relevant industry competitors

Third and final useful case is when you benchmark versus truly relevant industry competitors. So if you have a direct competitor, very similar focus to you, product-focused in this case with a homepage and then some content sections and then a very focused product checkout, you could look at you versus them and their homepage and your homepage.

If you could get the data from a source like SimilarWeb or Jumpshot, if there’s enough clickstream level data, or some savvy industry surveys that collect this information, and you see that you’re significantly higher, you might then take a look at what are they doing that we’re not doing. Maybe we should use them when we do our user research and say, “Hey, what’s compelling to you about this that maybe is missing here?”

Otherwise, a lot of the time people will take direct competitors and say, “Hey, let’s look at what our competition is doing and we’ll consider that best practice.” But if you haven’t looked at how they’re performing, how people are getting through, whether they’re engaging, whether they’re spending time on that site, whether they’re making it through their different pages, you don’t know if they actually are best practices or whether you’re about to follow a laggard’s example and potentially hurt yourself.

So definitely a complex topic, definitely many, many different things that go into the uses of these metrics, and there are some bad and good ways to use them. I agree with Sayf and with Wil, but I think there are also some great ways to apply them. I would love to hear from you if you’ve got examples of those down in the comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Email Clickthrough Rate: 9-point checklist to get more clicks for your email marketing by reducing perceived cost

A walk through our Email Click Cost Force Checklist, step-by-step
MarketingSherpa Blog


Marketing 101: What is CRO (Conversion Rate Optimization)?

If you’re in advertising or marketing, it helps to have an understanding of what conversion rate optimization is. CRO can be a powerful tool to improve the success of every marketing campaign, initiative and website you work on.
MarketingSherpa Blog


How We Increased Our Email Response Rate from ~8% to 34%

Posted by STMartin

It’s no secret that reply rate is the golden metric of email campaigns.

The reason is obvious. As opposed to open and click rate, reply rate tracks how many recipients were interested (or annoyed) enough to actually write you back. For guest blogging and email outreach, your reply rate will determine your campaign’s success.

We still believe that guest blogging is a great opportunity to improve your site’s link profile and brand exposure. However, the time-investment needed in prospecting/email outreach can leave you questioning its ROI.

It doesn’t often make sense to spend 3 hours prospecting and emailing different opportunities to get only 3 replies.

So how do you make all your prospecting and emailing worth your while?

Simple: Boost your reply rate to generate more “opportunities won” in the same timeframe.

The pain point: Time

At Directive Consulting, we rely on guest posting for our most valuable backlinks. ;) With that said, four months ago our email outreach was still struggling at around an 8% reply rate.

This is actually around the industry standard; guest blogger outreach emails might expect a reply rate in the 5–15% range.

With the below template, we were sending out 20–50 emails a week and receiving no more than 2–4 positive replies.

(The original email template, shown in three screenshots: parts 1–3.)

To make the system more time-efficient, we had to get our reply rate at least into the double digits.

The hypothesis: Value

To boost our reply rate, we asked ourselves: What makes the best online content so engaging?

The answer: The best online content speaks to the user in terms of value. More specifically, the user’s personal values.

So, what are these user values that we need to target? Well, to look at that we need to understand today’s average user.


As opposed to their predecessors, today’s savvy post-digital users value personalization, customization, and participation.

Our hypothesis was as follows: If we can craft an email user experience that improves upon these three values, our reply rate will spike.

The results: Too hot to handle

Three successful tests later, our reply rate has gone from 8% all the way up to 34% (on 20–50 emails a week, that’s roughly 7–17 replies instead of 2–4).

And our guest blog content queue is piling up faster than the lines at the mall the night before Black Friday.

In three tests we addressed those three values: personalization, customization, and participation. Each new test brought a spike in reply rate.

How did we do it? Don’t worry, I’ll tell you how.

3 reply rate tests (& a mini test) and what we learned

We started by stepping into the user’s shoes. Everyone knows that receiving random outreach emails from strangers can be jarring. Even if you’re in the industry, it can at least be annoying.

So how do you solve that problem? The only way you can: delight.

How we approached creating a more delightful and comfortable email experience took testing. This is what we learned.

Test #1 – The personalized introduction (8%–16%)

The first feature of our email we tackled was the introduction. This included the subject line of the email, as well as how we introduced ourselves and the company.

Here’s what it looked like:

As you can see, while the subject line packs some serious authority, it’s not very personable. And if you look at the in-email introduction, you’d see a similar problem.

Plenty of professional context, but hardly a personalized first impression. This user experience screams BLOGGER TRYING TO GET GUEST BLOG OPPORTUNITY.

Now let’s look at the variant we tested:

Big difference, huh?

While all the same authoritative references are still there, this is already far more personal.

A few noteworthy differences in user experience:

  • Subject line: Natural, single sentence (almost seems like the email could have been forwarded by a co-worker).
  • Name and title: The letterhead not only replaces a useless sentence, it supplies a smiling face the user can match the name/title with.
  • Creative/disruptive branding: The creative letterhead is a real disrupter when you compare it to any old email. It also gets our logo above the fold in the email, and actually saves space altogether.

Packing all the context of the email into a single, creative, and delightful image for the user was a huge step.

In fact, this first test alone saw our biggest jump in reply rate.

The results? Our reply rate doubled, jumping all the way from 8% to 16% — above the industry benchmark!

Mini test: The psychology behind “Because” (16%–20%)

If that wasn’t a big enough jump to please us, we made one more change after the initial test.

If you don’t know who Brian Dean is, I’ll leave his bio for you to read another time. For now, all you need to know is that his “because” tactic for increasing reply rates works.

Trust me. He tested it. We tested it. It works.

The tactic is simple:

  1. Provide the exact context for your email in a single sentence.
  2. Use the phrasing “I am emailing you because…” in that sentence.
  3. Isolate that sentence as its own paragraph as early in the email as possible.

That’s it.

And this little change bumped our reply rate another 4 percentage points — all the way up to 20%. And this was before we even ran test #2!

Test #2 – Customizing/segmenting the offer (20%–28%)

Test #2 focused on customization. We had nailed the personalized first impression.

Now we needed to customize our offer to each individual recipient. Again, let’s take a look at where we started:

As far as customization goes, this isn’t half bad. There are plenty of prospective topics that the editor or blogger could choose from. But there’s always room for improvement.

Customization is a fancy word for segmentation, which is our industry’s fancy word for breaking lists into smaller lists.

So why not segment the topics we send to which editors? We can customize our email’s offer to be more relevant to the specific recipient, which should increase our chances of a positive reply.

Instead of a single list of prospective topics, we built 8.

Each list was targeted to a different niche industry where we wanted to guest post. Each list had 10 unique topics all specified to that blog’s niche.

Now, instead of 10 topics for the umbrella category “digital marketing,” we had 10 topics for:

  1. Pay-per-click advertising blogs
  2. Content marketing blogs
  3. Social media management blogs
  4. Software as a service (SaaS) blogs
  5. Interactive design blogs
  6. Search engine optimization blogs
  7. Agency management blogs
  8. E-commerce optimization blogs

Not only did the potential topics change, we also changed the email copy to better target each niche.

This test took a bit of time on its own. It’s not easy to build a list of 80 different targeted, niche, high-quality topics based on keyword research. But in the end, the juice was definitely worth the squeeze.

And what was the juice? Another spike in our reply rate — this time from 20% up to 28%!

Test #3 – Participating in topic selection (28%–34%)

We were already pretty pleased with ourselves at this point, but true link builders are never satisfied. So we kept on testing.

We had already addressed the personalization and customization issues. Now we wanted to take a crack at participation. But how do you encourage participation in an email?

That’s a tricky question.

We answered it by trying to provide as adaptive an offer as possible.

In our email copy, we emphasized our flexibility to the editor’s timeline/content calendar. We also provided an “open to any other options you may have” option in our list of topics. But the biggest change to our offer was this:

As opposed to a list of potential topics, we went one step further. By providing options for either long or short pieces (primary and focalized), we gave them something to think about. They could choose from the different options we were offering them.

This change did increase our reply rate. But what was surprising was that the replies were not immediately positive responses. More often than not, they were questions about the two different types of guest posts we could write.

This is where the participation finally kicked in.

(Chasing your first reply like Leo’s first Oscar…)

We were no longer cold-emailing strangers for one-time guest posts. We were conversing and building relationships with industry bloggers and editors.

And they were no longer responding to a random email. They were actively participating in the topic selection of their next blog post.

Once they started replying with questions, we knew they were interested. Then all we had to do was close them with fast responses and helpful answers.

This tiny change (all we did was split the targeted list we already had into two different sizes) brought big results. Test #3 brought the final jump in our reply rate — from 28% up to the magic 34%.

After we had proved that our new format worked, we got to have some real fun — taking this killer system we built and scaling it up!

But that’s a post for another day.

Takeaways

So what have our reply rate tests taught us? The more personal you are and the more segmented your approach, the more success you’ll see.

2017 is going to be the year of relationship building.

This means that for each market interaction, you need to remember that the user’s experience is the top priority. Provide as much delight and value to your user as possible. Every blog post. Every email. Every market interaction.

That’s how you quadruple reply rates. And that’s how you quadruple success.



Moz Blog


In-SERP Conversions: Dawn of the 100% Conversion Rate?

Posted by Alan_Coleman

By now, we’re all pretty used to Knowledge Graph results in the SERPs. But what could it mean when Google offers the ability to make a purchase, a call, book an appointment, or otherwise convert customers within those results? In this video blog, Alan Coleman speculates about a potential 100% conversion rate in the SERPs and raises the question of Google’s role in an increasingly app-centric world.



Video transcript

In this video blog, I’m going to talk to you about a key trend we’ve noticed with Google here at Wolfgang.

12 months ago, the key trend that we were talking about was Google had shifted its focus. From Google’s birth right up until last year, its objective was to get you to the website that was most relevant, most authoritative, most likely to answer your question — whereas what we saw 12 months ago was Google taking a lot more ownership of your journey from question to answer. And what we were seeing 12 months ago was a lot more questions literally being answered on the SERPs, pulling information from Wikipedia, from other websites and giving that to the user directly on Google.

A very recent update to this innovation is that Google is now actually using their own search data to give you further details. Last weekend I was searching for a restaurant and not only did it give me the reviews in the knowledge panel — the website, phone number, and opening hours — it also used its own data to give me the popular times: when I was most likely to get seated in the restaurant, and when it could be a problem.

So, armed with that information, we could go and have a lovely Italian lunch last weekend. But it doesn’t just stop at answering the question.

Conversions facilitated on the SERPs

Google’s methodology has always been to test things out in the organic list first and then, when they’ve learned the mechanics of it, they might try and commercialize it. What we’re beginning to see is not just questions being answered on the SERPs, but we’re beginning to see conversions being facilitated by the SERPs.

What you’re seeing here is someone searching for a medical practitioner. The searcher is actually able to book an appointment directly from the search engine results page.

Another recent innovation: call-only campaigns. Somebody’s searching for a courier, for example, and again, they can call the courier directly from the search engine results without even visiting the website. We’ve also seen click-to-call campaigns, another example of Google users being able to convert directly from the SERP. Very exciting! In theory, we’re talking about 100% conversion rates here: everyone who clicks on your ad becomes a lead or becomes a sale.

There’s also this beta which is currently out — with a very limited number of retailers in the States — whereby searchers are taken from search, to checkout, to placing their order in 3 clicks, all happening on a Google property.


Image courtesy of Google

Why I believe this is significant:

This is Google safeguarding its position as we move to an app ecosystem. World Wide Web usage has actually been in decline of late, because people are moving so much of their web behavior to apps, and Google’s strength has been that it’s our gateway to the Web. Google went down for a period of 4 minutes two years ago, and World Wide Web traffic fell off a cliff — it declined by 40% for that period.

Google is our gateway to the Web. However, if we start moving our Internet usage to apps, Google needs to be relevant there as well. I see that answering questions within Google and on Google, allowing people to convert, again in Google and on Google, is a move for them to safeguard their position as the place where we get our questions answered and where we do our transactions on the Web.

***

Do you have any thoughts on in-SERP conversions? Join the discussion in the comments below!



Moz Blog


Why You Should Use Adjusted Bounce Rate and How to Set It Up

Posted by RobBeirne

We need to talk about bounce rate.

Now, before I begin ranting, I’d just like to put on the record that bounce rate can, in certain cases, be a useful metric that can, when viewed in the context of other metrics, give you insights on the performance of the content on your website. I accept that. However, it is also a metric which is often misinterpreted and is, in a lot of cases, misleading.

We’ve gone on the record with our thoughts on bounce rate as a metric, but it’s still something that crops up on a regular basis.

The problem with bounce rate

Put simply, bounce rate doesn’t do what a lot of people think it does: It does not tell you whether people are reading and engaging with your content in any meaningful way.

Let’s make sure we’re all singing the same song on what exactly bounce rate means.

According to Google, “Bounce Rate is the percentage of single-page sessions (i.e. sessions in which the person left your site from the entrance page without interacting with the page).”

In simple terms, a bounce is recorded when someone lands on your website and then leaves the site without visiting another page or carrying out a tracked action (event) on the page.

The reality is that while bounce rate can give you a useful overview of user behaviour, there are too many unknowns that come with it as a metric to make it a bottom-line KPI for your advertising campaigns, your content marketing campaigns, or any of your marketing campaigns, for that matter.

When looked at in isolation, bounce rate gives you very little valuable information. There is a tendency to panic when bounce rate begins to climb or if it is deemed to be “too high.” This highly subjective term is often used without consideration of what constitutes an average bounce rate (average bounce rate for a landing page is generally 70-90%).

There’s a school of thought that a high bounce rate can be seen as a good thing, as it means that the user found no need to go looking any further for the information they needed. While there is some merit to this view, and in certain circumstances it can be the case, it seems to me to be overly simplistic and opaque.

It’s also very important to bear in mind that if a user bounces, they are not included in site metrics such as average session duration.

There is, however, a simple way to turn bounce rate into a robust and useful metric. I’m a big fan of adjusted bounce rate, which gives a much better metric on how users are engaging with your website.

The solution: adjusted bounce rate

Essentially, you set up an event which is triggered after a user spends a certain amount of time on the landing page, telling Google Analytics not to count these users as bounces. A user may come to your website, find all of the information they need (a phone number, for example) and then leave the site without visiting another page. Without adjusted bounce rate, such a user would be considered a bounce, even though they had a successful experience.

One example we see frequently of when bounce rate can be a very misleading metric is when viewing the performance of your blog posts. A user could land on a blog post and read the whole thing, but if they then leave the site they’ll be counted as a bounce. Again, this gives no insight whatsoever into how engaged this user was or if they had a good experience on your website.

By defining a time limit after which you can consider a user to be ‘engaged,’ that user would no longer count as a bounce, and you’d get a more accurate idea of whether they found what they were looking for.

When we implemented adjusted bounce rate on our own website, we were able to see that a lot of our blog posts which had previously had high bounce rates had actually been really engaging to those who read them.

For example, the bounce rate for a study we published on Facebook ad CTRs dropped by 87.32% in relative terms (from 90.82% to 11.51%), while our Irish E-commerce Study’s dropped by 76.34% (from 82.59% to 19.54%).

When we look at Moz’s own Google Analytics for Whiteboard Friday, we can see that they often see bounce rates of over 80%. While I don’t know for sure (such is the uncertainty surrounding bounce rate as a metric), I’d be willing to bet that far more than 20% of visitors to the Whiteboard Friday pages are interested and engaged with what Rand has to say.

This is an excellent example of where adjusted bounce rate could be implemented to give a more accurate representation of how users are responding to your content.

The brilliant thing about digital marketing has always been the ability of marketers to make decisions based on data and to use what we learn to inform our strategy. Adjusted bounce rate gives us much more valuable data than your run-of-the-mill, classic bounce rate.

It gives us a much truer picture of on-site user behaviour.

Adjusted bounce rate is simple to implement, even if you’re not familiar with code, requiring just a small one-line alteration to the Google Analytics code on your website. The below snippet of code is just the standard Google Analytics tag (be sure to add your own tracking ID in place of the “UA-XXXXXXX-1”), with one extra line added (the line beginning with “setTimeout”, and marked with an “additional line” comment in the code). This extra line is all that needs to be added to your current tag to set up adjusted bounce rate.

<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
  _gaq.push(['_trackPageview']);

  // Additional line: after 15 seconds, fire an event so that Google
  // Analytics no longer counts this session as a bounce.
  setTimeout("_gaq.push(['_trackEvent', '15_seconds', 'read'])", 15000);

  // Standard asynchronous loader for the classic ga.js library.
  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();
</script>

It’s a really simple job for your developer; simply replace the old snippet with the one above (that way you won’t need to worry about your tracking going offline due to a code mishap).

In the code above, the time is set to 15 seconds, but this can be changed (both the ‘15_seconds’ label and the 15000) depending on when you consider the user to be “engaged”. The ‘15_seconds’ string names your event, while the final argument sets the time interval and must be given in milliseconds (e.g. 30 seconds would be 30000, 60 seconds would be 60000, etc.).

On our own website, we have it set to 30 seconds, which we feel is enough time for a user to decide whether or not they’re in the right place and if they want to leave the site (bounce).
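If your site runs Google’s newer analytics.js (Universal Analytics) snippet rather than the classic ga.js shown above, the same one-line idea applies. This is a minimal sketch under that assumption, using the standard ga() command queue rather than anything from the original post:

// Standard Universal Analytics setup (swap in your own tracking ID).
ga('create', 'UA-XXXXXXX-1', 'auto');
ga('send', 'pageview');

// After 30 seconds, send an engagement event so the session no longer
// counts as a bounce; events are "interaction hits" by default.
setTimeout(function () {
  ga('send', 'event', '30_seconds', 'read');
}, 30000);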

Switching over to adjusted bounce rate will mean you’ll see fewer bounces within Google Analytics, as well as improving the accuracy of other metrics, such as average session duration, but it won’t affect the tracking in any other way.

Adjusted bounce rate isn’t perfect, but its improved data and ease of implementation are a massive step in the right direction, and I firmly believe that every website should be using it. It helps answer the question we’ve always wanted bounce rate to answer: “Are people actually reading my content?”

Let me know what you think in the comments below.



Moz Blog


A 5-Step Framework for Conversion Rate Optimization

Posted by Paddy_Moogan

There is a problem with conversion rate optimization: it looks easy. Most of us with some experience working online can take a look at a website and quickly find problems that may prevent someone from converting into a customer. There are a few such problems that are quite common:

  • A lack of customer reviews
  • A lack of trust / security signals
  • Bad communication of product selling points

The thing is, how do we know for sure that these are problems?

The fact is, we don’t. The only way to find out is to test these things and see. Even with this in mind, though, how do you know to test these things that are mainly based on your own gut feeling?

For me, this is where doing a high level of research and discovery is worth the time and effort. It can be far too easy to make assumptions about what to test and then dive straight in and start testing them. Wouldn’t it be better to run conversion rate tests based on actual data from your target audience?

I’m going to go into detail on the process we use at Distilled for conversion rate optimization. With the context above, it shouldn’t be any surprise that I spend a lot of time talking about the discovery phase of the process as opposed to testing and reviewing results.

For those of you who want the answer straight away and an easy takeaway, here is a graphic of the process: 

Before I move on, I wanted to give you a few links that have certainly helped me over the last few years when learning about conversion rate optimization.

Right, let’s get into the process.

This entire stage is all about one thing: gathering the data you need to inform your testing. This can take time and if you’re working with clients, you need to set expectations around this. The fact is that this is a very important stage and if done correctly, can save you a lot of heartache further down the process.

Step 1: Data gathering

There are three broad areas from which you can gather data. Let’s look at each of them in turn.

The company

This is the company / website that you’re working for. There is a bunch of information you can gather from them which will help inform your tests. 

Why does the company exist?

I always believe in starting with why, and I’ve talked about this before in the context of link building. It is at this point that you can dive right into the heart of the company and find out what makes it different from others. This isn’t just about finding USPs; it goes far deeper than that, into the culture and DNA of the company. The reason is that customers buy the company and the message it portrays just as much as the product itself. We all have affinities with certain companies who probably do produce a great product and service, but it’s a love for the company itself which keeps us interested and buying from them.

What are the goals of the company?

This is a pretty crucial one and the reasons should be obvious. You need to focus your data gathering and testing around hitting these goals. There are times when some goals may be less obvious than others. These are sometimes called micro-conversions and can include things that contribute to the bigger goal. For example, you may find that customers who sign up to your email newsletter are more likely to become repeat customers than those who don’t. Therefore, a micro-conversion would be to get people signed up to your email list.
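As a purely hypothetical illustration (the form id and event labels below are invented), a micro-conversion like a newsletter signup can be recorded as a Google Analytics event, which can then back a goal:

// Classic ga.js event tracking; 'newsletter-form' and the labels are placeholders.
document.getElementById('newsletter-form').addEventListener('submit', function () {
  _gaq.push(['_trackEvent', 'micro-conversion', 'newsletter-signup']);
});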

What are the unique selling propositions (USPs) of the company?

What makes the company different in comparison to competitors who sell the same or similar products? Bonus points here if the USP is something that a competitor can’t emulate. For example, offering free delivery is something that may help improve conversions, but chances are that your competitors can also offer this.

What are the common objections?

This is where you should be speaking to people within the organisation who are outside the marketing team. One example is to talk to sales staff and ask them how they sell the products, what they feel the USPs are and what the typical objections are to the product. Another example is to talk to customer support staff and see what problems they tend to deal with. These guys will also have input on what customers tend to like the most and what positive feedback / product improvements get suggested.

Another team to speak to is whoever manages live chat for a website if it exists. At Distilled, we’ve sometimes been able to get access to live chat transcripts and have been able to run analysis to find trends and common problems.


The website

Here, we are focusing specifically on the website itself and seeing what data we can gather to inform our experiments.

What does the sales process look like?

At this point, I’d recommend sitting down with the client and a big whiteboard to map out the sales process from start to finish, including each touch-point between the customer and the website or marketing materials such as email. From here, you can go pretty granular into each part of the process to find where problems can occur.

It is also at this point that you should review funnels in analytics, or set them up if they don’t currently exist. Try to find where the most common drop-off points are and take a deeper dive into why. Sometimes a technical problem may be to blame for the drop-off in conversions, so make sure you are at the very least segmenting data by browser to try and find problems.

What is the current traffic breakdown?

This involves you taking a deep dive into the existing analytics data that you have from the website. At this point you’re just trying to get a better understanding of a few core things:

  • How much traffic the website receives: This can impact your testing in that you may discover low traffic numbers which can influence how long it takes a test to complete.
  • What demographics the website typically attracts: This may require you to enable extra tracking if you’re using Google Analytics.
  • What technology users typically use: As mentioned above, looking at browser usage is important. But on top of this, what devices do users tend to use? If you’re seeing high numbers of users using mobile devices, you should check how the website renders on a mobile device. If you’re seeing very low numbers of visits from mobile devices, that is probably worth investigating too given the growth of traffic from mobile in recent years.

Where do conversions currently come from?

Hopefully, the website will already have some goals or eCommerce tracking enabled, which makes this bit a lot easier! If not, then you will need to get them set up as soon as possible so that you can start gathering the data you need. This work needs to be done no matter what, because you’re not going to be able to measure the results of your CRO tests if you can’t measure the conversions!

If you don’t have goals set up already, you can use Paditrack, which syncs with your Google Analytics account and allows you to apply goals to old data. It also allows you to segment your funnels, which, annoyingly, Google Analytics doesn’t allow you to do as of this writing.

If you do have this data, then you need to try and find patterns in the type of people who convert, as well as where they come from. With the latter, it can be a bit tricky sometimes because, quite often, customers will find you via different channels. So you need to make sure that you’re looking at multi-channel reports and seeing which ones are most common.

Is there any back-end data you can access?

Although things are changing, many analytics platforms do not integrate offline or back-end data by default, so you may need to go digging for it. One thing that many companies have is data on cancellation or refund rates. Typically this is not included in standard analytics views because it takes place offline; however, it can provide you with a wealth of information about products and customers. You can find out what causes customers to cancel a service or what made them ask for a refund.

The customers

This can potentially be the most interesting area to gather data from and have the most impact. Here we are gathering information directly from your customers via a number of methods.

What are the biggest objections that customers have?

For me, this is one of the most insightful things to ask because it drills straight into the one core thing that we care about in this process – what is stopping the customer from buying?

I really like this presentation from Conversion Rate Experts, which outlines their favourite questions to ask customers at this stage of the process, as well as these three questions from Avinash.

There are a number of ways to do this, which I’ll give some detail on here.

Google Consumer Surveys

We have used these surveys a few times at Distilled now, and they have usually given us pretty good insights. The results can be quite broad and, frankly, some responses can be pretty useless! But if you cut out the noise and look for the trends, you can get some good information on what concerns and considerations people have when buying products like yours.

Qualaroo

Qualaroo is a cool little survey tool which you’ve probably seen on numerous websites across the web. It looks something like this:

What I like about Qualaroo is that it doesn’t intrude on the user experience, and you can use some cool customization settings to make it appear exactly when you want. For example, you can set it to only appear on certain pages, or based on user behavior like time on page. You can also set it to appear when it looks like someone is about to close the window.
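As a rough sketch of how that kind of behaviour-based trigger works under the hood (Qualaroo handles this for you; showSurvey below is a hypothetical function):

document.addEventListener('mouseout', function (e) {
  // Leaving through the top of the viewport usually means the user is
  // reaching for the tab bar or the close button.
  if (!e.relatedTarget && e.clientY <= 0) {
    showSurvey(); // hypothetical: reveal the survey widget
  }
});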

One neat little tip here is to place the survey on your order confirmation page and ask the question “What nearly stopped you from buying from us today?” – this can give you some low-risk feedback because the user has already purchased from you.

It’s also worth mentioning that Qualaroo can now be used on mobile devices, too, so you can tailor your questions to mobile users really well:

Other survey services

If you have a good email list which is reasonably active and engaged, you can run email surveys using something like Survey Monkey. This can be a little more tricky because chances are that the people on your email list are existing customers, whose mindset is a bit different from someone who has never bought from you before. We’ve also used AYTM in the past for running surveys; it offers a few more options in its free version than Survey Monkey.

Usertesting.com

Again, this is a tool that we often use at Distilled, and we have gotten some good results from it. There have been a few misses too in terms of how useful the user has been, but that happens from time to time. Usertesting.com allows you to recruit users based on certain characteristics (age, gender, interests, etc.) and then ask them to complete tasks for you. These tasks are usually focused around your website or a competitor’s, and may involve researching and buying a product. As the user works through the tasks, they record a screencast and talk as they are working.

If you want to dive more into this, I really liked this webinar from Conversion Rate Experts, which focuses on how they use the service.

Step 2: List hypotheses

Now we need to make the step from information gathering to outlining what we may want to test. Without realising it, many people will jump straight to this step of the process and just start testing what feels right. By doing all the work we outlined in step 1, the rest of the process should be much more informed. Asking yourself the following questions should help you end up with a list of things to test that are backed up by real data and insight.

What are we testing?

Based on all of the information you gathered from the website, customers and the company in step 1, what would you like to test? Go back to the information and look for the common trends. I prefer to start with the most common customer objections and see what is common amongst them. For example, if a common theme of customer feedback was that they place a lot of value in knowing their personal payment details are safe, you could hypothesise that adding more trust signals to the checkout process will increase the number of people who complete the process.

Another example may be if you found that the sales team always get feedback that customers love the money-back guarantee that you offer. So you may hypothesise that making this selling point more obvious on your product pages may increase the number of people who start the checkout process.

Once you have a hypothesis, it is important to know what success looks like and therefore, how to tell if the test result is a positive one. This sounds like common sense, but it’s very important to get this clear right from the start so that you reach the end of the test and stand a high chance of having an answer.

Who are we testing?

It is important to understand the differences in the types of people who visit your website, not just demographically, but also in terms of where they are in the buying cycle. An important example to keep in mind is new vs. returning customers. Putting both of these types of customers into the same test could lead to unreliable results, because the mindsets of the customers are very different.

Returning customers (assuming you did a good job!) will already be bought into your company and brand; they will have already experienced the checkout process; they may even already have their credit card details registered with you. All of these things are likely to make them automatically more likely to convert into a customer compared to a brand new customer. One thing to mention here is that you’re never going to be able to segment everyone perfectly, because analytics data quality is never 100% perfect. There isn’t much we can do about this beyond ensuring we’re tracking correctly and using best practice when segmenting users.

When you run your test, most pieces of software will allow you to direct traffic to your test pages based on various attributes; here is an example from Optimizely:

Another useful segment as you can see above is the segmentation by browser. This can be particularly useful if you have any bugs with certain browsers and your testing page. For example, if something you want to test doesn’t load correctly in Firefox, you can choose to exclude Firefox users from the test. Obviously if the test is successful, the final roll-out will need to work in all browsers, but this setting can be useful as a short term fix. 
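To illustrate what the testing tool is doing on your behalf (this is not Optimizely’s actual implementation, just a sketch of the underlying idea), excluding a browser and splitting the remaining visitors consistently might look like this:

function assignVariant(visitorId) {
  // Exclude Firefox users while, say, a rendering bug is being fixed.
  if (navigator.userAgent.indexOf('Firefox') !== -1) {
    return 'original';
  }
  // Hash the visitor ID so a returning visitor always gets the same variant.
  var hash = 0;
  for (var i = 0; i < visitorId.length; i++) {
    hash = (hash * 31 + visitorId.charCodeAt(i)) % 1000;
  }
  return hash < 500 ? 'variant-a' : 'original'; // 50/50 split
}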

Where are we testing?

This is a pretty straightforward one. You just need to specify which page or set of pages you’re testing. You may choose to test just one product page or a set of similar products at once. One thing to mention here is that if you’re testing multiple pages at once, you should be aware of how the buying cycles for those products may differ. If you’re testing two product pages with a single test, and one of those products is a $500 garden shed while the other is a $10 garden ornament, then the results of the test may be a bit skewed.

When you list the pages that you’re testing, it is also a good time to run through a simple checklist to make sure that tracking code has been added to those pages correctly. Again, this is pretty basic but can be easily forgotten.

Goals of the discovery phase:

  1. You’ve gathered data from customers, the website, and the company
  2. You’ve used this data to form a hypothesis on what to test
  3. You’ve identified who you’re targeting with this test and what pages it applies to
  4. You’ve checked that tracking code is set up correctly on those pages

This stage is where we start testing! Again, this is a step that people can jump straight to without data to back up their tests. Make sure that isn’t you!

Step 3: Wireframe test designs

This step is likely to vary based on your specific circumstances. It may not even be necessary for you to do wireframing! If you’re in a position where you don’t need to get sign-off on new test designs, then you can make changes to your website directly using a tool like Optimizely or Visual Website Optimizer.

Having said that, there are benefits to taking some time to plan the changes that you’re going to make so that you can double check that they are in line with steps 1 and 2 above. Here are a few questions to ask yourself as you’re going through this step. 

Are the changes directly testing my hypothesis?

This sounds basic; of course they should! However it can be easy to get off-track when doing this kind of work. So it’s good to take a step back and ask yourself this question because you can easily do too much and end up testing more than you expected to.

Are the changes keeping the design on-brand?

This is likely to be more of an issue if you’re working on a very large website with multiple stakeholders, such as UX teams, design teams, marketing teams, etc. This can cause problems in getting things signed off, but there are often good reasons for this. If you suggest a design that involves fundamental changes to page layout and design, it’s less likely to get sign-off unless you’ve already built up a serious amount of trust.

Are the changes technically doable?

At Distilled, we’ve sometimes run into issues where our changes have been a bit tricky to implement and have required a bit of development time to get working. This is fine if you have the development time available, but if you don’t, this could limit the complexity of the tests that you run. So you need to bear this in mind when designing tests and choosing which hypotheses to test.

If you’re looking for a good wire-framing tool for this step, there are a few options including Balsamiq and Mockingbird.

Step 4: Implement design

At Distilled, we use Optimizely to implement designs and run split tests on client websites, but Visual Website Optimizer is a good alternative.

As mentioned above, the more complex your design, the more work you may need to do to put the design live. It is really important at this point to make sure you’re testing the design across different browsers before putting it live. Visual elements can change quite dramatically, and the last thing you want is to skew your results because a certain browser doesn’t render the design properly.

It is also at this stage that you can choose a few options in terms of who should see the test. This is how this looks in Optimizely:

You can also choose what proportion of your traffic will be sent to the testing pages. If you have high traffic numbers, then this can help offset the risk of a test resulting in conversion rates dropping – it does happen! So only sending 10% of your traffic to the test means that the remaining 90% will carry on as normal.
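Conceptually, that allocation setting boils down to a gate like the sketch below (real tools persist the decision, typically in a cookie, so each visitor’s experience stays consistent between visits):

// Only one visitor in ten enters the experiment at all.
var inExperiment = Math.random() < 0.10;
if (inExperiment) {
  // ...assign this visitor to a variant as usual...
} else {
  // ...the other 90% see the site exactly as before.
}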

This is what this setting looks like if you’re using Optimizely:

You should also connect Optimizely to your Google Analytics account so that you’re able to determine the average order value for each group of visitors you send to your conversion tests. Sometimes the raw conversion rate for a test may not increase, but the average order value may, which is obviously a win that you don’t want to overlook.

Goals of the experiments phase:

  1. Test variations are live and getting traffic
  2. Cross-browser testing is complete
  3. Design has been signed off by client / stakeholders if applicable
  4. Correct customer segments / traffic allocation has been set

Now it’s time to see if our work has paid off! 

Step 5: Was the hypothesis correct?

Was statistical significance reached? 

Before diving in and assessing if your hypothesis was correct, you need to make sure that statistical significance has been reached. I like this short definition by Chris Goward, which helps explain what this is and its importance. If you want to go a bit deeper and see some examples, this post by Will on the Distilled blog is a great read.

Many split testing tools will actually tell you if significance has been reached or not, so this takes some of the hard work out of the process. Having said that, it’s still a good idea to understand the theories behind it so you can spot problems if they occur.

In terms of how long it could take to reach statistical significance, it can be hard to predict, but this is a cool tool which helps you with this. Evan has another tool related to this which allows you to determine how order value differs across two different test groups. This is one of the key reasons to connect Optimizely to Google Analytics, as mentioned above.
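For intuition, significance for conversion rates is usually assessed with something like a two-proportion z-test. The sketch below is a simplification of what the tools compute, and the example numbers are invented:

function zScore(convA, visitsA, convB, visitsB) {
  var pA = convA / visitsA, pB = convB / visitsB;
  var pooled = (convA + convB) / (visitsA + visitsB);
  var stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  return (pB - pA) / stdErr; // |z| > 1.96 is roughly significant at the 95% level
}

// e.g. 200 conversions from 10,000 control visits vs. 260 from 10,000 variant visits:
zScore(200, 10000, 260, 10000); // ~2.83, so the uplift is significant at the 95% level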

Was the hypothesis correct?

Yes? Great! If your test was a success and increased conversions, what’s next? Well, firstly, you need to look at how to roll out the successful design to the website properly, i.e. not relying on Optimizely or Visual Website Optimizer to display the design to visitors. In the short term, you can send 100% of your traffic to the successful design (if you haven’t already) and keep an eye on the numbers. But at some point, you’ll probably need help from developers to deploy the changes on the website directly.

When the hypothesis isn’t correct

This is going to happen; most conversion rate experts don’t talk about their failed tests, but they do happen. One person who did talk about this is Peep Laja, in this article, and he went into even more detail in this case study, where he said that it took six tests before a positive result was reached.

The important thing here is to not give up, and to make sure you’ve learned something from the process. There are always things to learn from failed tests, and you can iterate on them and feed the learnings into future tests. Alongside this, make sure you’re keeping track of all the data you’ve gathered from failed tests, so that you have a log of all tests which you can refer back to in the future.

Goals of the review stage:

  1. Know whether a hypothesis was correct or not
  2. If it was correct, roll out widely
  3. If it wasn’t correct, what did we learn?
  4. On to the next test!

That’s about it! Conversion rate optimization should be an ongoing process because there are always things that can be improved across your business. Look for the opportunities to test everything, follow a good process and you can make a big difference to the bottom line.

A few resources to leave you with which I’d highly recommend:

If you have any feedback or comments, feel free to leave them below!



Moz Blog


Mobile Marketing: 31% of marketers don’t know their mobile email open rate

Mobile-friendly emails are a necessity when marketers look to target their audiences. However, many of those marketers are unaware of how many consumers utilize mobile email. By designing for mobile first, marketers have found their content is increasingly reader-friendly on PCs as well.
MarketingSherpa Blog

