Tag Archive | "Metrics"

How to Get More Keyword Metrics for Your Target Keywords

Posted by Bill.Sebald

If you’re old in SEO years, you remember the day [not provided] was introduced. It was a dark, dark day. SEOs lost a vast amount of trusty information. Click data. Conversion data. This was incredibly valuable, allowing SEOs to prioritize their targets.

Google said the info was removed for security purposes, while suspicious SEOs thought this was a push towards spending more on AdWords (now Google Ads). I get it — since AdWords would give you the keyword data SEOs cherished, the “controversy” was warranted, in my opinion. The truth is out there.

But we’ve moved on, and learned to live with the situation. Then a few years later, Google Webmaster Tools (now Search Console) started providing some of the keyword data in the Search Analytics report. Through the years, the report got better and better.

But there’s still a finite set of keywords in the interface. You can’t get more than 999 in your report.

Search Analytics Report

Guess what? Google has more data for you!

The Google Search Console API is your friend. This summer it became even friendlier, providing 16 months' worth of data. What you may not know is that this API can give you more than 999 keywords. By way of example, the API provides more than 45,000 keywords for our Greenlane site, and we're not even a very large site. That's right — the API can give you keywords, clicks, average position, impressions, and CTR %.

Salivating yet?
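If you're comfortable with a bit of scripting, you can also go straight to the source. Here's a minimal sketch using Python and google-api-python-client, assuming you've already set up a Google API project and OAuth credentials with read access to your Search Console property; the dates and property URL are placeholders, and the 25,000-row page size is the API's per-request limit.

```python
# Minimal sketch: pull every available query from the Search Console API,
# paginating past the interface's ~1,000-row ceiling. Credentials setup is
# assumed to be handled elsewhere (OAuth or a service account with access).
from googleapiclient.discovery import build

def fetch_all_queries(credentials, site_url, start_date, end_date):
    service = build("webmasters", "v3", credentials=credentials)
    rows, start_row = [], 0
    while True:
        response = service.searchanalytics().query(
            siteUrl=site_url,
            body={
                "startDate": start_date,      # e.g. "2018-06-01"
                "endDate": end_date,          # e.g. "2018-09-30"
                "dimensions": ["query"],      # add "page" to split by URL too
                "rowLimit": 25000,            # API maximum per request
                "startRow": start_row,        # keep paginating past each batch
            },
        ).execute()
        batch = response.get("rows", [])
        rows.extend(batch)
        if len(batch) < 25000:                # no more pages left
            break
        start_row += 25000
    return rows  # each row: keys (the query), clicks, impressions, ctr, position
```

Each returned row carries the query plus clicks, impressions, CTR, and average position, so you can dump the whole list into a spreadsheet or database and slice it however you like.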

How to easily leverage the API

If you’re not very technical and the thought of an API frightens you, I promise there’s nothing to fear. I’m going to show you a way to leverage the data using Google Sheets.

Here is what you will need:

  1. Google Sheets (free)
  2. Supermetrics Add-On (free trial, but a paid tool)

If you haven’t heard of Google Sheets, it’s one of several tools Google provides for free. This directly competes with Microsoft Excel. It’s a cloud-based spreadsheet that works exceptionally well.

If you aren’t familiar with Supermetrics, it’s an add-on for Google Sheets that allows data to be pulled in from other sources. In this case, one of the sources will be Google Search Console. Now, while Supermetrics has a free trial, paid is the way to go. It’s worth it!

Installation of Supermetrics:

  1. Open Google Sheets and click the Add-On option
  2. Click Get Add-Ons
  3. A window will open where you can search for Supermetrics. It will look like this:

How To Install Supermetrics

From there, just follow the steps. It will immediately ask to connect to your Google account. I’m sure you’ve seen this kind of dialog box before:

Supermetrics wants to access your Google Account

You’ll be greeted with a message for launching the newly installed add-on. Just follow the prompts to launch. Next you’ll see a new window to the right of your Google Sheet.

Launch message

At this point, you should see the following note:

Great, you’re logged into Google Search Console! Now let’s run your first query. Pick an account from the list below.

Next, all you have to do is work down the list in Supermetrics. Data Source, Select Sites, and Select Dates are pretty self-explanatory. When you reach the “Select metrics” toggle, choose Impressions, Clicks, CTR (%), and Average Position.

Metrics

When you reach “Split by,” choose Search Query as the Split to rows option, and pick a large number for the number of rows to fetch. If you also want the page URLs (perhaps you’d like your data split out at the page level), just add Full URL as well.

Split By

You can play with the other Filter and Options settings if you’d like, but you’re ready to click Apply Changes and receive the data. It should compile like this:

Final result

Got the data. Now what?

Sometimes optimization is about taking something that’s working, and making it work better. This data can show you which keywords and topics are important to your audience. It’s also a clue towards what Google thinks you’re important for (thus, rewarding you with clicks).

SEMrush and Ahrefs can provide ranking keyword data with their estimated clicks, but impressions is an interesting metric here. High impressions and low clicks? Maybe your title and description tags aren’t compelling enough. It’s also fun to VLOOKUP their data against this to see just how accurate they are (or are not). Or you can use a tool like Power BI to append other customer or paid search metrics to paint a bigger picture of your visitors’ mindset.
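For instance, once the pull lands in your sheet, you can export it to CSV and flag the "high impressions, low clicks" keywords programmatically. A rough sketch with pandas; the column names are assumptions based on the metrics above, so adjust them to match your export:

```python
import pandas as pd

# Load the Search Console export (column names are assumptions; match your sheet).
df = pd.read_csv("search_console_export.csv")

# Keywords Google shows a lot but searchers rarely click: decent average
# position, plenty of impressions, weak CTR.
candidates = df[
    (df["Impressions"] >= 500)
    & (df["Average position"] <= 10)
    & (df["CTR (%)"] < 1.0)
].sort_values("Impressions", ascending=False)

# These are the queries where better titles and meta descriptions
# are most likely to pay off.
print(candidates.head(25))
```

The same frame also makes the VLOOKUP-style comparison easy: merge it with an SEMrush or Ahrefs export on the keyword column and compare their estimated clicks to the real ones.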

Conclusion

Sometimes the littlest hacks are the most fun. Google commonly holds some data back in its free products (the Greenlane Indexation Tester is a good example with the old interface). We know Keyword Planner and Google Analytics have more data than they share. But in those cases, where directional information can sometimes be enough, digging out even more of your impactful keyword data is pure gold.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


When Bounce Rate, Browse Rate (PPV), and Time-on-Site Are Useful Metrics… and When They Aren’t – Whiteboard Friday

Posted by randfish

When is it right to use metrics like bounce rate, pages per visit, and time on site? When are you better off ignoring them? There are endless opinions on whether these kinds of metrics are valuable or not, and as you might suspect, the answer is found in the shades of grey. Learn what Rand has to say about the great metrics debate in today’s episode of Whiteboard Friday.

When bounce rate, browse rate (PPV), and time on site are useful metrics and when they suck

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about times at which bounce rate, browse rate, which is pages per visit, and time on site are terrible metrics and when they’re actually quite useful metrics.

This happens quite a bit. I see in the digital marketing world people talking about these metrics as though they are either dirty-scum, bottom-of-the-barrel metrics that no one should pay any attention to, or that they are these lofty, perfect metrics that are what we should be optimizing for. Neither of those is really accurate. As is often the case, the truth usually lies somewhere in between.

So, first off, some credit to Wil Reynolds, who brought this up during a discussion that I had with him at Siege Media’s offices, an interview that Ross Hudgens put together with us, and Sayf Sharif from Seer Interactive, their Director of Analytics, who left an awesome comment about this discussion on the LinkedIn post of that video. We’ll link to those in this Whiteboard Friday.

So Sayf and Wil were both basically arguing that these are kind of crap metrics. We don’t trust them. We don’t use them a lot. I think, a lot of the time, that makes sense.

Instances when these metrics aren’t useful

Here’s when these metrics (bounce rate, pages per visit, and time on site) kind of suck.

1. When they’re used instead of conversion actions to represent “success”

So they suck when you use them instead of conversion actions. A conversion is when someone took an action that I wanted on my website. They filled in a form. They purchased a product. They put in their credit card. Whatever it is, they got to a page that I wanted them to get to.

Bounce rate is basically the average percent of people who landed on a page and then left your website, not to continue on any other page on that site after visiting that page.

Pages per visit is essentially exactly what it sounds like: the average number of pages per visit for people who landed on that particular page. So for people who came in through one of these pages, how many pages did they visit on my site?

Then time on site is essentially a very raw and rough metric. If I leave my computer to use the restroom or I basically switch to another tab or close my browser, it’s not necessarily the case that time on site ends right then. So this metric has a lot of imperfections. Now, averaged over time, it can still be directionally interesting.
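To make those three definitions concrete, here's a toy sketch of how they fall out of raw session data. The sessions are made up, and real analytics packages compute these with more nuance, but the arithmetic is the same:

```python
# Toy sessions: each dict is one visit that landed on the page we're studying.
sessions = [
    {"pages_viewed": 1, "seconds_on_site": 0},   # bounced; a one-page visit can't be timed
    {"pages_viewed": 4, "seconds_on_site": 310},
    {"pages_viewed": 2, "seconds_on_site": 95},
    {"pages_viewed": 1, "seconds_on_site": 0},
]

bounces = sum(1 for s in sessions if s["pages_viewed"] == 1)
bounce_rate = bounces / len(sessions)                                      # 50%
pages_per_visit = sum(s["pages_viewed"] for s in sessions) / len(sessions)      # 2.0
avg_time_on_site = sum(s["seconds_on_site"] for s in sessions) / len(sessions)  # ~101s

print(f"Bounce rate {bounce_rate:.0%}, pages/visit {pages_per_visit:.1f}, "
      f"time on site {avg_time_on_site:.0f}s")
```

Note how the two bounced visits drag average time on site toward zero, which is exactly the kind of imperfection described above.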

But when you use these instead of conversion actions, which is what we all should be optimizing for ultimately, you can definitely get into some suckage with these metrics.

2. When they’re compared against non-relevant “competitors” and other sites

When you compare them against non-relevant competitors, so when you compare, for example, a product-focused, purchase-focused site against a media-focused site, you’re going to get big differences. First off, if your pages per visit look like a media site’s pages per visit and you’re product-focused, that is crazy. Either the media site is terrible or you’re doing something absolutely amazing in terms of keeping people’s attention and energy.

Time on site is a little bit misleading in this case too, because if you look at the time on site, again, of a media property or a news-focused, content-focused site versus one that’s very e-commerce focused, you’re going to get vastly different things. Amazon probably wants your time on site to be pretty small. Dell wants your time on site to be pretty small. Get through the purchase process, find the computer you want, buy it, get out of here. If you’re taking 10 minutes to do that or 20 minutes to do that instead of 5, we’ve failed. We haven’t provided a good enough experience to get you quickly through the purchase funnel. That can certainly be the case. So there can be warring priorities inside even one of these metrics.

3. When they’re not considered over time or with traffic sources factored in

Third, you get some suckage when they are not considered over time or against the traffic sources that brought them in. For example, if someone visits a web page via a Twitter link, chances are really good, really, really good, especially on mobile, that they’re going to have a high bounce rate, a low number of pages per visit, and a low time on site. That’s just how Twitter behavior is. Facebook is quite similar.

Now, if they’ve come via a Google search, an informational Google search and they’ve clicked on an organic listing, you should see just the reverse. You should see a relatively good bounce rate. You should see a relatively good pages per visit, well, a relatively higher pages per visit, a relatively higher time on site.

Instances when these metrics are useful

1. When they’re used as diagnostics for the conversion funnel

So there’s complexity inside these metrics for sure. What we should be using them for, when these metrics are truly useful is when they are used as a diagnostic. So when you look at a conversion funnel and you see, okay, our conversion funnel looks like this, people come in through the homepage or through our blog or news sections, they eventually, we hope, make it to our product page, our pricing page, and our conversion page.

We have these metrics for all of these. When we make changes to some of these, significant changes, minor changes, we don’t just look at how conversion performs. We also look at whether things like time on site shrank or whether people had fewer pages per visit or whether they had a higher bounce rate from some of these sections.

So perhaps, for example, we changed our pricing and we actually saw that people spent less time on the pricing page and had about the same number of pages per visit and about the same bounce rate from the pricing page. At the same time, we saw conversions dip a little bit.

Should we intuit that pricing negatively affected our conversion rate? Well, perhaps not. Perhaps we should look and see if there were other changes made or if our traffic sources were in there, because it looks like, given that bounce rate didn’t increase, given that pages per visit didn’t really change, given that time on site actually went down a little bit, it seems like people are making it just fine through the pricing page. They’re making it just fine from this pricing page to the conversion page, so let’s look at something else.

This is the type of diagnostics that you can do when you have metrics at these levels. If you’ve seen a dip in conversions or a rise, this is exactly the kind of dig into the data that smart, savvy digital marketers should and can be doing, and I think it’s a powerful, useful tool to be able to form hypotheses based on what happens.

So again, another example, did we change this product page? We saw pages per visit shrink and time on site shrink. Did it affect conversion rate? If it didn’t, but then we see that we’re getting fewer engaged visitors, and so now we can’t do as much retargeting and we’re losing email signups, maybe this did have a negative effect and we should go back to the other one, even if conversion rate itself didn’t seem to take a particular hit in this case.

2. When they’re compared over time to see if internal changes or external forces shifted behavior

Second useful way to apply these metrics is compared over time to see if your internal changes or some external forces shifted behavior. For example, we can look at the engagement rate on the blog. The blog is tough to generate as a conversion event. We could maybe look at subscriptions, but in general, pages per visit is a nice one for the blog. It tells us whether people make it past the page they landed on and into deeper sections, stick around our site, check out what we do.

So if we see that it had a dramatic fall down here in April and that was when we installed a new author and now they’re sort of recovering, we can say, “Oh, yeah, you know what? That takes a little while for a new blog author to kind of come up to speed. We’re going to give them time,” or, “Hey, we should interject here. We need to jump in and try and fix whatever is going on.”

3. When they’re benchmarked versus relevant industry competitors

Third and final useful case is when you benchmark versus truly relevant industry competitors. So if you have a direct competitor, very similar focus to you, product-focused in this case with a homepage and then some content sections and then a very focused product checkout, you could look at you versus them and their homepage and your homepage.

If you could get the data from a source like SimilarWeb or Jumpshot, if there’s enough clickstream level data, or some savvy industry surveys that collect this information, and you see that you’re significantly higher, you might then take a look at what are they doing that we’re not doing. Maybe we should use them when we do our user research and say, “Hey, what’s compelling to you about this that maybe is missing here?”

Otherwise, a lot of the time people will take direct competitors and say, “Hey, let’s look at what our competition is doing and we’ll consider that best practice.” But if you haven’t looked at how they’re performing, how people are getting through, whether they’re engaging, whether they’re spending time on that site, whether they’re making it through their different pages, you don’t know if they actually are best practices or whether you’re about to follow a laggard’s example and potentially hurt yourself.

So definitely a complex topic, definitely many, many different things that go into the uses of these metrics, and there are some bad and good ways to use them. I agree with Sayf and with Wil, but I think there are also some great ways to apply them. I would love to hear from you if you’ve got examples of those down in the comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


SearchCap: Santa tracker, Google API terms & SEO metrics

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Santa tracker, Google API terms & SEO metrics appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


When and How to Use Domain Authority, Page Authority, and Link Count Metrics – Whiteboard Friday

Posted by randfish

How can you effectively apply link metrics like Domain Authority and Page Authority alongside your other SEO metrics? Where and when does it make sense to take them into account, and what exactly do they mean? In today’s Whiteboard Friday, Rand answers these questions and more, arming you with the knowledge you need to better understand and execute your SEO work.

When and how to use Domain Authority, Page Authority, and link count metrics.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about when and how to use Domain Authority and Page Authority and link count metrics.

So many of you have written to us at Moz over the years and certainly I go to lots of conferences and events and speak to folks who are like, “Well, I’ve been measuring my link building activity with DA,” or, “Hey, I got a high DA link,” and I want to confirm when is it the right time to be using something like DA or PA or a raw link count metric, like number of linking root domains or something like Spam Score or a traffic estimation, these types of metrics.

So I’m going to walk you through kind of these three — Page Authority, Domain Authority, and linking root domains — just to get a refresher course on what they are. Page Authority and Domain Authority are actually a little complicated. So I think that’s worthwhile. Then we’ll chat about when to use which metrics. So I’ve got sort of the three primary things that people use link metrics for in the SEO world, and we’ll walk through those.

Page Authority

So to start, Page Authority is basically — you can see I’ve written a ton of different little metrics in here — linking URLs, linking root domains, MozRank, MozTrust, linking subdomains, anchor text, linking pages, followed links, nofollowed links, 301s, 302s, new versus old links, TLD, domain name, branded domain mentions, Spam Score, and many, many other metrics.

Basically, what PA is, is it’s every metric that we could possibly come up with from our link index all taken together and then thrown into a model with some training data. So the training data in this case, quite obviously, is Google search results, because what we want the Page Authority score to ultimately be is a predictor of how well a given page is going to rank in Google search results assuming we know nothing else about it except link data. So this is using no on-page data, no content data, no engagement or visit data, none of the patterns or branding or entity matches, just link data.

So this is everything we possibly know about a page from its link profile and the domain that page is on, and then we insert that in as the input alongside the training data. We have a machine learning model that essentially learns against Google search results and builds the best possible model it can. That model, by the way, throws away some of this stuff, because it’s not useful, and it adds in a bunch of this stuff, like vectors or various attributes of each one. So it might say, “Oh, anchor text distribution, that’s actually not useful, but Domain Authority ordered by the root domains with more than 500 links to them.” I’m making stuff up, right? But you could have those sorts of filters on this data and thus come up with very complex models, which is what machine learning is designed to do.

All we have to worry about is that this is essentially the best predictive score we can come up with based on the links. So it’s useful for a bunch of things. If we’re trying to say how well do we think this page might rank independent of all non-link factors, PA, great model. Good data for that.
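Purely to illustrate the shape of what's being described here (this is emphatically not Moz's actual model, feature set, or training data), a toy version looks something like this: link features in, observed rankings as the training target, a learned score out.

```python
# Toy illustration only. The features, numbers, and model choice are all
# made up; the real PA model and its inputs are far more complex.
from sklearn.ensemble import GradientBoostingRegressor

# One row per page: a few link-profile features, plus the observed rank of
# that page for a query it targets (the "training data" from search results).
X_train = [
    # [linking_root_domains, followed_links, mozrank, spam_score]
    [410, 1200, 5.8, 1],
    [150,  600, 4.9, 2],
    [12,    40, 2.1, 6],
    [3,      9, 1.2, 9],
]
y_train = [2, 7, 34, 80]   # observed ranking positions (lower is better)

model = GradientBoostingRegressor().fit(X_train, y_train)

# A "PA-like" prediction for a new page, using only its link profile.
predicted_rank = model.predict([[220, 800, 5.1, 2]])[0]
print(f"Predicted rank from link data alone: {predicted_rank:.1f}")
```

The real model also selects and transforms features automatically, which is the "throws away some of this stuff, adds in a bunch of this stuff" idea described above.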

Domain Authority

Domain Authority is once you have the PA model in your head and you’re sort of like, “Okay, got it, machine learning against Google’s results to produce the best predictive score for ranking in Google.” DA is just the PA model at the root domain level. So not subdomains, just root domains, which means it’s got some weirdness. It can’t, for example, say that randfishkin.blogspot.com is different than www.blogspot.com. But obviously, a link from www.blogspot.com is way more valuable than from my personal subdomain at Blogspot or Tumblr or WordPress or any of these hosted subdomains. So that’s kind of an edge case that unfortunately DA doesn’t do a great job of supporting.

What it’s good for is it’s relatively well-suited to be predictive of how a domain’s pages will rank in Google. So it removes all the page-level information, but it’s still operative at the domain level. It can be very useful for that.

Linking Root Domain

Then linking root domains is the simplest one. This is basically a count of all the unique root domains with at least one link on them that point to a given page or a site. So if I tell you that this URL A has 410 linking root domains, that basically means that there are 410 domains with at least one link pointing to URL A.

What I haven’t told you is whether they’re followed or nofollowed. Usually, this is a combination of the two unless it’s specified. So even a nofollowed link could count toward the linking root domains, which is why you should always double-check. If you’re using Ahrefs or Majestic or Moz and you hover on the little question mark icon next to any given metric, it will tell you what it includes and what it doesn’t include.
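As a quick illustration of "count of unique root domains," here's a rough sketch. Properly reducing a hostname to its registrable root domain needs a public-suffix list (the tldextract package, for example), so the naive split below is only an approximation:

```python
from urllib.parse import urlparse

backlinks = [
    "https://blog.example.com/post-1",
    "https://www.example.com/resources",
    "http://another-site.org/links.html",
]

def root_domain(url):
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Naive: keep the last two labels. This breaks on suffixes like .co.uk;
    # use a public-suffix-aware library (e.g. tldextract) for real work.
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

linking_root_domains = {root_domain(u) for u in backlinks}
print(len(linking_root_domains), linking_root_domains)
# 2 -> example.com is counted once no matter how many of its pages link to you
```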

When to use which metric(s)

All right. So how do we use these?

Well, for month over month link building performance, which is something that a lot of folks track, I would actually not suggest making DA your primary one. This is for a few reasons. So Moz’s index, which is the only thing currently that calculates DA or a machine learning-like model out there among the major toolsets for link data, only updates about once every month. So if you are doing your report before the DA has updated from the last link index, that can be quite frustrating.

Now, I will say we are only a few months away from a new index that’s going to replace Mozscape that will calculate DA and PA and all these other things much, much more quickly. I know that’s been something many folks have been asking for. It is on its way.

But in the meantime, what I recommend using is:

1. Linking root domains, the count of linking root domains and how that’s grown over time.

2. Organic rankings for your targeted keywords. I know this is not a direct link metric, but it really helps to tell you how those links have affected performance. So if you’re measuring month to month, it should be the case that any links you’ve earned in a 20- or 30-day period, Google has probably counted and recognized within a few days of finding them, and Google is pretty good at crawling nearly the whole web within a week or two. So this is going to be a reasonable proxy for how your link building campaign has helped your organic search campaign.

3. The distribution of Domain Authority. So I think, in this case, Domain Authority can be useful. It wouldn’t be my first or second choice, but I think it certainly can belong in a link building performance report. It’s helpful to see the high DA links that you’re getting. It’s a good sorting mechanism to sort of say, “These are, generally speaking, more important, more authoritative sites.”

4. Spam Score I like as well, because if you’ve been doing a lot of link building, Domain Authority won’t lower its score just because a site has a high Spam Score. Spam Score will show you, “Hey, this is an authoritative site with a lot of DA and good-looking links, but it also looks quite spammy to us.” So, for example, you might see that something has a DA of 60 but a Spam Score of 7 or 8, which might be mildly concerning. I start to really worry when you get to 9, 10, or 11. (A sketch of a simple report combining these metrics follows this list.)
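Pulling those pieces together, a month-over-month report can be assembled from a simple table of the links you've acquired. A sketch, with made-up fields standing in for whatever your link data export actually contains:

```python
import pandas as pd

# Made-up export of links acquired this month; the fields are illustrative.
links = pd.DataFrame([
    {"root_domain": "bigmedia.com",     "da": 88, "spam_score": 1},
    {"root_domain": "nichesite.net",    "da": 42, "spam_score": 3},
    {"root_domain": "sketchy-dir.info", "da": 61, "spam_score": 9},
])

report = {
    # 1. Growth in linking root domains this period.
    "new_linking_root_domains": links["root_domain"].nunique(),
    # 3. Distribution of Domain Authority, bucketed for the report.
    "da_distribution": pd.cut(links["da"], bins=[0, 30, 50, 70, 100])
                         .value_counts().sort_index(),
    # 4. High Spam Score links that deserve a closer look.
    "spam_flags": links.loc[links["spam_score"] >= 8, "root_domain"].tolist(),
}
print(report)
```

Rankings for your target keywords (point 2) would come from your rank tracker and sit alongside this in the same report.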

Second question:

I think this is something that folks ask. So they look at their own links and they say, “All right, we have these links or our competitor has these links. Which ones are providing the most value for me?” In that case, if you can get it, for example, if it’s a link pointing to you, the best one is, of course, going to be…

1. Real traffic sent. If a site or a page, a link is sending traffic to you, that is clearly of value and that’s going to be likely interpreted positively by the search engines as well.

You can also use…

2. PA

3. DA. I think it’s pretty good. These metrics are pretty good and pretty well-correlated with, relatively speaking, value, especially if you can’t get at a metric like real traffic because it’s coming from someone else’s site.

4. Linking root domains, the count of those to a page or a domain.

5. The rankings rise, in the case where a page is ranking position four, a new link coming to it is the only thing that’s changed or the only thing you’re aware of that’s changed in the last few days, few weeks, and you see a rankings rise. It moves up a few positions. That’s a pretty good proxy for, “All right, that is a valuable link.” But this is a rare case where you really can control other variables to the extent that you think you can believe in that.

6. I like Spam Score for this as well, because then you can start to see, “Well, are these sketchier links, or are these links that I can likely trust more?”

Last one:

So I think this is one that many, many SEOs do. We have a big list of links. We’ve got 50 links that we’re thinking about, “Should I get these or not and which ones should I go after first and which ones should I not go after?” In this case…

1. DA is really quite a good metric, and that is because it’s relatively predictive of the domain’s pages’ performance in Google, which is a proxy, but a decent proxy for how it might help your site rank better.

It is the case that folks will talk about, “Hey, it tends to be the case that when I go out and I build lots of DA 70, DA 80, DA 90+ links, I often get credit. Why DA and not PA, Rand?” Well, in the case where you’re getting links, it’s very often from new pages on a website, which have not yet been assigned PA or may not have inherited all the link equity from all the internal pages.

Over time, as those pages themselves get more links, their PA will rise as well. But the reason that I generally recommend a DA for link outreach is both because of that PA/DA timing issue and because oftentimes you don’t know which page is going to give you a link from a domain. It could be a new page they haven’t created yet. It could be one that you never thought they would add you to. It might be exactly the page that you were hoping for, but it’s hard to say.

2. I think linking root domains is a very reasonable one for this, and linking root domains is certainly closely correlated, not quite as well correlated, but closely correlated with DA and with rankings.

3. Spam Score, like we’ve talked about.

4. I might use something like SimilarWeb’s traffic estimates, especially if real traffic sent is something that I’m very interested in. If I’m pursuing nofollowed links or affiliate links, or I just care about traffic more than I care about rank-boosting ability, SimilarWeb has got what I think is the best traffic prediction system, and so that would be the metric I look at.

So, hopefully, you now have a better understanding of DA and PA and link counts and when and where to apply them alongside which other metrics. I look forward to your questions. I’ll be happy to jump into the comments and answer. And we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


SearchCap: Google job search, metrics on search update & cheese doodle

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google job search, metrics on search update & cheese doodle appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


SearchCap: Google AdWords Quality Score metrics, redirects & SEO challenges

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google AdWords Quality Score metrics, redirects & SEO challenges appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


How to Tie Marketing Metrics to the Data that Boards, CxOs, and Investors Really Care About – Whiteboard Friday

Posted by randfish

SEOs and executives speak different languages. It’s a simple fact, but it’s one that often acts as a blocker for getting your ideas and investments approved. A simple change in how you communicate your marketing goals, triumphs, and challenges could be what’s standing between you and getting the C-suite buy-in that’s integral to your success. In today’s Whiteboard Friday, Rand helps you translate your marketing jargon into financial metrics and data that the folks in charge will actually care about.

How to Tie Marketing Metrics to the Data that Boards, CXOs, and investors really care about whiteboard

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about tying marketing metrics that marketers use to the things that CEOs, CXOs, whatever the C-level titles that you’ve got are, investors, board members, to the metrics and data that they care about.

This is a problem that I’ve talked about with many marketers over the last few weeks, especially at some conferences and events where folks say, “Hey, we’ve got our metrics dialed in. We know what we’re doing. But when we present it to the Board, or when we present it to our CMO, or our CEO, when we show it to our investors, not only do they not get it, it’s like we’re not speaking the same language, and therefore we’re not able to have a conversation productively about where investment should and shouldn’t be made, and they’re not able to give input into whether they think our idea is a good one, or whether they think there’s a good return on investment there.” This can be tough.

Start with the metrics that marketers care about

So what happens is you’re a marketer, you’re presenting here to your Board of Directors or to your executive team, and you say, “Hey look, we’ve got traffic growing in every category. SEO is up. Social is up. We’ve grown our link profile, which is going to help us with search, all these great things.” Fantastic, but the Board is sort of sitting there like, “Well, I don’t really know how to contribute, and how does that tie in to higher lifetime value of customers, because that’s the thing that I know and the thing that I care about, and I’m not sure this marketer person is really investing in the right kind of ways for the organization.”

That sucks. As a marketer, that totally sucks, because it means that you are not communicating your message, and that means you’re not going to get, you’re unlikely to get buy-in from all these people that you really care about and need their permission and their acceptance in order to make the investments you need.

The thing is, marketers are very focused on the funnel.

We care about metrics that show top-of-funnel growth. We care about which channels send that top-of-funnel traffic. We care about how people are moving through the funnel. We want to see conversions and conversion rate, which is why we work so much on conversion rate optimization, and we care about marketing metrics that predict better retention or greater recidivism, meaning people are buying again or coming back and becoming customers again.

This is our world and we live in it. It does translate okay, decently to the Board level.

Translate marketing metrics to the financial ones that investors care about

But if you think about what folks care about at the highest levels of a company’s strategic imperatives — that could be a Board of Directors, could be investors, could be C-level folks — they’re really focused on things like market size, meaning: How big is our addressable market? Who could we potentially reach? What if we run out of those people — can we keep growing? Are more of them coming into the fold, or are people exiting this market and going somewhere else?

They care about cost of customer acquisition. How much does it cost us to get one new customer?

They care about customer revenue, the revenue that we actually get from those customers that we’re bringing in, whether that’s going up, and overall growth rate. Are we getting more customers over time? Is that rate of growth expanding, meaning acceleration?

They care about customer lifetime value. Customer lifetime value is something that pretty much every metric we calculate as marketers should tie back to, especially when we’re having conversations with these kinds of people. Essentially it is: when a new customer comes in and they make any kind of purchase from us, they spend any type of dollars with us — a product, a service, a subscription, whatever it is — how much do we get over their customer lifetime? Meaning if it’s an e-commerce play, it could be the case that they come and they buy five things from us over the course of two years on average, and that dollar total is $360, and 40% of that is gross margin for us. Essentially, the rest is cost of goods. Okay, that’s customer lifetime value.

Or if you have a subscription business, like Moz is a subscription business, if you subscribe to our tools, we’ll charge you $99 a month or $149 a month. I think on average our customer lifetime value is essentially $120 times the average customer lifetime span, which is somewhere around 11 months all in. So it’s that number multiplied out. So $1,200 or $1,300, somewhere around there, that’s customer lifetime value.

That doesn’t actually count recidivism, people who quit and then come back again. We’re trying to get to that metric, and we need it, because you want to be able to speak to true customer lifetime value. This is sort of the underpinning of all the rest of this.
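To make the arithmetic concrete, here's the same back-of-the-envelope math in a couple of lines (using the rough figures quoted above):

```python
# E-commerce example: five purchases over ~2 years totaling $360,
# of which 40% is gross margin.
ecommerce_cltv = 360 * 0.40        # $144 of gross margin per customer

# Subscription example: ~$120/month average revenue, ~11-month average lifespan.
subscription_cltv = 120 * 11       # ~$1,320, i.e. "somewhere around $1,200-$1,300"

print(ecommerce_cltv, subscription_cltv)
```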

But other things these folks are going to care about, comparison of cohorts. So it’s not the case that all customers are exactly the same. You know this as a marketer, because you know that it costs you a different amount of money to acquire folks through one channel, and they perform differently than folks who are acquired through a different channel. You know that different cohorts of personas, for example, people let’s say who work in an agency versus who work in-house, maybe those are two different kinds of people that you serve in a B2B model. Or you know that folks who are higher income versus lower income spend different amounts at your e-commerce shop, that type of stuff. That comparison is very interesting to these folks as well.

Another comparison that matters is a competitive comparison. How big are we, how big are they? How fast are they growing, how fast are we growing? What’s their customer lifetime value, what’s ours? What’s their retention and recidivism rate, what’s ours? Those things, massively interesting to this group as well.

Then there’s a bunch of other stuff that they care about, like cost of goods and teams and market dynamics, etc. Marketers generally don’t touch that stuff and don’t usually need to worry about it.

But the solution to our problem here is to speak this language.

So let’s go back to our initial story.

Instead of saying, “Here’s traffic growth from all these different channels, and here’s how we’re investing in search, versus social, versus paid ads, versus trade shows,” all this kind of stuff, what we want to say is something like, “Hey, here’s the traffic from SEO, and here’s the traffic from social, and as those have been growing, our cost to acquire a new customer has been falling, because those channels are organic, and that means we don’t pay each time we get a new customer from them. We only pay for the upfront investment in sweat equity, creativity, engineering needs, web engineering needs, and whatever we’re doing. But then it keeps paying dividends, and because of that you can see this CAC falling as our search traffic has risen.”

Now you have the attention of these folks. Now you’ve engaged them in a way that they care about, because they say, “Aha, more organic search, lower cost to acquire a customer,” — which is great because CLTV to CAC ratio, the ratio of lifetime value to acquisition cost, this ratio right here, is something that every investor, every Board of Directors member, every CXO cares deeply about. It’s the underpinnings of the company. That’s what makes a profitable company work and what gives it the ability to grow. When you speak their language, you get this type of response.
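Because that CLTV:CAC ratio carries so much weight with this audience, it's worth being able to produce it per channel. A minimal sketch with made-up numbers:

```python
# Hypothetical per-channel figures; substitute your own.
channels = {
    # channel: (cost to acquire a customer, customer lifetime value)
    "organic search": (40, 900),
    "paid search":    (220, 600),
    "events":         (500, 2400),
}

for name, (cac, cltv) in channels.items():
    ratio = cltv / cac
    print(f"{name:15s} CAC ${cac:>4}  CLTV ${cltv:>5}  LTV:CAC {ratio:.1f}x")
```

Presented this way, "organic search traffic grew" becomes "the channel with the best LTV:CAC ratio grew," which is the language the room speaks.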

So what I’m going to urge you to do as a marketer is to take any metric, any data point, any story you’re trying to tell around return on investment, around a project you have, and turn it into something that makes sense to the group of people that you’re talking to, especially if that’s strategic-level. You want to tie those to tangible improvements or to issues. It could be problems. It may not be just positive things. It could be negative things too, in the areas your CXO or Board or investor cares about.

So let’s imagine — and this is a conversation that many, many folks have — they say to me, “Rand, we want to hire an SEO consultant, or we want to bring an SEO in-house full-time, but we’ve been having trouble getting buy-off from our CEO or our CMO or our Board.”

Well, let’s change the conversation. Instead of, “We need to hire an SEO consultant because SEO is really important, search engines send a lot of traffic, and search traffic is something we’re not competing in well right now,” to, “CAC is high. CAC is too high. Our cost to acquire a new customer right now is too high, and our CLTV is too low for customers that we buy via paid search. So we’re spending a lot of money on paid ads right now, and the customers we get via that have this high customer acquisition cost, because we have to spend money to get them, and the CLTV isn’t as high because customers who come through paid, on average, usually tend to underperform compared to those who come through organic. It’s just a fact of who clicks on ads versus who clicks on organic results. But, if we ranked organically for more of these keywords, and we could get more SEO traffic compared to our PPC traffic, we could stop (a) losing those searches to our competitors, who are outranking us now, and (b) we would bump up the CLTV and we’d be lowering cost of customer acquisition.”

Boom. You have changed the conversation to something that this group of folks really gets, and you’ve made it much more likely that they are going to say yes to your proposal.

Same thing here. Let’s say you say, “Hey, we’re going to do something crazy. We want to actually spend more on trade shows, on events, on speaking, on going places physically in-person. It’s expensive. We don’t reach as many people as we do over web channels or over traditional ad channels, but we’ve been getting good customers via events.”

That’s a real tough sell unless you do this. “Dear Board, here’s a comparison of customers acquired via our five major marketing channels. Here’s SEO, here’s PPC, here’s our Facebook ads, here’s organic social, and this is events. You can see cost to acquire, you can see lifetime value, you can see the ratio, and you can see the numbers of folks that we’ve gotten via each of those channels and the revenue they bring in.”

Awesome. Now, repeat buyers and referrals are so much stronger from events, from this group over here, that even though it costs much more, the math works out that it is the best investment we can make over the next couple of quarters. We want to bring this up by two or threefold, and if we keep seeing continued investment or continued metrics in the same way we have the last few months, we’re going to have the highest positive ROI from that investment versus any of these other channels.

Awesome. Change the conversation, made it something these folks understand. Speak their language, and you get the buy-in you want.

All right, everyone, look forward to your comments and thoughts, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Success Metrics in a World Without Twitter Share Counts

Posted by EricaMcGillivray

On November 20, 2015, Twitter took away share counts on their buttons and from their accessible free metrics. Site owners lost an easy signal of popularity of their posts. Those of us in the web metrics business scrambled to either remove, change, or find alternatives for the data to serve to our customers. And all those share count buttons, on sites across the Web, started looking a tad ugly:

Where's my shares? Yep, this is a screenshot from our own site.

Why did Twitter take away this data?

When asked directly, Twitter’s statement about the removal of tweet counts has consistently been:

“The Tweet counts alone did not accurately reflect the impact on Twitter of conversation about the content. They are often more misleading to customers than helpful.”

On the whole, I agree with Twitter that tweet counts are not a holistic measurement of actual audience engagement. They aren’t the end-all-be-all to showing your brand’s success on the channel or for the content you’re promoting. Instead, they are part of the puzzle — a piece of engagement.

However, if Twitter were really concerned about false success reports, they would’ve long ago taken away follower counts, the ultimate social media vanity metric. Or taken strong measures to block automated accounts and follower buying. Not taking action against shallow metrics, while “protecting” users from share counts, makes their statement ring hollow.

OMG, did Twitter put out an alternative?

About a year ago, Twitter acquired Gnip, an enterprise metrics solution. Gnip mostly looks to combine social data and integrate it into a brand’s customer reputation management software, making for some pretty powerful intelligence about customers and community members. But since it’s focused on an enterprise audience, it’s priced out of the reach of most brands. Plus, the fact that it’s served via API means brands must have the knowledge and development skills/talent in order to really customize the data.

Since the share count shutdown, Gnip released a beta Engagement API and has promised an upcoming Audience API. This API seems to carry all the data you’d need to put those share counts back together. However, an important note:

“Currently only three metrics are available from the totals endpoint: Favorites, Replies, and Retweets. We are working to make Impressions and Engagements available.”

For those of you running to your favorite tools — Gnip’s TOS currently forbids the reselling of their data, making it essentially forbidden to integrate into tools, although some companies like Buzzsumo have paid and gotten permission to use the data in their software. The share count removal caused Apple to quietly kill Topsy.

Feel social media’s dark side, Twitter

Killing share counts hasn’t been without its damage to Twitter as a brand. In his post about the brands that lost and won in Google search, Dr. Pete Meyers notes that Twitter dropped from #6 to #15. That has to hurt their traffic.

Twitter lost as a major brand on Google in 2015

However, Twitter also made a deal with Google in order to show tweets directly in Google searches, which means Twitter’s brand may not be as damaged as it appears.

Star Wars tweet stream in Google results

Perhaps the biggest ding to Twitter is in their actual activity and sharing articles on their platform. Shareaholic reports sharing on Twitter is down 11% since the change was implemented.

Share of voice chart on Twitter from Shareaholic

It’s hard to sell Twitter as a viable place to invest social media time, energy, and money when there’s no easy proof in the pudding. You might have to dig further into your strategy and activities for the answers.

Take back your Twitter metrics!

The bad news: Almost none of these metrics actually replicate or replace the share count metric. Most of them cover only what you tweet, and they don’t capture the other places your content’s getting shared.

The good news: Some of these are probably better metrics and better goals.

Traffic to your site

Traffic may be an oldie, but it’s a goodie. You should probably already be tracking this. And please don’t just use Google Analytics’ default settings, as they’re probably slightly inaccurate.

Google Analytics traffic from Social and Twitter

Some defaults for one of my blogs, since I’m lazy.

Instead, make sure you tag what you’re sharing on social media and you’ll be better able to attribute your hard, hard work to the proper channels. Then you can really figure out if Twitter is the channel for your brand’s content (or if you’re using it right).
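Tagging can be as simple as appending UTM parameters to every link you share, so Google Analytics attributes the visit to the right channel and campaign. A small helper as a sketch; the parameter values are just examples:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_url(url, source, medium="social", campaign="blog-promo"):
    """Append standard UTM parameters so analytics can attribute the click."""
    parts = urlparse(url)
    params = urlencode({
        "utm_source": source,        # e.g. "twitter"
        "utm_medium": medium,        # e.g. "social"
        "utm_campaign": campaign,    # whatever campaign name you report on
    })
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunparse(parts._replace(query=query))

print(tag_url("https://example.com/post", source="twitter"))
# https://example.com/post?utm_source=twitter&utm_medium=social&utm_campaign=blog-promo
```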

Use shortening services and their counters

Alternatively, especially if you’re sharing content that’s not on your own site, you can use the share and click counting from various URL shortening services. But this will only cover the individual links you share.

Bit.ly's analytics around share counts for individual links

Twitter’s own free analytics

No, you won’t find the share count here, either. Twitter’s backend is pretty limited to specific stats on individual tweets and some audience demographics. It can be especially challenging if you have multiple accounts and are working with a team. There is the ability to download reporting for further Excel wizardry.

Tweet impressions and Twitter's other engagement metrics

Twitter’s engagement metric is “the number of engagements (clicks, retweets, replies, follows, and likes) divided by the total number of impressions.” While this calculation seems like a good idea, it’s not my favorite, because it’s hard to scale as you grow your audience. You’re always going to have more lurkers than people engaging with your content, and it’s going to take a lot of massaging of your reporting when you explain how you grew your audience yet those numbers went down, or how a company with 100 followers does way better on Twitter’s engagement metric.

TrueSocialMetrics’ engagement numbers

Now these are engagement metrics that you can scale, grow, and compare. Instead of looking at impressions, TrueSocialMetrics gives conversation, amplification, and applause rates for your social networks. This digs into the type of engagement you’re having. For example, your conversation rate for Twitter is calculated by taking how many comments you got and dividing it by how many times you tweeted.

TrueSocialMetric's engagement numbers
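Both kinds of rate are simple to compute once you have the raw counts. A sketch with illustrative numbers, showing Twitter's engagement rate alongside the TrueSocialMetrics-style per-post rates described above:

```python
# Illustrative monthly totals for one Twitter account.
tweets_sent = 120
impressions = 450_000
engagements = 5_400    # clicks + retweets + replies + follows + likes
replies     = 380
retweets    = 560
likes       = 910

engagement_rate   = engagements / impressions   # Twitter's own metric (~1.2%)
conversation_rate = replies / tweets_sent       # comments per post
amplification     = retweets / tweets_sent      # shares per post
applause          = likes / tweets_sent         # likes per post

print(f"Engagement {engagement_rate:.2%}, conversation {conversation_rate:.2f}, "
      f"amplification {amplification:.2f}, applause {applause:.2f}")
```

Because the per-post rates are normalized by how often you publish rather than by audience size, they hold up better for comparison as the audience grows.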

At Moz, we use a combination of TrueSocialMetrics and traffic to report on the success of our social media efforts to our executives. We may use other metrics internally for testing or for other needs, depending on that specific project.

Twitcount

Shortly after the removal of share counts was announced, Twitcount popped up. It works by installing their share counters on your site, which then start collecting totals; Twitcount’s numbers only begin counting the day you install the code and the button. There are limitations, since they use Twitter’s API, and these limitations may cause data inaccuracies. I haven’t used their solution, but if you have, let us know in the comments how it went!

Buffer’s reach and RT metrics

Again, this only counts for your individual tweet’s metrics, and Buffer only grabs metrics on tweets sent out via their platform. Buffer’s reach metric is similar to what many traditional advertisers and people in public relations are used to, and it is similar to Twitter’s general impressions metric. Reach looks at how far your tweet has possibly gone due to size of the retweeter’s audience.

Like most analytics tools, you can export the metrics and play with them in Excel. Or you can pay for Buffer’s business analytics, which runs $50–$250/month.
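Buffer doesn't publish the exact formula, but a common way to approximate reach is to add the retweeters' audiences to your own. A rough sketch of that idea, not Buffer's actual calculation:

```python
# Rough potential-reach estimate for one tweet: your followers plus the
# follower counts of the accounts that retweeted it. Numbers are made up.
own_followers = 38_000
retweeter_followers = [1_200, 54_000, 830]

potential_reach = own_followers + sum(retweeter_followers)
print(potential_reach)   # 94,030 accounts that could have seen the tweet
```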

Trending topics and hashtag reports

There are many tools out there where you can track specific trends and hashtags around your brand. At MozCon, we know people are tweeting using #MozCon. But not every brand has a special hashtag, or even knows the hot topics around their brand.

SproutSocial’s trends report is unique in that it pulls both the topics and hashtags most associated with your brand and the engagement around those.

Obviously, last July, #MozCon was hot. But you can also see from what else is happening that we have positive community sentiment around our brand.

Buzzsumo

Our friends at Buzzsumo can be used as a Topsy topic replacement and share counter. They did a great write-up on how to use their tool for keyword research. They are providing share counts from Gnip’s data.

Share counts from BuzzSumo

Though when I ran some queries on Moz’s blog posts, there seemed to be a big gap in their share counts. While we’d expect to see Moz’s counts down a bit on the weekends, there should still be something there:

BuzzSumo on Moz's share counts over the week

I’m unsure if this is Buzzsumo’s or Gnip’s data issue. It’s also possible that there are limits on the data, especially since Moz has a large number of followers and gets a large number of shares on our posts.

Use Fresh Web Explorer’s Mention Authority instead

While Fresh Web Explorer‘s index only covers recent data — the tool’s main function being to find recent mentions of keywords around the web a la Google Alerts — it can be helpful if you’re running a campaign and relying on instant data no older than a month. Mention Authority does include social data. (Sorry, the full formula involved with creating the score is one of Moz’s few trade secrets.) What’s nice about this score is that it’s very analogous across different disciplines, especially publicity campaigns, and can serve as a holistic alternative.

Fresh Web Explorer's mention authority

Embedded tweets for social proof

Stealing this one from our friends at Buffer, but if you’re looking to get social proof back for people visiting your post, embedded tweets can work well. This allows others to see that your tweet about the post was successful, perhaps choosing to retweet and share with their audience.

Obviously, this won’t capture your goals to hand to a boss. But this will display some success and provide an easy share option for people to retweet your brand.

Predictions for the future of Twitter’s share count removal

Twitter will see this as a wash for engagement

With the inclusion of tweets directly in Google search results, it balances out the need for direct social proof. That said, with the recent timeline discussions and other changes, people are watching Twitter for any changes, with many predicting the death of Twitter. (Oh, the irony of trending hashtags when #RIPTwitter is popular.)

Twitter may not relent fully, but it may cheapen the product through Gnip. Alternatively, it may release some kind of “sample” share count metric instead. Serving up share count data on all links certainly costs a lot of money from a technical side. I’m sure this removal decision was reached with a “here’s how much money we’ll save” attached to it.

Questions about Twitter’s direction as a business

For a while, Twitter focused itself on being a breaking news business. At SMX East in 2013, Twitter’s Richard Alfonsi spoke about Twitter being in competition with media and journalism and being a second screen while consuming other media.

Lack of share counts, however, makes it hard for companies to prove direct value. (Though I’m sure there are many advertisers wanting only lead generation and direct sales from the platform.) Small businesses, who can’t easily prove other value, aren’t going to see the platform as an easy investment.

Not to mention that issues around harassment have caused problems for even celebrities with large followings, like Sue Perkins (UK comedian), Joss Whedon (director and producer), Zelda Williams (daughter of Robin Williams), and Anne Wheaton (wife of Wil Wheaton). This garners extremely bad publicity for the company, especially when most were active users of Twitter.

No doubt Twitter shareholders are on edge after stock prices went down and the platform added a net of zero new users in Q4 of 2015. Is the removal of share counts one item on the long list of reasons why Twitter didn’t grow in Q4? Twitter has made some big revenue and shipping promises to shareholders in response.

Someone will build a tool to scrape Twitter and sell share counts.

When Google rolled out (not provided), every SEO software company clamored to make tools to get around it. Since Gnip data is so expensive, it’s pretty impractical for most companies. The only way to actually build this tool would be to scrape all of Twitter, which has many perils. Companies like Hootsuite, Buffer, and SproutSocial are the best set up to do it more easily, but they may not want to anger Twitter.

What are your predictions for Twitter’s future without share counts? Did you use the share counts for your brand, and how did you use them? What will you be using instead?

Header image by MKH Marketing.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


A Practical Guide to Content and Its Metrics

Posted by gfiorelli1

A small disclaimer:

Before you start reading, I want to say that I am not an analytics expert per se, but a strategic SEO and digital marketing consultant. On the other hand, in my daily work of auditing and designing holistic digital marketing strategies, I deal a lot with Analytics in order to understand my clients’ gaps and opportunities.

For that reason, what you are going to read isn’t an “ultimate guide,” but instead my personal and practical guide to content and its metrics, filled with links to useful resources that helped me solve the big content metrics mystery. I look forward to seeing your ideas in the comments.

The difference between content and formats

One of the hardest things to measure is content effectiveness, mostly because there is great confusion about its changing nature and purpose. One common problem is thinking of “content” and “formats” as synonyms, which leads to frustration and, when the wrong scaling processes are in place, may also lead to Google disasters.

What is the difference between content and formats?

  1. Content is any message a brand/person delivers to an audience;
  2. Formats are the specific ways a brand/person can deliver that message (e.g. data visualizations, written content, images/photos, video, etc.).

Just to be clear: We engage with and eventually share the ideas and emotions that content represents, not its formats. Formats are just the clothing we choose for our content, and, to keep the fashion metaphor going, some ways of dressing are better than others at making a message explicit.

Strategy, as with everything in marketing, plays a very important role when it comes to content.

It is during the strategic phase that we try to understand (through analysis of both our own site and our competitors' sites) whether our content is responding to our audience's interests and needs, and also what metrics we must choose in order to assess its success or failure.

Paraphrasing an old Pirelli commercial tagline: Content without strategy is nothing.

Strategy: Starting with why/how/what

When we are building a content strategy, we should ask ourselves (and our clients and CMOs) these classic questions:

  1. Why does the brand exist?
  2. How does the brand solidify its “why?”
  3. What specific tactics will the brand use for successfully developing the “how?”

Only when we have those answers can we understand the goals of our content, what metrics to consider, and how to calculate them.

Let's use an example every Mozzer can understand.

Why does Moz exist?

The answer is in its tagline:

  • Inbound marketing is complicated. Moz’s software makes it easy.

How does Moz solidify its “why?”

  • Moz produces a series of tools which help marketers audit, monitor, and make informed decisions about their web marketing projects.
  • Moreover, Moz creates and publishes content which aims to educate marketers so they can do their jobs better.

Notice that we can already pick out a couple of generic goals here:

  1. Leads > subscriptions;
  2. Awareness (that may ultimately drive leads).

What specific tactics does Moz use for successfully achieving its main goals?

Considering the nature of the two main goals we clarified above, we can find content tactics covering all the areas of the so-called content matrix.

Some classic content matrix models are the ones developed by Distilled (in the image above) and Smart Insights and First 10, but it is a good idea to develop your own based on the insights you may have about your specific industry niche.

The things Moz does are many, so I am presenting an incomplete list here.

On the "Purchase" side, with conversion and persuasion as end goals:

  • Home page and “Products” section of Moz.com (we can define them as “organic landing pages”);
  • Content about tools
    • Free tools;
    • Pro tools (which are substantially free for a 30-day trial period).
  • CPC landing pages;
  • Price page with testimonials;
  • “About” section;
  • Events sponsorship.

On the "Awareness" side, with education and entertainment (or pure engagement) as the purpose:

  • The blogs (both the main blog and UGC);
  • The “Learn and Connect” section, which includes the Q&A;
  • Guides;
  • Games (The SEO Expert Quiz can surely be considered a game);
  • Webinars;
  • Social media publishing;
  • Email marketing
  • Live events (MozCon and LocalUp, but also the events where Moz Staff is present with one or more speakers).

Once we have the content inventory of our website, we can relatively easily identify the specific goals for the different pieces of content we own and will create.

I usually don't consider content like tools, sponsorships, or live events, because even though content surely plays a role in achieving their goals, other factors such as user satisfaction and serendipity are also involved, and these are not directly related to the content itself or cannot be easily measured.

Measuring landing/conversion pages’ content

This may be the easiest kind of content to measure, because it is deeply related to the more general measures of leads and conversions, and it is also strongly tied to everything CRO-related.

We can measure the effectiveness of our landing/conversion pages’ content easily with Google Analytics, especially if we remember to implement content grouping (here’s the official Google guide) and follow the suggestions Jeff Sauer offered in this post on Moz.

We can find another great resource and practical suggestions in this older (but still valid) post by Justin Cutroni: How to Use Google Analytics Content Grouping: 4 Business Examples. The example Justin offers about Patagonia.com is particularly interesting, because it is explicitly about product pages.
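If your site still runs the analytics.js tracker, a minimal sketch of setting a content group before the pageview is sent might look like the following. The group slot ("contentGroup1") and the URL rules are assumptions you would adapt to your own setup and GA admin configuration.

```typescript
// Minimal sketch (analytics.js assumed): assign a content group before sending the pageview.
// The slot index and the path-based rules below are hypothetical; adapt them to your site.
declare const ga: (...args: unknown[]) => void;

function pageTypeFromPath(path: string): string {
  if (path.startsWith('/products/')) return 'Product landing pages';
  if (path.startsWith('/blog/')) return 'Blog posts';
  return 'Other';
}

ga('set', 'contentGroup1', pageTypeFromPath(window.location.pathname));
ga('send', 'pageview');
```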

On the other hand, we should always remember that the default conversion rate metric should not be the only metric that informs decision-making; the same is true when it comes to content performance and optimization. In fact, as Dan Barker once said, the better we segment our analysis, the better we can understand the performance of our money pages, give more meaning to the conversion rate value and, therefore, correct and improve our sales and leads.

Good examples of segmentation are:

  • Conversions per returning visitor vs new visitor;
  • Conversions per type of visitor based on demographic data;
  • Conversions per channel/device.

These segmented metrics are fundamental for developing A/B tests with our content.
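As a minimal sketch of the idea, here is how you might compute conversion rate per segment from exported rows. The row shape is hypothetical; in practice you would export this from Google Analytics or pull it via the API.

```typescript
// Computing conversion rate per segment from exported rows (hypothetical row shape).
interface SegmentRow {
  segment: string;      // e.g. "New Visitor" or "Returning Visitor"
  sessions: number;
  conversions: number;
}

function conversionRateBySegment(rows: SegmentRow[]): Map<string, number> {
  const rates = new Map<string, number>();
  for (const row of rows) {
    rates.set(row.segment, row.sessions > 0 ? row.conversions / row.sessions : 0);
  }
  return rates;
}

// Example: the same overall conversion rate can hide very different segment behaviour.
const rates = conversionRateBySegment([
  { segment: 'New Visitor', sessions: 8000, conversions: 80 },        // 1%
  { segment: 'Returning Visitor', sessions: 2000, conversions: 120 }, // 6%
]);
console.log(rates);
```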

Here are some examples of A/B tests for landing/conversion pages’ content:

  • Title tags and meta description A/B tests (yes, title tags and meta descriptions are content too, and they have a fundamental role in CTR and “first impressions”);
  • Prominent presence of testimonials vs. a more discreet one;
  • Tone of voice used in the product description (copywriting experiment);
  • Product slideshow vs. video.

Here are a few additional sources about CRO and content, surely better than I am at inspiring you in this specific field:

Measuring on-site “editorial” content

Here is where things start getting a little more complicated.

Blog posts, guides, white papers, and similar content usually do not have a conversion/lead nature, at least not directly. Usually their goals are more intangible: creating awareness, likability, trust, and authority.

In other cases, this kind of content also serves the objective of creating and maintaining an active community, as it does in the case of Moz. I tend to consider this a subset, though, because in many niches creating a community is not a top priority. Or, even if it is, it does not offer a reliable flow of "signals" with which to appropriately measure the effectiveness of our content, simply for lack of statistical evidence.

A good starting place is measuring the so-called consumption metrics.

Again, the ideal is to implement content grouping in Google Analytics (see the video above), because that way we can segment every different kind of editorial content.

For instance, if we have a blog, not only can we create a group for it, but we can also create:

  • As many groups as there are categories and tags on our blog;
  • Groups by the average length of the posts;
  • Groups by the prominent format used (video posts like Moz's Whiteboard Fridays, infographics, long-form, etc.).

These are just three examples; think about your own measuring needs and the nature of your content, and you will come up with other ideas for content groupings.
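As a minimal sketch (again assuming analytics.js, plus a hypothetical postMeta object injected by your CMS template), you could populate several content group slots at once:

```typescript
// Minimal sketch: populate multiple content group slots for a blog post, e.g.
// contentGroup1 = category, contentGroup2 = length bucket, contentGroup3 = prominent format.
// The slot numbers and the postMeta object are hypothetical; adapt them to your own setup.
declare const ga: (...args: unknown[]) => void;

interface PostMeta {
  category: string;
  wordCount: number;
  format: 'video' | 'infographic' | 'long-form' | 'standard';
}
declare const postMeta: PostMeta; // e.g. injected by the CMS template

const lengthBucket =
  postMeta.wordCount > 2000 ? 'Long (2000+ words)' :
  postMeta.wordCount > 800  ? 'Medium (800-2000 words)' :
  'Short (<800 words)';

ga('set', 'contentGroup1', postMeta.category);
ga('set', 'contentGroup2', lengthBucket);
ga('set', 'contentGroup3', postMeta.format);
ga('send', 'pageview');
```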

The following are basic metrics that you’ll need to consider when measuring your editorial content:

  1. Pageviews / Unique Pageviews
  2. Pages / Session
  3. Time on Page

The ideal is to analyze these metrics with at least the following secondary dimensions (a query sketch follows this list):

  • Medium / Sources, so you can understand what channel contributed the most to your content visibility. Remember, though, that dark search/social is a reality that can screw up your metrics (check out Marshall Simmonds’ deck from MozCon 2015);
  • User Type, so you can see what percentage of pageviews comes from returning visitors (a good indicator of the level of trust and authority our content has) and from new ones (which indicates our content's ability to attract potentially long-lasting new readers);
  • Mobile, which is useful for understanding the environments in which our users mostly interact with our content, and how we have to optimize the experience depending on the device used, hence helping make our content more memorable.
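If you prefer pulling these numbers programmatically rather than through the interface, a sketch against the Core Reporting API (v3) might look like this. The view ID and OAuth token are placeholders, and the exact dimensions you query will depend on your own configuration.

```typescript
// Minimal sketch: consumption metrics split by a secondary dimension, via the Core Reporting API (v3).
// VIEW_ID and ACCESS_TOKEN are placeholders; you would obtain the token via OAuth.
const VIEW_ID = 'ga:XXXXXXX';      // hypothetical view ID
const ACCESS_TOKEN = 'ya29.xxxx';  // hypothetical OAuth access token

const params = new URLSearchParams({
  ids: VIEW_ID,
  'start-date': '30daysAgo',
  'end-date': 'today',
  metrics: 'ga:pageviews,ga:uniquePageviews,ga:avgTimeOnPage',
  dimensions: 'ga:pagePath,ga:userType', // swap in your content group dimension here if configured
  access_token: ACCESS_TOKEN,
});

fetch(`https://www.googleapis.com/analytics/v3/data/ga?${params.toString()}`)
  .then((res) => res.json())
  .then((report) => console.log(report.rows));
```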

You can also have fun analyzing your content's performance by segmenting it by demographic indicators. For instance, it may be interesting to see which affinity categories your readers fall into for each of the blog categories you have replicated in your content grouping. This can help us better understand the personas composing our audience, and so refine the targeting of our content.

As you can see, I did not mention bounce rate as a metric to consider, and there is a reason for that: Bounce rate is tricky, and its misinterpretation can lead to bad decisions.

Instead of bounce rate, when it comes to editorial content (and blog posts in particular), I prefer to consider scroll completion, a metric we can retrieve using Tag Manager (see this post by Optimize Smart).
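A minimal sketch of the browser side of that setup, pushing scroll milestones into the Tag Manager dataLayer, might look like this. The event and variable names are my own; you would wire a GTM trigger and a GA event tag to them.

```typescript
// Minimal sketch: push scroll-completion milestones into the GTM dataLayer so a
// Tag Manager trigger can turn them into GA events. Event/field names are hypothetical.
const dataLayer = ((window as any).dataLayer = (window as any).dataLayer || []);

const milestones = [25, 50, 75, 100];
const fired = new Set<number>();

window.addEventListener('scroll', () => {
  const scrolled = window.scrollY + window.innerHeight;
  const percent = Math.round((scrolled / document.documentElement.scrollHeight) * 100);
  for (const m of milestones) {
    if (percent >= m && !fired.has(m)) {
      fired.add(m);
      dataLayer.push({ event: 'scrollMilestone', scrollPercent: m });
    }
  }
});
```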

Finally, especially if you also grouped content by the prominent format used (video, embedded SlideShare, etc.), you will need to track users' interactions through Tag Manager. However, if you really want to dig into how that content is consumed, you will need to export your Analytics data and then combine it with data from external sources, like YouTube Analytics, SlideShare Analytics, etc.

The more we share, the more we have. This is also true in Marketing.

Consumption metrics, though, are not enough to understand the performance of your content, especially if you rely heavily on a community and one of your content objectives is creating and growing a community around your brand.

I am talking about the so-called sharing metrics:

  1. Social shares (Likes, Tweets, Pins, etc.);
  2. Inbound links;
  3. Unlinked mentions;
  4. Email forwarding.

All of this can be tracked and measured (e.g., social shares, mentions on websites, or mentions on social networks).

I usually add comments to these metrics, because of the social nature comments have. Again, thanks to Tag Manager, you can easily fire a tag when someone clicks on the "add comment" button.
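A minimal sketch of that idea, assuming a hypothetical "#add-comment" button and event name:

```typescript
// Minimal sketch: push a dataLayer event when a visitor clicks the "add comment" button,
// so a Tag Manager trigger can record it as an engagement/sharing metric.
// The "#add-comment" selector and event name are hypothetical.
const dataLayer = ((window as any).dataLayer = (window as any).dataLayer || []);

document.querySelector('#add-comment')?.addEventListener('click', () => {
  dataLayer.push({ event: 'commentIntent', contentUrl: window.location.pathname });
});
```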

A final metric we should always consider is Page Value. As Google itself explains in its Help page:

Page value is a measure of influence. It’s a single number that can help you better understand which pages on your site drive conversions and revenue. Pages with a high Page Value are more influential than pages with a low Page Value [Page Value is also shown for groups of content].

The combined analysis of consumption and sharing metrics can offer us a very granular understanding of how our content is performing, and therefore of how to optimize our strategy and/or how to start conducting A/B tests.

On the other hand, such a granular view is not ideal for reporting, especially if we have to report to a board of directors rather than to our in-house or in-agency counterpart.

In that case, being able to roll up all these metrics (or the most relevant ones) into just one metric is very useful.

How to do it? My suggestion is to follow (and adapt to your own needs) the methodology used by the Moz editorial team and described in this post by Trevor Klein.

What about the ROI of editorial content? Don’t give up; I’ll talk about it below.

Measuring the ROI of content marketing and content-based link building campaigns

Theoretically measuring the ROI of something is relatively easy:

(Return – Investment) / Investment = ROI.

However, the difficulty is not in the formula itself, but in the values that go into it.
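The formula itself is trivial; a quick sketch with hypothetical numbers:

```typescript
// (Return - Investment) / Investment, with hypothetical numbers.
function roi(returnValue: number, investment: number): number {
  return (returnValue - investment) / investment;
}

console.log(roi(12000, 5000)); // 1.4, i.e. 140% ROI on a $5,000 investment returning $12,000
```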

How to calculate the investment value?

Usually we have a given budget assigned for our content marketing and/or content-based campaigns. If that is the case, perfect! We have a figure to use for the investment value.

A completely different situation arises when we must present a budget proposal and/or assign part of the budget to each campaign in a balanced and considered way.

In this post by Caroline Gilbert for Siege Media you can find great suggestions about how to calculate a content marketing budget, but I would like to present mine, too, which is based on competitive analysis.

Here’s what I do:

  1. Identify the distinct competitors that created content related to what we will target with our campaign. I rely on both SERP analysis (i.e., using the Keyword Difficulty Tool by Moz) and information we can retrieve with a "keyword search" on BuzzSumo.
  2. Retrieve all meaningful content metrics:
    • Links (another reason why I use the Keyword Difficulty Tool);
    • Social shares per kind of social network (these are available from BuzzSumo). Remember that some of these social shares may be driven by sponsored content (check this Social Media Explorer post about how to do Facebook competitive analysis);
    • Estimated traffic to the content’s URL (data retrieved via SimilarWeb).
  3. Assign a monetary value to the metrics retrieved.
  4. Calculate the competitors’ potential investment value.
  5. Calculate the median investment value of all the competitors.
  6. Consider the delta between what the client/company previously invested in content marketing (or link building, if it is moving from classic old link building to modern link earning) and the median investment value of the competitors.
  7. Calculate and propose the content marketing / content-based campaign’s value in a range which goes from “minimum viable budget” to “ideal.”

Reality teaches us that the proposed investment is not the same as the actual investment, but at least we have some data for proposing it, rather than just a gut feeling. However, we must be prepared to work with budgets that are closer to the "minimum viable" side than to the ideal one.
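As a minimal sketch of steps 3 to 5 above (the monetary unit values below are hypothetical placeholders, not market rates), the median competitor investment value could be computed like this:

```typescript
// Assign hypothetical monetary values to the metrics retrieved for each competitor's content,
// then take the median as the benchmark investment value.
interface CompetitorContent { links: number; socialShares: number; estimatedVisits: number; }

const UNIT_VALUE = { perLink: 150, perShare: 0.5, perVisit: 0.8 }; // hypothetical $ values

function investmentValue(c: CompetitorContent): number {
  return c.links * UNIT_VALUE.perLink +
         c.socialShares * UNIT_VALUE.perShare +
         c.estimatedVisits * UNIT_VALUE.perVisit;
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const competitors: CompetitorContent[] = [
  { links: 40, socialShares: 3200, estimatedVisits: 15000 },
  { links: 12, socialShares: 900, estimatedVisits: 4000 },
  { links: 85, socialShares: 10500, estimatedVisits: 52000 },
];

const medianInvestment = median(competitors.map(investmentValue));
console.log(`Median competitor investment value: $${medianInvestment.toFixed(0)}`);
```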

How to calculate revenue?

You can find a good number of ROI calculators out there, but I particularly like Fractl's, because it is very easy to understand and use.

Their general philosophy is to calculate ROI in terms of how much traffic, how many links, and how many social shares the content itself has generated organically, and hence how much it saved in paid promotion.

If you look at it, it resembles the methodology I described above (points 1 to 7).
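As a rough sketch of that philosophy (the paid-equivalent unit values below are placeholders, not Fractl's actual figures), you might estimate the "return" side like this and plug it straight into the ROI formula:

```typescript
// Translate organically earned traffic, links, and shares into the equivalent paid-promotion
// spend, and treat that as the "return" in (Return - Investment) / Investment.
// The equivalences (CPC, cost per earned link/share) are hypothetical placeholders.
interface EarnedResults { organicVisits: number; links: number; socialShares: number; }

const PAID_EQUIVALENT = { cpc: 1.2, perLink: 200, perShare: 0.4 }; // hypothetical $ values

function estimatedReturn(r: EarnedResults): number {
  return r.organicVisits * PAID_EQUIVALENT.cpc +
         r.links * PAID_EQUIVALENT.perLink +
         r.socialShares * PAID_EQUIVALENT.perShare;
}

const investment = 8000; // what the campaign actually cost (hypothetical)
const ret = estimatedReturn({ organicVisits: 12000, links: 35, socialShares: 6000 });
console.log(((ret - investment) / investment).toFixed(2)); // ~1.98, i.e. nearly a 2x return
```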

However, when it comes to social shares, you should avoid the classic mistake of considering only the social shares directly generated by the page on which your content was published.

For instance, let's take the Idioms of the World campaign Verve Search did for HotelClub.com, which won the European Search Awards.

If we look only at its own social share metrics, we will have just a partial picture:

Instead, if we look at the social share metrics of the pages that linked to it and talked about it, we get the complete picture.

We can use (again) BuzzSumo to retrieve this data (also using its Content Analysis feature), or use URL Profiler.

As you can imagine, you can calculate the ROI of your editorial content using the same methodology.

Obviously the Fractl ROI calculator is far from perfect, as it does not consider the offline repercussions a content campaign may have (the Idioms of the World campaign earned an outstanding placement in The Guardian's print edition, for instance), but it is a solid base for crafting your own ROI calculation.

Conclusions

So, we have arrived at the end of this personal guide about content and its metrics.

Remember these important things:

  • Don’t be data driven, be data informed;
  • Think strategically, act tactically;
  • Content’s metrics vary depending on the goals of content itself.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Misuses of 4 Google Analytics Metrics Debunked

Posted by Tom.Capper

In this post I'll pull apart four of the most commonly used metrics in Google Analytics, explaining how they are collected and why they are so easily misinterpreted.

Average Time on Page

Average time on page should be a really useful metric, particularly if you’re interested in engagement with content that’s all on a single page. Unfortunately, this is actually its worst use case. To understand why, you need to understand how time on page is calculated in Google Analytics:

Time on Page: Total across all pageviews of time from pageview to last engagement hit on that page (where an engagement hit is any of: next pageview, interactive event, e-commerce transaction, e-commerce item hit, or social plugin). (Source)

If there is no subsequent engagement hit, or if there is a gap between the last engagement hit on a site and leaving the site, the assumption is that no further time was spent on the site. Below are some scenarios with an intuitive time on page of 20 seconds, and their Google Analytics time on page:

Scenario 1: Pageview at 0s; social plugin at 10s; click through to next page at 20s.
  Intuitive time on page: 20s. GA time on page: 20s.

Scenario 2: Pageview at 0s; social plugin at 10s; leave site at 20s.
  Intuitive time on page: 20s. GA time on page: 10s.

Scenario 3: Pageview at 0s; leave site at 20s.
  Intuitive time on page: 20s. GA time on page: 0s.

Google doesn’t want exits to influence the average time on page, because of scenarios like the third example above, where they have a time on page of 0 seconds (source). To avoid this, they use the following formula (remember that Time on Page is a total):

Average Time on Page: (Time on Page) / (Pageviews – Exits)

However, as the second example above shows, this assumption doesn't always hold. The second example feeds into the top half of the average time on page fraction, but not the bottom half:

Example 2 Average Time on Page: (20s+10s+0s) / (3-2) = 30s
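A quick sketch reproducing that arithmetic for the three scenarios (the record fields are my own naming, not a GA export format):

```typescript
// Each pageview records its GA "time on page" (time to the last engagement hit, 0 if none)
// and whether it ended in an exit; then apply Time on Page / (Pageviews - Exits).
interface PageviewRecord { gaTimeOnPage: number; exit: boolean; }

const scenarios: PageviewRecord[] = [
  { gaTimeOnPage: 20, exit: false }, // click-through after 20s
  { gaTimeOnPage: 10, exit: true },  // social plugin at 10s, then exit
  { gaTimeOnPage: 0,  exit: true },  // exit with no engagement hit
];

const totalTimeOnPage = scenarios.reduce((sum, s) => sum + s.gaTimeOnPage, 0); // 30
const pageviews = scenarios.length;                                            // 3
const exits = scenarios.filter((s) => s.exit).length;                          // 2

console.log(totalTimeOnPage / (pageviews - exits)); // 30s, versus an intuitive average of 20s
```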

There are two issues here:

  1. Overestimation
    Excluding exits from the second half of the average time on page equation doesn’t have the desired effect when their time on page wasn’t 0 seconds—note that 30s is longer than any of the individual visits. This is why average time on page can often be longer than average visit duration. Nonetheless, 30 seconds doesn’t seem too far out in the above scenario (the intuitive average is 20s), but in the real world many pages have much higher exit rates than the 67% in this example, and/or much less engagement with events on page.
  2. Ignored visits
    Consider visitors who exit without an engagement hit: whether they stayed for 2 seconds, 10 minutes, or anything in between, their time doesn't influence average time on page in the slightest. On many sites, a 10-minute view of a single page without interaction (e.g. a blog post) would be considered a success, but it wouldn't influence this metric.

Solution: Unfortunately, there isn’t an easy solution to this issue. If you want to use average time on page, you just need to keep in mind how it’s calculated. You could also consider setting up more engagement events on page (like a scroll event without the “nonInteraction” parameter)—this solves issue #2 above, but potentially worsens issue #1.

Site Speed

If you’ve used the Site Speed reports in Google Analytics in the past, you’ve probably noticed that the numbers can sometimes be pretty difficult to believe. This is because the way that Site Speed is tracked is extremely vulnerable to outliers—it starts with a 1% sample of your users and then takes a simple average for each metric. This means that a few extreme values (for example, the occasional user with a malware-infested computer or a questionable wifi connection) can create a very large swing in your data.

The use of an average as a metric is not in itself bad, but in an area so prone to outliers and working with such a small sample, it can lead to questionable results.

Fortunately, you can increase the sampling rate right up to 100% (or the cap of 10,000 hits per day). Depending on the size of your site, this may still only be useful for top-level data. For example, if your site gets 1,000,000 hits per day and you’re interested in the performance of a new page that’s receiving 100 hits per day, Google Analytics will throttle your sampling back to the 10,000 hits per day cap—1%. As such, you’ll only be looking at a sample of 1 hit per day for that page.

Solution: Turn up the sampling rate. If you receive more than 10,000 hits per day, keep the sampling rate in mind when digging into less visited pages. You could also consider external tools and testing, such as Pingdom or WebPagetest.
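With analytics.js, the sampling rate is a field set when the tracker is created; a minimal sketch (the property ID is a placeholder):

```typescript
// Minimal sketch (analytics.js assumed): raise the Site Speed sampling rate at tracker creation.
// 100 means "sample every user," still subject to the daily hit cap.
declare const ga: (...args: unknown[]) => void;

ga('create', 'UA-XXXXX-Y', 'auto', { siteSpeedSampleRate: 100 });
ga('send', 'pageview');
```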

Conversion Rate (by channel)

Obviously, conversion rate is not in itself a bad metric, but it can be rather misleading in certain reports if you don’t realise that, by default, conversions are attributed using a last non-direct click attribution model.

From Google Analytics Help:

“…if a person clicks over to your site from google.com, then returns as “direct” traffic to convert, Google Analytics will report 1 conversion for “google.com / organic” in All Traffic.”

This means that when you’re looking at conversion numbers in your acquisition reports, it’s quite possible that every single number is different to what you’d expect under last click—every channel other than direct has a total that includes some conversions that occurred during direct sessions, and direct itself has conversion numbers that don’t include some conversions that occurred during direct sessions.

Solution: This is just something to be aware of. If you do want to know your last-click numbers, there’s always the Multi-Channel Funnels and Attribution reports to help you out.

Exit Rate

Unlike some of the other metrics I’ve discussed here, the calculation behind exit rate is very intuitive—”for all pageviews to the page, Exit Rate is the percentage that were the last in the session.” The problem with exit rate is that it’s so often used as a negative metric: “Which pages had the highest exit rate? They’re the problem with our site!” Sometimes this might be true: Perhaps, for example, if those pages are in the middle of a checkout funnel.

Often, however, a user will exit a site when they’ve found what they want. This doesn’t just mean that a high exit rate is ok on informational pages like blog posts or about pages—it could also be true of product pages and other pages with a highly conversion-focused intent. Even on ecommerce sites, not every visitor has the intention of converting. They might be researching towards a later online purchase, or even planning to visit your physical store. This is particularly true if your site ranks well for long tail queries or is referenced elsewhere. In this case, an exit could be a sign that they found the information they wanted and are ready to purchase once they have the money, the need, the right device at hand or next time they’re passing by your shop.

Solution: When judging a page by its exit rate, think about the various possible user intents. It could be useful to take a segment of visitors who exited on a certain page (in the Advanced tab of the new segment menu), and investigate their journey in User Flow reports, or their landing page and acquisition data.

Discussion

If you know of any other similarly misunderstood metrics, you have any questions or you have something to add to my analysis, tweet me at @THCapper or leave a comment below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


