Tag Archive | "Metrics"

How to Set Up Metrics to Optimize Your Digital PR Team’s Press Coverage

Posted by Ashley.Carlisle

Over the past six years, our team at Fractl has studied the art of mastering content marketing press coverage. Before moving into Agency Operations, I onboarded and trained over a dozen new associates for our digital PR team within a year as the Media Relations Manager. Scaling a team of that size in such a short period of time required hands-on training and clear communication of goals and expectations within the role — but what metrics are indicative of success in digital PR?

As a data-driven content marketing agency, we turned to the numbers for something a little different than our usual data-heavy campaigns — we used our own historical data to analyze and optimize our digital PR team’s outreach.

This post aims to provide better insight into defining measurable variables as key performance indicators, or KPIs, for digital PR teams, and into understanding the implications and relationships of those KPIs. We’ll also go into the rationale for establishing baselines for these KPIs, which indicate the quality, efficiency, and efficacy of a team’s outreach efforts.

To guide you in defining success by analyzing your own team’s metrics (digital PR or otherwise), we’ll share the framework of our research design. That framework helped us establish a threshold for the single variable that best measures our effort and correlates most significantly with the KPIs of a successful digital PR team.

Determining the key performance indicators for digital PR outreach

The influx of available data for marketers and PR professionals to measure the impact of their work allows us to move away from vague metrics like “reach” and the even vaguer goal of “more publicity.” Instead, we are able to focus on the metrics most indicative of what we’re actually trying to measure: the effect of digital PR efforts.

We all have our theories and educated guesses about which metrics are most important and how each are related, but without researching further, theories remain theories (or expert opinions, at best). Operational research allows businesses to use the scientific method as a way to provide managers and their teams with a quantitative basis for decision making. Operationalization is the process of strictly defining variables to turn nebulous concepts (in this case, the effort and success of your digital PR team) into variables that can be measured, empirically and quantitatively.

One indicator best measures the effort put into a campaign’s outreach and is a precursor to all of the indicators below: the volume of pitch emails sent for each campaign.

Because not all pitches are created equal, the indicators below gauge which factors best define the success of outreach, such as the quality of outreach correspondence, the efficiency of time to secure press, the efficacy of the campaign, and the media mentions secured. Each multi-faceted metric can be described by a variety of measurements, and all are measured against the independent variable: the volume of pitch emails sent for each campaign.

Some indicators may be better measured with more than a single metric, so for the purposes of this post, here are three metrics for each of the three KPIs, which together offer a more holistic picture of your team’s performance (a brief sketch of how these could be computed from campaign data follows the lists below):

Pitch quality and efficacy

  • Placement Rate: The percentage of placements (i.e., media mentions) secured per the number of total pitches sent.
  • Interest Rate: The percentage of interested publisher replies to pitches per the number of total pitches sent.
  • Decline Rate: The percentage of declining publisher replies to pitches per the number of total pitches sent.

Efficiency and capacity

  • Total days of outreach: The number of business days between the first and last pitch sent for a campaign, which is the sum of the two metrics below.
  • Days to first placement: The number of business days between the first pitch sent and first placement to be published for a campaign.
  • Days to syndication: The number of business days between the first placement to be published and the last pitch to be sent for a campaign.

Placement quality and efficacy

  • Total Links: The total number of backlinks from external linking domains of any attribution type (e.g. DoFollow, NoFollow) for a campaign’s landing page.
  • Total DoFollow Links: The total number of DoFollow backlinks from external linking domains for a campaign’s landing page.
  • Total Domain Authority of Links: The total domain authority of all backlinks from external linking domains of any attribution type (e.g., DoFollow, NoFollow) for a campaign’s landing page.
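
Here is that sketch: a minimal illustration of how these metrics could be derived from a simple per-campaign record. The field names are illustrative and not taken from any particular tool.

    from dataclasses import dataclass

    @dataclass
    class Campaign:
        pitches_sent: int
        placements: int
        interested_replies: int
        declines: int
        days_to_first_placement: int  # business days: first pitch -> first placement
        days_to_syndication: int      # business days: first placement -> last pitch
        dofollow_links: int
        nofollow_links: int
        total_da_of_links: int        # sum of Domain Authority across linking domains

    def campaign_kpis(c: Campaign) -> dict:
        """Derive the per-campaign KPI metrics described in the lists above."""
        return {
            "placement_rate": c.placements / c.pitches_sent,
            "interest_rate": c.interested_replies / c.pitches_sent,
            "decline_rate": c.declines / c.pitches_sent,
            "total_days_of_outreach": c.days_to_first_placement + c.days_to_syndication,
            "total_links": c.dofollow_links + c.nofollow_links,
            "total_dofollow_links": c.dofollow_links,
            "total_da_of_links": c.total_da_of_links,
        }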

Optimizing effort to yield the best KPIs

After identifying the metrics, we need to solve the next challenge: What are the relationships between your efforts and your KPIs? The practical application of these answers can help you establish a threshold or range for the input metric that is correlated with the highest KPIs. We’ll discuss that in a bit.

After identifying metrics to analyze, define the nature of their relationships to one another. Use a hypothesis test to verify an effect; in this case, we’re interested in the relationship between pitch count and each of the metrics we defined above as KPIs of successful outreach. This study hypothesizes that campaigns closed out in 70 pitches or fewer will have better KPIs than campaigns closed out with 71 or more pitches.

Analyzing the relationship and determining significance of the data

Next, determine whether the relationship is significant; a statistically significant relationship is one where the observed effect is unlikely to be due to chance and is therefore likely to hold in future samples. When it comes to claiming statistical significance, some may assume there must be a complex formula that only seasoned statisticians can calculate. In reality, statistical significance here is determined via a t-test, a simple statistical test that compares the means of two samples to help us infer whether the same relationship will hold in future samples.

In this case, campaigns with 70 or fewer pitches are one group and campaigns with 71 or more are a second group. The findings below show the percentage difference between the means of both groups (i.e., the campaigns from Q2 and Q3) to determine whether lower pitch counts have the desired effect for each metric; results that are asterisked are statistically significant, meaning there is less than a 5 percent chance that the observed results are due to chance.
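
As a rough illustration of how such a test could be run, here is a minimal sketch using SciPy’s two-sample t-test; the placement rates below are invented purely for demonstration.

    from scipy import stats

    # Hypothetical placement rates (placements / pitches) for the two groups.
    low_pitch_group = [0.12, 0.09, 0.15, 0.11, 0.14, 0.10, 0.13]   # 70 or fewer pitches
    high_pitch_group = [0.07, 0.05, 0.09, 0.06, 0.08, 0.04, 0.07]  # 71 or more pitches

    # Welch's t-test compares the two group means without assuming equal variances.
    t_stat, p_value = stats.ttest_ind(low_pitch_group, high_pitch_group, equal_var=False)

    if p_value < 0.05:
        print(f"Statistically significant (t = {t_stat:.2f}, p = {p_value:.4f})")
    else:
        print(f"Not significant (t = {t_stat:.2f}, p = {p_value:.4f})")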

How our analysis can optimize your digital PR team’s efforts

In practice, the relationships between these metrics help you establish a better standard of practice for your team’s outreach, with realistic expectations and goals. Further, the correlation between the specified range of pitch counts and all other KPIs gives you a reliable sense of the values you can expect for pitch quality, timelines, and campaign performance when adhering to that range of pitches.

The original theory — that a threshold for pitch counts exists — is confirmed by the data when pitch count is compared against all other performance metrics. The sample with lower pitch counts (70 or fewer) moves in the desired direction for the KPIs we want to decrease (e.g., decline rates, total days) as well as the KPIs we want to increase (e.g., placement rates, link counts). The sample with higher pitch counts (71 or more) saw the inverse. Essentially, when campaigns with 70 or fewer pitches were isolated, the numbers improved in nearly every metric.

When this analysis is applied to each of the 74 campaigns from Q3, you’ll see nearly consistent results, with the exception again being Total Domain Authority. Campaigns with up to 70 pitches are correlated with better KPIs than campaigns with 71 or more pitches.

Vague or unrealistic expectations and goals will sabotage the success of any team and any project. When it comes to the effort put into each campaign, having objective, optimized procedures allows your team to work smarter, not harder.

So, what does that baseline range look like, and how do you calculate it?

Establishing realistic baseline metrics

A simple question helps answer what the baseline should be in this instance: what was the average of each KPI for the campaigns with 70 or fewer pitches?

We gathered all 70 campaigns closed out of our digital PR team’s pipelines in the second and third quarters of 2018 with pitch counts of 70 or fewer and determined the average of each metric. Then, we calculated the standard deviation from the mean, which describes the spread of the data, to establish a range for each KPI — and that became our baseline range.

Examining historical data is among the best methods for determining realistic baselines. By gathering a broad, sizeable sample (usually more than 30 is ideal) that represents the full scope of projects your team works on, you can determine the average for each metric and deviation from the average to establish a range.
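
A minimal sketch of that calculation, using Python’s statistics module and made-up placement rates, might look like this:

    import statistics

    def baseline_range(values):
        """Return (low, mean, high), where low/high are one standard deviation from the mean."""
        mean = statistics.mean(values)
        stdev = statistics.stdev(values)  # sample standard deviation
        return mean - stdev, mean, mean + stdev

    # Hypothetical placement rates from a representative historical sample.
    placement_rates = [0.08, 0.11, 0.09, 0.14, 0.10, 0.12, 0.07, 0.13, 0.09, 0.11]
    low, mean, high = baseline_range(placement_rates)
    print(f"Placement rate baseline: {low:.2%} to {high:.2%} (mean {mean:.2%})")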

These reliable ranges allow your digital PR team to understand the baselines to strive for during active outreach while following the pitch-count standard of practice established by our research. Further, these baseline ranges allow you to set more realistic goals for future performance by increasing each range by a realistic percentage.

Deviations from that range act as indicators of potential issues related to the quality, efficiency, or efficacy of the team’s outreach, with each of the metrics implying what specifically may be awry. Below, we offer context on each of the metrics defining our three KPIs in terms of their implications and limitations.

Understanding how each metric can influence the productivity of your team

Pitch quality and efficacy

The purpose of a pitch is to tell a compelling and succinct story of why the campaign you’re pitching is newsworthy and fits the beat of the individual writer you’re pitching. Help your team succeed by enforcing tried-and-true best practices so they craft each pitch with personalization and a compelling narrative top of mind. Placements act as a conversion rate to measure the efficacy of your team’s outreach, while interests and declines act as a combined response rate to measure the quality of outreach.

To help your team avoid the “spray and pray” mentality of blasting out as many pitches as possible and hoping one yields a media mention, which ultimately jeopardizes publisher relationships and is an inefficient use of time, focus on the rates at which your team secures responses and placements from publishers relative to the total volume of pitches sent. Prioritize this interpretation of the data rather than the individual counts alone to add context to the pitch count.

Campaigns with a high ratio of interest and placements to pitches imply the quality of the pitch was sufficient, meaning it encompassed one or more of the factors known to be important in securing press coverage. These include, but are not limited to, compelling and newsworthy narratives, personalized details, and relevancy to the writer. In some cases, campaigns may have a low ratio of interest but a high ratio of placements as a result of nonresponse bias — the occurrence where publishers do not respond to a pitch but still cover the campaign in a future article, yielding a placement. These “ghost posts” can skew interest rates, illustrating why three metrics compose this KPI.

Campaigns with a high ratio of declines to pitches imply the quality of the pitch may be subpar, which signals to the associate to re-evaluate their outreach strategy. Again, the inverse may not always be true, as campaigns with a low ratio of declines may be a result of nonresponse bias. In this case, if publishers do not respond at all, we can infer either that they did not open the email or that they opened it and were not interested, therefore declining by default.

While confounding variables (such as the quality of the content itself, not just the quality of the pitch) may skew these metrics in either direction and remain the greatest limitation, holistically, these three metrics offer actionable insights during active outreach.

Efficiency and capacity

Similarly, ranges for timeline metrics can give your associates context of when they should be achieving milestones (i.e., the first placement) as well as the total length of outreach. Deviating beyond the standard timeline to secure the first placement often indicates the outreach strategy needs re-evaluating, while extending beyond the range for total days of outreach indicates a campaign should be closed out soon.

Efficiency metrics help beyond advising the strategy for outreach; they also inform operations from a capacity standpoint. Juggling tens and sometimes hundreds of active campaigns at any given point relies on consistency for capacity — reducing the variance between the volume of campaigns entering production and the volume being closed out of the pipeline by staggering campaigns based on their average duration. This allows for more robust planning and reliable forecasting.

Awareness of the baselines for time to secure press enables you and your team to not just plan strategies and capacities, but also the content of your campaigns. You can ensure timely content by allowing for sufficient time for outreach when ideating your campaigns so the content does not become stale or outdated.

The biggest limitation of these metrics is a looming external variable often beyond our control — the editorial calendars and agendas of the publishers. Publishers have their own deadlines and priorities to fill, so we cannot always plan for delays in publishing dates or, worse yet, coverage being scrapped altogether.

Placement quality and efficacy

Ultimately, your efforts are intended to yield placements to gain brand awareness and voice, as well as build a diverse link portfolio; the latter is arguably easier to quantify. Total external links pointing to the campaign’s landing page or client homepage along with the total Domain Authority of those links allow you to track both the quantity and quality of links.

Higher link counts built from your placements allow you to infer the syndication networks of the placements your outreach secured, while higher total Domain Authority measures the relative value of those linking domains to measure quality. Along with further specifying the types of links (specifically Dofollow links, arguably the most valuable link type), these metrics have the potential to forecast the impact of the campaign on the website’s own overall authority.

Replicating our analysis to optimize your team’s press coverage

Oftentimes, historical research designs such as this one are limited in the cause-and-effect conclusions they support. Still, this collection of data offers valuable insight into correlations, helping us infer patterns and trends.

Our analysis utilized historical data representative of our entire agency in terms of scope of clients, campaign types, and associates, strengthening internal validity. So while the specific baseline metrics are tailored to our team, the framework we offer for establishing those baselines is transferable to any team.

Apply these methods with your digital PR team to help define KPIs, establish baselines, and test your own theories:

  • Track the ten metrics that compose the KPIs of digital PR outreach for each campaign or initiative to keep a running historical record.
  • Determine the average spread via the mean and standard deviation for each metric from a sizeable, representative sample of campaigns to establish your team’s baseline metrics.
  • Test any theories of trends in your team’s effort (i.e., pitch counts) in relation to KPIs with a simple hypothesis test to optimize your team and resources.

How does your team approach defining the most important metrics and establishing baseline ranges? How do you approach optimizing those efforts to yield the best press coverage? Uncovering these answers will help your team work together more effectively and establish productive foundations for future outreach efforts.

Moz Blog

Page Speed Optimization: Metrics, Tools, and How to Improve

Posted by BritneyMuller

Page speed is an important consideration for your SEO work, but it’s a complex subject that tends to be very technical. What are the most crucial things to understand about your site’s page speed, and how can you begin to improve? In this week’s edition of Whiteboard Friday, Britney Muller goes over what you need to know to get started.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hey, Moz fans. Welcome to another edition of Whiteboard Friday. Today we’re going over all things page speed and really getting to the bottom of why it’s so important for you to be thinking about and working on as you do your work.

At the very fundamental level I’m going to briefly explain just how a web page is loaded. That way we can sort of wrap our heads around why all this matters.

How a webpage is loaded

A user goes to a browser, puts in your website, and there is a DNS request. This points at your domain name provider, maybe GoDaddy, and that points to your server where your files are located, and this is where it gets interesting. So the DOM starts to load all of your HTML, your CSS, and your JavaScript. But very rarely does that first request pull all of the needed scripts or code to render or load a web page.

Typically the DOM will need to request additional resources from your server to make everything happen, and this is where things start to really slow down your site. Having that background knowledge will, I hope, help us triage some of these issues.
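
As a rough way to see this for your own pages, the sketch below fetches a page’s HTML and counts the extra resources it asks the browser to download. It only catches resources referenced directly in the initial HTML, not anything added later by JavaScript, and the URL is a placeholder.

    from html.parser import HTMLParser
    import requests

    class ResourceCounter(HTMLParser):
        """Collect the scripts, stylesheets, and images a page asks the browser to fetch."""
        def __init__(self):
            super().__init__()
            self.resources = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "script" and attrs.get("src"):
                self.resources.append(attrs["src"])
            elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
                self.resources.append(attrs["href"])
            elif tag == "img" and attrs.get("src"):
                self.resources.append(attrs["src"])

    html = requests.get("https://www.example.com/").text
    parser = ResourceCounter()
    parser.feed(html)
    print(len(parser.resources), "additional resources requested by the initial HTML")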

Issues that could be slowing down your site

What are some of the most common culprits?

  1. First and foremost is images. Large images are the biggest culprit of slow loading web pages.
  2. Hosting can cause issues.
  3. Plugins, apps, and widgets, basically any third-party script as well can slow down load time.
  4. Your theme and any large files beyond that can really slow things down as well.
  5. Redirects: the number of hops needed to get to a web page will slow things down.
  6. Then JavaScript, which we’ll get into in a second.

But all of these things can be a culprit. So we’re going to go over some resources, some of the metrics and what they mean, and then what are some of the ways that you can improve your page speed today.

Page speed tools and resources

The primary resources I have listed here are Google tools and Google suggested insights. I think what’s really interesting about these is we get to see what their concerns are as far as page speed goes and really start to see the shift towards the user. We should be thinking about that anyway. But first and foremost, how is this affecting people that come to your site, and then secondly, how can we also get the dual benefit of Google perceiving it as higher quality?

We know that Google suggests a website should load in roughly two to three seconds. The faster the better, obviously, but that’s sort of where the range is. I also highly suggest you take a competitive view of that. Put your competitors into some of these tools and benchmark your speed goals against what’s competitive in your industry. I think that’s a cool way to go into this.

Chrome User Experience Report

This is Chrome real user metrics. Unfortunately, it’s only available for larger, popular websites, but you get some really good data out of it. It’s housed on BigQuery, so some basic SQL knowledge is needed.

Lighthouse

Lighthouse, one of my favorites, is available right in Chrome Dev Tools. If you are on a web page and you click Inspect Element and you open up Chrome Dev Tools, to the far right tab where it says Audit, you can run a Lighthouse report right in your browser.

What I love about it is it gives you very specific examples and fixes that you can do. A fun fact to know is it will automatically be on the simulated fast 3G, and notice they’re focused on mobile users on 3G. I like to switch that to applied fast 3G, because it has Lighthouse do an actual run of that load. It takes a little bit longer, but it seems to be a little bit more accurate. Good to know.

Page Speed Insights

Page Speed Insights is really interesting. They’ve now incorporated Chrome User Experience Report. But if you’re not one of those large sites, it’s not even going to measure your actual page speed. It’s going to look at how your site is configured and provide feedback according to that and score it. Just something good to be aware of. It still provides good value.
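
If you want these scores programmatically rather than through the web interface, the PageSpeed Insights API can be queried directly. The sketch below is an approximation: the v5 endpoint and response field names are written from memory, so verify them against Google’s API reference before relying on them.

    import requests

    PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

    def page_speed_summary(url, strategy="mobile"):
        """Fetch a Lighthouse-based summary for a URL via the PageSpeed Insights API."""
        resp = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": strategy})
        resp.raise_for_status()
        result = resp.json()["lighthouseResult"]
        audits = result["audits"]
        return {
            "performance_score": result["categories"]["performance"]["score"],
            "first_contentful_paint": audits["first-contentful-paint"]["displayValue"],
            "time_to_interactive": audits["interactive"]["displayValue"],
        }

    print(page_speed_summary("https://moz.com"))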

Test your mobile website speed and performance

I don’t know what the title of this is. If you do, please comment down below. But it’s located on testmysite.thinkwithgoogle.com. This one is really cool because it tests the mobile speed of your site. If you scroll down, it directly ties it into ROI for your business or your website. We see Google leveraging real-world metrics, tying it back to what’s the percentage of people you’re losing because your site is this slow. It’s a brilliant way to sort of get us all on board and fighting for some of these improvements.

Pingdom and GTmetrix are non-Google products or non-Google tools, but super helpful as well.

Site speed metrics

So what are some of the metrics?

First paint

We’re going to go over first paint, which is basically just the first non-blank paint on a screen. It could be just the first pixel change. That initial change is first paint.

First contentful paint

First contentful paint is when the first content appears. This might be part of the nav or the search bar or whatever it might be. That’s the first contentful paint.

First meaningful paint

First meaningful paint is when primary content is visible. When you sort of get that reaction of, “Oh, yeah, this is what I came to this page for,” that’s first meaningful paint.

Time to interactive

Time to interactive is when it’s visually usable and engage-able. So we’ve all gone to a web page and it looks like it’s done, but we can’t quite use it yet. That’s where this metric comes in. So when is it usable for the user? Again, notice how user-centric even these metrics are. Really, really neat.

DOM content loaded

The DOM content loaded, this is when the HTML is completely loaded and parsed. So some really good ones to keep an eye on and just to be aware of in general.

Ways to improve your page speed

HTTP/2

HTTP/2 can definitely speed things up. As to what extent, you have to sort of research that and test.

Preconnect, prefetch, preload

Preconnect, prefetch, and preload are really interesting and important in speeding up a site. We see Google doing this on their SERPs. If you inspect an element, you can see Google prefetching some of the URLs so that it has them faster for you if you click on some of those results. You can similarly do this on your site. It helps to load and speed up that process.

Enable caching & use a content delivery network (CDN)

Caching is so, so important. Definitely do your research and make sure that’s set up properly. Same with CDNs, so valuable in speeding up a site, but you want to make sure that your CDN is set up properly.

Compress images

The easiest and probably quickest way for you to speed up your site today is really just to compress those images. It’s such an easy thing to do. There are all sorts of free tools available for you to compress them. Optimizilla is one. You can even use free tools on your computer, Save for Web, and compress properly.
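
If you’d rather script it than run images through a web tool one at a time, a small batch job with the Pillow library can do the bulk of the work. The paths, maximum width, and quality setting below are placeholders to adjust for your own site.

    from pathlib import Path
    from PIL import Image  # pip install Pillow

    def compress_image(src, dest, max_width=1200, quality=80):
        """Resize an image down to max_width and re-save it as an optimized JPEG."""
        img = Image.open(src)
        if img.width > max_width:
            ratio = max_width / img.width
            img = img.resize((max_width, int(img.height * ratio)))
        img.convert("RGB").save(dest, "JPEG", optimize=True, quality=quality)

    for path in Path("images").glob("*.png"):
        compress_image(path, path.with_suffix(".jpg"))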

Minify resources

You can also minify resources. So it’s really good to be aware of what minification, bundling, and compression do so you can have some of these more technical conversations with developers or with anyone else working on the site.

So this is sort of a high-level overview of page speed. There’s a ton more to cover, but I would love to hear your input and your questions and comments down below in the comment section.

I really appreciate you checking out this edition of Whiteboard Friday, and I will see you all again soon. Thanks so much. See you.

Video transcription by Speechpad.com

Moz Blog

All the ABM metrics to measure for your quarterly reporting

The traditional demand funnel doesn’t cut it for account-based marketing — so report on these KPIs instead.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


How to Stop Drowning in Data and Begin Using Your Metrics Wisely

Digital marketers have a problem: We’ve got too much data. It sounds like a ridiculous complaint coming from a data…

The post How to Stop Drowning in Data and Begin Using Your Metrics Wisely appeared first on Copyblogger.


Copyblogger


How to Get More Keyword Metrics for Your Target Keywords

Posted by Bill.Sebald

If you’re old in SEO years, you remember the day [not provided] was introduced. It was a dark, dark day. SEOs lost a vast amount of trusty information. Click data. Conversion data. This was incredibly valuable, allowing SEOs to prioritize their targets.

Google said the info was removed for security purposes, while suspicious SEOs thought this was a push towards spending more on AdWords (now Google Ads). I get it — since AdWords would give you the keyword data SEOs cherished, the “controversy” was warranted, in my opinion. The truth is out there.

But we’ve moved on, and learned to live with the situation. Then a few years later, Google Webmaster Tools (now Search Console) started providing some of the keyword data in the Search Analytics report. Through the years, the report got better and better.

But there’s still a finite set of keywords in the interface. You can’t get more than 999 in your report.

[Image: Search Analytics report]

Guess what? Google has more data for you!

The Google Search Console API is your friend. This summer it became even friendlier, providing 16 months’ worth of data. What you may not know is that this API can give you more than 999 keywords. By way of example, the API provides more than 45,000 keywords for our Greenlane site. And we’re not even a very large site. That’s right — the API can give you keywords, clicks, average position, impressions, and CTR %.

Salivating yet?
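
If you’re comfortable with a little code, you can also query the Search Analytics endpoint directly instead of going through a spreadsheet add-on. The sketch below uses the Google API Python client with a service account; the credentials file, site URL, and dates are placeholders, and the details are worth checking against the current Search Console API documentation. The no-code route the post walks through next works just as well.

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",  # placeholder credentials file
        scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
    )
    service = build("webmasters", "v3", credentials=creds)

    request = {
        "startDate": "2018-01-01",
        "endDate": "2018-09-30",
        "dimensions": ["query"],
        "rowLimit": 25000,  # far beyond the 999 rows shown in the interface
    }
    response = service.searchanalytics().query(
        siteUrl="https://www.example.com/", body=request
    ).execute()

    for row in response.get("rows", []):
        print(row["keys"][0], row["clicks"], row["impressions"], row["ctr"], row["position"])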

How to easily leverage the API

If you’re not very technical and the thought of an API frightens you, I promise there’s nothing to fear. I’m going to show you a way to leverage the data using Google Sheets.

Here is what you will need:

  1. Google Sheets (free)
  2. Supermetrics Add-On (free trial, but a paid tool)

If you haven’t heard of Google Sheets, it’s one of several tools Google provides for free. This directly competes with Microsoft Excel. It’s a cloud-based spreadsheet that works exceptionally well.

If you aren’t familiar with Supermetrics, it’s an add-on for Google Sheets that allows data to be pulled in from other sources. In this case, one of the sources will be Google Search Console. Now, while Supermetrics has a free trial, paid is the way to go. It’s worth it!

Installation of Supermetrics:

  1. Open Google Sheets and click the Add-On option
  2. Click Get Add-Ons
  3. A window will open where you can search for Supermetrics. It will look like this:

[Image: How to install Supermetrics]

From there, just follow the steps. It will immediately ask to connect to your Google account. I’m sure you’ve seen this kind of dialog box before:

[Image: "Supermetrics wants to access your Google Account" dialog]

You’ll be greeted with a message for launching the newly installed add-on. Just follow the prompts to launch. Next you’ll see a new window to the right of your Google Sheet.

[Image: Launch message]

At this point, you should see the following note:

Great, you’re logged into Google Search Console! Now let’s run your first query. Pick an account from the list below.

Next, all you have to do is work down the list in Supermetrics. Data Source, Select Sites, and Select Dates are pretty self-explanatory. When you reach the “Select metrics” toggle, choose Impressions, Clicks, CTR (%), and Average Position.

[Image: Select metrics]

When you reach “Split by,” choose Search Query as the Split to rows option. And pick a large number for number of rows to fetch. If you also want the page URLs (perhaps you’d like your data divided by the page level), you just need to add Full URL as well.

[Image: Split by]

You can play with the other Filter and Options if you’d like, but you’re ready to click Apply Changes and receive the data. It should compile like this:

[Image: Final result]

Got the data. Now what?

Sometimes optimization is about taking something that’s working, and making it work better. This data can show you which keywords and topics are important to your audience. It’s also a clue towards what Google thinks you’re important for (thus, rewarding you with clicks).

SEMrush and Ahrefs can provide ranking keyword data with their estimated clicks, but impressions is an interesting metric here. High impressions and low clicks? Maybe your title and description tags aren’t compelling enough. It’s also fun to VLOOKUP their data against this to see just how accurate they are (or are not). Or you can use a tool like Power BI to append other customer or paid search metrics to paint a bigger picture of your visitors’ mindset.
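
For example, once the export above is saved as a CSV, a few lines of pandas can surface the “lots of impressions, weak click-through” queries worth a title-tag rewrite. The file name, column names, and thresholds here are assumptions based on the fields pulled earlier.

    import pandas as pd

    df = pd.read_csv("search_console_export.csv")  # placeholder file name

    # Queries people see often but rarely click are candidates for better titles/descriptions.
    opportunities = df[(df["Impressions"] > 1000) & (df["CTR (%)"] < 1.0)]
    print(opportunities.sort_values("Impressions", ascending=False).head(20))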

Conclusion

Sometimes the littlest hacks are the most fun. Google commonly holds some data back through their free products (the Greenlane Indexation Tester is a good example with the old interface). We know Search Planner and Google Analytics have more than they share. But in those cases, where directional information can sometimes be enough, digging out even more of your impactful keyword data is pure gold.

Moz Blog

When Bounce Rate, Browse Rate (PPV), and Time-on-Site Are Useful Metrics… and When They Aren’t – Whiteboard Friday

Posted by randfish

When is it right to use metrics like bounce rate, pages per visit, and time on site? When are you better off ignoring them? There are endless opinions on whether these kinds of metrics are valuable or not, and as you might suspect, the answer is found in the shades of grey. Learn what Rand has to say about the great metrics debate in today’s episode of Whiteboard Friday.

[Whiteboard image: When bounce rate, browse rate (PPV), and time-on-site are useful metrics and when they suck]

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about times at which bounce rate, browse rate, which is pages per visit, and time on site are terrible metrics and when they’re actually quite useful metrics.

This happens quite a bit. I see in the digital marketing world people talking about these metrics as though they are either dirty-scum, bottom-of-the-barrel metrics that no one should pay any attention to, or that they are these lofty, perfect metrics that are what we should be optimizing for. Neither of those is really accurate. As is often the case, the truth usually lies somewhere in between.

So, first off, some credit to Wil Reynolds, who brought this up during a discussion that I had with him at Siege Media’s offices, an interview that Ross Hudgens put together with us, and Sayf Sharif from Seer Interactive, their Director of Analytics, who left an awesome comment about this discussion on the LinkedIn post of that video. We’ll link to those in this Whiteboard Friday.

So Sayf and Wil were both basically arguing that these are kind of crap metrics. We don’t trust them. We don’t use them a lot. I think, a lot of the time, that makes sense.

Instances when these metrics aren’t useful

Here’s when these metrics, that bounce rate, pages per visit, and time on site kind of suck.

1. When they’re used instead of conversion actions to represent “success”

So they suck when you use them instead of conversion actions. So a conversion is someone took an action that I wanted on my website. They filled in a form. They purchased a product. They put in their credit card. Whatever it is, they got to a page that I wanted them to get to.

Bounce rate is basically the percentage of people who landed on a page and then left your website, without continuing on to any other page on that site after visiting that page.

Pages per visit is essentially exactly what it sounds like, the average number of pages per visit for people who landed on that particular page. So people who came in through one of these pages, how many pages did they visit on my site.

Then time on site is essentially a very raw and rough metric. If I leave my computer to use the restroom or I basically switch to another tab or close my browser, it’s not necessarily the case that time on site ends right then. So this metric has a lot of imperfections. Now, averaged over time, it can still be directionally interesting.
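
As a toy illustration of how those three numbers fall out of raw session data, here is a small sketch; the records are invented, and analytics platforms each apply their own rules, especially for time on site.

    from statistics import mean

    # (pages viewed in the session, seconds on site) for visits that landed on one page.
    sessions = [(1, 5), (4, 210), (2, 45), (1, 8), (6, 400), (3, 95)]

    bounce_rate = sum(1 for pages, _ in sessions if pages == 1) / len(sessions)
    pages_per_visit = mean(pages for pages, _ in sessions)
    time_on_site = mean(seconds for _, seconds in sessions)

    print(f"Bounce rate: {bounce_rate:.0%}")
    print(f"Pages per visit: {pages_per_visit:.1f}")
    print(f"Average time on site: {time_on_site:.0f}s")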

But when you use these instead of conversion actions, which is what we all should be optimizing for ultimately, you can definitely get into some suckage with these metrics.

2. When they’re compared against non-relevant “competitors” and other sites

When you compare them against non-relevant competitors, so when you compare, for example, a product-focused, purchase-focused site against a media-focused site, you’re going to get big differences. First off, if your pages per visit look like a media site’s pages per visit and you’re product-focused, that is crazy. Either the media site is terrible or you’re doing something absolutely amazing in terms of keeping people’s attention and energy.

Time on site is a little bit misleading in this case too, because if you look at the time on site, again, of a media property or a news-focused, content-focused site versus one that’s very e-commerce focused, you’re going to get vastly different things. Amazon probably wants your time on site to be pretty small. Dell wants your time on site to be pretty small. Get through the purchase process, find the computer you want, buy it, get out of here. If you’re taking 10 minutes to do that or 20 minutes to do that instead of 5, we’ve failed. We haven’t provided a good enough experience to get you quickly through the purchase funnel. That can certainly be the case. So there can be warring priorities inside even one of these metrics.

3. When they’re not considered over time or with traffic sources factored in

Third, you get some suckage when they are not considered over time or against the traffic sources that brought them in. For example, if someone visits a web page via a Twitter link, chances are really good, really, really good, especially on mobile, that they’re going to have a high bounce rate, a low number of pages per visit, and a low time on site. That’s just how Twitter behavior is. Facebook is quite similar.

Now, if they’ve come via a Google search, an informational Google search and they’ve clicked on an organic listing, you should see just the reverse. You should see a relatively good bounce rate. You should see a relatively good pages per visit, well, a relatively higher pages per visit, a relatively higher time on site.

Instances when these metrics are useful

1. When they’re used as diagnostics for the conversion funnel

So there’s complexity inside these metrics for sure. What we should be using them for, when these metrics are truly useful is when they are used as a diagnostic. So when you look at a conversion funnel and you see, okay, our conversion funnel looks like this, people come in through the homepage or through our blog or news sections, they eventually, we hope, make it to our product page, our pricing page, and our conversion page.

We have these metrics for all of these. When we make changes to some of these, significant changes, minor changes, we don’t just look at how conversion performs. We also look at whether things like time on site shrank or whether people had fewer pages per visit or whether they had a higher bounce rate from some of these sections.

So perhaps, for example, we changed our pricing and we actually saw that people spent less time on the pricing page and had about the same number of pages per visit and about the same bounce rate from the pricing page. At the same time, we saw conversions dip a little bit.

Should we intuit that pricing negatively affected our conversion rate? Well, perhaps not. Perhaps we should look and see if there were other changes made or if our traffic sources were in there, because it looks like, given that bounce rate didn’t increase, given that pages per visit didn’t really change, given that time on site actually went down a little bit, it seems like people are making it just fine through the pricing page. They’re making it just fine from this pricing page to the conversion page, so let’s look at something else.

This is the type of diagnostics that you can do when you have metrics at these levels. If you’ve seen a dip in conversions or a rise, this is exactly the kind of dig into the data that smart, savvy digital marketers should and can be doing, and I think it’s a powerful, useful tool to be able to form hypotheses based on what happens.

So again, another example, did we change this product page? We saw pages per visit shrink and time on site shrink. Did it affect conversion rate? If it didn’t, but then we see that we’re getting fewer engaged visitors, and so now we can’t do as much retargeting and we’re losing email signups, maybe this did have a negative effect and we should go back to the other one, even if conversion rate itself didn’t seem to take a particular hit in this case.

2. When they’re compared over time to see if internal changes or external forces shifted behavior

Second useful way to apply these metrics is compared over time to see if your internal changes or some external forces shifted behavior. For example, we can look at the engagement rate on the blog. The blog is tough to generate as a conversion event. We could maybe look at subscriptions, but in general, pages per visit is a nice one for the blog. It tells us whether people make it past the page they landed on and into deeper sections, stick around our site, check out what we do.

So if we see that it had a dramatic fall down here in April and that was when we installed a new author and now they’re sort of recovering, we can say, “Oh, yeah, you know what? That takes a little while for a new blog author to kind of come up to speed. We’re going to give them time,” or, “Hey, we should interject here. We need to jump in and try and fix whatever is going on.”

3. When they’re benchmarked versus relevant industry competitors

Third and final useful case is when you benchmark versus truly relevant industry competitors. So if you have a direct competitor, very similar focus to you, product-focused in this case with a homepage and then some content sections and then a very focused product checkout, you could look at you versus them and their homepage and your homepage.

If you could get the data from a source like SimilarWeb or Jumpshot, if there’s enough clickstream level data, or some savvy industry surveys that collect this information, and you see that you’re significantly higher, you might then take a look at what are they doing that we’re not doing. Maybe we should use them when we do our user research and say, “Hey, what’s compelling to you about this that maybe is missing here?”

Otherwise, a lot of the time people will take direct competitors and say, “Hey, let’s look at what our competition is doing and we’ll consider that best practice.” But if you haven’t looked at how they’re performing, how people are getting through, whether they’re engaging, whether they’re spending time on that site, whether they’re making it through their different pages, you don’t know if they actually are best practices or whether you’re about to follow a laggard’s example and potentially hurt yourself.

So definitely a complex topic, definitely many, many different things that go into the uses of these metrics, and there are some bad and good ways to use them. I agree with Sayf and with Wil, but I think there are also some great ways to apply them. I would love to hear from you if you’ve got examples of those down in the comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Moz Blog

SearchCap: Santa tracker, Google API terms & SEO metrics

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Santa tracker, Google API terms & SEO metrics appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


When and How to Use Domain Authority, Page Authority, and Link Count Metrics – Whiteboard Friday

Posted by randfish

How can you effectively apply link metrics like Domain Authority and Page Authority alongside your other SEO metrics? Where and when does it make sense to take them into account, and what exactly do they mean? In today’s Whiteboard Friday, Rand answers these questions and more, arming you with the knowledge you need to better understand and execute your SEO work.

[Whiteboard image: When and how to use Domain Authority, Page Authority, and link count metrics]

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about when and how to use Domain Authority and Page Authority and link count metrics.

So many of you have written to us at Moz over the years and certainly I go to lots of conferences and events and speak to folks who are like, “Well, I’ve been measuring my link building activity with DA,” or, “Hey, I got a high DA link,” and I want to confirm when is it the right time to be using something like DA or PA or a raw link count metric, like number of linking root domains or something like Spam Score or a traffic estimation, these types of metrics.

So I’m going to walk you through kind of these three — Page Authority, Domain Authority, and linking root domains — just to get a refresher course on what they are. Page Authority and Domain Authority are actually a little complicated. So I think that’s worthwhile. Then we’ll chat about when to use which metrics. So I’ve got sort of the three primary things that people use link metrics for in the SEO world, and we’ll walk through those.

Page Authority

So to start, Page Authority is basically — you can see I’ve written a ton of different little metrics in here — linking URLs, linking root domains, MozRank, MozTrust, linking subdomains, anchor text, linking pages, followed links, no followed links, 301s, 302s, new versus old links, TLD, domain name, branded domain mentions, Spam Score, and many, many other metrics.

Basically, what PA is, is it’s every metric that we could possibly come up with from our link index all taken together and then thrown into a model with some training data. So the training data in this case, quite obviously, is Google search results, because what we want the Page Authority score to ultimately be is a predictor of how well a given page is going to rank in Google search results assuming we know nothing else about it except link data. So this is using no on-page data, no content data, no engagement or visit data, none of the patterns or branding or entity matches, just link data.

So this is everything we possibly know about a page from its link profile and the domain that page is on, and then we insert that in as the input alongside the training data. We have a machine learning model that essentially learns against Google search results and builds the best possible model it can. That model, by the way, throws away some of this stuff, because it’s not useful, and it adds in a bunch of this stuff, like vectors or various attributes of each one. So it might say, “Oh, anchor text distribution, that’s actually not useful, but Domain Authority ordered by the root domains with more than 500 links to them.” I’m making stuff up, right? But you could have those sorts of filters on this data and thus come up with very complex models, which is what machine learning is designed to do.

All we have to worry about is that this is essentially the best predictive score we can come up with based on the links. So it’s useful for a bunch of things. If we’re trying to say how well do we think this page might rank independent of all non-link factors, PA, great model. Good data for that.
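
This is not how Moz actually builds PA; purely as a toy illustration of the general idea, the sketch below trains a regression model on made-up link features against made-up observed rankings, which is the shape of the problem described above.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Invented link-based features per page: [linking_root_domains, followed_links, spam_score]
    X = np.array([
        [410, 900, 1],
        [25, 40, 3],
        [1200, 5000, 0],
        [5, 6, 8],
        [300, 700, 2],
    ])
    # Invented training target: observed rank position for a query (lower is better).
    y = np.array([3, 28, 1, 55, 7])

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(model.predict([[150, 320, 1]]))  # predicted rank for a new page's link profile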

Domain Authority

Domain Authority is once you have the PA model in your head and you’re sort of like, “Okay, got it, machine learning against Google’s results to produce the best predictive score for ranking in Google.” DA is just the PA model at the root domain level. So not subdomains, just root domains, which means it’s got some weirdness. It can’t, for example, say that randfishkin.blogspot.com is different than www.blogspot.com. But obviously, a link from www.blogspot.com is way more valuable than from my personal subdomain at Blogspot or Tumblr or WordPress or any of these hosted subdomains. So that’s kind of an edge case that unfortunately DA doesn’t do a great job of supporting.

What it’s good for is it’s relatively well-suited to be predictive of how a domain’s pages will rank in Google. So it removes all the page-level information, but it’s still operative at the domain level. It can be very useful for that.

Linking Root Domain

Then linking root domains is the simplest one. This is basically a count of all the unique root domains with at least one link on them that point to a given page or a site. So if I tell you that this URL A has 410 linking root domains, that basically means that there are 410 domains with at least one link pointing to URL A.

What I haven’t told you is whether they’re followed or no followed. Usually, this is a combination of those two unless it’s specified. So even a no followed link could go into the linking root domains, which is why you should always double check. If you’re using Ahrefs or Majestic or Moz and you hover on the whatever, the little question mark icon next to any given metric, it will tell you what it includes and what it doesn’t include.

When to use which metric(s)

All right. So how do we use these?

Well, for month over month link building performance, which is something that a lot of folks track, I would actually not suggest making DA your primary one. This is for a few reasons. So Moz’s index, which is the only thing currently that calculates DA or a machine learning-like model out there among the major toolsets for link data, only updates about once every month. So if you are doing your report before the DA has updated from the last link index, that can be quite frustrating.

Now, I will say we are only a few months away from a new index that’s going to replace Mozscape that will calculate DA and PA and all these other things much, much more quickly. I know that’s been something many folks have been asking for. It is on its way.

But in the meantime, what I recommend using is:

1. Linking root domains, the count of linking root domains and how that’s grown over time.

2. Organic rankings for your targeted keywords. I know this is not a direct link metric, but this really helps to tell you about the performance of how those links have been affected. So if you’re measuring month to month, it should be the case that any months you’ve got in a 20 or 30-day period, Google probably has counted and recognized within a few days of finding them, and Google is pretty good at crawling nearly the whole web within a week or two weeks. So this is going to be a reasonable proxy for how your link building campaign has helped your organic search campaign.

3. The distribution of Domain Authority. So I think, in this case, Domain Authority can be useful. It wouldn’t be my first or second choice, but I think it certainly can belong in a link building performance report. It’s helpful to see the high DA links that you’re getting. It’s a good sorting mechanism to sort of say, “These are, generally speaking, more important, more authoritative sites.”

4. Spam Score I like as well, because if you’ve been doing a lot of link building, it is the case that Domain Authority doesn’t penalize or doesn’t lower its score for a high Spam Score. It will show you, “Hey, this is an authoritative site with a lot of DA and good-looking links, but it also looks quite spammy to us.” So, for example, you might see that something has a DA of 60, but a Spam Score of 7 or 8, which might be mildly concerning. I start to really worry when you get to like 9, 10, or 11.

Second question: which links are providing the most value?

I think this is something that folks ask. So they look at their own links and they say, “All right, we have these links or our competitor has these links. Which ones are providing the most value for me?” In that case, if you can get it, for example, if it’s a link pointing to you, the best one is, of course, going to be…

1. Real traffic sent. If a site or a page, a link is sending traffic to you, that is clearly of value and that’s going to be likely interpreted positively by the search engines as well.

You can also use…

2. PA

3. DA. I think it’s pretty good. These metrics are pretty good and pretty well-correlated with, relatively speaking, value, especially if you can’t get at a metric like real traffic because it’s coming from someone else’s site.

4. Linking root domains, the count of those to a page or a domain.

5. The rankings rise, in the case where a page is ranking position four, a new link coming to it is the only thing that’s changed or the only thing you’re aware of that’s changed in the last few days, few weeks, and you see a rankings rise. It moves up a few positions. That’s a pretty good proxy for, “All right, that is a valuable link.” But this is a rare case where you really can control other variables to the extent that you think you can believe in that.

6. I like Spam Score for this as well, because then you can start to see, “Well, are these sketchier links, or are these links that I can likely trust more?”

Last one: how do we prioritize which links to go after?

So I think this is one that many, many SEOs do. We have a big list of links. We’ve got 50 links that we’re thinking about, “Should I get these or not and which ones should I go after first and which ones should I not go after?” In this case…

1. DA is really quite a good metric, and that is because it’s relatively predictive of the domain’s pages’ performance in Google, which is a proxy, but a decent proxy for how it might help your site rank better.

It is the case that folks will talk about, “Hey, it tends to be the case that when I go out and I build lots of DA 70, DA 80, DA 90+ links, I often get credit. Why DA and not PA, Rand?” Well, in the case where you’re getting links, it’s very often from new pages on a website, which have not yet been assigned PA or may not have inherited all the link equity from all the internal pages.

Over time, as those pages themselves get more links, their PA will rise as well. But the reason that I generally recommend a DA for link outreach is both because of that PA/DA timing issue and because oftentimes you don’t know which page is going to give you a link from a domain. It could be a new page they haven’t created yet. It could be one that you never thought they would add you to. It might be exactly the page that you were hoping for, but it’s hard to say.

2. I think linking root domains is a very reasonable one for this, and linking root domains is certainly closely correlated, not quite as well correlated, but closely correlated with DA and with rankings.

3. Spam Score, like we’ve talked about.

4. I might use something like SimilarWeb’s traffic estimates, especially if real traffic sent is something that I’m very interested in. If I’m pursuing no followed links or affiliate links or I just care about traffic more than I care about rank-boosting ability, SimilarWeb has got what I think is the best traffic prediction system, and so that would be the metric I look at.

So, hopefully, you now have a better understanding of DA and PA and link counts and when and where to apply them alongside which other metrics. I look forward to your questions. I’ll be happy to jump into the comments and answer. And we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Moz Blog

SearchCap: Google job search, metrics on search update & cheese doodle

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google job search, metrics on search update & cheese doodle appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


SearchCap: Google AdWords Quality Score metrics, redirects & SEO challenges

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google AdWords Quality Score metrics, redirects & SEO challenges appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

