Tag Archive | "Ranking"

7 Search Ranking Factors Analyzed: A Follow-Up Study

Posted by Jeff_Baker

Grab yourself a cup of coffee (or two) and buckle up, because we’re doing maths today.

Again.

Back it on up…

A quick refresher from last time: I pulled data from 50 keyword-targeted articles written on Brafton’s blog between January and June of 2018.

We used a technique for writing these articles, published earlier on Moz, that generates some seriously awesome results (we’re talking more than doubling our organic traffic in the last six months, but we’ll get to that in another publication).

We pulled this data again… Only I updated and reran all the data manually, doubling the dataset. No APIs. My brain is Swiss cheese.

We wanted to see how newly written, original content performs over time, and which factors may have impacted that performance.

Why do this the hard way, dude?

“Why not just pull hundreds (or thousands!) of data points from search results to broaden your dataset?”, you might be thinking. It’s been done successfully quite a few times!

Trust me, I was thinking the same thing while weeping tears into my keyboard.

The answer was simple: I wanted to do something different from the massive aggregate studies. I wanted a level of control over as many potentially influential variables as possible.

By using our own data, the study benefited from:

  • The same root Domain Authority across all content.
  • Similar individual URL link profiles (some laughs on that later).
  • Known original publish dates, with no reoptimization efforts or tinkering.
  • Known original keyword targets for each blog (rather than guessing).
  • Known and consistent content depth/quality scores (MarketMuse).
  • Similar content writing techniques for targeting specific keywords for each blog.

You will never eliminate the possibility of misinterpreting correlation as causation. But controlling some of the variables can help.

As Rand once said in a Whiteboard Friday, “Correlation does not imply causation (but it sure is a hint).”

Caveat:

What we gained in control, we lost in sample size. A sample size of 96 is much less useful than ten thousand, or a hundred thousand. So look at the data carefully and use discretion when considering the ranking factors you find most likely to be true.

This resource can help gauge the confidence you should put into each Pearson Correlation value. Generally, the stronger the relationship, the smaller the sample size needed to be confident in the results.
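If you want to sanity-check a correlation yourself, here is a minimal sketch (Python with SciPy) of how you might compute the two-tailed p-value for an observed Pearson r at a given sample size. The function name is mine, and the example plugs in this study’s sample size (96) and the first correlation you’ll see below (-.343); treat it as an illustration, not part of the original analysis.

    from math import sqrt
    from scipy import stats

    def pearson_p_value(r: float, n: int) -> float:
        """Two-tailed p-value for an observed Pearson correlation r over n samples."""
        t = r * sqrt((n - 2) / (1 - r ** 2))      # t-statistic with n - 2 degrees of freedom
        return 2 * stats.t.sf(abs(t), df=n - 2)   # two-tailed tail probability

    print(pearson_p_value(r=-0.343, n=96))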

So what exactly have you done here?

We have generated hints at what may influence the organic performance of newly created content. No more, and no less. But they are indeed interesting hints and maybe worth further discussion or research.

What have you not done?

We have not published sweeping generalizations about Google’s algorithm. This post should not be read as a definitive guide to Google’s algorithm, nor should you assume that your site will demonstrate the same correlations.

So what should I do with this data?

The best way to read this article is to note the potential correlations we observed in our data and consider how those correlations may or may not apply to your own content and strategy.

I’m hoping that this study’s new approach to examining individual URLs stimulates constructive debate and conversation.

Your constructive criticism is welcome, and hopefully pushes these conversations forward!

The stat sheet

So quit jabbering and show me the goods, you say? Alright, let’s start with our stats sheet, formatted like a baseball card, because why not?:

*Note: Only blogs with complete ranking data were used in the study. We threw out blogs with missing data rather than adding arbitrary numbers.

And as always, here is the original data set if you care to reproduce my results.

So now the part you have been waiting for…

The analysis

To start, you may want a refresher on the Pearson Correlation Coefficient from my last blog post, or from Rand’s.

1. Time and performance

I started with a question: “Do blogs age like a Macallan 18 served up neat on a warm summer Friday afternoon, or like tepid milk on a hot summer Tuesday?”

Does the time indexed play a role in how a piece of content performs?

Correlation 1: Time and target keyword position

First we will map the target keyword ranking positions against the number of days its corresponding blog has been indexed. Visually, if there is any correlation we will see some sort of negative or positive linear relationship.

There is a clear negative relationship between the two variables, which means the two variables may be related. But we need to go beyond visuals and use the PCC.

Days live vs. target keyword position
PCC: -.343
Relationship: Moderate

The data shows a moderate relationship between how long a blog has been indexed and the positional ranking of the target keyword.
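For the curious, here is roughly what that calculation looks like in code. This is a sketch only: the CSV filename and the column names (“days_live” and “target_kw_position”) are hypothetical stand-ins for however you store your own data.

    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("blog_ranking_data.csv")  # hypothetical filename
    r, p = pearsonr(df["days_live"], df["target_kw_position"])
    print(f"PCC: {r:.3f} (p = {p:.4f})")       # we observed r = -.343 for this pair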

But before getting carried away, we shouldn’t solely trust one statistical method and call it a day. Let’s take a look at things another way: Let’s compare the average age of articles whose target keywords rank in the top ten against the average age of articles whose target keywords rank outside the top ten.

Average age of articles based on position
Target KW position ≤ 10: 144.8 days
Target KW position > 10: 84.1 days

Now a story is starting to become clear: Our newly written content takes a significant amount of time to fully mature.

But for the sake of exhausting this hint, let’s look at the data one final way. We will group the data into buckets of target keyword positions and days indexed, then plot them on a heatmap.

This should show us a clear visual clustering of how articles perform over time.

This chart, quite literally, paints a picture. According to the data, we shouldn’t expect a new article to realize its full potential until at least 100 days, and likely longer. As a blog post ages, it appears to gain more favorable target keyword positioning.
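If you’d like to build the same kind of heatmap from your own data, a minimal sketch of the bucketing step might look like the following. The DataFrame and column names are the same hypothetical ones used above, and the bucket edges are arbitrary round numbers, not the exact cutoffs from our charts.

    import pandas as pd

    df = pd.read_csv("blog_ranking_data.csv")  # hypothetical filename
    df["age_bucket"] = pd.cut(df["days_live"], bins=[0, 50, 100, 150, 200, 300])
    df["rank_bucket"] = pd.cut(df["target_kw_position"], bins=[0, 10, 20, 50, 100])

    # Count of articles in each (ranking bucket, age bucket) cell;
    # feed this grid into whatever heatmap tool you prefer.
    heatmap_counts = pd.crosstab(df["rank_bucket"], df["age_bucket"])
    print(heatmap_counts)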

Correlation 2: Time and total ranking keywords on URL

You’ll find that when you write an article, it will (hopefully) rank for the keyword you target. But oftentimes it will also rank for other keywords. Some of these are variants of the target keyword, some are tangentially related, and some are purely random noise.

Instinct will tell you that you want your articles to rank for as many keywords as possible (ideally variants and tangentially related keywords).

Predictably, we have found that the relationship between the number of keywords an article ranks for and its estimated monthly organic traffic (per SEMrush) is strong (.447).

We want all of our articles to do things like this:

We want lots of variants each with significant search volume. But, does an article increase the total number of keywords it ranks for over time? Let’s take a look.

Visually this graph looks a little murky due to the existence of two clear outliers on the far right. We will first run the analysis with the outliers, and again without. With the outliers, we observe the following:

Days live vs. total keywords ranking on URL (with outliers)
PCC: .281
Relationship: Weak/borderline moderate

There appears to be a relationship between the two variables, but it isn’t as strong. Let’s see what happens when we remove those two outliers:

Visually, the relationship looks stronger. Let’s look at the PCC:

Days live vs. total keywords ranking on URL (without outliers)
PCC: .390
Relationship: Moderate/borderline strong

The relationship appears to be much stronger with the two outliers removed.
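For transparency, here is a sketch of how the with/without-outliers check can be run. The filename and column names are hypothetical, and dropping the two largest keyword counts is simply a stand-in for the manual outlier removal described above.

    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("blog_ranking_data.csv")  # hypothetical filename
    with_outliers = pearsonr(df["days_live"], df["total_ranking_keywords"])[0]

    # Drop the two most extreme keyword counts, mirroring the manual removal above.
    trimmed = df.drop(df["total_ranking_keywords"].nlargest(2).index)
    without_outliers = pearsonr(trimmed["days_live"], trimmed["total_ranking_keywords"])[0]

    print(round(with_outliers, 3), round(without_outliers, 3))  # we observed .281 and .390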

But again, let’s look at things another way.

Let’s look at the average age of the top 25% of articles and compare them to the average age of the bottom 25% of articles:

Average age of top 25% of articles versus bottom 25%
Top 25%: 148.9 days
Bottom 25%: 73.8 days

This is exactly why we look at data multiple ways! The top 25% of blog posts with the most ranking keywords have been indexed an average of 149 days, while the bottom 25% have been indexed 74 days — roughly half.
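Here is a sketch of that quartile comparison, under the same hypothetical column names used earlier:

    import pandas as pd

    df = pd.read_csv("blog_ranking_data.csv")  # hypothetical filename
    upper = df["total_ranking_keywords"].quantile(0.75)
    lower = df["total_ranking_keywords"].quantile(0.25)

    top_25_age = df[df["total_ranking_keywords"] >= upper]["days_live"].mean()
    bottom_25_age = df[df["total_ranking_keywords"] <= lower]["days_live"].mean()
    print(top_25_age, bottom_25_age)  # we observed 148.9 vs. 73.8 days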

To be fully sure, let’s again cluster the data into a heatmap to observe where performance falls on the time continuum:

We see a very similar pattern as in our previous analysis: a clustering of top-performing blogs starting at around 100 days.

Time and performance assumptions

You still with me? Good, because we are saying something BIG here. In our observation, it takes between 3 and 5 months for new content to perform in organic search. Or at the very least, mature.

To look at this one final way, I’ve created a scatterplot of only the top 25% of highest performing blogs and compared them to their time indexed:

There are 48 data points on this chart; the blue points represent the top 25% of articles in terms of strongest target keyword ranking position, and the orange points represent the top 25% of articles with the highest number of keyword rankings on their URL. (These can be, and some are, the same URLs.)

Looking at the data a little more closely, we see the following:

90% of the top 25% of highest-performing content took at least 100 days to mature, and only two articles took less than 75 days.

Time and performance conclusion

For those of you just starting a content marketing program, remember that you may not see the full organic potential for your first piece of content until month 3 at the earliest. And, it takes at least a couple months of content production to make a true impact, so you really should wait a minimum of 6 months to look for any sort of results.

In conclusion, we expect new content to take at least 100 days to fully mature.

2. Links

But wait, some of you may be saying. What about links, buddy? Articles build links over time, too!

It stands to reason that, over time, a blog will gain links (and ranking potential). Links matter, and higher-ranking pages gain links at a faster rate. Thus, we risk mistaking correlation for causation if we don’t look at this carefully.

But what none of you know is that, being the terrible SEO that I am, I had no linking strategy for this campaign.

And I mean zero strategy. The average article generated 1.3 links from .5 linking domains.

Nice.

Linking domains vs. target keyword position
PCC: -.022
Relationship: None

Average linking domains to top 25% of articles: .46
Average linking domains to bottom 25% of articles: .46

The one thing consistent across all the articles was a shocking and embarrassing lack of inbound links. This is demonstrated by an insignificant correlation coefficient of -.022. The same goes for the total number of links per URL, with a correlation coefficient of -.029.

These articles appear to have performed primarily on their content rather than inbound links.

(And they certainly would have performed much better with a strong, or any, linking strategy. Nobody is arguing the value of links here.) But mostly…

Shame on me.

Shame. Shame. Shame.

But on a positive note, we were able to generate a more controlled experiment on the effects of time and blog performance. So, don’t fire me just yet?

Note: It would be interesting to pull link quality metrics into the discussion (for the precious few links we did earn) rather than total volume. However, after a cursory look at the data, nothing stood out as being significant.

3. Word count

Content marketers and SEOs love talking about word count. And for good reason: once we collectively agreed that “quality content” was the key to rankings, it stood to reason that longer content would be more comprehensive, and thus do a better job of satisfying searcher intent. So let’s test that theory.

Correlation 1: Target keyword position versus total word count

Will longer articles increase the likelihood of ranking for the keyword you are targeting?

Not in our case. To be sure, let’s run a similar analysis as before.

Word count vs. target keyword position
PCC: .111
Relationship: Negligible

Average word count of top 25% articles: 1,774
Average word count of bottom 25% articles: 1,919

The data shows no impact on rankings based on the length of our articles.

Correlation 2: Total keywords ranking on URL versus word count

One would think that longer content would result in additional ranking keywords, right? Even by accident, you would think that the more related topics you discuss in an article, the more keywords you will rank for. Let’s see if that’s true:

Total keywords ranking on URL vs. word count
PCC: -.074
Relationship: None

Not in this case.

Word count, speculative tangent

So how can it be that so many studies demonstrate higher word counts result in more favorable rankings? Some reconciliation is in order, so allow me to speculate on what I think may be happening in these studies.

  1. Most likely: Measurement techniques. These studies generally look at one factor relative to rankings: average absolute word count based on position. (And there actually isn’t much of a difference in average word count between positions one and ten.) As we are demonstrating in this article, there may be many other factors at play that need to be isolated and tested for correlations in order to get the full picture, such as: time indexed, on-page SEO (to be discussed later), Domain Authority, link profile, and depth/quality of content (also to be discussed later, with MarketMuse as a measure). It’s possible that correlation does not imply causation here, and by using word count averages as the single method of measurement, we may be painting with too broad a brush.
  2. Likely: High-quality content is longer, by nature. We know that “quality content” is discussed in terms of how well a piece satisfies the intent of the reader. In an ideal scenario, you will create content that fully satisfies everything a searcher would want to know about a given topic. Ideally you own the resource center for the topic, and the searcher does not need to revisit SERPs and weave together answers from multiple sources. By nature, this type of comprehensive content is quite lengthy. Long-form content is arguably a byproduct of creating for quality. Cyrus Shepard does a better job of explaining this likelihood here.
  3. Less likely: Long-form threshold. The articles we wrote for this study ranged from just under 1,000 words to nearly 4,000 words. One could consider all of these “long-form content,” and perhaps Google does as well. Perhaps there is a word count threshold that Google uses.

This is all speculation. What we can say for certain is that all of our content was 900 words or more, and we saw no incremental benefit from additional length.

Feel free to disagree with any (or all) of my speculations on the discrepancies between these results, but with the information available, I tend to share Brian Dean’s opinion.

4. MarketMuse

At this point, most of you are familiar with MarketMuse. They have created a number of AI-powered tools that help with content planning and optimization.

We use the Content Optimizer tool, which evaluates the top 20 results for any keyword and generates an outline of all the major topics being discussed in SERPs. This helps you create content that is more comprehensive than your competitors, which can lead to better performance in search.

Based on the competitive landscape, the tool will generate a recommended content score (their proprietary algorithm) that you should hit in order to compete with the pages currently ranking in SERPs.

But… if you’re a competitive fellow, what happens if you want to blow the recommended score out of the water? Do higher scores have an impact on rankings? Does it make a difference if your competition has a very low average score?

We pulled every article’s content score, along with MarketMuse’s recommended scores and the average competitor scores, to answer these questions.

Correlation 1: Overall MarketMuse content score

Does a higher overall content score result in better rankings? Let’s take a look:

Absolute MarketMuse score vs. target keyword position
PCC: .000
Relationship: None

A perfect zero! We weren’t able to beat the system by racking up points. I also checked to see if a higher absolute score would result in a larger number of keywords ranking on the URL — it doesn’t.

Correlation 2: Beating the recommended score

As mentioned, based on the competitive landscape, MarketMuse will generate a recommended content score. What happens if you blow the recommended score out of the water? Do you get bonus points?

In order to calculate this correlation, we pulled the content score percentage attainment and compared it to the target keyword position. For example, if we scored a 30 against a recommended 25, we hit 120% attainment. Let’s see if it matters:

Percentage content score attainment vs. target keyword position
PCC: .028
Relationship: None

No bonus points for doing extra credit!
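In case the attainment math is useful for your own analysis, here is a minimal sketch of the calculation in code. The filename and column names (“content_score”, “recommended_score”, “target_kw_position”) are hypothetical placeholders.

    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("marketmuse_scores.csv")  # hypothetical filename
    # e.g. a score of 30 against a recommended 25 -> 120% attainment
    df["pct_attainment"] = df["content_score"] / df["recommended_score"] * 100

    r, _ = pearsonr(df["pct_attainment"], df["target_kw_position"])
    print(f"PCC: {r:.3f}")  # we observed .028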

Correlation 3: Beating the average competitors’ scores

Okay, if you beat MarketMuse’s recommendations, you don’t get any added benefit, but what if you completely destroy your competitors’ average content scores?

We will calculate this correlation the same way we previously did, with percentage attainment over the average competitor. For example, if we scored a 30 over the average of 10, we hit 300% attainment. Let’s see if that matters:

Percentage attainment over average competitor score versus target KW position
PCC: -.043
Relationship: None

That didn’t work either! Seems that there are no hacks or shortcuts here.

MarketMuse summary

We know that MarketMuse works, but it seems that there are no additional tricks to this tool.

If you regularly hit the recommended score as we did (average 110% attainment, with 81% of blogs hitting 100% attainment or better) and cover the topics prescribed, you should do well. But don’t fixate on competitor scores or blowing the recommended score out of the water. You may just be wasting your time.

Note: We probably would have seen stronger correlations had we intentionally bombed a few MarketMuse scores. Perhaps a test for another day.

5. On-page optimization

Ah, old-school technical SEO. This type of work warms the cockles of a seasoned SEO’s heart. But does it still have a place in our constantly evolving world? Has Google advanced to the point where it doesn’t need technical cues from SEOs to understand what a page is about?

To find out, I have pulled Moz’s on-page optimization score for every article and compared them to the target keywords’ positional rankings:

Let’s take a look at the scatterplot for all the keyword targets.

Now looking at the math:

On-page optimization score vs. target keyword position
PCC: -.384
Relationship: Moderate/strong

Average on-page score for top 25%: 91%
Average on-page score for bottom 25%: 87%

If you have a keen eye, you may have noticed a few strong outliers on the scatterplot. If we remove the three largest outliers, the correlation strengthens to -.435, a strong relationship.

Before we jump to conclusions, let’s look at this data one final way.

Let’s take a look at the percentage of articles with their target keywords ranking 1–10 that also have a 90% on-page score or better. We will compare that number to the percentage of articles ranking outside the top ten that also have a 90% on-page score or better.

If our assumption is correct, we will see a much higher percentage of keywords ranking 1–10 with an on-page score of 90% or better, and a lower number for articles ranking greater than 10.

On-page optimization score by rankings
Percentage of keywords ranking 1–10 with ≥ 90% score: 73.5%
Percentage of keywords ranking >10 with ≥ 90% score: 53.2%
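If you want to reproduce this split on your own content, a rough sketch (assuming a hypothetical 0–100 “on_page_score” column) might look like this:

    import pandas as pd

    df = pd.read_csv("blog_ranking_data.csv")  # hypothetical filename
    top_ten = df[df["target_kw_position"] <= 10]
    below_ten = df[df["target_kw_position"] > 10]

    pct_top = (top_ten["on_page_score"] >= 90).mean() * 100
    pct_below = (below_ten["on_page_score"] >= 90).mean() * 100
    print(f"{pct_top:.1f}% vs. {pct_below:.1f}%")  # we observed 73.5% vs. 53.2%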

This is enough of a hint for me. I’m implementing a 90% minimum on-page score from here on out.

Old school SEOs, rejoice!

6. The competition’s average word count

We won’t put this “word count” argument to bed just yet…

Let’s ask ourselves, “Does it matter how long the average content of the top 20 results is?”

Is there a relationship between the length of your content versus the average competitor?

What if your competitors are writing very short form, and you want to beat them with long-form content?

We will measure this the same way as before, with percentage attainment. For example, if the average word count of the top 20 results for “content marketing agency” is 300, and our piece is 450 words, we hit 150% attainment.

Let’s see if you can “out-verbose” your opponents.

Percentage word count attainment versus target KW position
PCC: .062
Relationship: None

Alright, I’ll put word count to bed now, I promise.

7. Keyword density

You’ve made it to the last analysis. Congratulations! How many cups of coffee have you consumed? No judgment; this report was responsible for entire coffee farms being completely decimated by yours truly.

For selfish reasons, I couldn’t resist the temptation to dispel this ancient tactic of “using target keywords” in blog content. You know what I’m talking about: when someone says “This blog doesn’t FEEL optimized… did you use the target keyword enough?”

There are still far too many people that believe that littering target keywords throughout a piece of content will yield results. And misguided SEO agencies, along with certain SEO tools, perpetuate this belief.

Yoast has a tool in WordPress that some digital marketers live and die by. They don’t think that a blog is complete until Yoast shows the magical green light, indicating that the content has satisfied the majority of its SEO recommendations:

Uh oh, keyword density is too low! Let’s see if that ACTUALLY matters.

Not looking so good, my keyword-stuffing friends! Let’s take a look at the PCC:

Target keyword ranking position vs. Yoast keyword density
PCC: .097
Relationship: None/negligible

Believers would like to see a negative relationship here: as keyword density goes up, the ranking position number goes down (i.e., rankings improve), producing a downward-sloping line.

What we are looking at is a slightly upward-sloping line, which would indicate losing rankings by keyword stuffing — but fortunately not TOO upward sloping, given the low correlation value.
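For reference, keyword density is usually computed as something like (keyword occurrences / total words) * 100. Here is a naive sketch of that calculation; the exact formula Yoast uses may differ, so treat this as an approximation rather than a reimplementation of their tool.

    import re

    def keyword_density(text: str, keyword: str) -> float:
        """Percentage of words that begin an exact occurrence of the keyword phrase."""
        words = re.findall(r"\w+", text.lower())
        phrase = keyword.lower().split()
        hits = sum(1 for i in range(len(words) - len(phrase) + 1)
                   if words[i:i + len(phrase)] == phrase)
        return hits / max(len(words), 1) * 100

    print(keyword_density("Content marketing is hard. Content marketing takes time.",
                          "content marketing"))  # 2 hits / 8 words = 25.0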

Okay, so PLEASE let that be the end of “keyword density.” This practice has been disproven in past studies, as referenced by Zyppy. Let’s confidently put this to bed, forever. Please.

Oh, and just for kicks, the Flesch Reading Ease score has no bearing on rankings either (-.03 correlation). Write to a third grade level, or a college level, it doesn’t matter.

TL;DR (I don’t blame you)

What we learned from our data

  1. Time: It took 100 days or more for an article to fully mature and show its true potential. A content marketing program probably shouldn’t be fully scrutinized until month 5 or 6 at the very earliest.
  2. Links: Links matter, I’m just terrible at generating them. Shame.
  3. Word count: It’s not about the length of the content, in absolute terms or relative to the competition. It’s about what is written and how resourceful it is.
  4. MarketMuse: We have proven that MarketMuse works as it prescribes, but there is no added benefit to breaking records.
  5. On-page SEO: Our data demonstrates that it still matters. We all still have a job.
  6. Competitor content length: We weren’t successful at blowing our competitors out of the water with longer content.
  7. Keyword density: Just stop. Join us in modern times. The water is warm.

In conclusion, some reasonable guidance we agree on is:

Wait at least 100 days to evaluate the performance of your content marketing program, write comprehensive content, and make sure your on-page SEO score is 90%+.

Oh, and build links. Unlike me. Shame.

Now go take a nap.


Local Search Ranking Factors 2018: Local Today, Key Takeaways, and the Future

Posted by Whitespark

In the past year, local SEO has run at a startling and near-constant pace of change. From an explosion of new Google My Business features to an ever-increasing emphasis on the importance of reviews, it’s almost too much to keep up with. In today’s Whiteboard Friday, we welcome our friend Darren Shaw to explain what local is like today, dive into the key takeaways from his 2018 Local Search Ranking Factors survey, and offer us a glimpse into the future according to the local SEO experts.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans. I’m Darren Shaw from Whitespark, and today I want to talk to you about the local search ranking factors. So this is a survey that David Mihm has run for the past like 10 years. Last year, I took it over, and it’s a survey of the top local search practitioners, about 40 of them. They all contribute their answers, and I aggregate the data and say what’s driving local search. So this is what the opinion of the local search practitioners is, and I’ll kind of break it down for you.

Local search today

So these are the results of this year’s survey. We had Google My Business factors at about 25%. That was the biggest piece of the pie. We have review factors at 15%, links at 16%, on-page factors at 14%, behavioral at 10%, citations at 11%, personalization and social at 6% and 3%. So that’s basically the makeup of the local search algorithm today, based on the opinions of the people that participated in the survey.

The big story this year is Google My Business. Google My Business factors are way up, compared to last year, a 32% increase in Google My Business signals. I’ll talk about that a little bit more over in the takeaways. Review signals are also up, so more emphasis on reviews this year from the practitioners. Citation signals are down again, and that makes sense. They continue to decline I think for a number of reasons. They used to be the go-to factor for local search. You just built out as many citations as you could. Now the local search algorithm is so much more complicated and there’s so much more to it that it’s being diluted by all of the other factors. Plus it used to be a real competitive difference-maker. Now it’s not, because everyone is pretty much getting citations. They’re considered table stakes now. By seeing a drop here, it doesn’t mean you should stop doing them. They’re just not the competitive difference-maker they used to be. You still need to get listed on all of the important sites.

Key takeaways

All right, so let’s talk about the key takeaways.

1. Google My Business

The real story this year was Google My Business, Google My Business, Google My Business. Everyone in the comments was talking about the benefits they’re seeing from investing in a lot of these new features that Google has been adding.

Google has been adding a ton of new features lately — services, descriptions, Google Posts, Google Q&A. There’s a ton of stuff going on in Google My Business now that allows you to populate Google My Business with a ton of extra data. So this was a big one.

✓ Take advantage of Google Posts

Everyone talked about Google Posts, how they’re seeing Google Posts driving rankings. There are a couple of things there. One is the semantic content that you’re providing Google in a Google post is definitely helping Google associate those keywords with your business. Engagement with Google Posts as well could be driving rankings up, and maybe just being an active business user continuing to post stuff and logging in to your account is also helping to lift your business entity and improve your rankings. So definitely, if you’re not on Google Posts, get on it now.

If you search for your category, you’ll see a ton of businesses are not doing it. So it’s also a great competitive difference-maker right now.

✓ Seed your Google Q&A

Google Q&A, a lot of businesses are not even aware this exists. There’s a Q&A section now. Your customers are often asking questions, and they’re being answered by not you. So it’s valuable for you to get in there and make sure you’re answering your questions and also seed the Q&A with your own questions. So add all of your own content. If you have a frequently asked questions section on your website, take that content and put it into Google Q&A. So now you’re giving lots more content to Google.

✓ Post photos and videos

Photos and videos, continually post photos and videos, maybe even encourage your customers to do that. All of that activity is helpful. A lot of people don’t know that you can now post videos to Google My Business. So get on that if you have any videos for your business.

✓ Fill out every field

There are so many new fields in Google My Business. If you haven’t edited your listing in a couple of years, there’s a lot more stuff in there that you can now populate and give Google more data about your business. All of that really leads to engagement. All of these extra engagement signals that you’re now feeding Google, from being a business owner that’s engaged with your listing and adding stuff and from users, you’re giving them more stuff to look at, click on, and dwell on your listing for a longer time, all that helps with your rankings.

2. Reviews

✓ Get more Google reviews

Reviews continue to increase in importance in local search, so, obviously, getting more Google reviews. It used to be a bit more of a competitive difference-maker. It’s becoming more and more table stakes, because everybody seems to be having lots of reviews. So you definitely want to make sure that you are competing with your competition on review count and lots of high-quality reviews.

✓ Keywords in reviews

Getting keywords in reviews, so rather than just asking for a review, it’s useful to ask your customers to mention what service they received or whatever, so you can get those keywords in your reviews.

✓ Respond to reviews (users get notified now!)

Responding to reviews. Google recently started notifying users that if the owner has responded to you, you’ll get an email. So all of that is really great, and those responses, it’s another signal to Google that you’re an engaged business.

✓ Diversify beyond Google My Business for reviews

Diversify. Don’t just focus on Google My Business. Look at other sites in your industry that are prominent review sites. You can find them if you just look for your own business name plus reviews, if you search that in Google, you’re going to see the sites that Google is saying are important for your particular business.

You can also find out like what are the sites that your competitors are getting reviews on. Then if you just do a search like keyword plus city, like “lawyers + Denver,” you might find sites that are important for your industry as well that you should be listed on. So check out a couple of your keywords and make sure you’re getting reviews on more sites than just Google.

3. Links

Then links, of course, links continue to drive local search. A lot of people in the comments talked about how a handful of local links have been really valuable. This is a great competitive difference-maker, because a lot of businesses don’t have any links other than citations. So when you get a few of these, it can really have an impact.

✓ From local industry sites and sponsorships

They really talk about focusing on local-specific sites and industry-specific sites. So you can get a lot of those from sponsorships. They’re kind of the go-to tactic. If you do a search for intitle:sponsors plus your city name, you’re going to find a lot of sites that are listing their sponsors, and those are opportunities for you, in your city, that you could sponsor that event as well or that organization and get a link.

The future!

All right. So I also asked in the survey: Where do you see Google going in the future? We got a lot of great responses, and I tried to summarize that into three main themes here for you.

1. Keeping users on Google

This is a really big one. Google does not want to send its users to your website to get the answer. Google wants to have the answer right on Google so that they don’t have to click. It’s this zero-click search result. So you see Rand Fishkin talking about this. This has been happening in local for a long time, and it’s really amplified with all of these new features Google has been adding. They want to have all of your data so that they don’t have to send users to find it somewhere else. Then that means in the future less traffic to your website.

So Mike Blumenthal and David Mihm also talk about Google as your new homepage, and this concept is like branded search.

  • What does your branded search look like?
  • So what sites are you getting reviews on?
  • What does your knowledge panel look like?

Make that all look really good, because Google doesn’t want to send people to your website.

2. More emphasis on behavioral signals

David Mihm is a strong voice in this. He talks about how Google is trying to diversify how they rank businesses based on what’s happening in the real world. They’re looking for real-world signals that actual humans care about this business and they’re engaging with this business.

So there’s a number of things that they can do to track that — so branded search, how many people are searching for your brand name, how many people are clicking to call your business, driving directions. This stuff is all kind of hard to manipulate, whereas you can add more links, you can get more reviews. But this stuff, this is a great signal for Google to rely on.

Engagement with your listing, engagement with your website, and actual humans in your business. If you’ve seen on the knowledge panel sometimes for brick-and-mortar business, it will be like busy times. They know when people are actually at your business. They have counts of how many people are going into your business. So that’s a great signal for them to use to understand the prominence of your business. Is this a busy business compared to all the other ones in the city?

3. Google will monetize everything

Then, of course, a trend to monetize as much as they can. Google is a publicly traded company. They want to make as much money as possible. They’re on a constant growth path. So there are a few things that we see coming down the pipeline.

Local service ads are expanding across the country and globally and in different industries. So this is like a paid program. You have to apply to get into it, and then Google takes a cut of leads. So if you are a member of this, then Google will send leads to you. But you have to be verified to be in there, and you have to pay to be in there.

Then taking a cut from bookings, you can now book directly on Google for a lot of different businesses. If you think about Google Flights and Google Hotels, Google is looking for a way to monetize all of this local search opportunity. That’s why they’re investing heavily in local search so they can make money from it. So seeing more of these kinds of features rolling out in the future is definitely coming. Transactions from other things. So if I did book something, then Google will take a cut for it.

So that’s the future. That’s sort of the news of the local search ranking factors this year. I hope it’s been helpful. If you have any questions, just leave some comments and I’ll make sure to respond to them all. Thanks, everybody.

Video transcription by Speechpad.com


If you missed our recent webinar on the Local Search Ranking Factors survey with Darren Shaw and Dr. Pete, don’t worry! You can still catch the recording here:

Check out the webinar

You’ll be in for a jam-packed hour of deeper insights and takeaways from the survey, as well as some great audience-contributed Q&A.


Announcing the 2018 Local Search Ranking Factors Survey

Posted by Whitespark

It has been another year (and a half) since the last publication of the Local Search Ranking Factors, and local search continues to see significant growth and change. The biggest shift this year is happening in Google My Business signals, but we’re also seeing an increase in the importance of reviews and continued decreases in the importance of citations.

Check out the full survey!

Huge growth in Google My Business

Google has been adding features to GMB at an accelerated rate. They see the revenue potential in local, and now that they have properly divorced Google My Business from Google+, they have a clear runway to develop (and monetize) local. Here are just some of the major GMB features that have been released since the publication of the 2017 Local Search Ranking Factors:

  • Google Posts available to all GMB users
  • Google Q&A
  • Website builder
  • Services
  • Messaging
  • Videos
  • Videos in Google Posts

These features are creating shifts in the importance of factors that are driving local search today. This year has seen the most explosive growth in GMB-specific factors in the history of the survey. GMB signals now make up 25% of the local pack/finder pie chart.

GMB-specific features like Google Posts, Google Q&A, and image/video uploads are frequently mentioned as ranking drivers in the commentary. Many businesses are not yet investing in these aspects of local search, so these features are currently a competitive advantage. You should get on these before everyone is doing it.

Here’s your to-do list:

  1. Start using Google posts NOW. At least once per week, but preferably a few times per week. Are you already pushing out posts to Facebook, Instagram, or Twitter? Just use the same, lightly edited, content on Google Posts. Also, use calls to action in your posts to drive direct conversions.
  2. Seed the Google Q&A with your own questions and answers. Feed that hyper-relevant, semantically rich content to Google. Relevance FTW.
  3. Regularly upload photos and videos. (Did you know that you can upload videos to GMB now?)
  4. Make sure your profile is 100% complete. If there is an empty field in GMB, fill it. If you haven’t logged into your GMB account in a while, you might be surprised to see all the new data points you can add to your listing.

Why spend your time on these activities? Besides the potential relevance boost you’ll get from the additional content, you’re also sending valuable engagement signals. Regularly logging into your listing and providing content shows Google that you’re an active and engaged business owner that cares about your listing, and the local search experts are speculating that this is also providing ranking benefits. There’s another engagement angle here too: user engagement. Provide more content for users to engage with and they’ll spend more time on your listing clicking around and sending those helpful behavioral signals to Google.

Reviews on the rise

Review signals have also seen continued growth in importance over last year.

Review signals were 10.8% in 2015, so over the past 3 years, we’ve seen a 43% increase in the importance of review signals:

Many practitioners talked about the benefits they’re seeing from investing in reviews. I found David Mihm’s comments on reviews particularly noteworthy. When asked “What are some strategies/tactics that are working particularly well for you at the moment?”, he responded with:

“In the search results I look at regularly, I continue to see reviews playing a larger and larger role. Much as citations became table stakes over the last couple of years, reviews now appear to be on their way to becoming table stakes as well. In mid-to-large metro areas, even industries where ranking in the 3-pack used to be possible with a handful of reviews or no reviews, now feature businesses with dozens of reviews at a minimum — and many within the last few months, which speaks to the importance of a steady stream of feedback.

Whether the increased ranking is due to review volume, keywords in review content, or the increased clickthrough rate those gold stars yield, I doubt we’ll ever know for sure. I just know that for most businesses, it’s the area of local SEO I’d invest the most time and effort into getting right — and done well, should also have a much more important flywheel effect of helping you build a better business, as the guys at GatherUp have been talking about for years.”

Getting keywords in your reviews is a factor that has also risen. In the 2017 survey, this factor ranked #26 in the local pack/finder factors. It is now coming in at #14.

I know this is the Local Search Ranking Factors, and we’re talking about what drives rankings, but you know what’s better than rankings? Conversions. Yes, reviews will boost your rankings, but reviews are so much more valuable than that because a ton of positive reviews will get people to pick up the phone and call your business, and really, that’s the goal. So, if you’re not making the most of reviews yet, get on it!

A quick to-do list for reviews would be:

  1. Work on getting more Google reviews (obviously). Ask every customer.
  2. Encourage keywords in the reviews by asking customers to mention the specific service or product in their review.
  3. Respond to every review. (Did you know that Google now notifies the reviewer when the owner responds?)
  4. Don’t only focus on reviews. Actively solicit direct customer feedback as well so you can mark it up in schema/JSON and get stars in the search results (see the sketch after this list).
  5. Once you’re killing it on Google, diversify and get reviews on the other important review sites for your industry (but also continue to send customers to Google).
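As a rough illustration of the markup idea in step 4, here is a minimal sketch of schema.org review markup expressed as JSON-LD, built here as a Python dict. The business name and rating values are made up, and eligibility rules for review stars change over time, so validate anything like this against Google’s structured data documentation before shipping it.

    import json

    review_markup = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Plumbing Co.",  # hypothetical business
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.8",
            "reviewCount": "32",
        },
        "review": [{
            "@type": "Review",
            "author": {"@type": "Person", "name": "Jane D."},
            "reviewRating": {"@type": "Rating", "ratingValue": "5"},
            "reviewBody": "Fast, friendly service.",
        }],
    }

    # Drop the output into your page's <head> or <body>.
    print(f'<script type="application/ld+json">{json.dumps(review_markup)}</script>')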

For a more in-depth discussion of review strategy, please see the blog post version of my 2018 MozCon presentation, “How to Convert Local Searchers Into Customers with Reviews.”

Meh, links

To quote Gyi Tsakalakis: “Meh, links.” All other things being equal, links continue to be a key differentiator in local search. It makes sense. Once you have a complete and active GMB listing, your citations squared away, a steady stream of reviews coming in, and solid content on your website, the next step is links. The trouble is, links are hard, but that’s also what makes them such a valuable competitive differentiator. They ARE hard, so when you get quality links they can really help to move the needle.

When asked, “What are some strategies/tactics that are working particularly well for you at the moment?” Gyi responded with:

“Meh, links. In other words, topically and locally relevant links continue to work particularly well. Not only do these links tend to improve visibility in both local packs and traditional results, they’re also particularly effective for improving targeted traffic, leads, and customers. Find ways to earn links on the sites your local audience uses. These typically include local news, community, and blog sites.”

Citations?

Let’s make something clear: citations are still very valuable and very important.

Ok, with that out of the way, let’s look at what’s been happening with citations over the past few surveys:

I think this decline is related to two things:

  1. As local search gets more complex, additional signals are being factored into the algorithm and this dilutes the value that citations used to provide. There are just more things to optimize for in local search these days.
  2. As local search gains more widespread adoption, more businesses are getting their citations consistent and built out, and so citations become less of a competitive difference maker than they were in the past.

Yes, we are seeing citations dropping in significance year after year, but that doesn’t mean you don’t need them. Quite the opposite, really. If you don’t get them, you’re going to have a bad time. Google looks to your citations to help understand how prominent your business is. A well established and popular business should be present on the most important business directories in their industry, and if it’s not, that can be a signal of lower prominence to Google.

The good news is that citations are one of the easiest items to check off your local search to-do list. There are dozens of services and tools out there to help you get your business listed and accurate for only a few hundred dollars. Here’s what I recommend:

  1. Ensure your business is listed, accurate, complete, and duplicate-free on the top 10-15 most important sites in your industry (including the primary data aggregators and industry/city-specific sites).
  2. Build citations (but don’t worry about duplicates and inconsistencies) on the next top 30 to 50 sites.

Google has gotten much smarter about citation consistency than they were in the past. People worry about it much more than they need to. An incorrect or duplicate listing on an insignificant business listing site is not going to negatively impact your ability to rank.

You could keep building more citations beyond the top 50, and it won’t hurt, but the law of diminishing returns applies here. As you get deeper into the available pool of citation sites, the quality of these sites decreases, and the impact they have on your local search decreases with it. That said, I have heard from dozens of agencies that swear that “maxing out” all available citation opportunities seems to have a positive impact on their local search, so your mileage may vary. ¯\_(ツ)_/¯

The future of local search

One of my favorite questions in the commentary section is “Comments about where you see Google is headed in the future?” The answers here, from some of the best minds in local search, are illuminating. The three common themes I pulled from the responses are:

  1. Google will continue providing features and content so that they can serve the answers to most queries right in the search results and send fewer clicks to websites. Expect to see your traffic from local results to your website decline, but don’t fret. You want those calls, messages, and driving directions more than you want website traffic anyway.
  2. Google will increase their focus on behavioral signals for rankings. What better way is there to assess the real-world popularity of a business than by using signals sent by people in the real world? We can speculate that Google is using some of the following signals right now, and will continue to emphasize and evolve behavioral ranking methods:
    1. Searches for your brand name.
    2. Clicks to call your business.
    3. Requests for driving directions.
    4. Engagement with your listing.
    5. Engagement with your website.
    6. Credit card transactions.
    7. Actual human foot traffic in brick-and-mortar businesses.
  3. Google will continue monetizing local in new ways. Local Services Ads are rolling out to more and more industries and cities, ads are appearing right in local panels, and you can book appointments right from local packs. Google isn’t investing so many resources into local out of the goodness of their hearts. They want to build the ultimate resource for instant information on local services and products, and they want to use their dominant market position to take a cut of the sales.

And that does it for my summary of the survey results. A huge thank you to each of the brilliant contributors for giving their time and sharing their knowledge. Our understanding of local search is what it is because of your excellent work and contributions to our industry.

There is much more to read and learn in the actual resource itself, especially in all the comments from the contributors, so go dig into it:

Click here for the full results!


Google confirms ‘small’ search ranking algorithm update this past week

Did you notice any changes to your website’s traffic or rankings in Google?



Please visit Search Engine Land for the full article.



Ranking the 6 Most Accurate Keyword Difficulty Tools

Posted by Jeff_Baker

In January of 2018, Brafton began a massive organic keyword targeting campaign, amounting to over 90,000 words of published blog content.

Did it work?

Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process, published earlier this year, we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.

But we got a whole lot more than just traffic.

From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.

As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece…

How well keyword research tools can predict where you will rank.

A little background

We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.

We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.

With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:

This image links to an example of a content brief Brafton delivers to writers.

Between mid-January and late May, we ended up writing 55 blog posts, each targeting a unique keyword. 50 of those blog posts ended up ranking in the top 100 of Google results.

We then paused and took a snapshot of each URL’s Google ranking position for its target keyword and its corresponding organic difficulty scores from Moz, SEMrush, Ahrefs, SpyFu, and KW Finder. We also took the PPC competition scores from the Keyword Planner Tool.

Our intention was to draw statistical correlations between our keyword rankings and each tool’s organic difficulty score. With this data, we were able to report on how accurately each tool predicted where we would rank.

This study is uniquely scientific, in that each blog had one specific keyword target. We optimized the blog content specifically for that keyword. Therefore every post was created in a similar fashion.

Do keyword research tools actually work?

We use them every day, on faith. But has anyone ever actually asked, or better yet, measured how well keyword research tools report on the organic difficulty of a given keyword?

Today, we are doing just that. So let’s cut through the chit-chat and get to the results…

This image ranks each of the 6 keyword research tools, in order, Moz leads with 4.95 stars out of 5, followed by KW Finder, SEMrush, AHREFs, SpyFu, and lastly Keyword Planner Tool.

While Moz wins top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using Google Keyword Planner Tool).

As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.

Let’s dig in!

The Pearson Correlation Coefficient

Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.

In order to understand the relationship between two variables, our first step is to create a scatter plot chart.

Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.

This image shows a scatter plot for Moz's keyword difficulty scores versus our keyword rankings. In general, the data clusters fairly tight around the regression line.

We start with a visual inspection of the data to determine if there is a linear relationship between the two variables. Ideally for each tool, you would expect to see the X variable (keyword ranking) increase proportionately with the Y variable (organic difficulty). Put simply, if the tool is working, the higher the keyword difficulty, the less likely you will rank in a top position, and vice-versa.

This chart is all fine and dandy; however, it’s not very scientific. This is where the Pearson Correlation Coefficient (PCC) comes into play.

The PCC measures the strength of a linear relationship between two variables. The output of the PCC is a score ranging from +1 to -1. A score greater than zero indicates a positive relationship: as one variable increases, the other increases as well. A score less than zero indicates a negative relationship: as one variable increases, the other decreases. Both scenarios indicate some degree of linear relationship between the two variables. The stronger the relationship between the two variables, the closer the PCC will be to +1 or -1. Scores near zero indicate a weak or no relationship.

Phew. Still with me?

So each of these scatter plots will have a corresponding PCC score that will tell us how well each tool predicted where we would rank, based on its keyword difficulty score.
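If you’d like to compute these yourself from the raw data, a minimal sketch might look like this. The CSV filename and column names are hypothetical placeholders for however you export your rankings and each tool’s difficulty scores.

    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("keyword_difficulty_scores.csv")  # hypothetical filename
    tools = ["moz", "kw_finder", "semrush", "ahrefs", "spyfu", "keyword_planner"]

    for tool in tools:
        matched = df.dropna(subset=[tool])             # keywords a tool missed are left as NaN
        r, p = pearsonr(matched[tool], matched["ranking_position"])
        print(f"{tool}: PCC {r:.3f}, p {p:.3f}, matched {len(matched)}/{len(df)}")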

We will use the following table from statisticshowto.com to interpret the PCC score for each tool:

Coefficient Correlation R Score Key

+.70 or higher: Very strong positive relationship
+.40 to +.69: Strong positive relationship
+.30 to +.39: Moderate positive relationship
+.20 to +.29: Weak positive relationship
+.01 to +.19: No or negligible relationship
0: No relationship [zero correlation]
-.01 to -.19: No or negligible relationship
-.20 to -.29: Weak negative relationship
-.30 to -.39: Moderate negative relationship
-.40 to -.69: Strong negative relationship
-.70 or lower: Very strong negative relationship

In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.

These scatter plots show three types of correlations: positive, negative, and no correlation. Positive correlations have data plots that move up and to the right. Negative correlations move down and to the right. No correlation has data that follows no linear pattern

And here are some examples of charts with their correlating PCC scores (r):

These scatter plots show what different PCC values look like visually. The tighter the grouping of data around the regression line, the higher the PCC value.

The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.

That was the tough part – you still with me? Great, now let’s look at each tool’s results.

Test 1: The Pearson Correlation Coefficient

Now that we’ve all had our statistics refresher course, we will take a look at the results, in order of performance. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.

In order of performance:

#1: Moz

This image shows a scatter plot for Moz's keyword difficulty scores versus our keyword rankings. In general, the data clusters fairly tight around the regression line.

Revisiting Moz’s scatter plot, we observe a tight grouping of results relative to the regression line with few moderate outliers.

Moz Organic Difficulty Predictability
PCC: 0.412
P-val: .003 (P<0.05)
Relationship: Strong
% Keywords Matched: 100.00%

Moz came in first with the highest PCC of .412. As an added bonus, Moz grabs keyword difficulty data in real time, rather than from a fixed database. This means you can get a difficulty score for any keyword.

In other words, Moz was able to generate keyword difficulty scores for 100% of the 50 keywords studied.

#2: SpyFu

This image shows a scatter plot for SpyFu's keyword difficulty scores versus our keyword rankings. The plot is similar looking to Moz's, with a few larger outliers.

Visually, SpyFu shows a fairly tight clustering amongst low difficulty keywords, and a couple moderate outliers amongst the higher difficulty keywords.

SpyFu Organic Difficulty Predictability

  • PCC: 0.405
  • P-val: .01 (P<0.05)
  • Relationship: Strong
  • % Keywords Matched: 80.00%

SpyFu came in right under Moz with a PCC 1.7% weaker (.405). However, the tool ran into the largest issue with keyword matching, with only 40 of 50 keywords producing keyword difficulty scores.

#3: SEMrush

This image shows a scatter plot for SEMrush's keyword difficulty scores versus our keyword rankings. The data has a significant amount of outliers relative to the regression line.

SEMrush would certainly benefit from a couple mulligans (a second chance to perform an action). The Correlation Coefficient is very sensitive to outliers, which pushed SEMrush’s score down to third (.364).

SEMrush Organic Difficulty Predictability

  • PCC: 0.364
  • P-val: .01 (P<0.05)
  • Relationship: Moderate
  • % Keywords Matched: 92.00%

Further complicating the research process, only 46 of 50 keywords had keyword difficulty scores associated with them, and many of those had to be found through SEMrush’s “phrase match” feature individually, rather than through the difficulty tool.

This made digging around for the data a more laborious process.

#4: KW Finder

This image shows a scatter plot for KW Finder's keyword difficulty scores versus our keyword rankings. The data also has a significant amount of outliers relative to the regression line.

KW Finder definitely could have benefitted from more than a few mulligans with numerous strong outliers, coming in right behind SEMrush with a score of .360.

KW Finder Organic Difficulty Predictability

  • PCC: 0.360
  • P-val: .01 (P<0.05)
  • Relationship: Moderate
  • % Keywords Matched: 100.00%

Fortunately, the KW Finder tool had a 100% match rate without any trouble digging around for the data.

#5: Ahrefs

This image shows a scatter plot for Ahrefs' keyword difficulty scores versus our keyword rankings. The data shows tight clustering amongst low difficulty score keywords, and a wide distribution amongst higher difficulty scores.

Ahrefs comes in fifth by a large margin at .316, just barely clearing the threshold for a "moderate relationship."

Ahrefs Organic Difficulty Predictability

  • PCC: 0.316
  • P-val: .03 (P<0.05)
  • Relationship: Moderate
  • % Keywords Matched: 100.00%

On a positive note, the tool seems to be very reliable with low difficulty scores (notice the tight clustering for low difficulty scores), and matched all 50 keywords.

#6: Google Keyword Planner Tool

This image shows a scatter plot for Google Keyword Planner Tool's keyword difficulty scores versus our keyword rankings. The data shows randomly distributed plots with no linear relationship.

Before you ask, yes, SEO companies still use the paid competition figures from Google’s Keyword Planner Tool (and other tools) to assess organic ranking potential. As you can see from the scatter plot, there is in fact no linear relationship between the two variables.

Google Keyword Planner Tool Organic Difficulty Predictability

  • PCC: 0.045
  • P-val: Statistically insignificant/no linear relationship
  • Relationship: Negligible/None
  • % Keywords Matched: 88.00%

SEO agencies still using KPT for organic research (you know who you are!) — let this serve as a warning: You need to evolve.

Test 1 summary

For scoring, we will use a ten-point scale and score every tool relative to the highest-scoring competitor. For example, if the second highest score is 98% of the highest score, the tool will receive a 9.8. As a reminder, here are the results from the PCC test:

This bar chart shows the final PCC values for the first test, summarized.

And the resulting scores are as follows:

PCC Test scores:

  • Moz: 10
  • SpyFu: 9.8
  • SEMrush: 8.8
  • KW Finder: 8.7
  • Ahrefs: 7.7
  • KPT: 1.1

Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
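For anyone replicating the scoring at home, the scaling is simply each tool's PCC divided by the best PCC, multiplied by ten. A quick sketch using the values above, with scores rounded to one decimal:

```python
# Scale each tool's PCC to a 10-point scale relative to the top performer.
pcc = {
    "Moz": 0.412,
    "SpyFu": 0.405,
    "SEMrush": 0.364,
    "KW Finder": 0.360,
    "Ahrefs": 0.316,
    "KPT": 0.045,
}

best = max(pcc.values())
scores = {tool: round(10 * value / best, 1) for tool, value in pcc.items()}
print(scores)  # {'Moz': 10.0, 'SpyFu': 9.8, 'SEMrush': 8.8, ...}
```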

Test 2: Adjusted Pearson Correlation Coefficient

Let’s call this the “Mulligan Round.” In this round, assuming that sometimes things just go haywire and a tool flat-out misses, we will remove the three most egregious outliers from each tool’s dataset.
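The study doesn't prescribe an exact mechanism for picking the outliers, but if you want to automate the same idea on your own data, one reasonable sketch is to drop the three points furthest from the regression line and recompute the PCC (numpy and scipy assumed; the data below is made up):

```python
# Sketch of the "mulligan round": drop the three points furthest from the
# regression line, then recompute the PCC. The outlier-selection rule here is
# an assumption, not necessarily the exact method used in the study.
import numpy as np
from scipy.stats import pearsonr, linregress

difficulty = np.array([22, 35, 41, 48, 55, 63, 70, 30, 52, 66], dtype=float)
ranking = np.array([3, 2, 7, 12, 9, 18, 25, 40, 5, 11], dtype=float)  # made-up data

fit = linregress(difficulty, ranking)
residuals = np.abs(ranking - (fit.slope * difficulty + fit.intercept))

keep = np.argsort(residuals)[:-3]  # indices of all but the 3 worst outliers
original_r, _ = pearsonr(difficulty, ranking)
adjusted_r, _ = pearsonr(difficulty[keep], ranking[keep])
print(f"original r = {original_r:.3f}, adjusted r = {adjusted_r:.3f}")
```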

Here are the adjusted results for the handicap round:

Adjusted scores (3 outliers removed):

  • SpyFu: adjusted PCC 0.527 (difference: +0.122)
  • SEMrush: adjusted PCC 0.515 (difference: +0.150)
  • Moz: adjusted PCC 0.514 (difference: +0.101)
  • Ahrefs: adjusted PCC 0.478 (difference: +0.162)
  • KW Finder: adjusted PCC 0.470 (difference: +0.110)
  • Keyword Planner Tool: adjusted PCC 0.189 (difference: +0.144)

As noted in the original PCC test, some of these tools really took a big hit with major outliers. Specifically, Ahrefs and SEMrush benefitted the most from their outliers being removed, gaining .162 and .150 respectively to their scores, while Moz benefited the least from the adjustments.

For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.

Here are the updated scores at the end of round two:

  • SpyFu: PCC Test 9.8, Adjusted PCC 10, Total 19.8
  • Moz: PCC Test 10, Adjusted PCC 9.7, Total 19.7
  • SEMrush: PCC Test 8.8, Adjusted PCC 9.8, Total 18.6
  • KW Finder: PCC Test 8.7, Adjusted PCC 8.9, Total 17.6
  • Ahrefs: PCC Test 7.7, Adjusted PCC 9.1, Total 16.8
  • KPT: PCC Test 1.1, Adjusted PCC 3.6, Total 4.7

SpyFu takes the lead! Now let’s jump into the final round of statistical tests.

Test 3: Resampling

Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.

Big thanks to Russ Jones, who put together an entirely different model that answers the question: “What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?”

He randomly selected two keywords from the list, along with their associated difficulty scores.

Let’s assume one tool says the difficulties are 30 and 60, respectively. What is the likelihood that the article targeting the score-30 keyword ranks higher than the article targeting the score-60 keyword? He then repeated the same test 1,000 times.

He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:

Resampling (% guessed correctly):

  • Moz: 62.2%
  • Ahrefs: 61.2%
  • SEMrush: 60.3%
  • KW Finder: 58.9%
  • SpyFu: 54.3%
  • KPT: 45.9%

As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps make more educated decisions than guessing.

Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
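If you'd like to run a similar check against your own keyword list, here is a rough sketch of the resampling procedure described above. The data layout is hypothetical, and this is not the actual code used for the study.

```python
# Rough sketch of the resampling test: repeatedly pick two keywords at random
# and check whether the tool's lower difficulty score corresponds to the
# better (lower-numbered) ranking position. Data layout is hypothetical.
import random

def resample_accuracy(keywords, trials=1000, seed=42):
    """keywords: list of dicts like {"difficulty": 30, "rank": 4}.
    Entries with missing difficulty scores should be filtered out beforehand."""
    rng = random.Random(seed)
    correct = attempts = 0
    while attempts < trials:
        a, b = rng.sample(keywords, 2)
        # Skip pairs with tied rankings (as the study did) or tied difficulty
        # scores (ambiguous either way).
        if a["rank"] == b["rank"] or a["difficulty"] == b["difficulty"]:
            continue
        attempts += 1
        easier, harder = sorted((a, b), key=lambda k: k["difficulty"])
        if easier["rank"] < harder["rank"]:  # the easier keyword ranks better
            correct += 1
    return correct / attempts

# Example with made-up data:
data = [{"difficulty": d, "rank": r} for d, r in [(20, 2), (35, 6), (50, 14), (65, 30)]]
print(f"{resample_accuracy(data):.1%} guessed correctly (50% = coin flip)")
```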

In order to score this test, we need to use 50% as the baseline (equivalent of a coin flip, or zero points), and scale each tool relative to how much better it performed over a coin flip, with the top scorer receiving ten points.

For example, Ahrefs guessed correctly 11.2 percentage points more often than a coin flip, which is 8.2% less than Moz's 12.2 points over a coin flip, giving Ahrefs a score of 9.2.

The updated scores are as follows:

  • Moz: PCC Test 10, Adjusted PCC 9.7, Resampling 10, Total 29.7
  • SEMrush: PCC Test 8.8, Adjusted PCC 9.8, Resampling 8.4, Total 27
  • Ahrefs: PCC Test 7.7, Adjusted PCC 9.1, Resampling 9.2, Total 26
  • KW Finder: PCC Test 8.7, Adjusted PCC 8.9, Resampling 7.3, Total 24.9
  • SpyFu: PCC Test 9.8, Adjusted PCC 10, Resampling 3.5, Total 23.3
  • KPT: PCC Test 1.1, Adjusted PCC 3.6, Resampling -.4, Total .7

So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which performed exceptionally well in the first two tests (albeit only returning results for 80% of the tested keywords), then fell flat on the final test.

Finally, we need to make some usability adjustments.

Usability Adjustment 1: Keyword Matching

A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.

To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:

  1. You have to use another tool to get the data, which defeats the entire point of using the original tool.
  2. You miss an opportunity to rank for a high-value keyword.

Neither scenario is good, therefore we developed a penalty system. For each 10% match rate under 100%, we deducted a single point from the final score, with a maximum deduction of 5 points. For example, if a tool matched 92% of the keywords, we would deduct .8 points from the final score.

One may argue that this penalty is actually too lenient considering the significance of the two unideal scenarios outlined above.
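Put another way, the deduction is one tenth of a point per percentage point of missing coverage, capped at five points. A quick sketch:

```python
def match_rate_penalty(match_rate_pct: float) -> float:
    """One point deducted per 10% of keywords a tool failed to match, capped at 5."""
    return min((100 - match_rate_pct) / 10, 5)

print(match_rate_penalty(92))  # 0.8
print(match_rate_penalty(80))  # 2.0
```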

The penalties are as follows:

  • KW Finder: 100% match rate, no penalty
  • Ahrefs: 100% match rate, no penalty
  • Moz: 100% match rate, no penalty
  • SEMrush: 92% match rate, -.8 penalty
  • Keyword Planner Tool: 88% match rate, -1.2 penalty
  • SpyFu: 80% match rate, -2 penalty

Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were not found in its keyword difficulty tool, but rather by manually digging through the phrase match tool. We will give them a pass, but with a stern warning!

Usability Adjustment 2: Reliability

I told you we would come back to this! Revisiting the second test in which we threw away the three strongest outliers that negatively impacted each tool’s score, we will now make adjustments.

In real life, there are no mulligans. In real life, each of those three blog posts that were thrown out represented a significant monetary and time investment. Therefore, when a tool has a major blunder, the result can be a total waste of time and resources.

For that reason, we will impose a slight penalty on those tools that benefited the most from their handicap.

We will use the level of PCC improvement to evaluate how much a tool benefitted from removing its outliers. In doing so, we will be rewarding the tools that were the most consistently reliable. As a reminder, the amounts each tool benefitted were as follows:

  • Ahrefs: +0.162
  • SEMrush: +0.150
  • Keyword Planner Tool: +0.144
  • SpyFu: +0.122
  • KW Finder: +0.110
  • Moz: +0.101

In calculating the penalty, we scored each of the tools relative to the top performer, giving the top performer zero penalty and imposing penalties based on how much additional benefit the tools received over the most reliable tool, on a scale of 0–100%, with a maximum deduction of 5 points.

So if a tool received twice the benefit of the top performing tool, it would have had a 100% benefit, receiving the maximum deduction of 5 points. If another tool received a 20% benefit over the most reliable tool, it would get a 1-point deduction. And so on.
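Expressed as code, the reliability penalty works out to five points multiplied by each tool's extra benefit relative to the most reliable tool, capped at five. A sketch using the gains listed above:

```python
# Reliability penalty: how much more a tool gained from its mulligans than the
# most reliable tool (Moz), scaled so a 100% extra benefit costs the full 5 points.
pcc_gain = {
    "Ahrefs": 0.162,
    "SEMrush": 0.150,
    "Keyword Planner Tool": 0.144,
    "SpyFu": 0.122,
    "KW Finder": 0.110,
    "Moz": 0.101,
}

baseline = min(pcc_gain.values())  # the most reliable tool benefited the least
for tool, gain in pcc_gain.items():
    extra_benefit = gain / baseline - 1          # e.g. 0.60 -> "60% benefit"
    penalty = min(round(extra_benefit * 5, 1), 5)
    print(f"{tool}: {int(extra_benefit * 100)}% benefit, {penalty:.1f}-point penalty")
```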

  • Ahrefs: 60% benefit, -3 penalty
  • SEMrush: 48% benefit, -2.4 penalty
  • Keyword Planner Tool: 42% benefit, -2.1 penalty
  • SpyFu: 20% benefit, -1 penalty
  • KW Finder: 8% benefit, -.4 penalty
  • Moz: baseline, no penalty

Results

All told, our penalties were fairly mild, with a slight shuffling in the middle tier. The final scores are as follows:

  • Moz: 29.7 total, 4.95 stars (out of 5)
  • KW Finder: 24.5 total, 4.08 stars
  • SEMrush: 23.8 total, 3.97 stars
  • Ahrefs: 23.0 total, 3.83 stars
  • SpyFu: 20.3 total, 3.38 stars
  • KPT: -2.6 total, 0.00 stars
0.00

Conclusion

Using any organic keyword difficulty tool will give you an advantage over not doing so. While none of the tools are a crystal ball, providing perfect predictability, they will certainly give you an edge. Further, if you record enough data on your own blogs’ performance, you will get a clearer picture of the keyword difficulty scores you should target in order to rank on the first page.

For example, we know the following about how we should target keywords with each tool:

Average keyword difficulty score of our keywords ranking in the top 10 vs. position 11 and beyond:

  • Moz: 33.3 (ranking ≤10) vs. 37.0 (ranking ≥11)
  • SpyFu: 47.7 vs. 50.6
  • SEMrush: 60.3 vs. 64.5
  • KW Finder: 43.3 vs. 46.5
  • Ahrefs: 11.9 vs. 23.6

This is pretty powerful information! It’s either first page or bust, so we now know the threshold for each tool that we should set when selecting keywords.
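If you track the difficulty score each tool assigned to your target keywords at publish time alongside where the posts ultimately ranked, you can derive the same thresholds for your own site. A sketch with a hypothetical data layout:

```python
# Sketch: derive your own "safe" keyword difficulty threshold per tool by
# comparing keywords that reached page 1 against those that didn't.
# The records below are a hypothetical layout, not the study's data.
records = [
    {"tool": "Moz", "kd": 31, "rank": 4},
    {"tool": "Moz", "kd": 38, "rank": 17},
    {"tool": "Ahrefs", "kd": 12, "rank": 6},
    {"tool": "Ahrefs", "kd": 25, "rank": 22},
]

def average_kd(rows, page_one: bool) -> float:
    kds = [r["kd"] for r in rows if (r["rank"] <= 10) == page_one]
    return sum(kds) / len(kds) if kds else float("nan")

for tool in {r["tool"] for r in records}:
    rows = [r for r in records if r["tool"] == tool]
    print(tool,
          "| page 1 avg KD:", average_kd(rows, True),
          "| page 2+ avg KD:", average_kd(rows, False))
```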

Stay tuned, because we found many more correlations between word count, days live, total keywords ranking, and all kinds of other juicy stuff. Tune in again in early September for updates!

We hope you found this test useful, and feel free to reach out with any questions on our math!

Disclaimer: These results are estimates based on 50 ranking keywords from 50 blog posts and keyword research data pulled from a single moment in time. Search is a shifting landscape, and these results have certainly changed since the data was pulled. In other words, this is about as accurate as we can get from analyzing a moving target.



Moz Blog


Rewriting the Beginner’s Guide to SEO, Chapter 2: Crawling, Indexing, and Ranking

Posted by BritneyMuller

It’s been a few months since our last share of our work-in-progress rewrite of the Beginner’s Guide to SEO, but after a brief hiatus, we’re back to share our draft of Chapter Two with you! This wouldn’t have been possible without the help of Kameron Jenkins, who has thoughtfully contributed her great talent for wordsmithing throughout this piece.

This is your resource, the guide that likely kicked off your interest in and knowledge of SEO, and we want to do right by you. You left amazingly helpful commentary on our outline and draft of Chapter One, and we’d be honored if you would take the time to let us know what you think of Chapter Two in the comments below.


Chapter 2: How Search Engines Work – Crawling, Indexing, and Ranking

First, show up.

As we mentioned in Chapter 1, search engines are answer machines. They exist to discover, understand, and organize the internet’s content in order to offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible to search engines. It’s arguably the most important piece of the SEO puzzle: If your site can’t be found, there’s no way you’ll ever show up in the SERPs (Search Engine Results Page).

How do search engines work?

Search engines have three primary functions:

  1. Crawl: Scour the Internet for content, looking over the code/content for each URL they find.
  2. Index: Store and organize the content found during the crawling process. Once a page is in the index, it’s in the running to be displayed as a result to relevant queries.
  3. Rank: Provide the pieces of content that will best answer a searcher’s query. Order the search results by the most helpful to a particular query.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

The bot starts out by fetching a few web pages, and then follows the links on those webpages to find new URLs. By hopping along this path of links, crawlers are able to find new content and add it to their index — a massive database of discovered URLs — to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.

What is a search engine index?

Search engines process and store information they find in an index, a huge database of all the content they’ve discovered and deem good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in the hopes of solving the searcher’s query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content found by searchers, you have to first make sure it’s accessible to crawlers and is indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the search engine, rather than against it!

Note: In SEO, not all search engines are equal

Many beginners wonder about the relative importance of particular search engines. Most people know that Google has the largest market share, but how important is it to optimize for Bing, Yahoo, and others? The truth is that despite the existence of more than 30 major web search engines, the SEO community really only pays attention to Google. Why? The short answer is that Google is where the vast majority of people search the web. If we include Google Images, Google Maps, and YouTube (a Google property), more than 90% of web searches happen on Google — that’s nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your site?

As you’ve just learned, making sure your site gets crawled and indexed is a prerequisite for showing up in the SERPs. First things first: You can check to see how many and which pages of your website have been indexed by Google using “site:yourdomain.com“, an advanced search operator.

Head to Google and type “site:yourdomain.com” into the search bar. This will return results Google has in its index for the site specified:

Screenshot of Google results for a site: search, showing the approximate number of indexed pages for the domain

The number of results Google displays (see “About __ results” above) isn’t exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don’t currently have one. With this tool, you can submit sitemaps for your site and monitor how many submitted pages have actually been added to Google’s index, among other things.

If you’re not showing up anywhere in the search results, there are a few possible reasons why:

  • Your site is brand new and hasn’t been crawled yet.
  • Your site isn’t linked to from any external websites.
  • Your site’s navigation makes it hard for a robot to crawl it effectively.
  • Your site contains some basic code called crawler directives that is blocking search engines.
  • Your site has been penalized by Google for spammy tactics.

If your site doesn’t have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console or manually submitting individual URLs to Google. There’s no guarantee they’ll include a submitted URL in their index, but it’s worth a try!

Can search engines see your whole site?

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections might be obscured for one reason or another. It’s important to make sure that search engines are able to discover all the content you want indexed, and not just your homepage.

Ask yourself this: Can the bot crawl through your website, and not just to it?

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won’t see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some individuals believe that if they place a search box on their site, search engines will be able to find everything that their visitors search for.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there’s no guarantee they will be able to read and understand them just yet. It’s always best to add text within the <HTML> markup of your webpage.

Can search engines follow your site navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you’ve got a page you want search engines to find but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

Common navigation mistakes that can keep crawlers from seeing all of your site:

  • Having a mobile navigation that shows different results than your desktop navigation
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has gotten much better at crawling and understanding Javascript, but it’s still not a perfect process. The more surefire way to ensure something gets found, understood, and indexed by Google is by putting it in the HTML.
  • Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler
  • Forgetting to link to a primary page on your website through your navigation — remember, links are the paths crawlers follow to new pages!

This is why it’s essential that your website has a clear navigation and helpful URL folder structures.

Information architecture

Information architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. The best information architecture is intuitive, meaning that users shouldn’t have to think very hard to flow through your website or to find something.

Your site should also have a useful 404 (page not found) page for when a visitor clicks on a dead link or mistypes a URL. The best 404 pages allow users to click back into your site so they don’t bounce off just because they tried to access a nonexistent link.

Tell search engines how to crawl your site

In addition to making sure crawlers can reach your most important pages, it’s also pertinent to note that you’ll have pages on your site you don’t want them to find. These might include things like old URLs that have thin content, duplicate URLs (such as sort-and-filter parameters for e-commerce), special promo code pages, staging or test pages, and so on.

Blocking pages from search engines can also help crawlers prioritize your most important pages and maximize your crawl budget (the average number of pages a search engine bot will crawl on your site).

Crawler directives allow you to control what you want Googlebot to crawl and index using a robots.txt file, meta tag, sitemap.xml file, or Google Search Console.

Robots.txt

Robots.txt files are located in the root directory of websites (ex. yourdomain.com/robots.txt) and suggest which parts of your site search engines should and shouldn’t crawl via specific robots.txt directives. This is a great solution when trying to block search engines from non-private pages on your site.

You wouldn’t want to block private/sensitive pages from being crawled here because the file is easily accessible by users and bots.

Pro tip:

  • If Googlebot can’t find a robots.txt file for a site (40X HTTP status code), it proceeds to crawl the site.
  • If Googlebot finds a robots.txt file for a site (20X HTTP status code), it will usually abide by the suggestions and proceed to crawl the site.
  • If Googlebot finds neither a 20X nor a 40X HTTP status code (e.g., a 501 server error), it can’t determine if you have a robots.txt file or not and won’t crawl your site.
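If you want to spot-check whether a given URL is blocked by your robots.txt, Python's standard library can parse the file for you. A minimal sketch (the domain and paths are placeholders):

```python
# Minimal sketch: check whether a URL is crawlable under your robots.txt,
# using Python's standard library. "yourdomain.com" is a placeholder.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.yourdomain.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for url in ["https://www.yourdomain.com/blog/",
            "https://www.yourdomain.com/staging/test-page"]:
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'crawlable' if allowed else 'blocked by robots.txt'}")
```

Keep in mind this only reflects robots.txt rules; it won't catch meta robots tags or X-Robots-Tag headers, which are covered next.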

Meta directives

The two types of meta directives are the meta robots tag (more commonly used) and the x-robots-tag. Each provides crawlers with stronger instructions on how to crawl and index a URL’s content.

The x-robots tag provides more flexibility and functionality if you want to block search engines at scale because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.

These are the best options for blocking more sensitive*/private URLs from search engines.

*For very sensitive URLs, it is best practice to remove them entirely or require a secure login to view the pages.
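To audit how a page currently handles these directives, you can check both the meta robots tag in the HTML and the X-Robots-Tag response header. A rough sketch using the third-party requests and beautifulsoup4 packages (an assumption; any HTTP client and HTML parser will do):

```python
# Rough sketch: report a URL's meta robots tag and X-Robots-Tag header.
# Uses the third-party requests and beautifulsoup4 packages; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def robots_directives(url: str) -> dict:
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    meta = soup.find("meta", attrs={"name": "robots"})
    return {
        "meta_robots": meta.get("content") if meta else None,
        "x_robots_tag": response.headers.get("X-Robots-Tag"),
    }

print(robots_directives("https://www.yourdomain.com/private-offer"))
```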

WordPress Tip: In Dashboard > Settings > Reading, make sure the “Search Engine Visibility” box is not checked. This blocks search engines from coming to your site via your robots.txt file!

Avoid these common pitfalls, and you’ll have clean, crawlable content that will allow bots easy access to your pages.


Sitemaps

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest priority pages is to create a file that meets Google’s standards and submit it through Google Search Console. While submitting a sitemap doesn’t replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.

Google Search Console

Some sites (most common with e-commerce) make the same content available on multiple different URLs by appending certain parameters to URLs. If you’ve ever shopped online, you’ve likely narrowed down your search via filters. For example, you may search for “shoes” on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly. How does Google know which version of the URL to serve to searchers? Google does a pretty good job at figuring out the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want them to treat your pages.

Indexing: How do search engines understand and remember your site?

Once you’ve ensured your site has been crawled, the next order of business is to make sure it can be indexed. That’s right — just because your site can be discovered and crawled by a search engine doesn’t necessarily mean that it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page’s contents. All of that information is stored in its index.

Read on to learn about how indexing works and how you can make sure your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?

Yes, the cached version of your page will reflect a snapshot of the last time Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More established, well-known sites that post frequently like https://www.nytimes.com will be crawled more frequently than the much-less-famous website for Roger the Mozbot’s side hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the drop-down arrow next to the URL in the SERP and choosing “Cached”:

You can also view the text-only version of your site to determine if your important content is being crawled and cached effectively.

Are pages ever removed from the index?

Yes, pages can be removed from the index! Some of the main reasons why a URL might be removed include:

  • The URL is returning a “not found” error (4XX) or server error (5XX) – This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)
  • The URL had a noindex meta tag added – This tag can be added by site owners to instruct the search engine to omit the page from its index.
  • The URL has been manually penalized for violating the search engine’s Webmaster Guidelines and, as a result, was removed from the index.
  • The URL has been blocked from crawling with the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s index is no longer showing up, you can manually submit the URL to Google by navigating to the “Submit URL” tool in Search Console.

Ranking: How do search engines rank URLs?

How do search engines ensure that when someone types a query into the search bar, they get relevant results in return? That process is known as ranking, or the ordering of search results by most relevant to least relevant to a particular query.

To determine relevance, search engines use algorithms, a process or formula by which stored information is retrieved and ordered in meaningful ways. These algorithms have gone through many changes over the years in order to improve the quality of search results. Google, for example, makes algorithm adjustments every day — some of these updates are minor quality tweaks, whereas others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn’t always reveal specifics as to why they do what they do, we do know that Google’s aim when making algorithm adjustments is to improve overall search quality. That’s why, in response to algorithm update questions, Google will answer with something along the lines of: “We’re making quality updates all the time.” So if your site suffered after an algorithm adjustment, compare it against Google’s Quality Guidelines or Search Quality Rater Guidelines; both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers’ questions in the most helpful formats. If that’s true, then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See Spot Run.” Over time, their understanding starts to deepen, and they learn semantics: the meaning behind language and the relationship between words and phrases. Eventually, with enough practice, the student knows the language well enough to even understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system by using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like “funny jokes,” you might add the words “funny jokes” a bunch of times onto your page, and make it bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are fun and crazy. Your funny joke awaits. Sit back and read funny jokes because funny jokes can make you happy and funnier. Some funny favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role links play in SEO

When we talk about links, we could mean two things. Backlinks or “inbound links” are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

Links have historically played a big role in SEO. Very early on, search engines needed help figuring out which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to any given site helped them do this.

Backlinks work very similarly to real life WOM (Word-Of-Mouth) referrals. Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

  • Referrals from others = good sign of authority
    Example: Many different people have all told you that Jenny’s Coffee is the best in town
  • Referrals from yourself = biased, so not a good sign of authority
    Example: Jenny claims that Jenny’s Coffee is the best in town
  • Referrals from irrelevant or low-quality sources = not a good sign of authority and could even get you flagged for spam
    Example: Jenny paid to have people who have never visited her coffee shop tell others how good it is.
  • No referrals = unclear authority
    Example: Jenny’s Coffee might be good, but you’ve been unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google’s core algorithm) is a link analysis algorithm named after one of Google’s founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.

The more natural backlinks you have from high-authority (trusted) websites, the better your odds are to rank higher within search results.
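Google's production systems are far more elaborate, but the core idea behind PageRank can be sketched in a few lines: pages repeatedly share their score across their outbound links until the scores settle. A toy illustration on a made-up four-page link graph (not Google's actual implementation):

```python
# Toy PageRank: iteratively share each page's score across its outbound links.
# The link graph is made up; this only illustrates the idea behind the algorithm.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about", "post"],
    "post": ["blog"],
}

damping = 0.85
pages = list(links)
rank = {page: 1 / len(pages) for page in pages}

for _ in range(50):  # iterate until the scores stabilize
    new_rank = {page: (1 - damping) / len(pages) for page in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda item: -item[1]):
    print(f"{page}: {score:.3f}")
```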

The role content plays in SEO

There would be no point to links if they didn’t direct searchers to something. That something is content! Content is more than just words; it’s anything meant to be consumed by searchers — there’s video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher is going to find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query’s intent. In other words, does this page match the words that were searched and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment, there’s no strict benchmarks on how long your content should be, how many times it should contain a keyword, or what you put in your header tags. All those can play a role in how well a page performs in search, but the focus should be on the users who will be reading the content.

Today, with hundreds or even thousands of ranking signals, the top three have stayed fairly consistent: links to your website (which serve as third-party credibility signals), on-page content (quality content that fulfills a searcher’s intent), and RankBrain.

What is RankBrain?

RankBrain is the machine learning component of Google’s core algorithm. Machine learning is a computer program that continues to improve its predictions over time through new observations and training data. In other words, it’s always learning, and because it’s always learning, search results should be constantly improving.

For example, if RankBrain notices a lower ranking URL providing a better result to users than the higher ranking URLs, you can bet that RankBrain will adjust those results, moving the more relevant result higher and demoting the less relevant pages as a byproduct.

Like most things with the search engine, we don’t know exactly what comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?

Because Google will continue leveraging RankBrain to promote the most relevant, helpful content, we need to focus on fulfilling searcher intent more than ever before. Provide the best possible information and experience for searchers who might land on your page, and you’ve taken a big first step to performing well in a RankBrain world.

Engagement metrics: correlation, causation, or both?

With Google rankings, engagement metrics are most likely part correlation and part causation.

When we say engagement metrics, we mean data that represents how searchers interact with your site from search results. This includes things like:

  • Clicks (visits from search)
  • Time on page (amount of time the visitor spent on a page before leaving it)
  • Bounce rate (the percentage of all website sessions where users viewed only one page)
  • Pogo-sticking (clicking on an organic result and then quickly returning to the SERP to choose another result)
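To make those definitions concrete, here is a small sketch of how the metrics above could be computed from raw session records (the data layout is invented for illustration):

```python
# Sketch: compute simple engagement metrics from raw session records.
# The session data below is invented for illustration.
sessions = [
    {"pages_viewed": 1, "seconds_on_page": 8,   "returned_to_serp": True},
    {"pages_viewed": 3, "seconds_on_page": 95,  "returned_to_serp": False},
    {"pages_viewed": 1, "seconds_on_page": 40,  "returned_to_serp": False},
    {"pages_viewed": 2, "seconds_on_page": 120, "returned_to_serp": False},
]

clicks = len(sessions)
bounce_rate = sum(s["pages_viewed"] == 1 for s in sessions) / clicks
avg_time_on_page = sum(s["seconds_on_page"] for s in sessions) / clicks
pogo_stick_rate = sum(s["returned_to_serp"] for s in sessions) / clicks

print(f"Clicks: {clicks}")
print(f"Bounce rate: {bounce_rate:.0%}")
print(f"Avg. time on page: {avg_time_on_page:.0f}s")
print(f"Pogo-sticking rate: {pogo_stick_rate:.0%}")
```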

Many tests, including Moz’s own ranking factor survey, have indicated that engagement metrics correlate with higher ranking, but causation has been hotly debated. Are good engagement metrics just indicative of highly ranked sites? Or are sites ranked highly because they possess good engagement metrics?

What Google has said

While they’ve never used the term “direct ranking signal,” Google has been clear that they absolutely use click data to modify the SERP for particular queries.

According to Google’s former Chief of Search Quality, Udi Manber:

“The ranking itself is affected by the click data. If we discover that, for a particular query, 80% of people click on #2 and only 10% click on #1, after a while we figure out probably #2 is the one people want, so we’ll switch it.”

Another comment from former Google engineer Edmond Lau corroborates this:

“It’s pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking to improve the quality of search results. The actual mechanics of how click data is used is often proprietary, but Google makes it obvious that it uses click data with its patents on systems like rank-adjusted content items.”

Because Google needs to maintain and improve search quality, it seems inevitable that engagement metrics are more than correlation, but it would appear that Google falls short of calling engagement metrics a “ranking signal” because those metrics are used to improve search quality, and the rank of individual URLs is just a byproduct of that.

What tests have confirmed

Various tests have confirmed that Google will adjust SERP order in response to searcher engagement:

  • Rand Fishkin’s 2014 test resulted in a #7 result moving up to the #1 spot after getting around 200 people to click on the URL from the SERP. Interestingly, ranking improvement seemed to be isolated to the location of the people who visited the link. The rank position spiked in the US, where many participants were located, whereas it remained lower on the page in Google Canada, Google Australia, etc.
  • Larry Kim’s comparison of top pages and their average dwell time pre- and post-RankBrain seemed to indicate that the machine-learning component of Google’s algorithm demotes the rank position of pages that people don’t spend as much time on.
  • Darren Shaw’s testing has shown user behavior’s impact on local search and map pack results as well.

Since user engagement metrics are clearly used to adjust the SERPs for quality, and rank position changes as a byproduct, it’s safe to say that SEOs should optimize for engagement. Engagement doesn’t change the objective quality of your web page, but rather reflects your value to searchers relative to other results for that query. That’s why, after no changes to your page or its backlinks, it could decline in rankings if searchers’ behavior indicates they like other pages better.

In terms of ranking web pages, engagement metrics act like a fact-checker. Objective factors such as links and content first rank the page, then engagement metrics help Google adjust if they didn’t get it right.

The evolution of search results

Back when search engines lacked a lot of the sophistication they have today, the term “10 blue links” was coined to describe the flat structure of the SERP. Any time a search was performed, Google would return a page with 10 organic results, each in the same format.

In this search landscape, holding the #1 spot was the holy grail of SEO. But then something happened. Google began adding results in new formats on their search result pages, called SERP features. Some of these SERP features include:

  • Paid advertisements
  • Featured snippets
  • People Also Ask boxes
  • Local (map) pack
  • Knowledge panel
  • Sitelinks

And Google is adding new ones all the time. It even experimented with “zero-result SERPs,” a phenomenon where only one result from the Knowledge Graph was displayed on the SERP with no results below it except for an option to “view more results.”

The addition of these features caused some initial panic for two main reasons. For one, many of these features caused organic results to be pushed down further on the SERP. Another byproduct is that fewer searchers are clicking on the organic results since more queries are being answered on the SERP itself.

So why would Google do this? It all goes back to the search experience. User behavior indicates that some queries are better satisfied by different content formats. Notice how the different types of SERP features match the different types of query intents.

Query intent and the SERP feature it is likely to trigger:

  • Informational: Featured Snippet
  • Informational with one answer: Knowledge Graph / Instant Answer
  • Local: Map Pack
  • Transactional: Shopping

We’ll talk more about intent in Chapter 3, but for now, it’s important to know that answers can be delivered to searchers in a wide array of formats, and how you structure your content can impact the format in which it appears in search.

Localized search

A search engine like Google has its own proprietary index of local business listings, from which it creates local search results.

If you are performing local SEO work for a business that has a physical location customers can visit (ex: dentist) or for a business that travels to visit their customers (ex: plumber), make sure that you claim, verify, and optimize a free Google My Business Listing.

When it comes to localized search results, Google uses three main factors to determine ranking:

  1. Relevance
  2. Distance
  3. Prominence

Relevance

Relevance is how well a local business matches what the searcher is looking for. To ensure that the business is doing everything it can to be relevant to searchers, make sure the business’ information is thoroughly and accurately filled out.

Distance

Google uses your geo-location to better serve you local results. Local search results are extremely sensitive to proximity, which refers to the location of the searcher and/or the location specified in the query (if the searcher included one).

Organic search results are sensitive to a searcher’s location, though seldom as pronounced as in local pack results.

Prominence

With prominence as a factor, Google is looking to reward businesses that are well-known in the real world. In addition to a business’ offline prominence, Google also looks to some online factors to determine local ranking, such as:

Reviews

The number of Google reviews a local business receives, and the sentiment of those reviews, have a notable impact on their ability to rank in local results.

Citations

A “business citation” or “business listing” is a web-based reference to a local business’ “NAP” (name, address, phone number) on a localized platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local business citations. Google pulls data from a wide variety of sources to continuously build its local business index. When Google finds multiple consistent references to a business’s name, location, and phone number, it strengthens Google’s “trust” in the validity of that data, which in turn lets Google show the business with a higher degree of confidence. Google also uses information from other sources on the web, such as links and articles.

Check a local business’ citation accuracy here.

Organic ranking

SEO best practices also apply to local SEO, since Google also considers a website’s position in organic search results when determining local ranking.

In the next chapter, you’ll learn on-page best practices that will help Google and users better understand your content.

[Bonus!] Local engagement

Although not listed by Google as a local ranking determiner, the role of engagement is only going to increase as time goes on. Google continues to enrich local results by incorporating real-world data like popular times to visit and average length of visits…

Screenshot of Google SERP result for a local business showing busy times of day

…and even provides searchers with the ability to ask the business questions!

Screenshot of the Questions & Answers portion of a local Google SERP result

Undoubtedly now more than ever before, local results are being influenced by real-world data. This interactivity, meaning how searchers interact with and respond to local businesses, matters more than purely static (and game-able) information like links and citations.

Since Google wants to deliver the best, most relevant local businesses to searchers, it makes perfect sense for them to use real time engagement metrics to determine quality and relevance.


You don’t have to know the ins and outs of Google’s algorithm (that remains a mystery!), but by now you should have a great baseline knowledge of how the search engine finds, interprets, stores, and ranks content. Armed with that knowledge, let’s learn about choosing the keywords your content will target!



Moz Blog


Using the Flowchart Method for Diagnosing Ranking Drops – Whiteboard Friday

Posted by KameronJenkins

Being able to pinpoint the reason for a ranking drop is one of our most perennial and potentially frustrating tasks as SEOs. There are an unknowable number of factors that go into ranking these days, but luckily the methodology for diagnosing those fluctuations is readily at hand. In today’s Whiteboard Friday, we welcome the wonderful Kameron Jenkins to show us a structured way to diagnose ranking drops using a flowchart method and critical thinking.

Flowchart method for diagnosing ranking drops

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Hey, everyone. Welcome to this week’s edition of Whiteboard Friday. My name is Kameron Jenkins. I am the new SEO Wordsmith here at Moz, and I’m so excited to be here. Before this, I worked at an agency for about six and a half years. I worked in the SEO department, and really a common thing we encountered was a client’s rankings dropped. What do we do?

This flowchart was kind of built out of that mentality of we need a logical workflow to be able to diagnose exactly what happened so we can make really pointed recommendations for how to fix it, how to get our client’s rankings back. So let’s dive right in. It’s going to be a flowchart, so it’s a little nonlinear, but hopefully this makes sense and helps you work smarter rather than harder.

Was it a major ranking drop?: No

The first question I’d want to ask is: Was their rankings drop major? By major, I would say that’s something like page 1 to page 5 overnight. Minor would be something like it just fell a couple positions, like position 3 to position 5.

We’re going to take this path first. It was minor.

Has there been a pattern of decline lasting about a month or greater?

That’s not a magic number. A month is something that you can use as a benchmark. But if there’s been a steady decline and it’s been one week it’s position 3 and then it’s position 5 and then position 7, and it just keeps dropping over time, I would consider that a pattern of decline.

So if no, I would actually say wait.

  • Volatility is normal, especially if you’re at the bottom of page 1, maybe page 2 plus. There’s going to be a lot more shifting of the search results in those positions. So volatility is normal.
  • Keep your eyes on it, though. It’s really good to just take note of it like, “Hey, we dropped. Okay, I’m going to check that again next week and see if it continues to drop, then maybe we’ll take action.”
  • Wait it out. At this point, I would just caution against making big website updates if it hasn’t really been warranted yet. So volatility is normal. Expect that. Keep your finger on the pulse, but just wait it out at this point.

If there has been a pattern of decline though, I’m going to have you jump to the algorithm update section. We’re going to get there in a second. But for now, we’re going to go take the major rankings drop path.

Was it a major ranking drop?: Yes

The first question on this path that I’d want to ask is:

Was there a rank tracking issue?

Now, some of these are going seem pretty basic, like how would that ever happen, but believe me it happens every once in a while. So just before we make major updates to the website, I’d want to check the rank tracking.

I. The wrong domain or URL.

That can be something that happens a lot. Maybe you change domains, or maybe you move a page, and that old page or old domain is still listed in your rank tracker. If that’s the case, then the rank tracking tool doesn’t know which URL to judge the rankings off of. So it’s going to look like maybe you dropped to position 10 overnight from position 1, and that’s like, whoa, that’s a huge update. But it’s actually just that you have the wrong URL in there. So just check that. If there’s been a page update, a domain update, check to make sure that you’ve updated your rank tracker.

II. Glitches.

So it’s software, it can break. There are things that could cause it to be off for whatever reason. I don’t know how common that is. It probably is totally dependent on which kind of software you use. But glitches do happen, so I would manually check your rankings.

III. Manually check rankings.

One way I would do that is…

  • Go to incognito in Google and make sure you’re logged out so it’s not personalized. I would search the term that you’re wanting to rank for and see where you’re actually ranking.
  • Google’s Ad Preview tool. That one is really good too if you want to search where you’re ranking locally so you can set your geolocation. You could do mobile versus desktop rankings. So it could be really good for things like that.
  • Crosscheck with another tool, like Moz’s tool for rank tracking. You can pop in your URLs, see where you’re ranking, and cross-check that with your own tool.

So back to this. Rank tracking issues. Yes, you found your problem. If it was just a rank tracking tool issue, that’s actually great, because it means you don’t have to make a lot of changes. Your rankings actually haven’t dropped. But if that’s not the issue, if there is no rank tracking issue that you can pinpoint, then I would move on to Google Search Console.

Problems in Google Search Console?

So Google Search Console is really helpful for checking site health matters. One of the main things I would want to check in there, if you experience a major drop especially, is…

I. Manual actions.

If you navigate to Manual Actions, you could see notes in there like unnatural links pointing to your site. Or maybe you have thin or low-quality content on your site. If those things are present in your Manual Actions, then you have a reference point. You have something to go off of. There’s a lot of work involved in lifting a manual penalty that we can’t get into here unfortunately. Some things that you can do to focus on manual penalty lifting…

  • Moz’s Link Explorer. You can check your inbound links and see their spam score. You could look at things like anchor text to see if maybe the links pointing to your site are keyword stuffed. So you can use tools like that.
  • There are a lot of good articles too, in the industry, just on getting penalties lifted. Marie Haynes especially has some really good ones. So I would check that out.

But you have found your problem if there’s a manual action in there. So focus on getting that penalty lifted.

II. Indexation issues.

Before you move out of Search Console, though, I would check indexation issues as well. Maybe you don’t have a manual penalty. But go to your index coverage report and you can see if anything you submitted in your sitemap is maybe experiencing issues. Maybe it’s blocked by robots.txt, or maybe you accidentally noindexed it. You could probably see that in the index coverage report. Search Console, okay. So yes, you found your problem. No, you’re going to move on to algorithm updates.

Algorithm updates

Algorithm updates happen all the time. Google says that maybe one to two happen per day. Not all of those are going to be major. The major ones, though, are listed. They’re documented in multiple different places. Moz has a really good list of algorithm updates over time. You can for sure reference that. There are going to be a lot of good ones. You can navigate to the exact year and month that your site experienced a rankings drop and see if it maybe correlates with any algorithm update.

For example, say your site lost rankings in about January 2017. That’s about the time that Google released its Intrusive Interstitials Update, and so I would look on my site, if that was the issue, and say, “Do I have intrusive interstitials? Is this something that’s affecting my website?”

If you can match up an algorithm update with the time that your rankings started to drop, you have direction. You found an issue. If you can’t match it up to any algorithm updates, it’s finally time to move on to site updates.

Site updates

What changes happened to your website recently? There are a lot of different things that could have happened to your website. Just keep in mind too that maybe you’re not the only one who has access to your website. You’re the SEO, but maybe tech support has access. Maybe even your paid ad manager has access. There are a lot of different people who could be making changes to the website. So just keep that in mind when you’re looking into it. It’s not just the changes that you made, but changes that anyone made could affect the website’s ranking. Just look into all possible factors.

Other factors that can impact rankings

A lot of different things, like I said, can influence your site’s rankings. A lot of things can inadvertently happen that you can pinpoint and say, “Oh, that’s definitely the cause.”

Some examples of things that I’ve personally experienced on my clients’ websites…

I. Renaming pages and letting them 404 without updating with a 301 redirect.

There was one situation where a client had a blog. They had hundreds of really good blog posts. They were all ranking for nice, long tail terms. A client emailed into tech support to change the name of the blog. Unfortunately, all of the posts lived under the blog, and when he did that, he didn’t update it with a 301 redirect, so all of those pages, that were ranking really nicely, they started to fall out of the index. The rankings went with it. There’s your problem. It was unfortunate, but at least we were able to diagnose what happened.

II. Content cutting.

Maybe you’re working with a UX team, a design team, someone who is looking at the website from a visual, a user experience perspective. A lot of times in these situations they might take a page that’s full of really good, valuable content and they might say, “Oh, this is too clunky. It’s too bulky. It has too many words. So we’re going to replace it with an image, or we’re going to take some of the content out.”

When this happens, if the content was the thing that was making your page rank and you cut that, that’s probably something that’s going to affect your rankings negatively. By the way, if that’s happening to you, Rand has a really good Whiteboard Friday on kind of how to marry user experience and SEO. You should definitely check that out if that’s an issue for you.

III. Valuable backlinks lost.

In another situation, I was diagnosing a client's site and one of their backlinks had dropped. It just so happened to be about the only thing that changed over that period of time. It was a really valuable backlink, we found out it had been removed for whatever reason, and the client's rankings started to decline after that point. With tools like Moz's Link Explorer, you can see gained and lost backlinks over time, so I would check that out if you think this might be an issue for you.

IV. Accidental no index.

Depending on what type of CMS you work with, it might be really easy to accidentally check "noindex" on a page. If you noindex a really important page, Google takes it out of its index and your rankings can drop.

So those are just some examples of things that can happen. Like I said, hundreds and hundreds of things could have been changed on your site, but it's really important to try to pinpoint exactly what those changes were and whether they coincided with when your rankings started to drop.
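For that last one in particular, an accidental noindex is easy to spot-check before it does damage. Here's a minimal Python sketch that looks for a robots meta tag or an X-Robots-Tag header on a handful of important URLs; the URL list is hypothetical, and the regex is a simplification (a real check would use an HTML parser or a crawler).

    import re
    import requests

    important_urls = ["https://www.example.com/services/"]  # hypothetical

    # Simplified: assumes name="robots" appears before the content attribute.
    meta_robots = re.compile(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']', re.I)

    for url in important_urls:
        resp = requests.get(url, timeout=10)
        header = resp.headers.get("X-Robots-Tag", "")
        match = meta_robots.search(resp.text)
        meta = match.group(1) if match else ""
        if "noindex" in header.lower() or "noindex" in meta.lower():
            print(f"WARNING: noindex directive found on {url}")
        else:
            print(f"No noindex directive found on {url}")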

SERP landscape

So we got all the way to the bottom. If you’re at the point where you’ve looked at all of the site updates and you still haven’t found anything that would have caused a rankings drop, I would say finally look at the SERP landscape.

What I mean by that is just Google your keyword that you want to rank for or your group of keywords that you want to rank for and see which websites are ranking on page 1. I would get a lay of the land and just see:

  • What are these pages doing?
  • How many backlinks do they have?
  • How much content do they have?
  • Do they load fast?
  • What’s the experience?

Then make content that's better than that. So many people think that to rank, you just need to avoid being spammy and avoid having things broken on your site. But that's not SEO; that really just lets you compete. You have to have content that's the best answer to searchers' questions, and that's what's going to get you ranking.
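For the content-length and speed parts of that checklist, you can rough out a comparison yourself; backlink counts still need a tool like Link Explorer. Here's a minimal Python sketch that fetches each page 1 URL you've collected and records an approximate word count and fetch time; the URL is a placeholder, and the word count is deliberately crude.

    import re
    import time
    import requests

    page_one_urls = [  # paste in the page 1 results for your keyword
        "https://www.example.com/competitor-article",
    ]

    for url in page_one_urls:
        start = time.time()
        resp = requests.get(url, timeout=15)
        elapsed = time.time() - start
        # Very rough word count: strip tags, split on whitespace.
        text = re.sub(r"<[^>]+>", " ", resp.text)
        words = len(text.split())
        print(f"{url}: ~{words} words, fetched in {elapsed:.1f}s")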

I hope that was helpful. This is a really good way to work through a ranking drop diagnosis. If you have methods that have worked for you in the past, I'd love to hear them. Let me know; drop them in the comments below.

Thanks, everyone. Come back next week for another edition of Whiteboard Friday.

Video transcription by Speechpad.com




SearchCap: Google expands featured snippets, voice search ranking study & Rand Fishkin moves on

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google expands featured snippets, voice search ranking study & Rand Fishkin moves on appeared first on Search Engine Land.




The Google Ranking Factor You Can Influence in an Afternoon [Case Study]

Posted by sanfran

What does Google consider “quality content”? And how do you capitalize on a seemingly subjective characteristic to improve your standing in search?

We’ve been trying to figure this out since the Hummingbird algorithm was dropped in our laps in 2013, prioritizing “context” over “keyword usage/frequency.” This meant that Google’s algorithm intended to understand the meaning behind the words on the page, rather than the page’s keywords and metadata alone.

This new sea change meant the algorithm was going to read in between the lines in order to deliver content that matched the true intent of someone searching for a keyword.

Write longer content? Not so fast!

Watching us SEOs respond to Google updates is hilarious. We’re like a floor full of day traders getting news on the latest cryptocurrency.

One of the most prominent theories that made the rounds was that longer content was the key to organic ranking. I’m sure you’ve read plenty of articles on this. We at Brafton, a content marketing agency, latched onto that one for a while as well. We even experienced some mixed success.

However, what we didn’t realize was that when we experienced success, it was because we accidentally stumbled on the true ranking factor.

Longer content alone was not the intent behind Hummingbird.

Content depth

Let’s take a hypothetical scenario.

If you were to search the keyword “search optimization techniques,” you would see a SERP that looks similar to the following:

Nothing too surprising about these results.

However, if you were to go through each of these 10 results and take note of the major topics they discussed, theoretically you would have a list of all the topics being discussed by all of the top ranking sites.

Example:

Position 1 topics discussed: A, C, D, E, F

Position 2 topics discussed: A, B, F

Position 3 topics discussed: C, D, F

Position 4 topics discussed: A, E, F

Once you finished this exercise, you would have a comprehensive list of every topic discussed (A–F), and you would start to see patterns of priority emerge.

In the example above, note “topic F” is discussed in all four pieces of content. One would consider this a cornerstone topic that should be prioritized.

If you were then to write a piece of content that covered each of the topics discussed by every competitor on page one, and emphasized the cornerstone topics appropriately, in theory, you would have the most comprehensive piece of content on that particular topic.

By producing the most comprehensive piece of content available, you would have the highest quality result that will best satisfy the searcher’s intent. More than that, you would have essentially created the ultimate resource center for everything a person would want to know about that topic.
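In code terms, the exercise above is just a frequency count over the topic sets you noted for each ranking page. Here's a minimal Python sketch using the hypothetical topics A–F from the example; with real data you'd swap in the topics you recorded per URL.

    from collections import Counter

    # Topics observed on each page 1 result (hypothetical, from the example above).
    topics_by_position = {
        1: {"A", "C", "D", "E", "F"},
        2: {"A", "B", "F"},
        3: {"C", "D", "F"},
        4: {"A", "E", "F"},
    }

    counts = Counter(t for topics in topics_by_position.values() for t in topics)
    total_pages = len(topics_by_position)

    all_topics = sorted(counts)  # the full outline for a comprehensive piece
    cornerstone = [t for t, c in counts.items() if c == total_pages]

    print("Cover all of:", all_topics)              # ['A', 'B', 'C', 'D', 'E', 'F']
    print("Emphasize (cornerstone):", cornerstone)  # ['F']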

How to identify topics to discuss in a piece of content

At this point, we’re only theoretical. The theory makes logical sense, but does it actually work? And how do we go about scientifically gathering information on topics to discuss in a piece of content?

Finding topics to cover:

  • Manually: As discussed previously, you can do it manually. This process is tedious and labor-intensive, but it can be done on a small scale.
  • Using SEMrush: SEMrush features an SEO content template that will provide guidance on topic selection for a given keyword.
  • Using MarketMuse: MarketMuse was originally built for the very purpose of content depth, with an algorithm that mimics Hummingbird. MM takes a largely unscientific process and makes it scientific. For the purpose of this case study, we used MarketMuse.

The process

Watch the process in action

1. Identify content worth optimizing

We went through a massive list of keywords our blog ranked for. We filtered that list down to keywords that were not ranking number one in SERPs but had strong intent. You can also do this with core landing pages.

Here’s an example: We were ranking in the third position for the keyword “financial content marketing.” While this is a low-volume keyword, we were enthusiastic to own it due to the high commercial intent it comes with.
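If your rank tracker exports to CSV, the filtering in this step is simple to script. Here's a minimal Python sketch; the file name and the "keyword", "rank", and "intent" columns are hypothetical and would depend on your tracker's export format.

    import csv

    # Hypothetical export: keyword, rank, intent columns.
    with open("keyword_rankings.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    worth_optimizing = [
        row for row in rows
        if int(row["rank"]) > 1 and row["intent"].strip().lower() == "strong"
    ]

    for row in sorted(worth_optimizing, key=lambda r: int(r["rank"])):
        print(f'{row["keyword"]}: currently ranking #{row["rank"]}')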

2. Evaluate your existing piece

Take a subjective look at your piece of content that is ranking for the keyword. Does it SEEM like a comprehensive piece? Could it benefit from updated examples? Could it benefit from better/updated inline embedded media? With a cursory look at our existing content, it was clear that the examples we used were old, as was the branding.

3. Identify topics

As mentioned earlier, you can do this in a few different ways. We used MarketMuse to identify the topics we were doing a good job of covering as well as our topic gaps, topics that competitors were discussing, but we were not. The results were as follows:

Topics we did a good job of covering:

  • Content marketing impact on branding
  • Impact of using case studies
  • Importance of infographics
  • Business implications of a content marketing program
  • Creating articles for your audience

Topics we did a poor job of covering:

  • Marketing to millennials
  • How to market to existing clients
  • Crafting a content marketing strategy
  • Identifying and tracking goals

4. Rewrite the piece

Considering how out-of-date our examples were, and the number of topics we had neglected to discuss, we determined a full rewrite of the piece was warranted. Our writer, Mike O’Neill, was given the topic guidance, ensuring he had a firm understanding of everything that needed to be discussed in order to create a comprehensive article.

5. Update the content

To maintain our link equity, we kept the same URL and simply updated the old content with the new. Then we updated the publish date. The new article looks like this, with updated content depth, modern branding, and inline visuals.

6. Fetch as Google

Rather than wait for Google to reindex the content, I wanted to see the results immediately (and it is indeed immediate).

7. Check your results

Open an incognito window and see your updated position.

Promising results:

We have run more than a dozen experiments and have seen positive results across the board. As demonstrated in the video, these results are usually realized within 60 seconds of reindexing the updated content.

Keyword target: old ranking → new ranking

  • “Financial content marketing”: 3 → 1
  • “What is a subdomain”: 16 → 6
  • “Best company newsletters”: 32 → 4
  • “Staffing marketing”: 7 → 3
  • “Content marketing agency”: 16 → 1
  • “Google local business cards”: 16 → 5
  • “Company blog”: 7 → 4
  • “SEO marketing tools”: 9 → 3

Of those tests, here’s another example of this process in action, for the keyword “best company newsletters.”

Before:

After:

Assumptions:

From these results, we can assume that content depth and breadth of topic coverage matters — a lot. Google’s algorithm seems to have an understanding of the competitive topic landscape for a keyword. In our hypothetical example from before, it would appear the algorithm knows that topics A–F exist for a given keyword and uses that collection of topics as a benchmark for content depth across competitors.

We can also assume Google’s algorithm either a.) responds immediately to updated information, or b.) has a cached snapshot of the competitive content depth landscape for any given keyword. Either of these scenarios is very likely because of the speed at which updated content is re-ranked.


In conclusion, don’t arbitrarily write long content and call it “high quality.” Choose a keyword you want to rank for and create a comprehensive piece of content that fully supports that keyword. There is no guarantee you’ll be granted a top position — domain strength factors play a huge role in rankings — but you’ll certainly improve your odds, as we have seen.




Diagnosing Why a Site’s Set of Pages May Be Ranking Poorly – Whiteboard Friday

Posted by randfish

Your rankings have dropped and you don’t know why. Maybe your traffic dropped as well, or maybe just a section of your site has lost rankings. It’s an important and often complex mystery to solve, and there are a number of boxes to check off while you investigate. In this Whiteboard Friday, Rand shares a detailed process to follow to diagnose what went wrong to cause your rankings drop, why it happened, and how to start the recovery process.

Diagnosing why a site's pages may be ranking poorly

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to talk about diagnosing a site and specifically a section of a site’s pages and why they might be performing poorly, why their traffic may have dropped, why rankings may have dropped, why both of them might have dropped. So we’ve got a fairly extensive process here, so let’s get started.

Step 1: Uncover the problem

First off, our first step is uncovering the problem, or finding out whether there actually is a problem. A good way to think about this: if you have a site that’s 20 or 30 or even a couple hundred pages, this is not a big issue. But many websites that SEOs are working on these days are thousands, tens of thousands, or hundreds of thousands of pages. So what I like to urge folks to do is to

A. Treat different site sections as unique segments for investigation. You should look at them individually.

A lot of times subfolders or URL structures are really helpful here. So I might say, okay, MySite.com, I’m going to look exclusively at the /news section. Did that fall in rankings? Did it fall in traffic? Or was it /posts, where my blog posts and my content is? Or was it /cities? Let’s say I have a website that’s dealing with data about the population of cities. So I rank for lots of those types of queries, and it seems like I’m ranking for fewer of them, and it’s my cities pages that are poorly performing in comparison to where they were a few months ago or last year at this time.

B. Check traffic from search over time.

So go to Google Analytics or whatever analytics tool you’re using, and you might see something like, okay, I’m going to look exclusively at the /cities section. If you can structure your URLs in this fashion, using subfolders, this is a great way to do it. Then take a look and see, oh, hang on, that’s a big traffic drop. We fell off a cliff there for these particular pages.

This data can be hiding inside your analytics because it could be that the rest of your site is performing well. It’s going sort of up and to the right, and so you see this slow plateauing or a little bit of a decline, but it’s not nearly as sharp as it is if you look at the traffic specifically for a single subsection that might be performing poorly, like this /cities section.
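If you'd rather pull this out of the analytics UI, you can segment a landing-page export the same way. Here's a minimal Python sketch, assuming a hypothetical CSV export with "date", "landing_page", and "organic_sessions" columns; adjust the column names to whatever your analytics tool actually exports.

    import csv
    from collections import defaultdict

    sessions_by_date = defaultdict(int)

    # Hypothetical export from your analytics tool.
    with open("organic_landing_pages.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["landing_page"].startswith("/cities/"):
                sessions_by_date[row["date"]] += int(row["organic_sessions"])

    for day in sorted(sessions_by_date):
        print(day, sessions_by_date[day])  # eyeball (or chart) this for the cliff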

From there, I’m going to next urge you to use Google Trends. Why? Why would I go to Google Trends? Because what I want you to do is I want you to look at some of your big keywords and topics in Google Trends to see if there has been a serious decline in search volume at the same time. If search demand is rising or staying stable over the course of time where you have lost traffic, it’s almost certainly something you’ve done, not something searchers are doing. But if you see that traffic has declined, for example, maybe you were ranking really well for population data from 2015. It turns out people are now looking for population data for 2016 or ’17 or ’18. Maybe that is part of the problem, that search demand has fallen and your curve matches that.

C. Perform some diagnostic queries, or use your rank tracking data if you have it, for these types of checks (there’s a small sketch after this list that strings the queries together).

This is one of the reasons I like to rank track for even these types of queries that don’t get a lot of traffic.

1. Target keywords. In this case, it might be “Denver population growth,” maybe that’s one of your keywords. You would see, “Do I still rank for this? How well do I rank for this? Am I ranking more poorly than I used to?”

2. Check brand name plus target keyword. So, in this case, it would be my brand name plus the keyword above, something like “MySite.com Denver population growth.” If you’re not ranking for that, that’s usually an indication of a more serious problem, potentially a penalty or some type of dampening that’s happening around your brand name or around your website.

3. Look for a 10 to 20-word text string from page content without quotes. It could be shorter, only six or seven words, or it could be longer, 25 words if you really need it. But essentially, I want to take a string of text that exists on the page, put it into Google search as-is, not in quotes, and see how it performs. This might be several lines of text here.

4. Look for a 10 to 20-word text string with quotes. So those same lines of text, but searched in Google in quotes. If I’m not ranking for the version without quotes but I am ranking for the version in quotes, I might surmise this is probably not duplicate content. It’s probably something to do with my content quality, or maybe my link profile, or Google has penalized or dampened me in some way.

5. site: urlstring/ So I would search for “site:MySite.com/cities/Denver.” I would see: Wait, has Google actually indexed my page? When did they index it? Oh, it’s been a month. I wonder why they haven’t come back. Maybe there’s some sort of crawl issue, robots.txt issue, meta robots issue, something. I’m preventing Google from potentially getting there. Or maybe they can’t get there at all, and this results in zero results. That means Google hasn’t even indexed the page. Now we have another type of problem.
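Here's that sketch: the five checks are easier to repeat consistently if you generate the query strings from the same inputs each time. The brand, keyword, path, and snippet below are placeholders; you still paste each query into Google by hand.

    brand = "MySite.com"                       # placeholder
    keyword = "Denver population growth"       # placeholder
    url_path = "MySite.com/cities/Denver"      # placeholder
    snippet = "a 10 to 20 word string of text copied from the page"  # placeholder

    diagnostic_queries = [
        keyword,                      # 1. target keyword
        f"{brand} {keyword}",         # 2. brand + target keyword
        snippet,                      # 3. text string, no quotes
        f'"{snippet}"',               # 4. text string, in quotes
        f"site:{url_path}",           # 5. is the page indexed at all?
    ]

    for q in diagnostic_queries:
        print(q)  # run each manually in an incognito window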

D. Check your tools.

1. Google Search Console. I would start there, especially in the site issues section.

2. Check your rank tracker or whatever tool you’re using, whether that’s Moz or something else.

3. On-page and crawl monitoring. Hopefully you have something like that. It could be through Screaming Frog. Maybe you’ve run some crawls over time, or maybe you have a tracking system in place. Moz has a crawl system. OnPage.org has a really good one.

4. Site uptime. So I might check Pingdom or other things that alert me to, “Oh, wait a minute, my site was down for a few days last week. That obviously is why traffic has fallen,” those types of things.
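If you don't have a monitoring service like Pingdom in place, even a crude scheduled check will tell you whether downtime lines up with the drop. Here's a minimal Python sketch you could run from cron or another scheduler; the URL is a placeholder.

    from datetime import datetime
    import requests

    URL = "https://www.example.com/"  # placeholder

    try:
        status = requests.get(URL, timeout=10).status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"

    # Append to a simple log so downtime can be lined up with traffic data later.
    with open("uptime.log", "a") as log:
        log.write(f"{datetime.utcnow().isoformat()} {status}\n")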

Step 2: Offer hypothesis for falling rankings/traffic

Okay, you’ve done your diagnostics. Now it’s time to offer some hypotheses. So now that we understand which problem we might have, I want to understand what could be causing it. There are basically two situations: rankings have stayed stable or gone up but traffic has fallen, or rankings and traffic have both fallen.

A. If rankings are up, but traffic is down…

In those cases, these are the five things that are most typically to blame.

1. New SERP features. A bunch of featured snippets have entered the “population growth for cities” search results, and so now number one is not what number one used to be. If you don’t get that featured snippet, you’re losing out to one of your competitors.

2. Lower search demand. Like we talked about in Google Trends. I’m looking at search demand, and there are just not as many people searching as there used to be.

3. Brand or reputation issues. I’m ranking just fine, but people now for some reason hate me. People who are searching this sector think my brand is evil or bad or just not as helpful as it used to be. So I have issues, and people are not clicking on my results. They’re choosing someone else actively because of reputation issues.

4. Snippet problems. I’m ranking in the same place I used to be, but I’m no longer the sexiest, most click-drawing snippet in the search results, and other people are earning those clicks instead.

5. Shift in personalization or location biasing by Google. It used to be the case that everyone who searched for city name plus population growth got the same results, but now suddenly people are seeing different results based on maybe their device or things they’ve clicked in the past or where they’re located. Location is often a big cause for this.

So for many SEOs for many years, “SEO consultant” resulted in the same search results. Then Google introduced the Maps results and pushed down a lot of those folks, and now “SEO consultant” results in different ranked results in each city and each geography that you search in. So that can often be a cause for falling traffic even though rankings remain high.

B. If rankings and traffic are down…

If you’re seeing that rankings have fallen and traffic has fallen in conjunction, there’s a bunch of other things that are probably going on that are not necessarily these things. A few of these could be responsible still, like snippet problems could cause your rankings and your traffic to fall, or brand and reputation issues could cause your click-through rate to fall, which would cause you to get dampened. But oftentimes it’s things like this:

1. & 2. Duplicate content and low-quality or thin content. Google thinks that what you’re providing just isn’t good enough.

3. Change in searcher intent. People who were searching for population growth used to want what you had to offer, but now they want something different and other people in the SERP are providing that, but you are not, so Google is ranking you lower. Even though your content is still good, it’s just not serving the new searcher intent.

4. Loss to competitors. So maybe you have worse links than they do now or less relevance or you’re not solving the searcher’s query as well. Your user interface, your UX is not as good. Your keyword targeting isn’t as good as theirs. Your content quality and the unique value you provide isn’t as good as theirs. If you see that one or two competitors are consistently outranking you, you might diagnose that this is the problem.

5. Technical issues. So if I saw from my earlier diagnostics that crawling was the problem, that I wasn’t getting indexed, or that Google hasn’t updated my pages in a long time, I might look into accessibility issues: maybe speed, maybe I’m having problems letting Googlebot in, HTTPS problems, or indexable content. Maybe Google can’t see the content on my page anymore because I made some change in the technology of how it’s displayed. Or crawlability: internal link structure problems, robots.txt problems, meta robots tag issues, that kind of stuff.

Maybe at the server level, someone on the tech ops team of my website decided, “Oh, there’s this really problematic bot coming from Mountain View that’s costing us a bunch of bandwidth. Let’s block bots from Mountain View.” No, don’t do that. Bad. Those kinds of technical issues can happen.

6. Spam and penalties. We’ll talk a little bit more about how to diagnose those in a second.

7. CTR, engagement, or pogo-sticking issues. There could be click-through rate issues or engagement issues, meaning pogo sticking, like people are coming to your site, but they are clicking back because they weren’t satisfied by your results, maybe because their expectations have changed or market issues have changed.

Step 3: Make fixes and observe results

All right. Next and last in this process, what we’re going to do is make some fixes and observe the results. Hopefully, we’ve been able to correctly diagnose and form some wise hypotheses about what’s going wrong, and now we’re going to try and resolve them.

A. On-page and technical issues should resolve after a new crawl + index.

So on-page and technical issues, if we’re fixing those, they should usually resolve, especially on small sections of sites, pretty fast. As soon as Google has crawled and indexed the page, you should generally see performance improve. But this can take a few weeks if we’re talking about a large section on a site, many thousands of pages, because Google has to crawl and index all of them to get the new sense that things are fixed and traffic is coming in. Since it’s long tail to many different pages, you’re not going to see that instant traffic gain and rise as fast.

B. Link issues and spam penalty problems can take months to show results.

Look, if you have weaker links or not as good a link profile as your competitors, growing that can take months or even years to fix. Penalty and spam problems are the same; Google can sometimes take a long time. You’ve seen a lot of spam experts on Twitter saying, “Oh, well, all my clients who had issues over the last nine months suddenly are ranking better today,” because Google made some fix in their latest index rollout or their algorithm changed, and it’s sort of, okay, we’ll reward people for all the fixes they’ve made. Sometimes that comes in batches that take months.

C. Fixing a small number of pages in a section that’s performing poorly might not show results very quickly.

For example, let’s say you go and you fix /cities/Milwaukee. You determine from your diagnostics that the problem is a content quality issue. So you go and you update these pages. They have new content. It serves the searchers much better, doing a much better job. You’ve tested it. People really love it. You fixed two cities, Milwaukee and Denver, to test it out. But you’ve left 5,000 other cities pages untouched.

Sometimes Google will sort of be like, “No, you know what? We still think your cities pages, as a whole, don’t do a good job solving this query. So even though these two that you’ve updated do a better job, we’re not necessarily going to rank them, because we sort of think of your site as this whole section and we grade it as a section or apply some grades as a section.” That is a real thing that we’ve observed happening in Google’s results.

Because of this, one of the things that I would urge you to do is if you’re seeing good results from the people you’re testing it with and you’re pretty confident, I would roll out the changes to a significant subset, 30%, 50%, 70% of the pages rather than doing only a tiny, tiny sample.

D. Sometimes when you encounter these issues, a remove and replace strategy works better than simply upgrading old URLs.

So if Google has decided /cities, your /cities section is just awful, has all sorts of problems, not performing well on a bunch of different vectors, you might take your /cities section and actually 301 redirect them to a new URL, /location, and put the new UI and the new content that better serves the searcher and fixes a lot of these issues into that location section, such that Google now goes, “Ah, we have something new to judge. Let’s see how these location pages on MySite.com perform versus the old cities pages.”
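Whatever server or CMS ends up handling those redirects, the mapping itself is mechanical: every old /cities/ URL gets a permanent redirect to its /location/ counterpart. Here's a minimal Python sketch that turns a list of old paths (from a crawl or sitemap) into a redirect map you can hand to whoever manages the server config; the paths are illustrative.

    # Old paths from a crawl or sitemap of the section being replaced (illustrative).
    old_paths = [
        "/cities/denver",
        "/cities/milwaukee",
    ]

    # Build 301 mappings from /cities/... to /location/...
    redirect_map = {old: old.replace("/cities/", "/location/", 1) for old in old_paths}

    for old, new in redirect_map.items():
        # Emitted in a generic "old -> new" form; translate into your server's syntax.
        print(f"301 {old} -> {new}")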

So I know we’ve covered a ton today and there are a lot of diagnostic issues that we haven’t necessarily dug deep into, but I hope this can help you if you’re encountering rankings challenges with sections of your site or with your site as a whole. Certainly, I look forward to your comments and your feedback. If you have other tips for folks facing this, that would be great. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

