Tag Archive | "Webmaster"

SearchCap: India fines Google, Bing Webmaster Tools login & Chrome on HTTP

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: India fines Google, Bing Webmaster Tools login & Chrome on HTTP appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.

Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in Latest News | Comments Off

Google reminds webmasters that widget links are against their webmaster guidelines

Don’t be surprised if a new wave of unnatural link penalties is sent out via the Google Search Console for widget links.

The post Google reminds webmasters that widget links are against their webmaster guidelines appeared first on Search Engine Land.



Google Webmaster Tools Just Got a Lot More Important for Link Discovery and Cleanup

Posted by RobertFisher

What if you owned a paid directory site and every day received email after email asking that links be removed? As they stacked up in your inbox, whether they were pleasant or sternly demanded that you cease and desist, would you just want to give up?
What would you do to stop the barrage of emails if the requests felt overwhelming? How could you make it all go away, or at least most of it?

First, a bit of background

We took on a new, important client on April 1, 2013, with a lot of work needed going forward. They had been losing rankings for some time and wanted help. With new clients, we want as much baseline data as possible so that we can measure progress, so we do a lot of monitoring. On April 17th, one of our team members noticed something quite interesting: using Ahrefs for link tracking, we saw a big spike in the number of external links coming to our new client’s site.

When the client came on board two weeks prior, the site had about 5,500 inbound links, and many of those were less than quality. Likely half or more were comment links from sites with no relevance to the client, using the bare domain as the anchor text. Now, overnight, they were at 6,100 links, and the next day even more. Each day the links kept increasing, and we saw they were coming from a paid directory called Netwerker.com. Within a month to six weeks, they were at over 30,000 new links from that site.

We sent a couple of emails asking that they please stop the linking, and we watched Google Webmaster Tools (GWT) every day like hawks waiting for the first link from Netwerker to show. The emails got no response, but in late May we saw the first links from there show up in GWT and we submitted a domain disavow immediately.

We launched their new site in late June and watched as they climbed in the rankings; that is a great feeling. Because the site was rising rather well, we assumed the disavow tool had worked on Netwerker. Unfortunately, there was a cloud on the horizon: all of the link building that had been done for the client prior to our engagement. October arrived with a Penguin attack (Penguin 2.1, Oct. 4, 2013), and they fell considerably in the SERPs. They disappeared for many of the best terms they had once again begun to rank for, falling to page five or deeper for key terms. (Note: this was all algorithmic; they had no manual penalty.)

While telling the client that their new drop was a Penguin issue related to the October Penguin update (and the large ratio of really bad links), we also looked for anything else that would cause the issue or might be affecting the results. We are constantly monitoring and changing things with our clients. As a result, there are times we do not make a good change and we have to move things back. (We always tell the client if we have caused a negative impact on their rankings, etc. This is one of the most important things we ever do in building trust over time and we have never lost a client because we made a mistake.) We went through everything thoroughly and eliminated any other potential causative factors. At every turn there was a Penguin staring back at us!

When we had launched the new site in late June 2013, we had seen them rise back to page one for key terms in a competitive vertical. Now, they were missing for the majority of their most important terms. In mid-March of 2014, nearly a year after engagement, they agreed to a severe link cleanup, and we began immediately. There would be roughly 45,000–50,000 links to clean up, but with 30,000 from the one domain already appropriately disavowed, it was a bit less daunting. I believe their reluctance to do the link cleanup was due to really bad SEO advice in the past. Over time they had engaged several SEO people and firms, and at every turn they were given poor advice. They had been misled into believing that high rankings were easy to get and that there were “tricks” that would fool Google. So it really isn’t a client’s fault when they believe things are easy in the world of SEO.

Finally, it begins to be fun

About two weeks in, we saw them start to pop up randomly in the rankings. We were regularly getting responses back from linking sites. Some responses were positive and some were requests for money to remove the links; the majority gave us the famous “no reply.” But, we were making progress and beginning to see a result. Around the first or second week of April their most precious term, geo location + product/service, was ranked number one and their rich snippets were beautiful. It came and went over the next week or two, staying longer each time.

To track links we use MajesticSEO, Ahrefs, Open Site Explorer, and Google Webmaster Tools. As the project progressed, our Director of Content and Media, who was overseeing the project, could not understand why so many links were falling off so quickly. Frankly, not that many sites were agreeing to remove them.

Here is a screenshot of the lost links from Ahrefs.

ahrefs New and Lost Links March 7 to May 7

Here are the lost links in MajesticSEO.

MajesticSEO Lost Links March to May

We were seeing links fall off as if the wording we had used in our emails to the sites was magical. This caused a bit of skepticism on our team’s part, so they began to dig deeper. It took little time to realize the majority of the links that were falling off were from Netwerker! (Remember, a disavow does not keep links from showing in the link research tools.) Were they suddenly good guys, willing to clear it all up? Had our changed wording caused a change of heart? No. The links from Netwerker still showed in GWT; Webmaster Tools had never shown all of the Netwerker links, only about 13,000, and it was still showing 13,000. But was that just because Google was slower to reflect the change? To check, we did a couple of things. First, we visited the links that were “lost” and saw they still resolved to the site, so we dug some more.

Using a bit of magic in the form of a User-Agent Switcher extension and eSolutions’ “What’s my info?” tool (to verify the correct user-agent was being presented), our head of development browsed the site with the user-agent strings for Ahrefs and MajesticSEO. What he found was that Netwerker was now blocking MajesticSEO and Ahrefs via a 406 (Not Acceptable) response. We were unable to check Removeem, but the site was not yet blocking OSE. Here are some screenshots to show the results we are seeing. Notice in the first screenshot, all is well with Googlebot.

But A Different Story for Ahrefs

And a Different Story for MajesticSEO

We alerted both Ahrefs and MajesticSEO, and neither responded beyond a canned “we will look into it.” We thought it important to let those dealing with link removal know to look even more carefully. Now, in August and three months on, both still maintain that original response.

User-agents and how to run these tests

The user-agent or user-agent string is sent to the server along with any request. This allows the server to determine the best response to deliver based on conditions set up by its developers. It appears in the case of Netwerker’s servers that the response is to deny access to certain user-agents.

  1. Install the User-Agent Switcher extension for Chrome.
  2. Determine the user-agent string you would like to check. These can be found on various sites; one set of examples is at http://www.useragentstring.com/, and in most cases the owner of a crawler or browser publishes its string (for example, the Ahrefs bot).
  3. Within the User-Agent Switcher extension, open the options panel and add the new user-agent string.
  4. Browse to the site you would like to check.
  5. Using User-Agent Switcher, select the agent you would like to view the site as; the page will reload and you will be viewing it as that user-agent.
  6. Use a verification tool (we used eSolutions’ “What’s my info?”) to confirm the switcher is presenting the correct data.
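The same check can also be scripted instead of done in the browser. Below is a minimal sketch using only the Python standard library; the user-agent strings are illustrative assumptions, so verify them against each crawler’s own documentation before trusting a result:

```python
import urllib.request
import urllib.error

# Illustrative user-agent strings (assumptions -- check each crawler's
# published documentation for the current official string):
USER_AGENTS = {
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "ahrefsbot": "Mozilla/5.0 (compatible; AhrefsBot/5.0; +http://ahrefs.com/robot/)",
    "mj12bot": "Mozilla/5.0 (compatible; MJ12bot/v1.4.5; http://www.majestic12.co.uk/bot.php?+)",
}

def build_request(url, agent_key):
    """Build a request presenting the chosen crawler's user-agent string."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENTS[agent_key]})

def fetch_status(url, agent_key):
    """Return the HTTP status code the server sends to this user-agent.

    A 200 for Googlebot but a 406 (Not Acceptable) for AhrefsBot or
    MJ12bot suggests the site is selectively blocking link-discovery
    crawlers, the pattern described above.
    """
    try:
        with urllib.request.urlopen(build_request(url, agent_key), timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
```

Running fetch_status against the same URL once per user-agent quickly shows whether a directory answers Googlebot but turns the link tools away.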

A final summary

If you talk with anyone who is known for link removal (think people like Ryan Kent of Vitopian, an expert in link cleanup), they will tell you to use every link report you can get your hands on to ensure you miss nothing. They always include Google Webmaster Tools as an important tool. Personally, while we always use GWT, early on I did not think it was important for anything beyond checking whether we had missed something, since it consistently shows fewer links than the other tools and the links it does show usually appear in them as well. My opinion has changed with this revelation.

Given that we gather data on clients early on, we had something to refer back to for the link cleanup. Today, if someone comes in and we have no history of their links, we must assume they will have links from sites blocking the major link discovery tools, and we have a heightened sense of caution. We will never again believe we have cleaned everything; we can only believe we have cleaned everything showing in GWT.

If various directories and other sites with a lot of outbound links start blocking link discovery tools because they “just don’t want to hear any more removal requests,” GWT just became your most important tool for catching the ones that block those tools. They would not want to block Google or Bing, for obvious reasons.

So, as you go forward and you look at links with your own site and/or with clients, I suggest that you go to GWT to make sure there is not something showing there which fails to show in the well-known link discovery tools.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Moz Blog


SearchCap: Ripoff Report Returns In Yahoo, Google Shopping Campaigns, Bing Webmaster Roadshow

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. From Search Engine Land: Bing Webmaster Roadshow 2014 Takes Search Experts On 4-City Tour This Summer Bing announced today it will be hosting the Bing Webmaster Roadshow 2014 starting…


Quickly Reversing the Fat Webmaster Curse

Long story short, -38 pounds in about 2 months or so. Felt great the entire time and felt way more focused day to day. Maybe you don’t have a lot of weight to lose but this whole approach can significantly help you cognitively.

In fact, the diet piece was originally formed for cognitive enhancements rather than weight loss.

Before I get into this post I just want to explicitly state that I am not a doctor, medical professional, medical researcher, or any type of medical/health anything. This is not advice, I am just sharing my experience.

You should consult a healthcare professional before doing any of this stuff.

Unhealthy Work Habits

The work habits associated with being an online marketer lend themselves to packing on some weight. Many of us are in front of a screen for large chunks of the day while also being able to take breaks whenever and wherever we want.

Sometimes those two things add up to a rather sedentary lifestyle with poor eating habits (I’m leaving travel out at the moment but that doesn’t help either).

In addition to the mechanics of our jobs being an issue, we also tend to work longer and odder hours, because all we really need to get a chunk of our work done is a computer or simply access to the internet. Take all of those things, add in the large amount of opportunity that exists on the web, and you have the perfect recipe for unhealthy, stressful work habits.

These habits tend to carry over into offline areas as well. Think about the things we touch or access every day:

  • Computers
  • Tablets
  • Smartphones
  • Search Engines
  • Online Tools
  • Email
  • Instant Messaging
  • Social Networks

What do many of these have in common? Instant “something”. Instant communication, results, gratification, and on and on. This is what we live in every day. We expect and probably prefer fast, instant, and quick. With that mindset, who has time to cook a healthy meal 3x per day on a regular basis? Some do, for sure. However, much like the office work environment this environment can be one that translates into lots of unhealthy habits and poor health.

I got to the point where I was about 40 pounds overweight with poor physicals and lackluster lipid profiles (high cholesterol, high blood pressure, etc.). I tried many things, many times, but what ultimately turned the corner for me were three different investments.

Investment #1 – Standup/Sitdown Desk

Sitting down all day is no bueno. I bought a really high quality standup desk with an electrical motor so I can periodically sit down for a few minutes in between longer periods of standing.

It has a nice, wide table component and is quite sturdy. It also allows for different height adjustments via a simple up and down control:

A couple of tips here:

  • Wear comfy shoes
  • Take periodic breaks (I do so hourly) to go walk around the house or office or yard
  • I also like to look away from the screen every 20–30 minutes or so; I sometimes get eyestrain, but I bought these glasses from Gunnar and they’ve relieved those symptoms

Investment #2 – Treaddesk

The reason I didn’t buy the full-on treadmill desk is because I wanted a bigger desk with more options. I bought the Treaddesk, which is essentially the bottom part of the treadmill, and I move it around during the week based on my workflow needs:

They have packages available as well (see the above referenced link).

I have a second, much cheaper standup desk that I hacked together from IKEA:

This desk acts as a holder for my coffee stuff but also allows me to put my laptop on it (which is paired with an external keyboard and trackpad) in case I want to do some lighter work (I have a hard time doing deeper work when doing the treadmill part while working).

I move the Treaddesk back and forth sometimes, but mostly it stays with this IKEA desk. If I have a week where the work is not as deeply analytical and more administrative then I’ll walk at a lower speed on the main computer for a longer period of time.

I tend to walk about 5-7 miles a day on this thing, usually in a block of time where I do that lighter-type work (Quickbooks, cobbling reports together, email triage, very light research/writing, reading, and so on).

Investment #3 – Bulletproof Coffee and the Bulletproof Diet

I’m a big fan of Joe Rogan in general, and I enjoy his podcast. I heard about Dave Asprey on the JRE podcast so I eventually ended up on his site, bulletproofexec.com. I purchased some private coaching with the relevant products and I was off to the races.

I did my own research on some of the stuff and came away confident that “good” fat had been erroneously hated on for years. I highly encourage you to conduct your own research based on your own personal situation, again this is not advice.

I really wanted to drop about 50 pounds so I went all in with Bulletproof Intermittent Fasting. A few quick points:

  • I felt great the entire time
  • In rare moments where I was hungry at night I just had a tablespoon of organic honey
  • I certainly felt a cognitive benefit
  • I was never hungry
  • I was much more patient with things 
  • I felt way more focused

So yeah, butter in the coffee and a mostly meat/veggie diet. I cheated from time to time, certainly over the holiday. I lost 38 pounds in slightly over 60 days. Here’s a before and after:

Fat Eric

Not So Fat Eric

I kept this post kind of short and to the point because my desire is not to argue or fight about whether carbs are good or bad, whether fat is good or bad, whether X is right, or whether Y is wrong. This is what worked for me and I was amazed by it, totally amazed by the outcome.

I also do things like cycling and martial arts, but I’ve been doing those for a while, along with running, and while I’ve lost weight before, I’ve never had it melt away like this.

I’ve stopped the fasting portion and none of the weight has piled back on. Lipid tests have been very positive as well, best in years.

Even if you don’t have a ton of weight to lose, seriously think about the standup desk and treadmill. 

Happy New Year!


SEO Book


Comparing Rank-Tracking Methods: Browser vs. Crawler vs. Webmaster Tools

Posted by Dr-Pete

Deep down, we all have the uncomfortable feeling that rank-tracking is unreliable at best, and possibly outright misleading. Then we walk into our boss’s office, pick up the phone, or open our email, and hear the same question: “Why aren’t we #1 yet?!” Like it or not, rank-tracking is still a fact of life for most SEOs, and for the foreseeable future, rankings will remain a useful signal and diagnostic for when things go very wrong (or very right).

Unfortunately, there are many ways to run a search, and once you factor in localization, personalization, data centers, data removal (such as [not provided]), and transparency (or the lack thereof), it’s hard to know how any keyword really ranks. This post is an attempt to compare four common rank-tracking methods:

  1. Browser – Personalized
  2. Browser – Incognito
  3. Crawler
  4. Google Webmaster Tools (GWT)

I’m going to do my best to keep this information unbiased and even academic in tone. Moz builds rank-tracking tools based in part on crawled data, so it would be a lie to say that we have no skin in the game. On the other hand, our main goal is to find and present the most reliable data for our customers. I will do my best to present the details of our methodology and data, and let you decide for yourselves.


We started by collecting a set of 500 queries from Moz.com’s Google Webmaster Tools (GWT) data for the month of July 2013. We took the top 500 queries for that time period by impression count, which provided a decent range of rankings and click-through rates. We used GWT data because it’s the most constrained rank-tracking method on our list – in other words, we needed keywords that were likely to pop up on GWT when we did our final data collection.

On August 7th, we tracked these 500 queries using four methods:

(1) Browser – Personalized

This is the old-fashioned approach: I personally entered the queries on Google.com via the Chrome browser (v29), logged into my own account.

(2) Browser – Incognito

Again, using Google.com on Chrome, I ran the queries manually. This time, though, I was fully logged out and used Chrome’s incognito mode. While this method isn’t perfect, it seems to remove many forms of personalization.

(3) Crawler

We modified part of the MozCast engine to crawl each of the 500 queries and parse the results. Crawls occurred across a range of IP addresses (and C-blocks), selected randomly. The crawler did not emulate cookies or any kind of login, and we added the personalization parameter (“&pws=0”) to remove other forms of personalization. The crawler also used the “&near=us” option to remove some forms of localization. We crawled up to five pages of Google results, which produced data for all but 12 of the 500 queries (since these were queries for which we knew Moz.com had recently ranked).
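As a sketch of how such a crawl builds its query URLs (this is not MozCast’s actual code, and parameter behavior is Google’s to change; “pws=0” and “near=us” were simply the options in use at the time):

```python
from urllib.parse import urlencode

def build_serp_url(query, page=0):
    """Build a Google results-page URL with personalization and some
    localization disabled, one page of 10 organic results at a time."""
    params = {
        "q": query,
        "pws": "0",          # strip personalized results
        "near": "us",        # reduce localization effects
        "start": page * 10,  # pagination: page 0 -> start=0, page 1 -> start=10
    }
    return "https://www.google.com/search?" + urlencode(params)

# Crawl up to five pages per query, as in the methodology above.
urls = [build_serp_url("seo software", page=p) for p in range(5)]
```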

(4) Google Webmaster Tools

After Google made data available for August 7th, we exported average position data from GWT (via “Search Traffic” > “Search Queries”) for that day, filtering to just “Web” and “United States”, since those were the parameters of the other methods. While the other methods represent a single data point, GWT “Avg. position” theoretically represents multiple data points. Unfortunately, there is very little transparency about precisely how this data is measured.

Once the GWT data was exported and compared to the full list, there were 206 queries left with data from all four rank-tracking methods. All but a handful of the dropped keywords were due to missing data in GWT’s one-day report. Our analyses were conducted on this set of 206 queries with full data.
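As an illustration of that filtering step, the merge keeps only queries with data from all four methods. The query names and most values below are hypothetical placeholders (the “seo” and “seo checklist” rankings are taken from the examples later in this post):

```python
# Hypothetical per-method rank data keyed by query; in practice these
# would be parsed from the GWT export and the other tools' reports.
personal = {"seo": 3, "seo checklist": 3, "link building": 5}
incognito = {"seo": 4, "seo checklist": 7, "link building": 6}
crawler = {"seo": 4, "seo checklist": 7}
gwt = {"seo": 6.0, "seo checklist": 6.7}  # GWT "Avg. position"

sources = [personal, incognito, crawler, gwt]

# Keep only queries that every method returned data for.
complete = set(sources[0])
for s in sources[1:]:
    complete &= set(s)

dataset = {q: [s[q] for s in sources] for q in sorted(complete)}
# "link building" is dropped because two methods lack data for it.
```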

Results: Correlations

To compare the four ranking methods, we started with the pair-wise Spearman rank-order correlations (hat tip to my colleague, Dr. Matt Peters, for his assistance on this and the following analysis). All correlations were significant at the p<0.01* level, and r-values are shown in the table below:

*Given that the ranking methods are analogous to a repeated analysis of the same data set, we applied the Bonferroni correction to all p-values.

Interestingly, almost all of the methods showed very strong agreement, with Personalized vs. Incognito showing the most agreement (not surprisingly, as both are browser-based). Here’s a scatterplot of that data, plotted on log-log axes (done only for visualization’s sake, since the rankings were grouped pretty tightly at the upper spots):

Crawler vs. GWT had the lowest correlation, but it’s important to note that none of these differences were large enough to make a strong distinction between them. Here’s the scatterplot of that correlation, which is still very high/positive by most reasonable standards:

Since the GWT “Average” data is precise to one decimal place, there’s more variation in the Y-values, but the linear relationship remains very clear. Many of the keywords in this data set had #1 rankings in GWT, which certainly helped boost the correlations, but the differences between the methods appear to be surprisingly low.

If you’re new to correlation and r-values, check out my quick refresher: the correlation “mathographic”. The statement “p<0.01” means that there is less than a 1% probability that these r-values were the result of random chance. In other words, we can be 99% sure that there was some correlation in play (and it wasn’t zero). This doesn’t tell us how meaningful the correlation is. In this particular case, we’re just comparing sets of data to see how similar they are – we’re not making any statements about causation.
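For readers who want to reproduce this kind of analysis, here is a dependency-free sketch of Spearman’s rho (a Pearson correlation computed on average ranks) together with the Bonferroni-adjusted threshold; in practice scipy.stats.spearmanr does the same job and also returns p-values. The method names are from this study, but any ranking lists you feed in are your own data:

```python
from itertools import combinations

def average_ranks(xs):
    """1-based ranks, with tied values sharing the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank-order correlation: Pearson's r computed on the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Six pair-wise comparisons among four methods, so the Bonferroni-corrected
# threshold for an overall alpha of 0.01 is 0.01 / 6 per comparison.
methods = ["personal", "incognito", "crawler", "gwt"]
pairs = list(combinations(methods, 2))
alpha_per_test = 0.01 / len(pairs)
```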

Results: Agreement

One problem with the pair-wise correlations is that we can only compare any one method to another. In addition, there’s a certain amount of dependence between the methods, so it’s hard to determine what a “strong” correlation is. During a smaller, pilot study, we decided that what we’re really interested in is how any given method compares to the totality of the other three methods. In other words, which method agrees or disagrees the most with the rest of the methods?

With the help of Dr. Peters, I created a metric of agreement (or, more accurately, disagreement). I’ll save the full details for Appendix A at the end of this article, but here’s a short version. Let’s say that the four methods return the following rankings (keeping in mind that GWT is an average):

  1. 2
  2. 1
  3. 1
  4. 2.8

Our disagreement metric produces the following values for each of the methods:

  1. 2.89
  2. 2.34
  3. 2.34
  4. 3.58

Since the two #1 rankings show the most agreement, methods (2) and (3) have the same score, with method (1) showing more disagreement and (4) showing the most disagreement. The greater the distance between the rankings, the higher the disagreement score, but any rankings that match will have the same score for any given keyword.

This yielded a disagreement score for each of the four methods for each of the 206 queries. We then took the mean disagreement score for each method, and got the following results:

  1. Personal = 1.12
  2. Incognito = 0.82
  3. Crawler = 0.98
  4. GWT = 1.26

GWT showed the highest average disagreement from the other methods, with incognito rankings coming in on the low end. On the surface, this suggests that, across the entire set of methods, GWT disagreed with the other three methods the most often.

Given that we’ve invented this disagreement metric, though, it’s important to ask if this difference is statistically significant. This data proved not to be normally distributed (a chunk of disagreement=0 data points skewed it to one side), so we decided our best bet for comparison was the non-parametric Mann-Whitney U Test.

Comparing the disagreement data for each pair of methods, the only difference that approached statistical significance was Incognito vs. GWT (p=0.022). Since I generally try to keep the bar high (p<0.01), I have to play by my own rules and say that the disagreement scores were too close to call. Our data cannot reliably tell the levels of disagreement apart at this point.
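As a sketch of the statistic behind that test (the p-value itself comes from the U distribution or its normal approximation; scipy.stats.mannwhitneyu handles both, and the sample values below are placeholders, not the study data):

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b.

    Counts, over all cross-sample pairs, how often a value from `a`
    exceeds one from `b`; ties contribute 0.5. The two directional
    statistics always sum to len(a) * len(b).
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Identical samples give the midpoint value n*m/2, i.e. no evidence
# that either sample's disagreement scores run higher.
u_ab = mann_whitney_u([0.8, 1.1, 1.3], [0.8, 1.1, 1.3])
```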

Results: Outliers

Even if the statistics told us that one method clearly disagreed more than the other methods, it still wouldn’t answer one very important question – which method is right? Is it possible, for example, that Google Webmaster Tools could disagree with all of the other methods, and still be the correct one? Yes, it’s within the realm of possibility.

No statistic will tell us which method is correct if we fundamentally distrust all of the methods (and I do, at least to a point), so our next best bet is to dig into some of the specific cases of disagreement and try to sort out what’s happening. Let’s look at a few cases of large-scale disagreement, trying not to bias toward any particular method.

Case 1 – Personalization Boost

Many of the cases where personalization disagreed are what you’d expect – Moz.com was boosted in my personalized results. For example, a search for “seo checklist” had Moz.com at #3 in my logged-in results, but #7 for both incognito and crawled, and an average of 6.7 for GWT (which is consistent with the #7 ballpark). Even by just clicking personalization off, Moz.com dropped to #4, and in a logged out browser a few days after the original data collection, it was at #5.

What’s fascinating to me is that personalization didn’t disagree even more often. Consider that all of these queries were searches that generated traffic for Moz.com and I’m on the site every day and very active in the SEO community. If personalization has the impact we seem to believe it has, I would theorize that personalized searches would disagree the most with other methods. It’s interesting that that wasn’t the case. While personalization can have a huge impact on some queries, the number of searches it affects still seems to be limited.

Case 2 – Personalization Penalty

In some cases, personalization actually produced lower rankings. For example, a search for “what is an analyst” showed Moz.com at the #12 position for both personalized and incognito searches. Meanwhile, crawled rankings put us at #3, and GWT’s average ranking was #5. Checking back (semi-manually), I now see us at #10 on personalized search and up to #2 for crawled rankings.

Why would this happen? Both searches (personalized vs. crawled) show a definition box for “analyst” at the top, which could indicate some kind of re-ranking in play, but the top 10 after that box differ by quite a bit. One would naturally assume that Moz.com would get a boost in any of my personalized searches, but that’s simply not the case. The situation is much more complex and real-time than we generally believe.

Case 3 – GWT (Ok, Google) Hates Us

Here’s one where GWT seems to be out of whack. In our one-day data collection, a search for “seo” showed Moz at #3 for personalized rankings and #4 for incognito and crawled. Meanwhile, GWT had us down in the #6 spot. It’s not a massive difference, but for such an important head keyword, it definitely could lead to some soul-searching.

As of this writing, I was showing Moz.com in the #4 spot, so I called in some help via social media. I asked people to do a logged-in (personalized) search for “seo” and report back where they found Moz.com. I removed data from non-US participants, which left 63 rankings (36 from Twitter, and 27 from Facebook). The reported rankings ranged from #3 to #8, with an average of 4.11. These rankings were reported from across the US, and only two participants reported rankings at #6 or below. Here’s the breakdown of the raw data:

You can see the clear bias toward the #4 position across the social data. You could argue that, since many of my friends are SEOs, we all have similarly biased rankings, but this quickly leads to speculation. Saying that GWT numbers don’t match because of personalization is a bit like saying that the universe must be made of dark matter just because the numbers don’t add up without it. In the end, that may be true, but we still need the evidence.

Face Validity

Ultimately, this is my concern – when GWT’s numbers disagree, we’re left with an argument that basically boils down to “Just trust us.” This is difficult for many SEOs, given what feels like a concerted effort by Google to remove critical data from our view. On the one hand, we know that personalization, localization, etc. can skew our individual viewpoints (and that browser-based rankings are unreliable). On the other hand, if 56 out of 63 people (89%) all see my site at #3 or #4 for a critical head term and Google says the “average” is #6, that’s a hard pill to swallow with no transparency around where Google’s number is coming from.

In measurement, we call this “face validity”. If something doesn’t look right on the surface, we generally want more proof to sort out why, and that’s usually a reasonable instinct. Ultimately, Google’s numbers may be correct – it’s hard to prove they’re not. The problem is that we know almost nothing about how they’re measured. How does Google count local and vertical results, for example? What/who are they averaging? Is this a sample, and if so, how big of a sample and how representative? Is data from [not provided] keywords included in the mix?

Without these answers, we tend to trust what we can see, and while we may be wrong, it’s hard to argue that we shouldn’t. What’s more, it’s nearly impossible to convince our clients and bosses to trust a number they can’t see, right or wrong.


The “good” news, if we’re being optimistic, is that the four methods we considered in this study (Personalized, Incognito, Crawler, and GWT) really didn’t differ that much from each other. They all have their potential faults, but in most cases they’ll give you an answer that’s in the ballpark of reality. If you focus on relative change over time and not absolute numbers, then all four methods have some value, as long as you’re consistent.

Over time, this situation may change. Even now, none of these methods measure anything beyond core organic ranking. They don’t incorporate local results, they don’t indicate if there are prominent SERP features (like Answer Boxes or Knowledge Graph entries), they don’t tell us anything about click-through or traffic, and they all suffer from the little white lie of assumed linearity. In other words, we draw #1 – #10, etc. on a straight line, even though we know that click-through and impact drop dramatically after the first couple of ranking positions.

In the end, we need to broaden our view of rankings and visibility, regardless of which measurement method we use, and we need to keep our eyes open. In the meantime, the method itself probably isn’t critically important for most keywords, as long as we’re consistent and transparent about the limitations. When in doubt, consider getting data from multiple sources, and don’t put too much faith in any one number.

Appendix A: Measuring Disagreement

During a pilot study, we realized that, in addition to pair-wise comparisons of any two methods, what we really wanted to know was how any one method compared to the rest of the methods. In other words, which methods agreed (or disagreed) the most with the set of methods as a whole? We invented a fairly simple metric based on the sum of the differences between each of the methods. Let’s take the example from the post – here, the four methods returned the following rankings (for Keyword X):

  • Method 1: 2
  • Method 2: 1
  • Method 3: 1
  • Method 4: 2.8

We wanted to reward methods (2) and (3) for being the most similar (it doesn’t matter that they showed Keyword X in the #1 position, just that they agreed), and slightly penalize (1) and (4) for mismatching. After testing a few options, we settled (I say “we”, but I take full blame for this particular nonsense) on calculating the sum of the square roots of the absolute differences between each method and the other three methods.

That sounds a lot more complicated than it actually is. Let's calculate the disagreement score for method 1, which we'll call "M1" (likewise, we'll call the other methods M2, M3, and M4). I call it a "disagreement" score because larger values ended up representing lower agreement. For M1 for Keyword X, the disagreement score is calculated by:

sqrt(abs(M1-M2)) + sqrt(abs(M1-M3)) + sqrt(abs(M1-M4))

The absolute value is used because we don’t care about the direction of the difference, and the square root is essentially a dampening function. I didn’t want outliers to be overemphasized, or one bad data point for one method could potentially skew the results. For Method 1 (M1), then, the disagreement value is:

sqrt(abs(2-1)) + sqrt(abs(2-1)) + sqrt(abs(2-2.8))

…which works out to 2.89. Here are the values for all four methods:

  • Method 1: 2.89
  • Method 2: 2.34
  • Method 3: 2.34
  • Method 4: 3.58

Let's look at a couple more examples, just so that you don't have to take my word for how this works. In this second case, two methods still agree, but the ranking positions are "lower" (which equates to larger numbers), as follows:

  • Method 1: 12
  • Method 2: 12
  • Method 3: 3
  • Method 4: 5

The disagreement metric yields the following values:

  • Method 1: 5.65
  • Method 2: 5.65
  • Method 3: 7.41
  • Method 4: 6.71

M1 and M2 are in agreement, so they have the same disagreement value, but all four values are elevated a bit to show that the overall distance across the four methods is fairly large. Finally, here’s an example where two methods each agree with one other method:

  • Method 1: 2
  • Method 2: 2
  • Method 3: 5
  • Method 4: 5

In this case, all four methods have the same disagreement score:

  • Method 1: 3.46
  • Method 2: 3.46
  • Method 3: 3.46
  • Method 4: 3.46

Again, we don’t care very much that two methods ranked Keyword X at #2 and two at #5 – we only care that each method agreed with one other method. So, in this case, all four methods are equally in agreement, when you consider the entire set of rank-tracking methods. If the difference between the two pairs of methods was larger, the disagreement score would increase, but all four methods would still share that score.

Finally, for each method, we took the mean disagreement score across the 206 keywords with full ranking data. This yielded a disagreement measurement for each method. Again, these measurements turned out not to differ by a statistically significant margin, but I’ve presented the details here for transparency and, hopefully, for refinement and replication by other people down the road.
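To make replication easier, here's a minimal Python sketch of the disagreement calculation described above. This is my own illustration, not code from the study, and the function name is invented:

```python
from math import sqrt

def disagreement_scores(ranks):
    """For each rank-tracking method, sum the square roots of the
    absolute differences between its ranking and each other method's
    ranking for the same keyword. Larger values mean lower agreement."""
    return [
        sum(sqrt(abs(ranks[i] - ranks[j])) for j in range(len(ranks)) if j != i)
        for i in range(len(ranks))
    ]

# The example from the post: four methods ranked Keyword X at 2, 1, 1, and 2.8
print([round(s, 2) for s in disagreement_scores([2, 1, 1, 2.8])])
# [2.89, 2.34, 2.34, 3.58]
```

Each method's final disagreement measurement is then just the mean of its per-keyword scores across all keywords with full ranking data.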

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Moz Blog


An Updated Guide to Google Webmaster Tools

Posted by beammeup

With the recent Google Webmaster Tools security bug, I thought a deep dive into what GWT has to offer SEOs might be prudent since many SEOs may have logged in recently.

Google Webmaster Tools was once Google Webmaster Wasteland. But the past year has been a fruitful one as Webmaster Tools has rolled out improvements faster than Facebook does new privacy statements. Google Webmaster Tools (GWT) is now full of insightful data and metrics that you cannot get anywhere else. Some GWT data is useful, some is not. Let's dive in and take a look at each tool in GWT.

Guide to Google Webmaster Tools Index

Webmaster Tools sections:

  • Configuration
  • Health
  • Traffic
  • Optimization
  • Labs

My favorite tools:

  1. Download Your Latest Links
  2. View Your Website Crawl Stats
  3. Submit To Index
  4. Webmaster Tools Data in Google Analytics
  5. Rich Snippets/Structured Data Test Tool

Webmaster Tools Home

When you first log in, you'll see a list of all websites in your Google Webmaster Tools account, as well as a few links to view all messages from Google, 'Preferences', 'Author Stats' (Labs), and a few miscellaneous links under 'Other Resources'.

Google Webmaster Tools Home

All Messages

Google used to rarely communicate with webmasters through messages. This year, given the number of "love notes" many SEOs have received, some probably wish Google communicated a little less. You might see a message here if:

  • Google thinks your site may have been hacked
  • Google detected unnatural links pointing to your site
  • Google thinks links pointing to your site are using techniques outside Google’s Webmaster Guidelines

You can set the messages email threshold to 'only important' or 'all messages' under the "Preferences" tab.

See it: View All Your Messages

Labs – Author Stats

Author stats in Google Webmaster Tools

Since authorship isn't tied to a single domain, Google shows authorship stats for all sites you write for as well as individual stats. You'll need a valid author profile (go Google+!) to see stats here. The stats are interesting, and good for verifying which URLs are showing your ugly mug in the SERPs.

See it: View your Author Stats

Other Resources – Rich Snippets/Structured Data

Structured Data Testing Tool

If you've never used the rich snippets testing tool, now known as the "structured data" testing tool, bookmark it now. It's a one-stop shop to test URLs and see if your author profile is linked correctly.

You can also use the tool to check whether you've set up or verified your:

  • Author Page
  • Name
  • Google+ Page as a Publisher
  • Any structured data detected (reviews, products, song titles, etc) in the form of microdata, microformats, or RDFa

See it: Test Your URLs for Structured Data
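For background, linking an author profile is typically done with a rel="author" link from the content page to the author's Google+ profile, plus a reciprocal "Contributor to" link on the profile itself. A minimal sketch (the profile ID below is a placeholder, not a real account):

```html
<!-- On the article page: link the byline to the author's Google+ profile -->
<a href="https://plus.google.com/112345678901234567890" rel="author">Jane Author</a>

<!-- Alternatively, a single link in the page <head> -->
<link rel="author" href="https://plus.google.com/112345678901234567890"/>
```

The Google+ profile should list the site under "Contributor to" so the association is two-way; the testing tool will report whether authorship is verified for the URL.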

Specific Site Dashboard in Google Webmaster Tools

Once you select a site after logging in, you see the real meat of the tool. The site specific dashboard has a nice overview showing:

  • Crawl Errors
  • URL Errors
  • Site Errors
  • Health status of DNS, Server Connectivity & Robots.txt
  • Overview of # of Queries (plus clicks and impressions)
  • Sitemaps (including submitted URLs and indexed URLs)

GWT Site Dashboard

There are five major sections once you've selected a site: 'Configuration', 'Health', 'Traffic', 'Optimization', and 'Labs'. I find that the most insightful data is in the 'Health' and 'Traffic' sections, and in what you can get inside Google Analytics.

The 'Configuration' Section


Google Webmaster Tools Settings

Here you can target a specific country for your website, choose a preferred domain (www or non-www), and limit the crawl rate of Googlebot if you so choose.


Google Sitelinks

Google automatically chooses Sitelinks to display below your main URL on certain queries, usually brand-related ones. If there are certain URLs you wouldn't want showing as Sitelinks, you can "demote" them and Google won't show those demoted URLs.

URL Parameters

If you're having problems with duplicate content on your site because of variables/parameters in your URLs you can restrict Google from crawling them with this tool. Unless you're sure about what you're restricting, don't play with the settings here!

Change of Address

If you are switching your site to a whole new domain, do a 301 redirect, then make sure Google knows about it here.
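As a reference for the redirect step, a domain-wide 301 is often handled at the server level. Here's a hedged Apache .htaccess sketch (both domains are placeholders; your server setup may differ):

```apache
# Redirect every URL on the old domain to the same path on the new one
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-example\.com$ [NC]
RewriteRule ^(.*)$ http://www.new-example.com/$1 [R=301,L]
```

The path-preserving capture ($1) matters: redirecting every old URL to the new homepage loses link equity that page-to-page redirects would keep.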


Users

Ever taken like 20 minutes to add a new user to your Google Analytics account? No? OK, maybe that was just me. Luckily, adding a user to GWT is much easier. There are two main user types: 'Full user' and 'Restricted user'. Restricted users are good for clients if you want to give them mostly view-only access, with little ability to change settings or submit things (you probably don't want clients filing random reconsideration requests!).

adding users in GWT


Associates

This setting is a way for members of YouTube's Partner Program (probably not you) to link their YouTube channel with Webmaster Tools. My guess is this section will get more settings in the future, but for now, it's very confusing. More details on the Google Webmaster Central blog here.

The 'Health' Section

Crawl Errors

Crawl errors shows you issues Googlebot had in crawling your site. This includes response codes (404s, 301s) as well as a graph of the errors over time. This is a fantastic resource for spotting broken links, as the URL shows up as a 404 error. You can see when Google first detected the error codes and download the table of errors into a spreadsheet.

google webmaster tools crawl errors

Crawl Stats

Pages crawled per day is a good SEO metric to track over time. You can get some insight from the chart, but this is a metric to check in on and record every week. Ideally you want that number continuing to climb, especially if you are adding new content.

google webmaster tools crawl stats

Blocked URLs

This report shows which URLs on your site Google is blocked from crawling by your robots.txt file.

Fetch as Googlebot & Submit To Index

Fetch as Googlebot will return exactly what Google's spider "sees" on the URL you submit. This is handy for spotting hacked sites as well as seeing your site the way Google does. It's a good place to start an SEO audit.

The really neat feature that's new this year is "Submit to Index". Ever made a title tag change and wished Google would update its index faster to get those changes live? 'Submit to Index' does just that. 50 times a month you can submit a page to update in near real-time in Google's index. Very handy for testing on-page changes.

Here's Matt Cutts on how to use the 'Submit to Index' tool:

Index Status

Make sure to hit the 'Advanced' button here so you can see all the interesting index stats Google shows about your site. Keep an eye on the 'Not Selected' number: if it is rising, that could indicate that Google is not viewing your content favorably, or that you have a duplicate content issue.

google webmaster tools index status


Malware

If Google has detected any malware on your site, you will see more information here. Google often sends a message now when malware is detected as well.

The 'Traffic' Section

Search Queries

These are queries for which your site showed up in a search result, not just ones where someone clicked through to your site. So you may find some keyword opportunities where you are showing up but not getting clicks. I much prefer the interface in Google Analytics for this query data, and you may find a lot more queries showing up there than here.

Keep an eye on the CTR % for queries. If you have a known #1 ranking (for your brand terms, for example) but an abnormally low position-1 CTR, that's a sign that someone might be bidding on your brand terms (which may or may not be a problem). If you have a high position but a low CTR, it usually indicates that your meta descriptions and title tags may not be enticing enough. Can you add a verified author to the page? Or other structured data? That could help click-through rates.

google webmaster tools search queries

Links To Your Site

This is my favorite addition to GWT this year. The link data here keeps getting updated faster and faster. When this was first launched earlier this year the delay on finding links was around three weeks. I've seen the delay down to as little as one week now.

There are two ways to download lists of links, but the "Download Latest Links" is the more useful of the two.

"Download More Sample Links" just gives a list of the same links as the latest links, but in alphabetical order instead of most recent. The main report lists the domains linking to your site, sorted by the number of links. Unfortunately, drilling down into the domain level doesn't really give any useful insights other than which of your pages are linked to (you can't see where they are linked from on the domain). You'll also find domains listed here but not in the "Latest Links" report. Bummer.

google webmaster tools links to site

Internal Links

A pretty good report for diagnosing internal link issues. Nothing fancy here, but URLs are sorted by the number of internal links pointing to them. Use this to diagnose pages on your site that should be getting more internal link juice.

The 'Optimization' Section


Sitemaps

See a list of all the different types of sitemaps Google has found or that you have added, along with some stats about each one. You can also test a sitemap before submitting it, and Google will scan it for errors. Webmaster Tools shows stats here on Web sitemaps, as well as Video, News, and Image sitemaps.

google webmaster tools sitemaps
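For reference, a basic Web sitemap is just an XML file listing your URLs. The sketch below uses placeholder URLs; only `<loc>` is required per the sitemaps.org protocol, the other tags are optional hints:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2012-11-28</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>http://www.example.com/about</loc>
  </url>
</urlset>
```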

Remove URLs

You can submit URLs (only for sites you control, of course) that you wish removed from Google. Make sure to follow the removal requirements process.

HTML Improvements

Think of this as a basic on-page SEO audit tool. Google shows you lists of URLs on your site that don't have unique Title Tags or are missing Meta Descriptions. It's handy for catching quick on-page SEO issues when you first take over a new website. Click on any of the issues found to get a list of the URLs that need improvement.

google webmaster tools html improvements

Content Keywords

See a list of single keywords, not key phrases, which Google thinks your site is about. As long as you don't see spam stuff here, you're good.

Structured Data

If you have some structured data on your site, such as a linked Google+ author or product review data, you can see stats about that data including the type of data found and the schema. This is useful to mass verify that all the pages you think are marked up correctly actually are.

google webmaster tools structured data tool
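As a made-up example of the kind of markup this report picks up, here's a minimal schema.org review in microdata (the item name, author, and rating values are all invented for illustration):

```html
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="name">Example Widget review</span>
  by <span itemprop="author">Jane Doe</span>
  <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    Rating: <span itemprop="ratingValue">4</span> out of
    <span itemprop="bestRating">5</span>
  </div>
</div>
```

If pages marked up like this are being read correctly, they should appear in the Structured Data report under the Review type.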

The 'Labs' Section

Custom Search

Ever wanted to build your own search engine? You can with Google Custom Search. If you have a collection of sites that you're always searching through using Google, you might consider using Google Custom search to build your own Google that just returns results from those sites. You can see how the custom search engine would work on just your own site using the preview tool here in Webmaster Tools.

Instant Previews

Input any URL on your site (or just leave blank and click 'Compare' to see the homepage) to see what the preview of the site might look like in a Google desktop search results set, or on a mobile SERP.

google webmaster tools instant preview

Site Performance

This tool got dropped by Google's spring cleaning in April 2012. I like using webpagetest.org for testing site performance.

Webmaster Tools Data In Google Analytics

Connecting your Google Analytics account with your verified site profile in Google Webmaster tools brings some GWT data directly into your Google Analytics account. No need to login to two places.

To connect a verified GWT site to the correct analytics site, click the "manage site" dropdown:

google webmaster tools connection to Google Analytics

Once connected, GWT data shows up in the Standard Reporting section of Google Analytics under "Traffic Sources" -> "Search Engine Optimization".

Not all GWT data is available in GA. You'll only get three data sets in Google Analytics:

  • Queries
  • Landing Pages
  • Geographical Summary

Let's look at each of these and see what's worth looking at.


Queries

Queries are interesting because you can see some of the keywords that might be hidden under (not provided). This doesn't help with attribution, of course, but at least we can still use that data for keyword research. Darn you, (not provided).

What's really interesting is how many more queries show up in the query report in Google Analytics (which is supposed to be GWT data) than when you get the query data directly in Google Webmaster Tools. For example, for the timeframe Oct 28th to Nov 27th, we had 317 queries reported in Google Analytics:

analytics query data from webmaster tools

but only 93 in the Google Webmaster Tools 'Top queries' report:

google webmaster tools top queries

I'm not sure why there's such a big discrepancy between GWT queries and the queries in Analytics from GWT. I definitely see more Google Images-type queries in the GA report and fewer in the 'Top Queries' report in GWT. Has anyone else noticed a big difference in query data?

Nonetheless the Query data can be interesting and it's nice to have right in GA. I hope that Google continues to provide more GWT data directly into Google Analytics like this.

Landing Pages

You're better off getting your actual top landing pages list from Analytics, but you can see what GWT sees as your top pages, sorted by impressions. The interesting nugget of info here is the CTR. That's not data you see in Analytics, and it can be insightful. I like comparing each page's CTR to the site average:

landing pages in google analytics

Geographical Summary

This section, again, is mostly useful for the CTR data. Looking at specific countries, you can see where it might be worth running more Facebook ads or doing some international SEO work.

What do you use Google Webmaster Tools For?

OK, I've ranted enough about what I like in GWT. What about you?

What data do you find useful in Google Webmaster tools?


SEOmoz Daily SEO Blog


What Google’s Webmaster Tools Tell Us About International Click Through Rates

Last week, I was taking delegates on an international SEO course through their paces on using the data Google so generously provides in their webmaster tools. I’m a little puzzled that more people don’t dip into the data, analyse it and then take actions on their campaigns. So, high…

Please visit Search Engine Land for the full article.

Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Phoenix Rising: Bing’s New Webmaster Tools – Whiteboard Friday

Posted by randfish

In this week's Whiteboard Friday, Bing's very own Duane Forrester will be joining us in the Moz studio to personally walk us through the newly released Bing Webmaster Tools.  

Let us know how you feel about these Webmaster Tools in the comments below. Duane will be keeping an eye on them as well so let him know what you think.

Happy Friday Everyone!

Video Transcription

Transcription coming soon. Our apologies for any inconvenience.

Video transcription by Speechpad.com


SEOmoz Daily SEO Blog
