Tag Archive | “SEOs”

How to Face 3 Fundamental Challenges Standing Between SEOs and Clients/Bosses

Posted by sergeystefoglo

Every other year, the good people at Moz conduct a survey with one goal in mind: understand what we (SEOs) want to read more of. If you haven’t seen the results from 2017, you can view them here.

The results contain many great questions, challenges, and roadblocks that SEOs face today. As I was reading the 2017 Moz Blog readership survey, a common thread stood out to me: there are disconnects on fundamental topics between SEOs and clients and/or bosses. Since I work at an agency, I’ll use “client” through the rest of this article; if you work in-house, replace that with “boss.”

Check out this list:

I can definitely relate to these challenges. I’ve been at Distilled for a few years now and worked at other firms before that — these challenges are real, and they’re tough. By sharing my experience dealing with them, I hope to help other consultants and SEOs overcome them.

In particular, I want to discuss three points of disconnect that happen between SEOs and clients.

  1. My client doesn’t understand the value of SEO and it’s difficult to prove ROI.
  2. My client doesn’t understand how SEO works and I always have to justify my actions.
  3. My client and I disagree about whether link building is the right answer.

Keep in mind, these are purely my own experiences. This doesn’t mean these answers are the be-all and end-all. In fact, I would enjoy starting a conversation around these challenges with any of you, so please grab me at SearchLove (plug: our San Diego conference is selling out quickly and is my favorite) or MozCon to bounce around more ideas!

1. My client doesn’t understand the value of SEO and it’s difficult to prove ROI

The value of SEO lies in its influence on organic search, which remains an extremely valuable channel. In fact, SEO is more prominent in 2018 than it has ever been. To illustrate this, I borrowed some figures from Rand’s write-up on the state of organic search at the end of 2017.

  • Year over year, the period of January–October 2017 saw 13% more search volume than the same months in 2016.
  • That 13% represents 54 billion more queries, which is just about the total number of searches Google did, worldwide, in 2003.

Organic search brings in more qualified visitors (at a more consistent rate) than any other digital marketing channel. In other words, more people are searching for things than ever before, which means more potential to grow organic traffic. How do we grow organic traffic? By making sure our sites are discoverable by Google and clearly answer user queries with good content.

Source: Search Engine Land

When I first started out in SEO, I used to think I was making all my clients all the moneys. “Yes, Bill, if you hire me and we do this SEO thing I will increase rankings and sessions, and you will make an extra x dollars!” I used to send estimates on ROI with every single project I pitched (even if it wasn’t asked of me).

After a few years in the industry, I began questioning the value of providing estimates on ROI. Specifically, I was having trouble determining if I was doing the right thing by providing a number that was, at best, an educated guess. It would stress me out, and I would feel tied to that number. It also turns out that not worrying about things outside our control helps keep stress levels down.

I’m at a point now where I’ve realized the purpose of providing an estimated ROI. Our job as consultants is to effect change. We need to get people to take action. If what it takes to get sign-off is to predict an uplift, that’s totally fine. In fact, it’s expected. Here’s how that conversation might look.

In terms of a formula for forecasting uplifts in SEO, Mike King said it best:

“Forecast modeling is questionable at best. It doesn’t get much better than this:”

  • Traffic = Search Volume x CTR
  • Number of Conversions = Conversion Rate x Traffic
  • Dollar Value = Number of Conversions x Avg Conversion Value
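To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope forecast in Python. Every input number is a hypothetical placeholder; you would swap in your own keyword volumes, click-through rates, and conversion data.

```python
# A minimal sketch of the forecast formulas above. All inputs are hypothetical
# placeholders, not benchmarks -- replace them with your own data.

search_volume = 40_000      # monthly searches across the target keyword set
ctr = 0.12                  # expected organic click-through rate at the target position
conversion_rate = 0.02      # conversion rate for organic visitors
avg_conversion_value = 150  # average revenue per conversion, in dollars

traffic = search_volume * ctr
conversions = traffic * conversion_rate
dollar_value = conversions * avg_conversion_value

print(f"Estimated monthly traffic: {traffic:,.0f}")         # 4,800
print(f"Estimated monthly conversions: {conversions:,.0f}")  # 96
print(f"Estimated monthly value: ${dollar_value:,.0f}")      # $14,400
```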

TL;DR:

  • Don’t overthink this — if you do, you’ll get stuck in the weeds.
  • When requested, provide the prediction to get sign-off and quickly move on to action.
  • For more in-depth thoughts on this, read Will Critchlow’s recent post on forecast modeling.
  • Remember to think about seasonality, overall trends, and the fact that few brands exist in a vacuum. What are your competitors doing and how will that affect you?

2. My client doesn’t understand how SEO works and I always have to justify my actions

Does your client actually not understand how SEO works? Or, could it be that you don’t understand what they need from you? Perhaps you haven’t considered what they are struggling with at the moment?

I’ve been there — constantly needing to justify why you’re working on a project or why SEO should be a focus at all. It isn’t easy to be in this position. But more often than not, I’ve found that what helps the most is to take a step back and ask some fundamental questions.

A great place to start would be asking:

  • What are the things my client is concerned about?
  • What is my client being graded on by their boss?
  • Is my client under pressure for some reason?

The answers to these questions should shine some clarity on the situation (the why or the motivation behind the constant questioning). Some of the reasons why could be:

  • You might know more about SEO than your client, but they know more about their company. This means they may see the bigger picture: the investments, the returns, the activities, and the interplay between them all.
  • SEO might be 20% of what your client needs to think about — imagine a VP of marketing who needs to account for 5–10 different channels.
  • If you didn’t get sign-off or budget for a project, it doesn’t mean your request was without merit. It just means someone else made a better pitch, one more aligned with their larger goals.

When you have some answers, ask yourself, “How can I make what I’m doing align to what they’re focused on?” This will ensure you are hitting the nail on the head and providing useful insight instead of more confusion.

That conversation might look like this:

TL;DR

  • This is a good problem to have — it means you have a chance to effect change.
  • Also, it means that your client is interested in your work!
  • It’s important to clarify the why before getting too far into the weeds. Rarely will the why be “to learn SEO.”

3. My client and I disagree about whether link building is the right answer

The topic of whether links (and by extension, link building) are important is perhaps the most talked-about topic in SEO. To put it simply, there are many different opinions and no single “go-to” answer. In 2017 alone, there were many conflicting posts and talks on the state of links.

The quick answer to the challenge we face as SEOs when it comes to links is: unless authority is holding you back, do something else.

That answer is a bit brief, though, and it doesn’t help much if your client is constantly bringing up links. In that case, I think there are a few points to consider.

  1. If you’re a small business, getting links is a legitimate challenge and can significantly impact your rankings. The problem is that it’s difficult to get links for a small business. Luckily, we have some experts in our field giving out ideas for this. Check out this, this, and this.
  2. If you’re an established brand (with authority), links should not be a priority. Often, links will get prioritized because they are easier to attain, measurable (kind of), and comfortable. Don’t fall into this trap! Go with the recommendation above: do other impactful work that you have control over first.
    1. Reasoning: Links tie success to a metric we have no control over — this gives us an excuse to not be accountable for success, which is bad.
    2. Reasoning: Links reduce an extremely complicated situation into a single variable — this gives us an excuse not to try and understand everything (which is also bad).
  3. It’s good to think about the topic of links and how it’s related to brand. Big brands get talked about (and linked to) more than small brands. Perhaps the focus should be “build your brand” instead of “gain some links”.
  4. If your client persists on the topic of links, it might be easier to paint a realistic picture for them. This conversation might look like this:

TL;DR

  • There are many opinions on the state of links in 2018: don’t get distracted by all the noise.
  • If you’re a small business, there are some great tactics for building links that don’t take a ton of time and are probably worth it.
  • If you’re an established brand with more authority, do other impactful work that’s in your control first.
  • If you are constantly getting asked about links from your client, paint a realistic picture.

Conclusion

If you’ve made it this far, I’m really interested in hearing how you deal with these issues within your company. Are there specific challenges you face within the topics of ROI, educating on SEO, getting sign-off, or link building? How can we start tackling these problems more as an industry?


Should SEOs & Content Marketers Play to the Social Networks’ "Stay-On-Our-Site" Algorithms? – Whiteboard Friday

Posted by randfish

Increasingly, social networks are tweaking their algorithms to favor content that remains on their site, rather than send users to an outside source. This spells trouble for those trying to drive traffic and visitors to external pages, but what’s an SEO or content marketer to do? Do you swim with the current, putting all your efforts toward placating the social network algos, or do you go against it and continue to promote your own content? This edition of Whiteboard Friday goes into detail on the pros and cons of each approach, then gives Rand’s recommendations on how to balance your efforts going forward.

Should SEOs and content marketers play to the social networks’ “stay-on-our-site” algorithms?



Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about whether SEOs and content marketers, for that matter, should play to what the social networks are developing in their visibility and engagement algorithms, or whether we should say, “No. You know what? Forget about what you guys are doing. We’re going to try and do things on social networks that benefit us.” I’ll show you what I’m talking about.

Facebook

If you’re using Facebook and you’re posting content to it, Facebook generally tends to frown upon and lower the average visibility and reach of content that includes an external link. So, on average, posts that include an external link will fare more poorly in Facebook’s news feed algorithm than content that lives exclusively on Facebook.

For example, if you see this video promoted on Facebook.com/Moz or Facebook.com/RandFishkin, it will do more poorly than if Moz and I had promoted a Facebook native video of Whiteboard Friday. But we don’t want that. We want people to come visit our site and subscribe to Whiteboard Friday here and not stay on Facebook where we only reach 1 out of every 50 or 100 people who might subscribe to our page.

So it’s clearly in our interest to do this, but Facebook wants to keep you on Facebook’s website, because then they can do the most advertising and targeting to you and get the most time on site from you. That’s their business, right?

Twitter

The same thing is true of Twitter. It tends to be the case that links off Twitter fare more poorly. Now, I am not 100% sure in Twitter’s case whether this is algorithmic or user-driven. I suspect it’s a little of both: Twitter will promote, or make most visible to you when you log in, the tweets that are self-contained. They live entirely on Twitter. They might contain a bunch of different stuff, a poll or images, or be a thread. But links off Twitter will be dampened.

Instagram

The same thing is true on Instagram. Well, on Instagram, they’re kind of the worst. They don’t allow links at all. The only thing you can do is a link in your profile. As of just a couple weeks ago, more engaging content on Instagram equals higher placement in the feed. In fact, Instagram has now come out and said that they will show you posts from people you’re not following but that they think will be engaging to you, which gives influential Instagram accounts that get lots of engagement an additional benefit, but kind of hurts everyone else that you’re normally following on the network.

LinkedIn

LinkedIn’s algorithm includes extra visibility in the feed for self-contained post content, which is why you see a lot of these posts of, “Oh, here’s all the crazy amounts of work I did and what my experience was like building this or doing that.” If it’s self-contained, sort of blog post-style content on LinkedIn that does not link out, it will do much better than posts that contain an external link, which LinkedIn dampens in its feed’s visibility algorithm.

Play to the algos?

So all of these sites have these components of their algorithm that basically reward you if you are willing to play to their algos, meaning you keep all of the content on their sites and platform, their stuff, not yours. You essentially play to what they’re trying to achieve, which is more time on site for them, more engagement for them, fewer people going away to other places. You refuse or you don’t link out, so no external linking to other places. You maintain sort of what I call a high signal-to-noise ratio, so that rather than sharing all the things you might want to share, you only share posts that you can count on having relatively high engagement.

That track record is something that sticks with you on most of these networks. Facebook, for example, if I have posts that do well, many in a row, I will get more visibility for my next one. If my last couple of posts have performed poorly on Facebook, my next one will be dampened. You sort of get a string or get on a roll with these networks. Same thing is true on Twitter, by the way.

$#@! the algos, serve your own site?

Or you say, “Forget you” to the algorithms and serve your own site instead, which means you use the networks to tease content, like, “Here’s this exciting, interesting thing. If you want the whole story or you want to watch the full video or see all the graphs and charts or whatever it is, you need to come to our website where we host the full content.” You link externally so that you’re driving traffic back to the properties that you own and control, and you have to be willing to promote some potentially promotional content, in order to earn value from these social networks, even if that means slightly lower engagement or less of that get-on-a-roll reputation.

My recommendation

The recommendation that I have for SEOs and content marketers is I think we need to balance this. But if I had to, I would tilt it in favor of your site. Social networks, I know it doesn’t seem this way, but social networks come and go in popularity, and they change the way that they work. So investing very heavily in Facebook six or seven years ago might have made a ton of sense for a business. Today, a lot of those investments have been shown to have very little impact, because instead of reaching 20 or 30 out of 100 of your followers, you’re reaching 1 or 2. So you’ve lost an order of magnitude of reach on there. The same thing has been true generally on Twitter, on LinkedIn, and on Instagram. So I really urge you to tilt slightly to your own site.

Owned channels are your website, your email, where you have the email addresses of the people there. I would rather have an email or a loyal visitor or an RSS subscriber than I would 100 times as many Twitter followers, because the engagement you can get and the value that you can get as a business or as an organization is just much higher.

Just don’t ignore how these algorithms work. If you can, I would urge you to sometimes get on those rolls so that you can grow your awareness and reach by playing to these algorithms.

So, essentially, while I’m urging you to tilt slightly this way, I’m also suggesting that occasionally you should use what you know about how these algorithms work in order to grow and accelerate your growth of followers and reach on these networks so that you can then get more benefit of driving those people back to your site. You’ve got to play both sides, I think, today in order to have success with the social networks’ current reach and visibility algorithms.

All right, everyone, look forward to your comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


State of Enterprise SEO 2017: Overworked SEOs Need Direction

Posted by NorthStarInbound

This survey and its analysis were co-authored with North Star Inbound’s senior creative strategist, Andrea Pretorian.

In the spring of 2017, North Star Inbound partnered up with seoClarity and BuzzStream to survey the state of enterprise SEO. We had a fair share of anecdotal evidence from our clients, but we wanted a more objective measurement of how SEO teams are assembled, what resources are allocated to them, what methods they use, and how they perform.

We hadn’t seen such data collected, particularly for enterprise SEO. We found this surprising given its significance, evident even in the number of “enterprise SEO tools” and solutions being marketed.

What is enterprise SEO?

There is no single fixed industry definition of “enterprise” beyond “large business.” For the purposes of this survey, we defined an enterprise business as one with 500 or more employees. “Small enterprise” means 500–1000 employees, while “large enterprise” means over 1000 employees.

Industry discussion often points to the number of pages as being a potential defining factor for enterprise SEO, but even that is not necessarily a reliable measure.

What was our survey methodology?

We developed the widest enterprise SEO survey to date, made up of 29 questions that delved into every aspect of the enterprise SEO practice. From tools and tactics to content development, keyword strategy, and more, we left no stone unturned. We then picked the brains of 240 SEO specialists across the country. You can check out our complete survey, methodology, and results here.

Team size matters — or does it?

Let’s start by looking at enterprise team size and the resources allocated to them. We focused on companies with an in-house SEO team, and broke them down in terms of small (500–1000 employees) and large enterprise (>1000 employees).

We found that 76% of small enterprise companies have in-house SEO teams of 5 people or less, but were surprised that 68% of large enterprise companies also had teams of this size. We expected a more pronounced shift into larger team sizes paralleling the larger size of their parent company; we did not expect to see roughly the same team size across small and large enterprise companies.

[Chart: in-house SEO team size, small vs. large enterprise]

Interestingly, in larger companies we also see less confidence in the team’s SEO experience. Of the companies with in-house SEO, only 31.67% of large enterprise teams called themselves “leaders” in the SEO space, which this survey defined as a team engaged broadly and critically within the business; 40% of small enterprise teams called themselves “leaders.” In terms of viewing themselves more positively (leaders, visionaries) or less positively (SEO pioneers in their company, or new SEO teams), we did not notice a big difference between small and large enterprise in-house SEO teams.

Large enterprise companies should have more resources at their disposal — HR teams to hire the best talent, reliable onboarding practices in place, access to more sophisticated project management tools, and more experience managing teams — which makes these results surprising. Why are large enterprise companies not more confident about their SEO skills and experience?

Before going too far in making assumptions about their increased resources, we made sure to ask our survey takers about this. Specifically, we asked how much budget is allocated to SEO activity per month — not including the cost of employees’ salaries or the overhead costs of keeping the lights on — since this results in a figure that is easier to report consistently across all survey takers.

It turns out that 57% of large enterprise companies had over $10K dedicated strictly to SEO activity each month, in contrast to just 24% of small enterprise companies allocating this much budget. 40% of large enterprise had over $20K dedicated to SEO activity each month, suggesting that SEO is a huge priority for them. And yet, as we saw earlier, they are not sold on their team having reached leader status.

Enterprise SEO managers in large companies value being scalable and repeatable

We asked survey takers to rate the success of their current SEO strategy, per the scale mapped below, and here are the results:

[Chart: self-rated success of current SEO strategy]

A smaller percentage of large enterprise SEOs gave a clearly positive rating of their current SEO strategy’s success than small enterprise SEOs did. We even see more large enterprise SEOs “on the fence” about their strategy’s performance. This suggests that, among the enterprise SEOs we surveyed, the ones who work for smaller companies tend to be slightly more optimistic about their campaigns’ performance than those at larger companies.

What’s notable about the responses to this question is that 18.33% of managers at large enterprise companies rated themselves as successful — calling themselves “scalable and repeatable.” No one at a small enterprise selected this to describe their strategy. We clearly tapped into an important value for these teams: one they use to measure their performance and can report to others as a benchmark of their success.

Anyone seeking to work with large enterprise clients needs to make sure their processes are scalable and repeatable. This also suggests that one way for a growing company to step up its SEO team’s game as it grows is by achieving these results. This would be a good topic for us to address in greater detail in articles, webinars, and other industry communication.

Agencies know best? (Agencies think they know best.)

Regardless of the resources available to them, across the board we see that in-house SEOs do not show as much confidence as agencies. Agencies are far more likely to rate their SEO strategy as successful: 43% of survey takers who worked for agencies rated their strategy as outright successful, as opposed to only 13% of in-house SEOs. That’s huge!

While nobody said their strategy was a total disaster — we clearly keep awesome company — 7% of in-house SEOs expressed frustration with their strategy, as opposed to only 1% of agencies.

Putting our bias as a link building agency aside, we would expect in-house enterprise SEO teams to work like in-house agencies. With the ability to hire top talent and purchase enterprise software solutions to automate and track campaigns, we would expect them to have the appropriate tools and resources at their disposal to generate the same results and confidence as any agency.

So why the discrepancy? It’s hard to say for sure. One theory might be that those scalable, repeatable results we found earlier that serve as benchmarks for enterprise are difficult to attain, but the way agencies evolve might serve them better. Agencies tend to develop somewhat organically — expanding their processes over time and focusing on SEO from day one — as opposed to an in-house team in a company, which rarely was there from day one and, more often than not, sprouted up when the company’s growth made it such that marketing became a priority.

One clue for answering this question might come from examining the differences between how agencies and in-house SEO teams responded to the question asking them what they find to be the top two most difficult SEO obstacles they are currently facing.

Agencies have direction, need budget; in-house teams have budget, need direction

If we look at the top three obstacles faced by agencies and in-house teams, both of them place finding SEO talent up there. Both groups also say that demonstrating ROI is an issue, although it’s more of an obstacle for agencies than for in-house SEO teams.

When we look at the third obstacle, we find the biggest reveal. While agencies find themselves hindered by trying to secure enough budget, in-house SEO teams struggle to develop the right content; this seems in line with the point we made in the previous section comparing agency versus in-house success. Agencies have the processes down, but need to work hard to fit their clients’ budgets. In-house teams have the budget they need, but have trouble lining it up with the exact processes their company needs to grow as desired. The fact that almost half of the in-house SEOs ranked developing the right content as their biggest obstacle — as opposed to just over a quarter of agencies — further supports this, particularly given how important content is to any marketing campaign.

Now, let’s take a step back and dig deeper into that second obstacle we noted: demonstrating ROI.

Everyone seems to be measuring success differently

One question that we asked of survey takers was about the top two technical SEO issues they monitor:

The spread across the different factors was roughly the same for the two groups. The most notable difference was that even more in-house SEO teams looked at page speed, although this was the top factor for both groups. Indexation was the second biggest factor for both, followed by duplicate content. There seems to be some general consensus about which technical SEO issues to monitor.

But when we asked everyone what their top two factors are when reviewing their rankings, we got these results:

For both agencies and in-house SEO teams, national-level keywords were the top factor, although this was true for almost three-quarters of in-house SEOs and about half of agencies. Interestingly, agencies focused a bit more on geo/local keywords as well as mobile. When we first looked at this data, we found it striking, because it suggests a narrative where in-house SEO teams focus on more conservative, “seasoned” methods, while agencies are more likely to stay on the cutting edge.

Looking at the “Other” responses (free response), we had several write-ins from both subgroups who answered that traffic and leads were important to them. One agency survey-taker brought up a good point: that what they monitor “differs by client.” We would be remiss if we did not mention the importance of vertical-specific and client-specific approaches — even if you are working in-house, and your only client is your company. From this angle, it makes sense that everyone is measuring rankings and SEO differently.

However, we would like to see a bit more clarity within the community on setting these parameters, and we hope that these results will foster that sort of discussion. Please do feel free to reply in the comments:

  • How do you measure ROI on your SEO efforts?
  • How do you show your campaigns’ value?
  • What would you change about how you’re currently measuring the success of your efforts?

So what’s next?

We’d love to hear about your experiences, in-house or agency, and how you’ve been able to demonstrate ROI on your campaigns.

We’re going to repeat this survey again next year, so stay tuned. We hope to survey a larger audience so that we can break down the groups we examine further and analyze response trends among the resulting subgroups. We wanted to do this here in this round of analysis, but were hesitant because of how small the resulting sample size would be.


Google (Almost Certainly) Has an Organic Quality Score (Or Something a Lot Like It) that SEOs Need to Optimize For – Whiteboard Friday

Posted by randfish

Entertain the idea, for a moment, that Google assigned a quality score to organic search results. Say it was based off of click data and engagement metrics, and that it would function in a similar way to the Google AdWords quality score. How exactly might such a score work, what would it be based off of, and how could you optimize for it?

While there’s no hard proof it exists, the organic quality score is a concept that’s been pondered by many SEOs over the years. In today’s Whiteboard Friday, Rand examines this theory inside and out, then offers some advice on how one might boost such a score.

Google's Organic Quality Score


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about organic quality score.

So this is a concept. This is not a real thing that we know Google definitely has. But there’s this concept that SEOs have been feeling for a long time, that similar to what Google has in their AdWords program with a paid quality score, where a page has a certain score assigned to it, that on the organic side Google almost definitely has something similar. I’ll give you an example of how that might work.

So, for example, if on my site.com I have these three — this is a very simplistic website — but I have these three subfolders: Products, Blog, and About. I might have a page in my products, 14axq.html, and it has certain metrics that Google associates with it through activity that they’ve seen from browser data, from clickstream data, from search data, and from visit data from the searches and bounces back to the search results, and all these kinds of things, all the engagement and click data that we’ve been talking about a lot this year on Whiteboard Friday.

So they may have these metrics, pogo stick rate and bounce rate and a deep click rate (the rate with which someone clicks to the site and then goes further in from that page), the time that they spend on the site on average, the direct navigations that people make to it each month through their browsers, the search impressions and search clicks, perhaps a bunch of other statistics, like whether people search directly for this URL, whether they perform branded searches. What rate do unique devices in one area versus another area do this with? Is there a bias based on geography or device type or personalization or all these kinds of things?

But regardless of that, you get this idea that Google has this sort of sense of how the page performs in their search results. That might be very different across different pages and obviously very different across different sites. So maybe this blog post over here on /blog is doing much, much better in all these metrics and has a much higher quality score as a result.
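Purely as a thought experiment, here is a toy sketch in Python of what combining those engagement metrics into a single per-page score could look like. Nothing about this reflects Google's actual system; the metric names come from the examples above, and the weights and scaling are invented for illustration.

```python
# Purely illustrative: a toy "organic quality score" that combines normalized
# engagement metrics into a weighted sum per page. The weights and ceilings
# below are invented; Google's real system (if it exists) is unknown.

PAGES = {
    "/products/14axq.html": {"pogo_stick_rate": 0.48, "deep_click_rate": 0.11,
                             "avg_time_on_site": 35, "direct_visits": 120},
    "/blog/": {"pogo_stick_rate": 0.18, "deep_click_rate": 0.42,
               "avg_time_on_site": 160, "direct_visits": 2400},
}

# Negative weight for pogo-sticking (higher is worse), positive for the rest.
WEIGHTS = {"pogo_stick_rate": -0.4, "deep_click_rate": 0.3,
           "avg_time_on_site": 0.2, "direct_visits": 0.1}

def normalize(metric, value):
    """Scale each metric to roughly 0-1 so the weights are comparable."""
    ceilings = {"pogo_stick_rate": 1.0, "deep_click_rate": 1.0,
                "avg_time_on_site": 300.0, "direct_visits": 5000.0}
    return min(value / ceilings[metric], 1.0)

def toy_quality_score(metrics):
    return sum(w * normalize(m, metrics[m]) for m, w in WEIGHTS.items())

for url, metrics in PAGES.items():
    print(url, round(toy_quality_score(metrics), 3))  # /blog/ scores higher here
```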

Current SEO theories about organic quality scoring:

Now, when we talk to SEOs, and I spend a lot of time talking to my fellow SEOs about theories around this, a few things emerge. I think most folks are generally of the opinion that if there is something like an organic quality score…

1. It is probably based on this type of data — queries, clicks, engagements, visit data of some kind.

We don’t doubt for a minute that Google has much more sophistication than the super-simplified stuff that I’m showing you here. I think Google publicly denies a lot of single types of metric like, “No, we don’t use time on site. Time on site could be very variable, and sometimes low time on site is actually a good thing.” Fine. But there’s something in there, right? They use some more sophisticated format of that.

2. We also are pretty sure that this is applying on three different levels:

This is an observation from experimentation as well as from Google statements which is…

  • Domain-wide, so that would be across one domain, if there are many pages with high quality scores, Google might view that domain differently from a domain with a variety of quality scores on it or one with generally low ones.
  • Same thing for a subdomain. So it could be that a subdomain is looked at differently than the main domain, or that two different subdomains may be viewed differently. If content appears to have high quality scores on this one, but not on this one, Google might generally not pass all the ranking signals or give the same weight to the quality scores over here or to the subdomain over here.
  • Same thing is true with subfolders, although to a lesser extent. In fact, this is kind of in descending order. So you can generally surmise that Google will pass these more across subfolders than they will across subdomains and more across subdomains than across root domains.

3. A higher density of good scores to bad ones can mean a bunch of good things:

  • More rankings and visibility, even without other signals. So even if a page is sort of lacking in these other quality signals, if it is in this blog section, and this blog section tends to have high quality scores for all its pages, Google might give that page an opportunity to rank well that it wouldn’t ordinarily get for a page with those ranking signals in another subfolder or on another subdomain or on another website entirely.
  • Some sort of what we might call “benefit of the doubt”-type of boost, even for new pages. So a new page is produced. It doesn’t yet have any quality signals associated with it, but it does particularly well.

    As an example, within a few minutes of this Whiteboard Friday being published on Moz’s website, which is usually late Thursday night or very early Friday morning, at least Pacific time, I will bet that you can search for “Google organic quality score” or even just “organic quality score” in Google’s engine, and this Whiteboard Friday will perform very well. One of the reasons that probably is, is because many other Whiteboard Friday videos, which are in this same subfolder, Google has seen them perform very well in the search results. They have whatever you want to call it — great metrics, a high organic quality score — and because of that, this Whiteboard Friday that you’re watching right now, the URL that you see in the bar up above is almost definitely going to be ranking well, possibly in that number one position, even though it’s brand new. It hasn’t yet earned the quality signals, but Google assumes, it gives it the benefit of the doubt because of where it is.

  • We surmise that there’s also more value that gets passed from links, both internal and external, from pages with high quality scores. That is right now a guess, but something we hope to validate more, because we’ve seen some signs and some testing that that’s the case.

3 ways to boost your organic quality score

If this is true — and it’s up to you whether you want to believe that it is or not — even if you don’t believe it, you’ve almost certainly seen signs that something like it’s going on. I would urge you to do these three things to boost your organic quality score or whatever you believe is causing these same elements.

1. You could add more high-performing pages. So if you know that pages perform well and you know what those look like versus ones that perform poorly, you can make more good ones.

2. You can improve the quality score of existing pages. So if this one is kind of low, you’re seeing that these engagement and use metrics, the SERP click-through rate metrics, the bounce rate metrics from organic search visits, all of these don’t look so good in comparison to your other stuff, you can boost it, improve the content, improve the navigation, improve the usability and the user experience of the page, the load time, the visuals, whatever you’ve got there to hold searchers’ attention longer, to keep them engaged, and to make sure that you’re solving their problem. When you do that, you will get higher quality scores.

3. Remove low-performing pages through a variety of means. You could take a low-performing page and you might say, “Hey, I’m going to redirect that to this other page, which does a better job answering the query anyway.” Or, “Hey, I’m going to 404 that page. I don’t need it anymore. In fact, no one needs it anymore.” Or, “I’m going to noindex it. Some people may need it, maybe the ones who are visitors to my website, who need it for some particular direct navigation purpose or internal purpose. But Google doesn’t need to see it. Searchers don’t need it. I’m going to use noindex, either in the meta robots tag or in the robots.txt file.”
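If it helps to operationalize that third step, here is a rough sketch of how you might bucket low-performing URLs into the actions described above (improve, redirect, noindex, or 404). The field names and thresholds are hypothetical; you would feed it your own crawl and analytics export and tune the rules to your site.

```python
# Sketch: suggest an action for each low-performing page, in the spirit of the
# options above. All fields and thresholds are hypothetical placeholders.

def triage(page):
    """Return a suggested action for one page of crawl + analytics data."""
    low_traffic = page["organic_sessions_90d"] < 10
    poor_engagement = page["bounce_rate"] > 0.85
    has_links = page["external_links"] > 0
    has_substitute = page.get("better_page") is not None

    if not low_traffic and not poor_engagement:
        return "leave as-is"
    if has_substitute:
        return f"301 redirect to {page['better_page']}"
    if has_links:
        return "improve the content (keep the link equity)"
    if page.get("needed_internally", False):
        return "noindex (meta robots)"
    return "404 / remove"

example = {"url": "/old-widget-guide", "organic_sessions_90d": 3,
           "bounce_rate": 0.91, "external_links": 0, "needed_internally": False}
print(example["url"], "->", triage(example))  # -> 404 / remove
```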

One thing that’s really interesting to note is we’ve seen a bunch of case studies, especially since MozCon, when Britney Muller, Moz’s Head of SEO, shared the fact that she had done some great testing around removing tens of thousands of really low-quality, low-performing pages from Moz’s own website and had seen our rankings and our traffic for the remainder of our content go up quite significantly, even controlling for seasonality and other things.

That was pretty exciting. When we shared that, we got a bunch of other people from the audience and on Twitter saying, “I did the same thing. When I removed low-performing pages, the rest of my site performed better,” which really strongly suggests that there’s something like a system in this fashion that works in this way.

So I’d urge you to go look at your metrics, go find pages that are not performing well, see what you can do about improving them or removing them, see what you can do about adding new ones that are high organic quality score, and let me know your thoughts on this in the comments.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Should SEOs Care About Internal Links? – Whiteboard Friday

Posted by randfish

Internal links are one of those essential SEO items you have to get right to avoid getting them really wrong. Rand shares 18 tips to help inform your strategy, going into detail about their attributes, internal vs. external links, ideal link structures, and much, much more in this edition of Whiteboard Friday.

Should SEOs Care About Internal Links?


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat a little bit about internal links and internal link structures. Now, it is not the most exciting thing in the SEO world, but it’s something that you have to get right and getting it wrong can actually cause lots of problems.

Attributes of internal links

So let’s start by talking about some of the things that are true about internal links. Internal links, when I say that phrase, what I mean is a link that exists on a website, let’s say ABC.com here, that is linking to a page on the same website, so over here, linking to another page on ABC.com. We’ll do /A and /B. This is actually my shipping routes page. So you can see I’m linking from A to B with the anchor text “shipping routes.”

The idea of an internal link is really initially to drive visitors from one place to another, to show them where they need to go to navigate from one spot on your site to another spot. They’re different from external links only in that, in the HTML code, you’re pointing to the same fundamental root domain. In the initial early versions of the internet, that didn’t matter all that much, but for SEO, it matters quite a bit because external links are treated very differently from internal links. That is not to say, however, that internal links have no power or no ability to change rankings, to change crawling patterns, and to change how a search engine views your site. That’s what we need to chat about.

1. Anchor text is something that can be considered. The search engines have generally minimized its importance, but it’s certainly something that’s in there for internal links.

2. The location on the page actually matters quite a bit, just as it does with external links. With internal links it matters almost more, in that navigation and footers specifically have attributes around internal links that can be problematic.

Those are essentially when Google in particular sees manipulation in the internal link structure, specifically things like you’ve stuffed anchor text into all of the internal links trying to get this shipping routes page ranking by putting a little link down here in the footer of every single page and then pointing over here trying to game and manipulate us, they hate that. In fact, there is an algorithmic penalty for that kind of stuff, and we can see it very directly.

We’ve actually run tests where we’ve observed that jamming these kinds of anchor text-rich links into footers or into navigation makes a site rank poorly, and that removing them brings the rankings back. Google reverses that penalty pretty quickly too, which is nice. So if you are not ranking well and you’re like, “Oh no, Rand, I’ve been doing a lot of that,” maybe take it away. Your rankings might come right back. That’s great.

3. The link target matters obviously from one place to another.

4. The importance of the linking page: this is actually a big one with internal links. It is generally the case that if a page on your website has lots of external links pointing to it, it gains authority and has more ability to generate a little bit (not nearly as much as external links, but a little bit) of ranking power and influence by linking to other pages. So if you have two very well-linked-to pages on your site, you should make sure to link out from those to pages on your site that a) need it and b) are actually useful for your users. That’s another signal we’ll talk about.

5. The relevance of the link, so pointing to my shipping routes page from a page about other types of shipping information, totally great. Pointing to it from my dog food page, well, it doesn’t make great sense. Unless I’m talking about shipping routes of dog food specifically, it seems like it’s lacking some of that context, and search engines can pick up on that as well.

6. The first link on the page. So this matters mostly in terms of the anchor text, just as it does for external links. Basically, if you are linking in a bunch of different places to this page from this one, Google will usually, at least in all of our experiments so far, count the first anchor text only. So if I have six different links to this and the first link says “Click here,” “Click here” is the anchor text that Google is going to apply, not “Click here” and “shipping routes” and “shipping.” Those subsequent links won’t matter as much.

7. Then the type of link matters too. Obviously, I would recommend that you keep it in the HTML link format rather than trying to do something fancy with JavaScript. Even though Google can technically follow those, it looks to us like they’re not treated with quite the same authority and ranking influence. Text is slightly, slightly better than images in our testing, although that testing is a few years old at this point. So maybe image links are treated exactly the same. Either way, do make sure you have that. If you’re doing image links, by the way, remember that the alt attribute of that image is what becomes the anchor text of that link.
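As a rough illustration of points 6 and 7, here is a sketch (assuming the BeautifulSoup library is available) that records only the first anchor text seen for each link target and falls back to the image alt attribute for image links. The HTML snippet is a made-up example.

```python
# Sketch: keep only the first anchor text per target URL, and use the image
# alt text when a link wraps an image. The HTML is an invented example page.

from bs4 import BeautifulSoup

html = """
<a href="/shipping-routes">Click here</a>
<a href="/shipping-routes">shipping routes</a>
<a href="/dog-food"><img src="kibble.png" alt="dog food"></a>
"""

first_anchor_text = {}
for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
    target = a["href"]
    if target in first_anchor_text:
        continue  # later anchors to the same target are ignored, per point 6
    text = a.get_text(strip=True)
    if not text and a.find("img"):
        text = a.find("img").get("alt", "")  # image links: alt becomes the anchor text
    first_anchor_text[target] = text

print(first_anchor_text)
# {'/shipping-routes': 'Click here', '/dog-food': 'dog food'}
```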

Internal versus external links

A. External links usually give more authority and ranking ability.

That shouldn’t be surprising. An external link is like a vote from an independent, hopefully independent, hopefully editorially given website to your website saying, “This is a good place for you to go for this type of information.” On your own site, it’s like a vote for yourself, so engines don’t treat it the same.

B. The anchor text of internal links generally has less influence.

So, as we mentioned, me pointing to my page with the phrase that I want to rank for isn’t necessarily a bad thing, but I shouldn’t do it in a manipulative way. I shouldn’t do it in a way that’s going to look spammy or sketchy to visitors, because if visitors stop clicking around my site or engaging with it or they bounce more, I will definitely lose ranking influence much faster than if I simply make those links credible and usable and useful to visitors. Besides, the anchor text of internal links is not as powerful anyway.

C. A lack of internal links can seriously hamper a page’s ability to get crawled + ranked.

It is, however, the case that a lack of internal links, like an orphan page that doesn’t have many internal or any internal links from the rest of its website, that can really hamper a page’s ability to rank. Sometimes it will happen. External links will point to a page. You’ll see that page in your analytics or in a report about your links from Moz or Ahrefs or Majestic, and then you go, “Oh my gosh, I’m not linking to that page at all from anywhere else on my site.” That’s a bad idea. Don’t do that. That is definitely problematic.

D. It’s still the case, by the way, that, broadly speaking, pages with more links on them will send less link value per link.

So, essentially, you remember the original PageRank formula from Google. It said basically like, “Oh, well, if there are five links, send one-fifth of the PageRank power to each of those, and if there are four links, send one-fourth.” Obviously, one-fourth is bigger than one-fifth. So taking away that fifth link could mean that each of the four pages that you’ve linked to get a little bit more ranking authority and influence in the original PageRank algorithm.

Look, PageRank is old, very, very old at this point, but at least the theories behind it are not completely gone. So it is the case that if you have a page with tons and tons of links on it, that tends to send out less authority and influence than a page with few links on it, which is why it can definitely pay to do some spring cleaning on your website and clear out any rubbish pages or rubbish links, ones that visitors don’t want, that search engines don’t want, that you don’t care about. Clearing that up can actually have a positive influence. We’ve seen that on a number of websites where they’ve cleaned up their information architecture, whittled down their links to just the stuff that matters the most and the pages that matter the most, and then seen increased rankings across the board from all sorts of signals, positive signals, user engagement signals, link signals, context signals that help the engines rank them better.
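For anyone who wants to see that dilution effect in code, here is a toy implementation of the original, simplified PageRank idea: each page splits its score evenly across its outgoing links, so every extra link shrinks the share each target receives. The graph and damping factor are purely illustrative; this is not how Google ranks pages today.

```python
# Toy version of the original simplified PageRank idea referenced above.
# Adding a fifth outgoing link drops each link's share from 1/4 to 1/5.

DAMPING = 0.85  # classic damping factor; value here is just the textbook default

def simple_pagerank(links, iterations=50):
    """links: {page: [pages it links to]}. Returns an approximate score per page."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = pr[page] / len(targets)  # value per link shrinks as link count grows
            for t in targets:
                new[t] += DAMPING * share
        pr = new
    return pr

toy_graph = {"home": ["a", "b", "c", "d"], "a": ["home"], "b": ["home"],
             "c": ["home"], "d": ["home"]}
print(simple_pagerank(toy_graph))
```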

E. Internal link flow (aka PR sculpting) is rarely effective, and usually has only mild effects… BUT a little of the right internal linking can go a long way.

Then finally, I do want to point out what was previously called — you probably have heard of it in the SEO world — PageRank sculpting. This was a practice that I’d say from maybe 2002, 2003 to about 2008, 2009, had this life where there would be panel discussions about PageRank sculpting and all these examples of how to do it and software that would crawl your site and show you the ideal PageRank sculpting system to use and which pages to link to and not.

When PageRank was the dominant algorithm inside of Google’s ranking system, yeah, it was the case that PageRank sculpting could have some real effect. These days, that is dramatically reduced. It’s not entirely gone because of some of these other principles that we’ve talked about, just having lots of links on a page for no particularly good reason is generally bad and can have harmful effects and having few carefully chosen ones has good effects. But most of the time, internal linking, optimizing internal linking beyond a certain point is not very valuable, not a great value add.

But a little of what I’m calling the right internal linking, that’s what we’re going to talk about, can go a long way. For example, if you have those orphan pages or pages that are clearly the next step in a process or that users want and they cannot find them or engines can’t find them through the link structure, it’s bad. Fixing that can have a positive impact.

Ideal internal link structures

So ideally, in an internal linking structure system, you want something kind of like this. This is a very rough illustration here. The homepage has maybe 100 links on it to internal pages. One hop away from that, you’ve got your 100 different pages of whatever it is, subcategories or category pages, places that can get folks deeper into your website. Then from there, each of those has maybe a maximum of 100 unique links, and they get you 2 hops away from the homepage, which takes you to 10,000 pages that do the same thing.

I. No page should be more than 3 link “hops” away from another (on most small–>medium sites).

Now, the idea behind this is that basically in one, two, three hops, three links away from the homepage and three links away from any page on the site, I can get to up to a million pages. So when you talk about, “How many clicks do I have to get? How far away is this in terms of link distance from any other page on the site?” a great internal linking structure should be able to get you there in three or fewer link hops. If it’s a lot more, you might have an internal linking structure that’s really creating sort of these long pathways of forcing you to click before you can ever reach something, and that is not ideal, which is why it can make very good sense to build smart categories and subcategories to help people get in there.

I’ll give you the most basic example in the world, a traditional blog. In order to reach any post that was published two years ago, I’ve got to click Next, Next, Next, Next, Next, Next through all this pagination until I finally get there. Or if I’ve done a really good job with my categories and my subcategories, I can click on the category of that blog post and I can find it very quickly in a list of the last 50 blog posts in that particular category, great, or by author or by tag, however you’re doing your navigation.
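A quick way to audit this on your own site is to run a breadth-first search over your crawl's link graph and flag anything deeper than three hops (or unreachable entirely). The graph below is a made-up example; in practice you would build it from your crawler's export.

```python
# Sketch: check the "3 hops from the homepage" guideline and spot orphan pages
# on a toy link graph. Replace `site` with data from your own crawl.

from collections import deque

site = {
    "/": ["/category-1", "/category-2"],
    "/category-1": ["/post-a", "/post-b"],
    "/category-2": ["/post-c"],
    "/post-a": [], "/post-b": [], "/post-c": [],
    "/orphan-page": [],   # no other page links here
}

def link_hops_from(start, graph):
    """Breadth-first search: link hops from `start` to every reachable page."""
    hops = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in hops:
                hops[target] = hops[page] + 1
                queue.append(target)
    return hops

hops = link_hops_from("/", site)
for page in site:
    if page not in hops:
        print(f"{page}: ORPHAN (unreachable from the homepage)")
    elif hops[page] > 3:
        print(f"{page}: {hops[page]} hops (deeper than the guideline)")
```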

II. Pages should contain links that visitors will find relevant and useful.

If no one ever clicks on a link, that is a bad signal for your site, and it is a bad signal for Google as well. I don’t just mean no one ever. Very, very few people ever and many of them who do click it click the back button because it wasn’t what they wanted. That’s also a bad sign.

III. Just as no two pages should be targeting the same keyword or searcher intent, likewise no two links should be using the same anchor text to point to different pages. Canonicalize!

For example, if over here I had a shipping routes link that pointed to this page and then another shipping routes link, same anchor text pointing to a separate page, page C, why am I doing that? Why am I creating competition between my own two pages? Why am I having two things that serve the same function or at least to visitors would appear to serve the same function and search engines too? I should canonicalize those. Canonicalize those links, canonicalize those pages. If a page is serving the same intent and keywords, keep it together.
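Here is a small sketch of how you might surface that competing-anchor-text problem from a crawl: group internal links by anchor text and flag any anchor that points at more than one URL. The triples below are made-up example data standing in for your crawler's link export.

```python
# Sketch: flag identical anchor text that points at different target pages.
# The (source, anchor, target) triples are invented example data.

from collections import defaultdict

site_links = [
    ("/guides/", "shipping routes", "/shipping-routes"),
    ("/blog/post-1", "shipping routes", "/shipping-routes-2017"),
    ("/blog/post-2", "dog food", "/dog-food"),
]

targets_by_anchor = defaultdict(set)
for source, anchor, target in site_links:
    targets_by_anchor[anchor.lower()].add(target)

for anchor, targets in targets_by_anchor.items():
    if len(targets) > 1:
        print(f'Anchor "{anchor}" points at {len(targets)} different pages: {sorted(targets)}')
```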

IV. Limit use of the rel=”nofollow” to UGC or specific untrusted external links. It won’t help your internal link flow efforts for SEO.

Rel=”nofollow” was sort of the classic way that people had been doing PageRank sculpting that we talked about earlier here. I would strongly recommend against using it for that purpose. Google said that they’ve put in some preventative measures so that rel=”nofollow” links sort of do this leaking PageRank thing, as they call it. I wouldn’t stress too much about that, but I certainly wouldn’t use rel=”nofollow.”

What I would do is if I’m trying to do internal link sculpting, I would just do careful curation of the links and pages that I’ve got. That is the best way to help your internal link flow. That’s things like…

V. Removing low-value content, low-engagement content and creating internal links that people actually do want. That is going to give you the best results.

VI. Don’t orphan! Make sure pages that matter have links to (and from) them. Last, but not least, there should never be an orphan. There should never be a page with no links to it, and certainly there should never be a page that is well linked to that isn’t linking back out to portions of your site that are of interest or value to visitors and to Google.

So following these practices, I think you can do some awesome internal link analysis, internal link optimization and help your SEO efforts and the value visitors get from your site. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


SearchCap: Google fun facts, how to hire SEOs & SEO analysis

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google fun facts, how to hire SEOs & SEO analysis appeared first on Search Engine Land.




Most SEOs Are No Better than a Coin-Flip at Predicting Which Page Will Rank Better. Can You?

Posted by willcritchlow

We want to be able to answer questions about why one page outranks another.

“What would we have to do to outrank that site?”

“Why is our competitor outranking us on this search?”

These kind of questions — from bosses, from clients, and from prospective clients — are a standard part of day-to-day life for many SEOs. I know I’ve been asked both in the last week.

It’s relatively easy to figure out ways that a page can be made more relevant and compelling for a given search, and it’s straightforward to think of ways the page or site could be more authoritative (even if it’s less straightforward to get it done). But will those changes or that extra link cause an actual reordering of a specific ranking? That’s a very hard question to answer with a high degree of certainty.

When we asked a few hundred people to pick which of two pages would rank better for a range of keywords, the average accuracy on UK SERPs was 46%. That’s worse than you’d get if you just flipped a coin! This chart shows the performance by keyword. It’s pretty abysmal:

It’s getting harder to unpick all the ranking factors

I’ve participated in each iteration of Moz’s ranking factors survey since its inception in 2009. At one of our recent conferences (the last time I was in San Diego for SearchLove) I talked about how I used to enjoy it and feel like I could add real value by taking the survey, but how that’s changed over the years as the complexity has increased.

While I remain confident when building strategies to increase overall organic visibility, traffic, and revenue, I’m less sure than ever which individual ranking factors will outweigh which others in a specific case.

The strategic approach looks at whole sites and groups of keywords

My approach is generally to zoom out and build business cases on assumptions about portfolios of rankings, but it’s been on my mind recently as I think about the ways machine learning should make Google rankings ever more of a black box, and cause the ranking factors to vary more and more between niches.

In general, “why does this page rank?” is the same as “which of these two pages will rank better?”

I’ve been teaching myself about deep neural networks using TensorFlow and Keras — an area I’m pretty sure I’d have ended up studying and working in if I’d gone to college 5 years later. As I did so, I started thinking about how you would model a SERP (which is a set of high-dimensional non-linear relationships). I realized that the litmus test of understanding ranking factors — and thus being able to answer “why does that page outrank us?” — boils down to being able to answer a simpler question:

Given two pages, can you figure out which one will outrank the other for a given query?

If you can answer that in the general case, then you know why one page outranks another, and vice-versa.
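For the curious, here is roughly how that question can be framed as a machine learning problem. This is a minimal sketch in Keras, not Deeprank or anything Google does; the page features and the training labels are entirely made up for illustration, standing in for real labelled SERP pairs.

```python
# Minimal sketch: "which of two pages outranks the other?" as pairwise classification.
# NUM_FEATURES, the feature vectors, and the labels are all hypothetical placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES = 8  # e.g. link counts, title relevance, word count (illustrative only)

page_a = keras.Input(shape=(NUM_FEATURES,), name="page_a")
page_b = keras.Input(shape=(NUM_FEATURES,), name="page_b")

# A shared encoder so both pages are scored by the same learned function
encoder = keras.Sequential([
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
])

merged = layers.concatenate([encoder(page_a), encoder(page_b)])
output = layers.Dense(1, activation="sigmoid")(merged)  # P(page A outranks page B)

model = keras.Model(inputs=[page_a, page_b], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy random data stands in for real feature vectors and observed rankings
X_a = np.random.rand(1000, NUM_FEATURES)
X_b = np.random.rand(1000, NUM_FEATURES)
y = (X_a.sum(axis=1) > X_b.sum(axis=1)).astype("float32")
model.fit([X_a, X_b], y, epochs=5, batch_size=32, verbose=0)
```

Given enough labelled pairs and sensible features, something along these lines is the simplest possible starting point for the kind of model discussed towards the end of this post.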

It turns out that people are terrible at answering this question.

I thought that answering this with greater accuracy than a coin flip was going to be a pretty low bar. As you saw from the sneak peek of my results above, that turned out not to be the case. Reckon you can do better? Skip ahead to take the test and find out.

(In fact, if you could find a way to test this effectively, I wonder if it would make a good qualifying question for the next Moz ranking factors survey. Should you listen only to the opinion of those experts who are capable of answering with reasonable accuracy? Note that my test that follows isn’t at all rigorous because you can cheat by Googling the keywords — it’s just for entertainment purposes.)

Take the test and see how well you can answer

With my curiosity piqued, I put together a simple test, thinking it would be interesting to see how good expert SEOs actually are at this, as well as to see how well laypeople do.

I’ve included a bit more about the methodology and some early results below, but if you’d like to skip ahead and test yourself you can go ahead here.

Note that to simplify the adversarial side, I’m going to let you rely on all of Google’s spam filtering — you can trust that every URL ranks in the top 10 for its example keyword — so you’re choosing an ordering of two pages that do rank for the query rather than two pages from potentially any domain on the Internet.

I haven’t designed this to be uncheatable — you can obviously cheat by Googling the keywords — but as my old teachers used to say: “If you do, you’ll only be cheating yourself.”

Unfortunately, Google Forms seems to have removed the option to be emailed your own answers outside of an apps domain, so if you want to know how you did, note down your answers as you go along and compare them to the correct answers (which are linked from the final page of the test).

You can try your hand with just one keyword or keep going, trying anywhere up to 10 keywords (each with a pair of pages to put in order). Note that you don’t need to do all of them; you can submit after any number.

You can take the survey either for the US (google.com) or UK (google.co.uk). All results are considering only the “blue links” results — i.e. links to web pages — rather than universal search results / one-boxes etc.

Take the test!

What do the early responses show?

Before publishing this post, we sent it out to the @distilled and @moz networks. At the time of writing, almost 300 people have taken the test, and there are already some interesting results:

It seems as though the US questions are slightly easier

The UK test appears to be a little harder (judging both by the accuracy of laypeople, and with a subjective eye). And while accuracy generally increases with experience in both the UK and the US, the vast majority of UK respondents performed worse than a coin flip:

Some easy questions might skew the data in the US

Digging into the data, there are a few of the US questions that are absolute no-brainers (e.g. there’s a question about the keyword [mortgage calculator] in the US that 84% of respondents get right regardless of their experience). In comparison, the easiest one in the UK was also a mortgage-related query ([mortgage comparisons]) but only 2/3 of people got that right (67%).

Compare the UK results by keyword…

…To the same chart for the US keywords:

So, even though the overall accuracy was a little above 50% in the US (around 56% or roughly 5/9), I’m not actually convinced that US SERPs are generally easier to understand. I think there are a lot of US SERPs where human accuracy is in the 40% range.

The Dunning-Kruger effect is on display

The Dunning-Kruger effect is a well-studied psychological phenomenon whereby people “fail to adequately assess their level of competence,” typically feeling unsure in areas where they are actually strong (impostor syndrome) and overconfident in areas where they are weak. Alongside the raw predictions, I asked respondents to give their confidence in their rankings for each URL pair on a scale from 1 (“Essentially a guess, but I’ve picked the one I think”) to 5 (“I’m sure my chosen page should rank better”).

The effect was most pronounced on the UK SERPs — where respondents answering that they were sure or fairly sure (4–5) were almost as likely to be wrong as those guessing (1) — and almost four percentage points worse than those who said they were unsure (2–3):

Is Google getting some of these wrong?

The question I asked SEOs was “which page do you think ranks better?”, not “which page is a better result?”, so in general, most of the results say very little about whether Google is picking the right result in terms of user satisfaction. I did, however, ask people to share the survey with their non-SEO friends and ask them to answer the latter question.

If I had a large enough sample-size, you might expect to see some correlation here — but remember that these were a diverse array of queries and the average respondent might well not be in the target market, so it’s perfectly possible that Google knows what a good result looks like better than they do.

Having said that, in my own opinion, there are one or two of these results that are clearly wrong in UX terms, and it might be interesting to analyze why the “wrong” page is ranking better. Maybe that’ll be a topic for a follow-up post. If you want to dig into it, there’s enough data in both the post above and the answers given at the end of the survey to find the ones I mean (I don’t want to spoil it for those who haven’t tried it out yet). Let me know if you dive into the ranking factors and come up with any theories.

There is hope for our ability to fight machine learning with machine learning

One of the disappointments of putting together this test was that by the time I’d made the Google Form I knew too many of the answers to be able to test myself fairly. But I was comforted by the fact that I could do the next best thing — I could test my neural network (well, my model, refactored by our R&D team and trained on data they gathered, which we flippantly called Deeprank).

I think this is fair; the instructions did say “use whatever tools you like to assess the sites, but please don’t skew the results by performing the queries on Google yourself.” The neural network wasn’t trained on these results, so I think that’s within the rules. I ran it on the UK questions because it was trained on google.co.uk SERPs, and it did better than a coin flip:

So maybe there is hope that smarter tools could help us continue to answer questions like “why is our competitor outranking us on this search?”, even as Google’s black box gets ever more complex and impenetrable.

If you want to hear more about these results as I gather more data and get updates on Deeprank when it’s ready for prime-time, be sure to add your email address when you:

Take the test (or just drop me your email here)



Moz Blog


An Essential Training Task List for Junior SEOs

Posted by DaveSottimano

Let’s face it: SEO isn’t as black & white as most marketing channels. In my opinion, to become a true professional requires a broad skill set. It’s not that a professional SEO needs to know the answer for everything; rather, it’s more important to have the skills to be able to find the answer.

I’m really pleased with the results of various bits of training I’ve put together for successful juniors over the years, so I think it’s time to share.

This is a Junior SEO task list designed to help new starters in the field get the right skills by doing hands-on jobs, and possibly to help find a specialism in SEO or digital marketing.

How long should this take? Let’s ballpark at 60–90 days.

Before anything, here’s some prerequisite reading:

Project 1 – Technical Fundamentals:

The trainee should master the lingo and have a decent idea of how the Internet works before they start having conversations with developers or contributing online. Have the trainee answer the following questions. To demonstrate that they understand, have them answer the questions using analogies. Take inspiration from this post.

Must be able to answer the following in detail:

  • What is HTTP / HTTPS / HTTP2? Explain connections and how they flow.
  • Do root domains have trailing slashes?
  • What are the fundamental parts of a URL?
  • What is “www,” anyway?
  • What are generic ccTLDs?
  • Describe the transaction between a client and a server.
  • What do we mean when we say “client side” and “server side?”
  • Name 3 common servers. Explain each one.
  • How does DNS work?
  • What are ports?
  • How do I see/find my public IP address?
  • What is a proxy server?
  • What is a reverse proxy server?
  • How do CDNs work?
  • What is a VPN?
  • What are server response codes and how do they relate to SEO? (A quick way to check them with Python is sketched after this list.)
  • What is the difference between URL rewriting and redirecting?
  • What is MVC?
  • What is a development sprint / scrum?
  • Describe a development deployment workflow.
  • What are the core functions that power Google search?
  • What is PageRank?
  • What is toolbar PageRank?
  • What is the reasonable surfer model?
  • What is the random surfer model?
  • What is Mozrank, Domain Authority, and Page Authority — and how are they calculated?
  • Name 3 Google search parameters and explain what they do (hint: gl= country).
  • What advanced operator search query will return: all URLs with https, with “cat” in the title, not including www subdomains, and only PDFs?
  • Describe filtering in search results, and which parameter can be appended to the search URL to omit filtering.
  • How can I Google search by a specific date?
  • If we say something is “indexed,” what does that mean?
  • If we say something is “canonicalized,” what does that mean?
  • If we say something is “indexable,” what does that mean?
  • If we say something is “non indexable,” what does that mean?
  • If we say something is “crawlable,” what does that mean?
  • If we say something is “not crawlable,” what does that mean?
  • If we say something is “blocked,” what does that mean?
  • Give examples of “parameters” in the wild, and manipulate any parameter on any website to show different content.
  • How should you check rankings for a particular keyword in a particular country?
  • Where are some places online you can speak to Googlers for advice?
  • What are the following: rel canonical, noindex, nofollow, hreflang, mobile alternate? (Explain each directive and its behavior in detail and state any variations in implementation.)
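To make a couple of those questions concrete (status codes, redirects versus rewrites), here is a small sketch using the third-party requests library. The example.com URLs are placeholders; have the trainee point it at pages on a site they control.

```python
# Quick status-code check for a few URLs; prints the code and any redirect target.
# The URLs below are placeholders.
import requests

urls = [
    "https://example.com/",
    "https://example.com/old-page",        # expect a 301/302 with a Location header
    "https://example.com/does-not-exist",  # expect a 404
]

for url in urls:
    response = requests.get(url, allow_redirects=False, timeout=10)
    print(url, "->", response.status_code, response.headers.get("Location", ""))
```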

Explaining metrics from popular search tools

  • Explain SearchMetrics search visibility — how is this calculated? Why would you see declines in SM graphs but not in actual organic traffic?
  • Explain Google Trends Index — how is this calculated?
  • Explain Google Keyword Planner search volume estimates & competition metric — is search volume accurate? Is the competition metric useful for organic?
  • Explain SEMrush.com’s organic traffic graphs — Why might you see declines in SEMrush graphs, but not in actual organic traffic?

Link architecture

  • By hand, map out the world’s first website — http://info.cern.ch/hypertext/WWW/TheProject.html (we want to see the full link architecture here in a way that’s digestible)
  • Explain its efficiency from an SEO perspective — are this website’s pages linked efficiently? Why or why not?

Project 2 – Creating a (minimum) 10-page website

If the trainee doesn’t understand what something is, make sure that they try and figure it out themselves before coming for help. Building a website by hand is absolutely painful, and they might want to throw their computer out the window or just install WordPress — no, no, no. There are so many things to learn by doing it the hard way, which is the only way.

  1. Grab a domain name and go set up shared hosting. A LAMP stack with cPanel and log file access (example: HostGator) is probably the easiest.
  2. Set up FileZilla with your host's FTP details
  3. Set up a text editor (example: Notepad++, Sublime) and connect via FTP for quick deploys
  4. Create a 10-page flat site (NO CMS. That means no WordPress!)
    • Within the site, it must contain at least one instance of each of the following:
      • <div>, <table>, <a>, <strong>, <em>, <iframe>, <button>, <noscript>, <form>, <option>, <img>, <h1>, <h2>, <h3>, <p>, <span>
      • Inline CSS that shows/hides a div on hover
      • Unique titles, meta descriptions, and H1s on every page
      • Must contain at least 3 folders
      • Must have at least 5 pages that are targeted to a different country
      • Recreate the navigation menu from the bbc.co.uk homepage (or your choice) using an external CSS stylesheet
      • Do the exact same as the previous item, but make the JavaScript external, and the function must execute on a button click.
      • Must receive 1,000 organic sessions in one month
      • Must have Google Analytics tracking installed, plus Google Search Console, Bing Webmaster Tools, and Yandex Webmaster set up
      • Create a custom 404 page
      • Create a 301, 302, and 307 redirect
      • Create a canonical to an exact duplicate, and another to a unique page — watch behavior

The site must contain at least one instance of each of the following, and every page that contains a directive (along with the pages affected by those directives) must be tracked through a rank tracker:

  • Rel canonical
  • Noindex
  • Noindex, follow
  • Mobile alternate (one page must be mobile-friendly)
  • Noarchive
  • Noimageindex
  • Meta refresh

Set up rank tracking

The trainee can use whatever tracking tool they like; https://www.wincher.com/ is $6/month for 100 keywords. The purpose of the rank tracking is to measure the effects of directives implemented, redirects, and general fluctuation.

Create the following XML sitemaps:

  • Write the following XML sitemaps by hand for at least 5 URLs each: mobile, desktop, and Android app. Also create one desktop XML sitemap with hreflang annotations (a rough Python sketch follows this list)
  • Figure out how to ping Google & Bing with your sitemap URL
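Here is a rough sketch, under the assumption of placeholder example.com URLs, of writing a small hreflang-annotated sitemap and pinging the engines from Python. The /ping endpoints are the publicly documented sitemap ping URLs.

```python
# Build a tiny XML sitemap with hreflang annotations, save it, and ping Google & Bing.
# All URLs are placeholders; the sitemap must be live on your server before pinging.
import urllib.parse
import urllib.request

pages = [
    {"loc": "https://example.com/en/", "alts": {"en": "https://example.com/en/",
                                                "es": "https://example.com/es/"}},
    {"loc": "https://example.com/es/", "alts": {"en": "https://example.com/en/",
                                                "es": "https://example.com/es/"}},
]

entries = []
for page in pages:
    links = "\n".join(
        '    <xhtml:link rel="alternate" hreflang="{}" href="{}"/>'.format(lang, href)
        for lang, href in page["alts"].items()
    )
    entries.append("  <url>\n    <loc>{}</loc>\n{}\n  </url>".format(page["loc"], links))

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
    + "\n".join(entries) + "\n</urlset>"
)

with open("sitemap.xml", "w") as f:
    f.write(sitemap)

# Ping Google and Bing once the sitemap is live
sitemap_url = urllib.parse.quote("https://example.com/sitemap.xml", safe="")
for ping in ("https://www.google.com/ping?sitemap=", "https://www.bing.com/ping?sitemap="):
    urllib.request.urlopen(ping + sitemap_url).read()
```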

Writing robots.txt

  • Design a robots.txt that has specific blocking conditions for regular Googlebot, Bingbot, and all other user agents. The rule sets must be independent and must not interfere with each other.
  • Write a rule set that disallows everything but allows at least 1 folder.
  • Test the robots.txt file through the Search Console robots.txt tester (a quick local check is also sketched below).
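For a quick local sanity check of those rules, the standard library's robotparser is enough. Note that Python's parser applies the first matching rule for a user agent, so put Allow lines before broad Disallows; Google uses longest-match precedence, so still confirm in Search Console.

```python
# Local sanity check of robots.txt rules; this approximates, but does not replicate,
# Google's own parsing.
import urllib.robotparser

robots_txt = """\
User-agent: Googlebot
Disallow: /private/

User-agent: Bingbot
Disallow: /beta/

User-agent: *
Allow: /public/
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

checks = [
    ("Googlebot", "https://example.com/private/page.html"),    # expect False
    ("Bingbot", "https://example.com/private/page.html"),      # expect True
    ("SomeOtherBot", "https://example.com/public/page.html"),  # expect True
    ("SomeOtherBot", "https://example.com/anything-else"),     # expect False
]
for agent, url in checks:
    print(agent, url, "->", parser.can_fetch(agent, url))
```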

Crawl the site and fix errors (Use Screaming Frog)

Project 3 – PR, Sales, Promotion and Community Involvement

These tasks can be done on an independent website or directly for a client; it depends on your organizational requirements. This is the part of the training where the trainee learns how to negotiate, sell, listen, promote, and create exposure for themselves.

Sales & negotiation

  • Close one guest post deal (i.e. have your content placed on an external website). Bonus if this is done via a phone call.
  • Create & close one syndication deal (i.e. have your content placed and rel canonical’d back to your content). Bonus if this is done via a phone call.
  • Close one advertising deal (this could be as simple as negotiating a banner placement, and as hard as completely managing the development of the ad plus tracking)
  • Sit in on 5 sales calls (depending on your business, this may need to be adjusted — it could be customer service calls)
  • Sit in on 5 sales meetings (again, adjust this for your business)

PR

  1. Create a story, write a press release, get the story covered by any publication (bonus if there’s a link back to your original release, or a rel canonical)
  2. Use a PR wire to syndicate, or find your own syndication partner

Community involvement

  • Sign up for a Moz account and answer at least 15 questions in the forum
  • Sign up for a Quora account and answer at least 5 questions
  • Write 3 blog posts and get them featured on an industry website
  • Speak at an event, no matter how small; must be at least 10 minutes long

YouTube

  • Create a screencast tutorial, upload it to YouTube, get 1,000 views (they will also need to optimize description, tags, etc.)
  • Here’s an example: https://www.youtube.com/watch?v=EXhmF9rjqP4 (that was my first try at this, years ago which you can use as inspiration)

Facebook & Twitter Paid Ads

  • On both networks, pay to get 100 visits from an ad. These campaigns must be tracked properly in an analytics platform, not only in FB and Twitter analytics!

Adwords

  • Create 1 campaign (custom ad) with the goal of finding real number of impressions versus estimated search volume from Keyword Planner.
  • Bonus: Drive 100 visits with an ad. Remember to keep the costs low — this is just training!

Project 4 – Data Manipulation & Analytics

Spreadsheets are to SEOs as fire trucks are to firefighters. Trainees need to be proficient in Excel or Google Docs right from the start. These tasks are useful for grasping data manipulation techniques in spreadsheets, Google Analytics, and some more advanced subjects, like scraping and machine learning classification.

Excel skills

Must be able to fill in required arguments for the following formulas in under 6 seconds:

  • Index + match
  • VLOOKUP (we should really be teaching people to index-match, because it’s more versatile and is quicker when dealing with larger datasets)
  • COUNTIF, COUNTIFS (2 conditions)
  • SUMIF, SUMIFS (2 conditions)
  • IF & AND statement in the same formula
  • Max, Min, Sum, Avg, Correl, Percentile, Len, Mid, Left, Right, Search, & Offset are also required formulas.

Also:

  • Conditional formatting based on a formula
  • Create a meaningful pivot table + chart
  • Record a macro that will actually be used
  • Ability to copy, paste, move, and transpose data, and to copy an entire row and paste it into a new sheet — all while never touching the mouse.

Google Analytics

  • Install Google Analytics (Universal Analytics), and Google Tag Manager at least once — ensure that the bare minimum tracking works properly.
  • Pass the GAIQ Exam with at least 90%
  • Create a non-interaction event
  • Create a destination goal
  • Create a macro that finds a value in the DOM and only fires on a specific page
  • Create a custom segment, segmenting session by Google organic, mobile device only, Android operating system, US traffic only — then share the segment with another account.
  • Create an alert for increasing 404 page errors (comparison by day, threshold is 10% change)
  • Install the Google Tag Assistant for Chrome and learn to record and decipher requests for debugging
  • Use the Google Analytics Query explorer to pull from any profile — you must pull at least 3 metrics, 1 dimension, sort by 1 metric, and have 1 filter.
  • Create one Google Content Experiment — this involves creating two pages and A/B testing to find the winner. They’ll need to have some sort of call to action; it could be as simple as a form or a targeted click. Either way, traffic doesn’t determine the winner here; it’s conversion rate.

Google Search Console

  • Trainee must go through every report (I really mean every report), and double-check the accuracy of each using external SEO tools (except crawl activity reports). The point here is to find out why there are discrepancies between what SEO tools find and what Google Search Console reports.
  • Fetch and render 5 different pages from 5 domains, include at least 2 mobile pages
  • Fetch (only fetch) 3 more pages; 1 must be mobile
  • Submit an XML sitemap
  • Create https, http, www, and non-www versions of the site they built in the previous project and identify discrepancies.
  • Answer: Why don’t clicks from search analytics add up compared to Google Analytics?
  • Answer: How are impressions from search analytics measured?

Link auditing

  • Download link reports for 1 website. Use Google Search Console, Majestic, Ahrefs, and Moz, and combine them all in one Excel file (or Google Sheets file). If the total number of rows across all 4 exports is over Excel’s limit, the trainee will need to figure out how to handle large files on their own (hint: SQL or other database).
  • Must combine all links, de-duplicate, have columns for all anchor texts, and check if links are still alive (hint: the trainee can use Screaming Frog to check live links, or URL Profiler). A rough pandas sketch follows below.
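As a rough illustration of the combining step, here is a pandas sketch. The file names and column mappings are placeholders; each tool labels its export columns differently, so the trainee will need to map them to a common schema first.

```python
# Combine link exports from several tools into one de-duplicated list with pandas.
# File names and column names below are illustrative, not the tools' real headers.
import pandas as pd

exports = {
    "search_console.csv": {"Linking page": "source", "Target page": "target", "Anchor": "anchor"},
    "majestic.csv":       {"Source URL": "source", "Target URL": "target", "Anchor Text": "anchor"},
    "ahrefs.csv":         {"Referring Page": "source", "Link URL": "target", "Anchor": "anchor"},
    "moz.csv":            {"Source URL": "source", "Target URL": "target", "Anchor Text": "anchor"},
}

frames = []
for filename, column_map in exports.items():
    df = pd.read_csv(filename).rename(columns=column_map)
    df = df.reindex(columns=["source", "target", "anchor"])  # common schema
    df["tool"] = filename
    frames.append(df)

all_links = pd.concat(frames, ignore_index=True).drop_duplicates(subset=["source", "target"])
all_links.to_csv("combined_links.csv", index=False)
print(len(all_links), "unique links")
```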

Explore machine learning

Scrape something

  • Use at least 3 different methods to extract information from any webpage (hint: import.io, importxml)

Log file analysis

  • Let the trainee use whatever software they want to parse the log files; just remember to explain how different servers will have different fields.
  • Grab a copy of any web server access log files that contain at least the following fields: user-agent, timestamp, URI, IP, Method, Referrer (ensure that CDNs or other intermediary transactions are not rewriting the IP addresses).
  • Trainee must be able to do the following (a rough parsing sketch follows this list):
    • Find Googlebot requests; double-check by reverse DNS that it’s actually Googlebot
    • Find a 4xx error encountered by Googlebot, then find the referrer for that 4xx error by looking at other user agent requests to the same 4xx error
    • Create a pivot table with all the URLs requested and the amount of times they were requested by Googlebot
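Here is a minimal sketch of those three tasks, assuming a combined-format (Apache-style) access log saved as access.log; the regex will need adjusting for other log formats.

```python
# Filter Googlebot hits from an access log, verify an IP by reverse DNS, and count
# requests per URL. The regex assumes a common/combined Apache-style log format.
import re
import socket
from collections import Counter

line_pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<uri>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

googlebot_hits = []
with open("access.log") as log:
    for line in log:
        match = line_pattern.match(line)
        if match and "Googlebot" in match.group("agent"):
            googlebot_hits.append(match.groupdict())

def is_real_googlebot(ip):
    # First step of verification: the reverse DNS of a genuine Googlebot IP ends in
    # googlebot.com or google.com (a full check also confirms the forward lookup).
    try:
        host = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return False
    return host.endswith(".googlebot.com") or host.endswith(".google.com")

errors = [hit for hit in googlebot_hits if hit["status"].startswith("4")]
counts = Counter(hit["uri"] for hit in googlebot_hits)  # pivot-table style summary

print("Googlebot requests:", len(googlebot_hits))
print("4xx errors hit by Googlebot:", [(h["uri"], h["status"]) for h in errors[:10]])
print("Most requested URLs:", counts.most_common(10))
if googlebot_hits:
    sample_ip = googlebot_hits[0]["ip"]
    print(sample_ip, "verified as Googlebot:", is_real_googlebot(sample_ip))
```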

Keyword Planner

The candidate must be able to do the following:

  • Find YoY search volume for any given term
  • Find keyword limits, both in the interface and by uploading a CSV
  • Find the mobile trends graph for a set of keywords
  • Use negative keywords
  • Find breakdown by device

Google Chrome Development tools

The candidate must be able to do the following:

  • Turn off Javascript
  • Manipulate elements of the page (As a fun exercise, get them to change a news article to a completely new story)
  • Find every request Chrome makes when visiting a webpage
  • Download the HAR file
  • Run a speed audit & security audit directly from the development tool interface
  • Change their user agent to Googlebot
  • Emulate an Apple iPhone 5
  • Add a CSS attribute (or change one)
  • Add a breakpoint
  • Use the shortcut key to bring up development tools

Project 5 – Miscellaneous / Fun Stuff

These projects are designed to broaden their skills, as well as prepare the trainee for the future and introduce them to important concepts.

Use a proxy and a VPN

  • As long as they are able to connect to a proxy and a VPN in any application, this is fine — ensure that they understand how to verify their new IP.

Find a development team, and observe the development cycle

  • Have the trainees be present during a scrum/sprint kickoff, and a release.
  • Have the trainees help write development tickets and prioritize accordingly.

Have them spend a day helping other employees with different jobs

  • Have them spend a day with the PR, analytics folks, devs… everyone. The goal should be to understand what it’s like to live a day in their shoes, and assist them throughout the entire day.

Get a website THEY OWN penalized. Heck, make it two!

  • Now that the trainee has built a website by hand, feel free to get them to put up another couple of websites and get some traffic pouring in.
  • Then, start searching for nasty links and other deceptive SEO tactics that are against the Webmaster Guidelines and get that website penalized. Hint: Head to fiverr.com for some services.
  • Bonus: Try to get the penalty reversed. Heh, good luck :)

API skills

  • Request data from 2 different APIs using at least 2 different technologies (either a programming language or software — I would suggest the SEMrush API and Alchemy Language API). Hints: They can use Postman, Google Docs, Excel, command line, or any programming language.
  • Google APIs are also fantastic, and there are lots of free services in the Google Cloud Console.

Learn concepts of programming

Write 2 functions in 2 different programming languages — these need to be functions that do something useful (i.e. “hello world” is not useful).

Ideas:

  • A JavaScript bookmarklet that extracts link metrics from Majestic or Moz for the given page
  • A simple application that extracts the title, H1, and all links from a given URL (sketched below)
  • A simple application that emails you if a change has been detected on a webpage
  • Pull word count from 100 pages in less than 10 seconds
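For instance, the URL extractor idea might start out as a sketch like this, using the third-party requests and BeautifulSoup packages:

```python
# Pull the title, H1s, and links from a page. A starting point, not production code.
import requests
from bs4 import BeautifulSoup

def extract_on_page_basics(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "h1s": [h1.get_text(strip=True) for h1 in soup.find_all("h1")],
        "links": [a["href"] for a in soup.find_all("a", href=True)],
    }

print(extract_on_page_basics("https://example.com/"))
```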

If I had to pick the technologies, they would be JavaScript and Python. JavaScript (Node, Express, React, Angular, Ember, etc.) because I believe things are moving this way, i.e. one language for both front end and back end. Python because of its rich data science & machine learning libraries, which may become a core part of SEO tasks in the future.

Do an introductory course on computer science / build a search engine

I strongly recommend that anyone in SEO build their own search engine. And no, I'm not crazy; this isn't crazy, it's just hard (a toy sketch follows the list below). There are two ways to do this, and I'd recommend both.

  • Complete intro to Computer Science (you build a search engine in Python). This is a fantastic course; I strongly recommend it even if the junior already has a CS degree.
  • Sign up to https://opensolr.com/, crawl a small website, and build your own search engine. You’ll go through a lot of pain to configure what you want, but you’ll learn all about Apache Solr and how a popular search technology works.
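Neither option is replaced by this, but if you want a feel for the core idea before committing to a full course or to Solr, here is a toy inverted index in a few lines of Python. Real engines add crawling, ranking, stemming, and a thousand other things.

```python
# A toy "search engine": build an inverted index over a few documents and run
# a simple boolean AND query against it. Documents are placeholder strings.
from collections import defaultdict

documents = {
    "page1": "lightsaber replicas and lightsaber colors explained",
    "page2": "the history of the jedi order",
    "page3": "how a lightsaber works according to star wars lore",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

print(search("lightsaber"))          # {'page1', 'page3'}
print(search("lightsaber history"))  # set() -- no page contains both terms
```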

Super Evil Genius Bonus Training

Get them to pass http://oap.ninja/, built by the infamous Dean Cruddace. Warning, this is evil — I’ve seen seasoned SEOs give up just hours into it.

These days, SEO job requirements demand a lot from candidates.

Employers are asking for a wider array of skills that range from development to design as standard, not “preferred.”

Have a look around at current SEO job listings. You might be surprised just how much we’re expected to know these days:

  • Strong in Google Analytics/Omniture
  • Assist in the development of presentations to clients
  • Advanced proficiency with MS Excel, SQL
  • Advanced writing, grammar, spelling, editing, and English skills with a creative flair
  • Creating press releases and distribution
  • Proficiency in design software, Photoshop and Illustrator preferred
  • Develop and implement architectural, technical, and content recommendations
  • Conduct keyword research including industry trends and competitive analysis
  • Experience with WordPress and/or Magento (preferred)
  • Experience creating content for links and outreach
  • Experience in building up social media profiles and executing a social media strategy
  • Ability to program in HTML/CSS, VB/VBA, C++, PHP, and/or Python are a plus
  • A/B and Multivariate testing
  • Knowledge of project management software such as Basecamp, MS Project, Visio, Salesforce, etc
  • Basic knowledge of PHP, HTML, XML, CSS, JavaScript
  • Develop + analyze weekly and monthly reports across multiple clients

The list goes on and on, but you get the point. We’re expected to be developers, designers, PR specialists, salespeople, CRO, and social managers. This is why I believe we need to expose juniors to a wide set of tasks and help them develop a broad skill set.

“I’m a Junior SEO and my boss is making me do this training now, I hate you Dave!”

You might hate me now, but when you’re making a lot more money you might change your mind (you might even want to cuddle).

Plus, I’m putting you through hell so that….

  • You don’t lose credibility in front of developers (hint: these are the people who will have to implement your consulting). By using the correct terminology, and by doing parts of the work, you’ll be able to empathize and give better advice.
  • You don’t limit yourself to specific projects/tasks because of lack of knowledge/experience in other specialisms within SEO.
  • You will become a well-rounded marketer, able to take on whatever Google’s Algorithm of Wonder throws at you or jump into other disciplines within digital marketing with a solid foundation.

Feel free to ping me on Twitter (@dsottimano) or you can catch me hanging out with the DMG crew.



Moz Blog


Wake Up, SEOs – the NEW New Google is Here

Posted by gfiorelli1

In 2011 I wrote a post here on Moz. The title was “Wake Up SEOs, the New Google is Here.”

In that post I presented some concepts that, in my personal opinion, we SEOs needed to pay attention to in order to follow the evolution of Google.

Sure, I also presented a theory which ultimately proved incorrect; I was much too confident about things like rel=”author”, rel=”publisher”, and the potential decline of the Link Graph influence.

However, the premises of that theory were substantially correct, and they remain correct five years later:

  1. Technical SEO is foundational to the SEO practice;
  2. The user is king, which means that Google will focus more and more on delivering the best user search experience — hence, SEO must evolve from “Search Engine Optimization” into “Search Experience Optimization”;
  3. Web performance optimization (SiteSpeed), 10X content, and semantics would come to play a big role in SEO.

Many things have changed in our industry in the past 5 years. The time has come to pause, take a few minutes, and assess what Google is and where it’s headed.

I’ll explain how I “study” Google and what I strongly believe we, the SEOs, should pay attention to if we want not only to survive, but to anticipate Google’s end game, readying ourselves for the future.

Obviously, consider that, while I believe it’s backed up by data, facts, and proof, this is my opinion. As such, I kindly ask you not to take what I write for granted, but rather as an incentive for your own investigations and experiments.

Exploring the expanded universe of Google

Credit: Robson Ribeiro

SEO is a kingdom of uncertainty.

However, one constant never changes: almost every SEO dreams of being a Jedi at least once in her life.

I, too, fantasize about using the Force… Gianlu Ka Fiore Lli, Master Jedi.

Honestly, though, I think I’m more like Mon Mothma.

Like her, I am a strategist by nature. I love to investigate, to see connections where nobody else seems to see them, and to dig deeper into finding answers to complex questions, then design plans based on my investigations.

This way of being means that, when I look at the mysterious wormhole that is Google, I examine many sources:

  1. The official Google blogs;
  2. The “Office Hours” hangouts;
  3. The sometimes contradictory declarations Googlers make on social media (when they don’t share an infinite loop of GIFs);
  4. The Google Patents and the ones filed by people now working for Google;
  5. The news (and stories) about the companies Google acquires;
  6. The biographies of the people Google employs in key areas;
  7. The “Google Fandom” (aka what we write about it);
  8. Rumors and propaganda.

Now, when examining all these sources, it’s easy to create amazing conspiranoiac (conspiracy + paranoia) theories. And I confess: I helped create, believed, and defended some of them, such as AuthorRank.

In my opinion, though, this methodology for finding answers about Google is the best one for understanding the future of our beloved industry of search.

If we don’t dig into the “Expanded Universe of Google,” what we have is a timeline composed only of updates (Panda 1.N, Penguin 1.N, Pigeon…), which is totally useless in the long term:


Instead, if we create a timeline with all the events related to Google Search (which we can discover simply by being well-informed), we begin to see where Google’s heading:


The timeline above confirms what Google itself openly declared:

“Machine Learning is a core, transformative way by which we’re rethinking how we’re doing everything.”
– (Sundar Pichai)

Google is becoming a “Machine Learning-First Company,” as defined by Steven Levy in this post.

Machine learning is becoming so essential in the evolution of Google and search that perhaps we should go beyond listening only to official Google spokespeople like Gary Illyes or John Mueller (nothing personal, just to be clear… for instance, read this enlightening interview of Gary Illyes by Woj Kwasi). Maybe we should start paying more attention to what people like Christine Robson, Greg Corrado, Jeff Dean, and the staff of Google Brain write and say.

The second timeline tells us that, starting in 2013, Google began investing money, intellectual effort, and energy on a sustained scale in:

  • Machine learning;
  • Semantics;
  • Context understanding;
  • User behavior (or “Signals/Semiotics,” as I like to call it).

2013: The year when everything changed

Google rolled out Hummingbird only three years ago, but (and it's not just a saying) it feels like decades ago.

Let’s quickly rehash: what’s Hummingbird?

Hummingbird is the Google algorithm as a whole. It’s composed of four phases:

  1. Crawling, which collects information on the web;
  2. Parsing, which identifies the type of information collected, sorts it, and forwards it to a suitable recipient;
  3. Indexing, which identifies and associates resources in relation to a word and/or a phrase;
  4. Search, which…
    • Understands the queries of the users;
    • Retrieves information related to the queries;
    • Filters and clusters the information retrieved;
    • Ranks the resources; and
    • Paints the search result page and so answers the queries.

This last phase, Search, is where we can find the “200+ ranking factors” (RankBrain included) and filters like Panda or anti-spam algorithms like Penguin.

Remember that there are as many search phases as vertical indices exist (documents, images, news, video, apps, books, maps…).

We SEOs tend to fixate almost exclusively on the Search phase, forgetting that Hummingbird is more than that.

This approach to Google is myopic and does not withstand a very simple logical square exercise.

  1. If Google is able to correctly crawl a website (Crawling);
  2. to understand its meaning (Parsing and Indexing);
  3. and, finally, if the site itself responds positively to the many ranking factors (Search);
  4. then that website will be able to earn the organic visibility it aims to reach.

If even one of the three elements of the logical square is missing, organic visibility is missing; think about non-optimized AngularJS websites, and you’ll understand the logic.

The website on the left in a non-JS enabled browser. On the right, JS enabled reveals all of the content. Credit: Builtvisible.com

How can we be SEO Jedi if we only see one facet of the Force?

Parsing and indexing: often forgotten

Over the past 18 months, we’ve seen a sort of technical SEO Renaissance, as defined by Mike King in this fundamental deck and despite attempts to classify technical SEOs as makeup artists.

On the contrary, we’re still struggling to fully understand the importance of the Parsing and Indexing phases.

Of course, we can justify that by claiming that parsing is the most complex of the four phases. Google agrees, as it openly declared when announcing SyntaxNet.


However, if we don’t optimize for parsing, then we’re not going to fully benefit from organic search, especially in the months and years to come.

How to optimize for parsing and indexing

As a premise to parsing and indexing optimization, we must remember an oft-forgotten aspect of search, which Hummingbird highlighted and enhanced: entity search.

If you remember what Amit Singhal said when he announced Hummingbird, he declared that it had “something of Knowledge Graph.”

That part was — and I’m simplifying here for clarity’s sake — entity search, which is based on two kinds of entities:

  1. Named entities are what the Knowledge Graph is about, such as persons, landmarks, brands, historic movements, and abstract concepts like “love” or “desire”;
  2. Search entities are “things” related to the act of searching. Google uses them to determine the answer for a query, especially in a personalized context. They include:
    • Query;
    • Documents and domain answering to the query;
    • Search session;
    • Anchor text of links (internal and external);
    • Time when the query is executed;
    • Advertisements responding to a query.

Why does entity search matter?

It matters because entity search is the reason Google better understands the personal and almost unique context of a query.

Moreover, thanks to entity search, Google better understands the meaning of the documents it parses. This means it’s able to index them better and, finally, to achieve its main purpose: serving the best answers to the users’ queries.

This is why semantics is important: semantic search is optimizing for meaning.

Credit: Starwars.com

It’s not a ranking factor, it’s not needed to improve crawling, but it is fundamental for Parsing and Indexing, the big forgotten-by-SEOs algorithm phases.

Semantics and SEO

First of all, we must consider that there are different kinds of semantics and that, sometimes, people tend to get them confused.

  1. Logical semantics, which is about the relations between concepts/linguistic elements (e.g.: reference, presupposition, implication, et al)
  2. Lexical semantics, which is about the meaning of words and their relation.

Logical semantics

Structured data is the big guy right now in logical semantics, and Google (both directly and indirectly) is investing a lot in it.

A couple of months ago, when the mainstream marketing gurusphere was discussing the 50 shades of the new Instagram logo or the average SEO was (justifiably) shaking his fists against the green “ads” button in the SERPs, Google released the new version of Schema.org.

This new version, as Aaron Bradley finely commented here, improves the ability to disambiguate between entities and/or better explain their meaning.

For instance, now:

At the same time, we shouldn’t forget to always use the most important property of all: “sameAs”, one of the few properties that’s present in every Schema.org type.

Finally, as Mike Arnesen recently explained quite well here on the Moz blog, take advantage of the semantic HTML attributes ItemRef and ItemID.

How do we implement Schema.org in 2016?

It is clear that Google is pushing JSON-LD as the preferred method for implementing Schema.org.

The best way to implement JSON-LD Schema.org is to use the Knowledge Graph Search API, which uses the standard Schema.org types and is compliant with JSON-LD specifications.

As an alternative, you can use the recently rolled out JSON-LD Schema Generator for SEO tool by Hall Analysis.
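Whichever generator you use, the output is just a script block of JSON. Here is a minimal sketch that builds a Schema.org Organization as a Python dictionary and serializes it to JSON-LD; every value is a placeholder, and note the sameAs property doing the disambiguation work mentioned above.

```python
# Emit a Schema.org Organization as a JSON-LD script block. All values are placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co.",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    # sameAs points at the entity's other authoritative profiles for disambiguation
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example",
        "https://twitter.com/example",
        "https://www.facebook.com/example",
    ],
}

snippet = '<script type="application/ld+json">\n{}\n</script>'.format(
    json.dumps(organization, indent=2)
)
print(snippet)  # paste into the <head>, or inject it via Tag Manager as noted below
```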

To solve a common complaint about JSON-LD (its volume and how it may affect the performance of a site), we can:

  1. Use Tag Manager in order to fire Schema.org when needed;
  2. Use PreRender in order to let the browser begin loading the pages your users may visit after the one they’re currently on, anticipating the load of the JSON-LD elements of those pages.

The importance Google gives to Schema.org and structured data is confirmed by the new and radically improved version of the Structured Data Testing Tool, which is now more actionable for identifying mistakes and test solutions thanks to its JSON-LD (again!) and Schema.org contextual autocomplete suggestions.

Semantics is more than structured data #FTW!

One mistake I foresee is thinking that semantic search is only about structured data.

It’s the same kind of mistake people make in international SEO, when reducing it to hreflang alone.

The reality is that semantics is present from the very foundations of a website, found in:

  1. Its code, specifically HTML;
  2. Its architecture.

HTML


Since its beginnings, HTML has included semantic markup (e.g.: title, H1, H2…).

Its latest version, HTML5, added new semantic elements, the purpose of which is to semantically organize the structure of a web document and, as W3C says, to allow “data to be shared and reused across applications, enterprises, and communities.”

A clear example of how Google is using the semantic elements of HTML is its Featured Snippets, or answer boxes.

As declared by Google itself (“We do not use structured data for creating Featured Snippets”) and explained well by Dr. Pete, Richard Baxter, and very recently Simon Penson, the documents that tend to be used for answer boxes usually display these three factors:

  1. They already rank on the first page for the query pulling out the answer box;
  2. They positively answer using basic on-page factors;
  3. They have clean — or almost clean — HTML code.

The conclusion, then, is that semantic search starts in the code and that we should pay more attention to those “boring,” time-consuming, not-a-priority W3C error reports.

Architecture

The semiotician in me (I studied semiotics and the philosophy of language in university with the likes of Umberto Eco) cannot help but consider information architecture itself as semantics.

Let me explain.

Open http://www.starwars.com/ in a tab of your browser to follow along below

Everything starts with the right ontology

Ontology is a set of concepts and categories in a subject area (or domain) that shows their properties and the relations between them.

If we take the Starwars.com site as example, we can see in the main menu the concepts in the Star Wars subject area:

  1. News/Blog;
  2. Video;
  3. Events;
  4. Films;
  5. TV Shows;
  6. Games/Apps;
  7. Community;
  8. Databank (the Star Wars Encyclopedia).

Ontology leads to taxonomy (because everything can be classified)

If we look at Starwars.com, we see how every concept included in the Star Wars domain has its own taxonomy.

For instance, the Databank presents several categories, like:

  1. Characters;
  2. Creatures;
  3. Locations;
  4. Vehicles;
  5. Et cetera, et cetera.

Ontology and taxonomy, then, lead to context

If we think of Tatooine, we tend to think about the planet where Luke Skywalker lived his youth.

However, if we visit a website about deep space exploration, Tatooine would be one of the many exoplanets that astronomers have discovered in the past few years.

As you can see, ontology (Star Wars vs celestial bodies) and taxonomies (Star Wars planets vs exoplanets) determine context and help disambiguate between similar entities.

Ontology, taxonomy, and context lead to meaning

The better we define the ontology of our website, structure its taxonomy, and offer better context to its elements, the better we explain the meaning of our website — both to our users and to Google.

Starwars.com, again, is very good at doing this.

For instance, if we examine how it structures a page like the one on TIE fighters, we see that every possible kind of content is used to help explain what a TIE fighter is:

  1. Generic description (text);
  2. Appearances of the TIE fighter in the Star Wars movies (internal links with optimized anchor text);
  3. Affiliations (internal links with optimized anchor text);
  4. Dimensions (text);
  5. Videos;
  6. Photo gallery;
  7. Soundboard (famous quotes by characters. In this case, it would be the classic “zzzzeeewww” sound many of us used as the ring tone on our old Nokias :D );
  8. Quotes (text);
  9. History (a substantial article with text, images, and links to other documents);
  10. Related topics (image plus internal links).

In the case of characters like Darth Vader, the information can be even richer.

The effectiveness of the information architecture of the Star Wars website (plus its authority) is such that its Databank is one of the very few non-Wikidata/Wikipedia sources that Google is using as a Knowledge Graph source.


What tool can we use to semantically optimize the structure of a website?

There are, in fact, several tools we can use to semantically optimize the information architecture of a website.

Knowledge Graph Search API

The first one is the Knowledge Graph Search API, because in using it we can get a ranked list of the entities that match given criteria.

This can help us better define the subjects related to a domain (ontology) and can offer ideas about how to structure a website or any kind of web document.
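As a sketch of what that looks like in practice, here is a minimal call to the Knowledge Graph Search API with the requests library. You need your own key from the Google API Console; YOUR_API_KEY below is a placeholder.

```python
# Query the Knowledge Graph Search API for entities matching a term and print
# each entity's score, name, types, and short description.
import requests

params = {
    "query": "Star Wars",
    "limit": 10,
    "languages": "en",
    "key": "YOUR_API_KEY",  # placeholder: create a key in the Google API Console
}
response = requests.get("https://kgsearch.googleapis.com/v1/entities:search", params=params)

for element in response.json().get("itemListElement", []):
    entity = element.get("result", {})
    print(
        round(element.get("resultScore", 0), 1),
        entity.get("name"),
        entity.get("@type"),
        entity.get("description", ""),
    )
```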

RelFinder

A second tool we can use is RelFinder, which is one of the very few free tools for entity research.

As you can see in the screencast below, RelFinder is based on Wikipedia. Its use is quite simple:

  1. Choose your main entity (eg: Star Wars);
  2. Choose the entity you want to see connections with (eg: Star Wars Episode IV: A New Hope);
  3. Click “Find Relations.”

RelFinder will detect entities related to both (e.g.: George Lucas or Marcia Lucas), their disambiguating properties (e.g.: George Lucas as director, producer, and writer) and factual ones (e.g.: lightsabers as an entity related to Star Wars and first seen in Episode IV).

RelFinder is very useful if we must do entity research on a small scale, such as when preparing a content piece or a small website.

However, if we need to do entity research on a bigger scale, it’s much better to rely on the following tools:

AlchemyAPI and other tools

AlchemyAPI, which was acquired by IBM last year, uses machine and deep learning in order to do natural language processing, semantic text analysis, and computer vision.

AlchemyAPI, which offers a 30-day trial API Key, is based on the Watson technology; it allows us to extract a huge amount of information from text, with concepts, entities, keywords, and taxonomy offered by default.

Resources about AlchemyAPI

Other tools that allow us to do entity extraction and semantic analysis on a big scale are:

Lexical semantics

As said before, lexical semantics is that branch of semantics that studies the meaning of words and their relations.

In the context of semantic search, this area is usually defined as keyword and topical research.

Here on Moz you can find several Whiteboard Friday videos on this topic:

How do we conduct semantically focused keyword and topical research?

Despite its recent update, Keyword Planner still can be useful for performing semantically focused keyword and topical research.

In fact, that update could even be deemed as a logical choice, from a semantic search point of view.

Terms like “PPC” and “pay-per-click” are synonyms, and even though each one surely has a different search volume, it’s evident how Google presents two very similar SERPs if we search for one or the other, especially if our search history already exhibits a pattern of searches related to SEM.

Yet this dimming of keyword data is less helpful for SEOs in that it makes for harder forecasting and prioritization of which keywords to target. This is especially true when we search for head terms, because it exacerbates a problem that Keyword Planner had: combining stemmed keywords that — albeit having “our keyword” as a base — have nothing in common because they mean completely different things and target very different topics.

However (and this is a pro tip), there is a way to discover the most useful keyword even when they all have the same search volume: look at how much advertisers bid for it. Trust the market ;-)

(If you want to learn more about the recent changes to Keyword Planner, go read this post by Bill Slawski.)

Keyword Planner for semantic search

Let’s say we want to create a site about Star Wars lightsabers (yes, I am a Star Wars geek).

What we could do is this:

  1. Open Keyword Planner / Find new Keywords and get (AH!) search volume data;
  2. Describe our product or service (“News” in the snapshot above);
  3. Use the Wikipedia page about lightsabers as a landing page (if your site were Spanish, the Wikipedia page should be the Spanish one);
  4. Indicate our product category (Movies & Films above);
  5. Define the target and eventually indicate negative keywords;
  6. Click on “Get Ideas.”

Google will offer us these Ad Groups as results:


The Ad Groups are a collection of semantically related keywords. They’re very useful for:

  1. Identifying topics;
  2. Creating a dictionary of keywords that can be given to writers for text, which will be both natural and semantically consistent.

Remember, then, that Keyword Planner allows us to do other kinds of analysis too, such as breaking down how the discovered keywords/Ad Groups are used by device or by location. This information is useful for understanding the context of our audience.

If you have one or a few entities for which you want to discover topics and grouped keywords, working directly in Keyword Planner and exporting everything to Google Sheets or an Excel file can be enough.

However, when you have tens or hundreds of entities to analyze, it’s much better to use the Adwords API or a tool like SEO Powersuite, which allows you to do keyword research following the method I described above.

Google Suggest, Related Searches, and Moz Keyword Explorer

Alongside Keyword Planner, we can use Google Suggest and Related Searches. Not for simply identifying topics that people search for and then writing an instant blog post or a landing page about them, but for reaffirming and perfecting our site’s architecture.

Continuing with the example of a site or section specializing in lightsabers, if we look at Google Suggest we can see how “lightsaber replica” is one of the suggestions.

Moreover, amongst the Related Searches for “lightsaber,” we see “lightsaber replica” again, which is a clear signal of its relevance to “lightsaber.”

Finally, we can click on and discover “lightsaber replica”-related searches, thus creating what I define as the “search landscape” about a topic.

The model above is not scalable if we have many entities to analyze. In that case, a tool like Moz Keyword Explorer can be helpful thanks to the options it offers, as you can see in the snapshot below:


Other keywords and topical research sources

Recently, Powerreviews.com presented survey results that state how Internet users tend to prefer Amazon over Google for searching information about a product (38% vs 35%).

So, why not use Amazon for doing keyword and topical research, especially if we are doing it for ecommerce websites or for the MOFU and BOFU phases of our customers’ journey?

We can use the Amazon Suggest:

Or we can use a free tool like the Amazon Keyword Tool by SISTRIX.

The Suggest function, though, is present in (almost) every website that has a search box (your own site, even, if you have it well-implemented!).

This means that if we’re searching for more mainstream and top-of-the-funnel topics, we can use the suggestions of social networks like Pinterest (i.e.: explore the voluptuous universe of the “lightsaber cakes” and related topics):

Pinterest, then, is a real topical research goldmine thanks to its tagging system:

Pinterest Lightsaber Tags

On-page

Once we’ve defined the architecture, the topics, and prepared our keyword dictionaries, we can finally work on the on-page facet of our work.

The details of on-page SEO are another post for another time, so I’ll simply recommend you read this evergreen post by Cyrus Shepard.

The best way to grade the semantic search optimization of a written text is to use TF-IDF analysis, offered by sites like OnPage.org (which also offers a clear guide about the advantages and disadvantages of TF-IDF analysis).

Remember that TF-IDF can also be used for doing competitive semantic search analysis and to discover the keyword dictionaries used by our competitors.
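If you would rather script it than use a tool, here is a rough sketch with scikit-learn's TfidfVectorizer. The documents are placeholder strings standing in for the text of your page and the competing pages you would fetch for a target query.

```python
# Compare the top TF-IDF weighted terms of our page against competitors' pages.
# The document strings below are placeholders for real page text.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = {
    "our_page": "lightsaber replicas, lightsaber colors and how lightsabers work",
    "competitor_1": "lightsaber colors, kyber crystals and famous lightsaber duels",
    "competitor_2": "how to build a lightsaber replica prop at home",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents.values())
terms = vectorizer.get_feature_names_out()

for name, row in zip(documents.keys(), matrix.toarray()):
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:5]
    print(name, [term for term, weight in top if weight > 0])
```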

User behavior / Semiotics and context

In the beginning of this post, we saw how Google is heavily investing in better understanding the meaning of the documents it crawls, so as to better answer the queries users perform.

Semantics (and semantic search) is only one of the pillars on which Google is basing this tremendous effort.

The other pillar consists of understanding user search behaviors and the context of the users performing a search.

User search behavior

Recently, Larry Kim shared two posts based on experiments he did, demonstrating his theory that RankBrain is about factors like CTR and dwell time.

While these posts are super actionable, present interesting information with original data, and confirm other tests conducted in the past, these so-called user signals (CTR and dwell time) may not be directly related to RankBrain but, instead, to user search behaviors and personalized search.

Be aware, however, that my statement here above should be taken as a personal theory, because Google itself doesn’t really know how RankBrain works.

AJ Kohn, Danny Sullivan, and David Harry wrote additional interesting posts about RankBrain, if you want to dig into it (for the record, I wrote about it too here on Moz).

Even if RankBrain may be included in the semantic search landscape due to its use of Word2Vec technology, I find it better to concentrate on how Google may use user search behaviors to better understand the relevance of the parsed and indexed documents.

Click-through rate

Since Rand Fishkin presented his theory — backed up with tests — that Google may use CTR as a ranking factor more than two years ago, a lot has been written about the importance of click-through rate.

Common sense suggests that if people click more often on one search snippet than another that perhaps ranks in a higher position, then Google should take that users’ signal into consideration, and eventually lift the ranking of the page that consistently receives higher CTR.

Common sense, though, is not so easy to apply when it comes to search engines, and repeatedly Googlers have declared that they do not use CTR as a ranking factor (see here and here).

And although Google has long since developed a click fraud detection system for Adwords, it’s still not clear if it would be able to scale it for organic search.

On the other hand — let me be a little bit conspiranoiac — if CTR is not important at all, then why has Google changed the pixel widths of the title tag and meta description? Just for “better design”?

But as Eric Enge wrote in this post, one of the few things we know is that Google filed a patent (Modifying search result ranking based on a temporal element of user feedback, May 2015) about CTR. It’s surely using CTR in testing environments to better calculate the value and grade of other rankings factors and — this is more speculative — it may give a stronger importance to click-through rate in those subsets of keywords that clearly express a QDF (Query Deserves Freshness) need.

What’s less discussed is the importance CTR has in personalized search, as we know that Google tends to paint a custom SERP for each of us depending on both our search history and our personal click-through rate history. They’re key in helping Google determine which SERPs will be the most useful for us.

For instance:

  1. If we search something for the first time, and
  2. for that search we have no search history (or not enough to trigger personalized results), and
  3. the search presents ambiguous entities (e.g.: “Amber”),
  4. then it’s only thanks to our personal CTR/search history that Google will determine which search results related to a given entity to show or not (amber the stone or Amber Rose or Amber Alerts…).

Finally, even if Google does not use CTR as a ranking factor, that doesn’t mean it isn’t an important metric and signal for SEOs. We have years of experience and hundreds of tests proving how important it is to optimize our search snippets (and now Rich Cards) with the appropriate use of structured data in order to earn more organic traffic, even when we rank worse than our competitors.
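A simple way to act on this is to compare every query’s CTR against the average CTR your site earns at that position, using a Search Console performance export. A minimal sketch, assuming the CSV and column names shown (they will vary by export):

```python
# Flag queries whose snippets underperform the average CTR at their
# position -- candidates for title/description/structured-data work.
import pandas as pd

df = pd.read_csv("search_console_queries.csv")   # assumed columns: query, clicks, impressions, ctr, position
# If ctr is exported as a percentage string (e.g. "5.2%"), convert it first.
df["position_bucket"] = df["position"].round().astype(int)

# Average CTR observed at each rounded position across our own queries.
benchmark = df.groupby("position_bucket")["ctr"].mean().rename("expected_ctr")
df = df.join(benchmark, on="position_bucket")

df["ctr_gap"] = df["ctr"] - df["expected_ctr"]
underperformers = df[df["impressions"] > 500].nsmallest(20, "ctr_gap")
print(underperformers[["query", "position", "impressions", "ctr", "expected_ctr"]])
```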

Watch time

Having good CTR metrics is totally useless if the pages our visitors land on don’t fulfill the expectation the search snippet created.

This is similar to the difference between a clickbait headline and a persuasive one. The first will probably cause a click back to the search results page; the second will retain and engage visitors.

The ability of a site to retain its users is what we usually call dwell time, but which Google defines as watch time in this patent: Watch Time-Based Ranking (March 2013).

This patent is usually cited in relation to video because the patent itself uses video as its content example, but Google doesn’t restrict the definition to videos alone:

In general, “watch time” refers to the total time that a user spends watching a video. However, watch times can also be calculated for and used to rank other types of content based on an amount of time a user spends watching the content.

Watch time is indeed a more useful user signal than CTR for understanding the quality of a web document and its content.

Are you skeptical and don’t trust me? Trust Facebook, then, because it also uses watch time in its news feed algorithm:

We’re learning that the time people choose to spend reading or watching content they clicked on from News Feed is an important signal that the story was interesting to them.


We are adding another factor to News Feed ranking so that we will now predict how long you spend looking at an article in the Facebook mobile browser or an Instant Article after you have clicked through from News Feed. This update to ranking will take into account how likely you are to click on an article and then spend time reading it. We will not be counting loading time towards this — we will be taking into account time spent reading and watching once the content has fully loaded. We will also be looking at the time spent within a threshold so as not to accidentally treat longer articles preferentially.

With this change, we can better understand which articles might be interesting to you based on how long you and others read them, so you’ll be more likely to see stories you’re interested in reading.
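We can’t see Google’s watch time, but we can approximate our own engagement picture. Here is a rough sketch that estimates a dwell-time proxy per landing page from a raw analytics hit export; the file and column names are assumptions, single-hit sessions will show up as zero, and the result is only a directional signal:

```python
# Rough "watch time" proxy per landing page from a raw analytics export.
# Assumes one row per hit with: session_id, landing_page, timestamp.
import pandas as pd

hits = pd.read_csv("hits.csv", parse_dates=["timestamp"])

session_span = (
    hits.groupby(["session_id", "landing_page"])["timestamp"]
        .agg(["min", "max"])
        .assign(engaged_seconds=lambda d: (d["max"] - d["min"]).dt.total_seconds())
        .reset_index()
)

# Pages whose snippets win the click but lose the visitor quickly
# (sessions with a single hit register as 0 engaged seconds).
by_page = session_span.groupby("landing_page")["engaged_seconds"].median().sort_values()
print(by_page.head(20))
```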

Context and the importance of personalized search

I usually joke that the biggest mistake a gang of bank robbers could make is bringing along their smartphones. It would be quite easy to do PreCrime-style investigations simply by checking their activity timeline, which includes their location history on Google Maps.

(Image caption: a conference day in Adelaide, as recorded in Google Maps location history.)

In order to fulfill its mission of offering the best answers to its users, Google must not only understand the web documents it crawls so as to index them properly, and not only improve its own ranking factors (taking into consideration the signals users give during their search sessions); it also needs to understand the context in which users perform a search.

Here’s a sense of what Google knows about us: our search history, our location history, and our device and app activity, among other things.

It’s because of this compelling need to understand our context that Google hired the entire Behav.io team back in 2013.

Behav.io, if you don’t know it already, was a company that developed alpha-stage software based on its open-source framework Funf (still active), the purpose of which was to record and analyze the data that smartphones keep track of: location, speed, nearby devices and networks, phone activity, noise levels, and so on.

All this information is needed to better understand the implicit aspects of a query, especially one performed from a smartphone and/or via voice search, and to better process what Tom Anthony and Will Critchlow define as compound queries.

However, personalized search is also determined by (again) entity search, specifically by search entities.

The relationship between search entities creates a “probability score,” which may determine whether or not a web document is shown in a given SERP.

For instance, let’s say that someone searches for a topic (e.g. Wookiees) and has never clicked on a search snippet of our site, but has clicked on another site with content about that same topic (e.g. Wookieepedia) which links to our page on the subject (e.g. “How to distinguish one wookiee from another?”).

Those links — specifically their anchor texts — would help our site and page to earn a higher probability score than a competitor site that isn’t linked to by those sites present in the user’s search history.

This means that our page will have a better probability of appearing in that user’s personalized SERP than our competitors’.
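Purely to make the patent’s idea concrete, here is a toy scoring sketch. It is not Google’s algorithm; every name, number, and boost in it is invented for illustration:

```python
# Toy illustration of the "probability score" idea from the search
# entities patent -- NOT Google's implementation. Candidate pages get a
# boost when they are linked (with on-topic anchors) from sites already
# in the user's click/search history.
user_history_domains = {"wookieepedia.example", "starwars-fans.example"}

candidate_pages = [
    {
        "url": "oursite.example/wookiee-guide",
        "base_relevance": 0.62,
        "inlinks": [
            {"source_domain": "wookieepedia.example",
             "anchor": "how to distinguish one wookiee from another"},
        ],
    },
    {
        "url": "competitor.example/wookiee-article",
        "base_relevance": 0.65,
        "inlinks": [
            {"source_domain": "randomblog.example", "anchor": "click here"},
        ],
    },
]

def probability_score(page, query_terms, history, history_boost=0.15):
    # Start from a generic relevance score and add an arbitrary boost for
    # each on-topic link coming from a domain in the user's history.
    score = page["base_relevance"]
    for link in page["inlinks"]:
        anchor_match = any(term in link["anchor"] for term in query_terms)
        if link["source_domain"] in history and anchor_match:
            score += history_boost   # illustrative value only
    return score

for page in candidate_pages:
    print(page["url"], round(probability_score(page, {"wookiee"}, user_history_domains), 2))
```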

You’re probably asking: what’s the actionable point of this patent?

Link building/earning is not dead at all, because it’s relevant not only to the Link Graph, but also to entity search. In other words, link building is semantic search, too.

The importance of branding and offline marketing for SEO

One of the classic complaints SEOs have about Google is how it favors brands.

The real question, though, should be this: “Why aren’t you working to become a brand?”

Be aware! I am not talking about “vision,” “mission,” and “values” here — I’m talking about plain and simple semantics.

Throughout this post I have spoken of entities (both named and search entities), cited Word2Vec (vectors are “vast amounts of written language embedded into mathematical entities”), talked about lexical semantics, meaning, ontology, and personalized search, and implied topics like co-occurrences and knowledge bases.

Branding has a lot to do with all of these things.

I’ll try to explain it with a very personal example.

Last May in Valencia I debuted as conference organizer with The Inbounder.

One of the problems I faced when promoting the event was that “inbounder,” which I thought was a cool name for an event targeting inbound marketers, is also a basketball term.

The problem was obvious: how do I make Google understand that The Inbounder was not about basketball, but digital marketing?

The strategy we followed from the very beginning was to work on the branding of the event (I explain more about The Inbounder story here on Inbound.org).

We did this:

  • We created small local events, so as to
    • develop presence in local newspapers online and offline, a tactic that also prompted marketers to search Google for the event using branded keywords (e.g.: “The Inbounder conference,” “The Inbounder Inbound Marketing Conference,” etc…), and
    • click on our search results snippets, hence activating personalized search
  • We worked with influencers (the speakers themselves) to trigger branded searches and direct traffic (remember: Chrome stores every URL we visit);
  • We did outreach and published guest posts about the event on sites visited by our audience (and recorded in its search history).

As a result, The Inbounder now occupies the entire first page of Google for its brand name and, more importantly in semantic terms, Google presents The Inbounder events as suggested and related searches. It associates the event with all the searches I could ever want:

Another example is Trivago and its global TV advertising campaigns:

Trivago was very smart in constantly showing “Trivago” and “hotel” in the same phrase, even making their motto “Hotel? Trivago.”

This is a simple psychological trick for creating word associations.

As a result, people searched on Google for “hotel Trivago” (or “Trivago hotel”), especially just after the ads were broadcast:

One of the results is that now, Google suggests “hotel Trivago” when we start typing “hotel” and, as in the case of The Inbounder, it presents “hotel Trivago” as a related search:

Wake up SEOs, the new new Google is here

Yes, it is. And it’s all about better understanding web documents and queries in order to provide the best answers to its users (and make money in the meantime).

To achieve this objective, ideally becoming the long-desired “Star Trek computer,” Google is investing money, people, and efforts into machine/deep learning, neural networks, semantics, search behavior, context analysis, and personalized search.

Remember, SEO is no longer just about “200 ranking factors.” SEO is about making our websites become the sources Google cannot help but use for answering queries.

This is exactly why semantic search is of utmost importance and not just something worth the attention of a few geeks passionate about linguistics, computer science, and patents.

Work on parsing and indexing optimization now, seriously implement semantic search in your SEO strategy, take advantage of the opportunities personalized search offers you, and always put users at the center of everything you do.

In doing so you’ll build a solid foundation for your success in the years to come, both via classic search and with Google Assistant/Now.


Should SEOs and Marketers Continue to Track and Report on Keyword Rankings? – Whiteboard Friday

Posted by randfish

Is the practice of tracking keywords truly dying? There’s been a great deal of industry discussion around the topic of late, and some key points have been made. In today’s Whiteboard Friday, Rand speaks to the biggest challenges keyword rank tracking faces today and how to solve for them.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about keyword ranking reports. There have been a few articles that have come out recently on a number of big industry sites around whether SEOs should still be tracking their keyword rankings.

I want to be clear: Moz has a little bit of a vested interest here. So the question is: can you actually trust me? Obviously, I’m a big shareholder in Moz and I’m the founder, so I care a lot about how Moz does as a software business. We help people track rankings. Does that mean I’m biased? I’m going to do my best not to be. So rather than saying you absolutely should track rankings, I’m instead going to address what most of these articles have brought up as the problems of rank tracking and then talk about some solutions.

My suspicion is you should probably be rank tracking. I think that if you turn it off and you don’t do it, it’s very hard to get a lot of the value and intelligence that we need as SEOs. It’s true there are challenges with keyword ranking reports, but they’re not serious enough to justify abandoning rank tracking entirely. We still get too much value from it.

The case against — and solutions for — keyword ranking data

A. People, places, and things

So let’s start with the case against keyword ranking data. First off, “keyword ranking reports are inaccurate.” There’s personalization, localization, and device type, and those bias results and have removed any “one true ranking.” We’ve done a bunch of analyses of these, and this is absolutely the case.

Personalization, turns out, doesn’t change ranking that much on average. For an individual it can change rankings dramatically. If they visited your website before, they could be historically biased to you. Or if they visited your competitor’s, they could be biased. Their previous search history might have biased them in a single session, those kinds of things. But with the removal of Google+ from search results, personalization is actually not as dramatically changing as it used to be. Localization, though, still huge, absolutely, and device differences, still huge.

Solution

But we can address this, and the way to do that is by tracking these things separately. So here you can see I’ve got a ranking report that shows me my mobile rankings versus my desktop rankings. I think this is absolutely essential. Especially if you’re getting a lot of traffic from both mobile and desktop search, you need to be tracking those separately. Super smart. Of course we should do that.

We can do the same thing on the local side as well. So I can say, “Here, look. This is how I rank in Seattle. Here’s how I rank in Minneapolis. Here’s how I rank in the U.S. with no geographic personalization,” if Google were to do that. Those types of rankings can also be pretty good.

It is true that local rank tracking has gotten a little more challenging, but folks like Moz itself, STAT (GetStat), SERPs.com, and Searchmetrics have all adjusted their rank tracking methodologies in order to have accurate local rank tracking. It’s pretty good. Same with device type, pretty darn good.
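If you want to slice that data yourself, here’s a minimal sketch that splits a generic rank tracker export by device and location. The file name, column names, and the Seattle example are placeholders and will differ by tool:

```python
# Compare average rank by device and tracked location from a generic
# rank tracker CSV export (column names are assumptions).
import pandas as pd

ranks = pd.read_csv("rank_export.csv")   # assumed columns: keyword, device, location, rank, date

pivot = ranks.pivot_table(
    index="keyword",
    columns=["device", "location"],
    values="rank",
    aggfunc="mean",
)

# Keywords where mobile lags desktop the most in one tracked location.
gap = pivot[("mobile", "Seattle")] - pivot[("desktop", "Seattle")]
print(gap.sort_values(ascending=False).head(10))
```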

B. Keyword value estimation

Another big problem that a number of folks have expressed is that we no longer know how much traffic an individual keyword sends. Because we don’t know how much traffic an individual keyword sends, we can’t really say, “What’s the value of ranking for that keyword?” Therefore, why bother tracking keyword rankings at all?

I think this is a little bit of spurious logic. The leap there doesn’t quite make sense to me. But I will say this. If you don’t know which keywords are sending you traffic specifically, you still know which pages are receiving search traffic. That is reported. You can get it in your Google Analytics, your Omniture report, whatever you’re using, and then you can tie that back to keyword ranking reports showing which pages are receiving traffic from which keywords.

Almost all of the rank tracking platforms, Moz included, have a report that shows you something like this. It says, “Here are the keywords that we believe are likely to have sent these percentages of traffic to this page, based on the keywords you’re tracking, the pages that rank for them, and how much search traffic those pages receive.”

Solution

So let’s track that. We can look at pages receiving visits from search, and we can look at which keywords they rank for. Then we can tie those together, which gives us the ability to then make not only a report like this, but a report that estimates the value contributed by content and by pages rather than by individual keywords.

In a lot of ways, this is almost superior to our previous methodology of tracking by keyword. Keyword value can still be estimated through AdWords and paid search, but this can also be estimated on a content basis, which means you get credit for how much value the page has created, based on all the search traffic that’s flowed to it and where that sits in your attribution lifecycle of people visiting those pages.
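If your platform doesn’t build that report for you, you can approximate it by joining a landing-page traffic export with your tracked keyword data. A rough sketch, with made-up file names, column names, and an illustrative CTR-by-position curve:

```python
# Estimate how much of a page's organic traffic each tracked keyword
# likely contributes, by joining landing-page visits with the keywords
# that page ranks for (weighted by search volume and a rank-based CTR).
import pandas as pd

pages = pd.read_csv("organic_landing_pages.csv")   # assumed columns: page, organic_sessions
kws = pd.read_csv("tracked_keywords.csv")          # assumed columns: keyword, page, rank, search_volume

# Very rough CTR-by-position curve; the exact numbers are illustrative.
ctr_curve = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}
kws["est_clicks"] = kws.apply(
    lambda r: r["search_volume"] * ctr_curve.get(int(r["rank"]), 0.02), axis=1
)

# Each keyword's share of the page's estimated clicks, applied to the
# page's actual organic sessions.
kws["share_of_page"] = kws["est_clicks"] / kws.groupby("page")["est_clicks"].transform("sum")
report = kws.merge(pages, on="page")
report["est_sessions_from_keyword"] = report["share_of_page"] * report["organic_sessions"]
print(report.sort_values("est_sessions_from_keyword", ascending=False)
            [["page", "keyword", "rank", "est_sessions_from_keyword"]].head(20))
```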

C. Tracking rankings and keyword relevancy

Pages often rank for keywords that they aren’t specifically targeting, because Google has gotten way better with user intent. So it can be hard or even impossible to track those rankings, because we don’t know what to look for.

Well, okay, I hear you. That is a challenge. This means basically what we have to do is broaden the set of keywords that we look at and deal with the fact that we’re going to have to do sampling. We can’t track every possible keyword, unless you have a crazy budget, in which case go talk to Rob Bucci up at STAT, and he will set you up with a huge campaign to track all your millions of keywords.

Solution

If you have a smaller budget, what you have to do is sample, and you sample by sets of keywords. Like these are my high conversion keywords — I’m going to assume I have a flower delivery business — so flower delivery and floral gifts and flower arrangements for offices. My long tail keywords, like artisan rose varieties and floral alternatives for special occasions, and my branded keywords, like Rand’s Flowers or Flowers by Rand.

I can create a bunch of different buckets like this, sample the keywords that are in them, and then I can track each of these separately. Now I can see, ah, these are sets of keywords where I’ve generally been moving up and receiving more traffic. These are sets of keywords where I’ve generally been moving down. These are sets of keywords that perform better or worse on mobile or desktop, or better or worse in these geographic areas. Right now I can really start to get true intelligence from there.
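Here’s a quick sketch of that bucketing approach in practice, using the flower delivery example; the bucket labels, file name, and columns are placeholders:

```python
# Track rank movement by keyword bucket instead of keyword by keyword.
# Buckets and column names follow the flower-delivery example and are
# illustrative only.
import pandas as pd

buckets = {
    "flower delivery": "high-conversion",
    "floral gifts": "high-conversion",
    "flower arrangements for offices": "high-conversion",
    "artisan rose varieties": "long-tail",
    "floral alternatives for special occasions": "long-tail",
    "rand's flowers": "branded",
}

ranks = pd.read_csv("rank_history.csv")   # assumed columns: keyword, date, rank
ranks["bucket"] = ranks["keyword"].str.lower().map(buckets)

trend = (
    ranks.dropna(subset=["bucket"])
         .groupby(["bucket", "date"])["rank"]
         .mean()
         .unstack("date")
)
print(trend)   # average rank per bucket over time: moving up, down, or flat
```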

Don’t let your keyword targeting — your keyword targeting meaning what keywords you’re targeting on which pages — determine what you rank track. Don’t let it do that exclusively. Sure, go ahead and take that list and put that in there, but then also do some more expansive keyword research to find those broad sets of search terms and phrases that you should be monitoring. Now we can really solve this issue.

D. Keyword rank tracking with a purpose

This one I think is a pretty insidious problem. But for many organizations ranking reports are more of a historical artifact. We’re not tracking them for a particular reason. We’re tracking them because that’s what we’ve always tracked and/or because we think we’re supposed to track them. Those are terrible reasons to track things. You should be looking for reasons of real value and actionability. Let’s give some examples here.

Solution

What I want you to do is identify the goals of rank tracking first, like: What do I want to solve? What would I do differently based on whether this data came back to me in one way or another?

If you don’t have a great answer to that question, definitely don’t bother tracking that thing. That should be the rule of all analytics.

So if your goal is to say, “Hey, I want to be able to attribute a search traffic gain or a search traffic loss to what I’ve done on my site or what Google has changed out there,” that is crucially important. I think that’s core to SEO. If you don’t have that, I’m not sure how we can possibly do our jobs.

We attribute search traffic gains and losses by tracking broadly, a broad enough set of keywords, hopefully in enough buckets, to be able to get a good sample set; by tracking the pages that receive that traffic so we can see if a page goes way down in its search visits. We can look at, “Oh, what was that page ranking for? Oh, it was ranking for these keywords. Oh, they dropped.” Or, “No, they didn’t drop. But you know what? We looked in Google Trends, and the traffic demand for those keywords dropped,” and so we know that this is a seasonality thing, or a fluctuation in demand, or those types of things.

And we can track by geography and device, so that we can say, “Hey, we lost a bunch of traffic. Oh, we’re no longer mobile-friendly.” That is a problem. Or, “Hey, we’re tracking and, hey, we’re no longer ranking in this geography. Oh, that’s because these two competitors came in and they took over that market from us.”

Another thing we could look at would be identifying pages that are in need of work, but that only require a small amount of work to produce a big change in traffic. So we could do things like track pages that rank on page two for given keywords. If we have a bunch of those, we can say, “Hey, maybe just a few on-page tweaks, a few links to these pages, and we could move up substantially.” We previously had a Whiteboard Friday where we talked about how you could do that with internal linking, and we’ve seen some remarkable results there.

We can track keywords that rank in position four to seven on average. Those are your big wins, because if you can move up from position four, five, six, or seven to one, two, or three, you can double or triple the search traffic you’re receiving from keywords like that.
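Pulling both of those “striking distance” lists out of a rank export takes only a few lines; again, the file and column names are assumptions:

```python
# Pull "striking distance" opportunities from a rank export:
# page-two rankings and average positions four to seven.
import pandas as pd

ranks = pd.read_csv("rank_history.csv")   # assumed columns: keyword, date, rank, page
latest = ranks.sort_values("date").groupby("keyword").tail(1)   # most recent rank per keyword

page_two = latest[(latest["rank"] >= 11) & (latest["rank"] <= 20)]
near_top = latest[(latest["rank"] >= 4) & (latest["rank"] <= 7)]

print("Page-two candidates (small tweaks, internal links):")
print(page_two[["keyword", "rank", "page"]])
print("\nPositions 4-7 (biggest traffic upside):")
print(near_top[["keyword", "rank", "page"]])
```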

You should also track long tail, untargeted keywords. If you’ve got a long tail bucket, like we’ve got up here, I can then say, “Aha, I don’t have a page that’s even targeting any of these keywords. I should make one. I could probably rank very easily because I have an authoritative website and some good content,” and that’s really all you might need.

We might look at some up-and-coming competitors. I want to track who’s in my space, who might be creeping up there. So I should track the most common domains that rank on page one or two across my keyword sets.

I can track specific competitors. I might say, “Hey, Joel’s Flower Delivery Service looks like it’s doing really well. I’m going to set them up as a competitor, and I’m going to track their rankings specifically, or I’m going to see…” You could use something like SEMrush and see specifically: What are all the keywords they rank for that you don’t rank for?
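If your tracker exports the top results for each keyword, counting which domains keep appearing is straightforward; the export format below is an assumption:

```python
# Count which domains appear most often on page one or two across the
# tracked keyword set, to spot up-and-coming competitors.
import pandas as pd
from urllib.parse import urlparse

serps = pd.read_csv("serp_results.csv")   # assumed columns: keyword, position, url
serps = serps[serps["position"] <= 20]
serps["domain"] = serps["url"].map(lambda u: urlparse(u).netloc)

competitors = (
    serps.groupby("domain")["keyword"]
         .nunique()                      # number of tracked keywords each domain ranks for
         .sort_values(ascending=False)
         .head(15)
)
print(competitors)
```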

This type of data, in my view, is still tremendously important to SEO, no matter what platform you’re using. But if you’re having these problems or if these problems are being expressed to you, now you have some solutions.

I look forward to your comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com
