Tag Archive | "impact"

How to prioritize SEO tasks by impact

How do you know if the SEO and content changes you’re making will benefit your site? Contributor Casie Gillette looks at ways to prioritize resources so they impact your bottom line and support your business objectives.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in Latest News | Comments Off

4 underutilized schema markup opportunities that impact SEO

Contributor Tony Edwards recommends taking advantage of little-used brand, image, app and person schema that indirectly help position a website for better rankings.

The post 4 underutilized schema markup opportunities that impact SEO appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in Latest News | Comments Off

How Does Mobile-First Indexing Work, and How Does It Impact SEO?

Posted by bridget.randolph

We’ve been hearing a lot about mobile-first indexing lately, as the latest development in Google’s ever-continuing efforts to make the web more mobile-friendly and reflect user behavior trends.

But there’s also a lot of confusion around what this means for the average business owner. Do you have to change anything? Everything? If your site is mobile-friendly, will that be good enough?

IS THIS GOING TO BE ANOTHER MOBILEGEDDON?!!

In this post I’ll go over the basics of what “mobile-first indexing” means, and what you may need to do about it. I’ll also answer some frequently asked questions about mobile-first indexing and what it means for our SEO efforts.

What is “mobile-first indexing”?

Mobile-first indexing is exactly what it sounds like. It just means that the mobile version of your website becomes the starting point for what Google includes in their index, and the baseline for how they determine rankings. If you monitor crawlbot traffic to your site, you may see an increase in traffic from Smartphone Googlebot, and the cached versions of pages will usually be the mobile version of the page.

It’s called “mobile-first” because it’s not a mobile-only index: for instance, if a site doesn’t have a mobile-friendly version, the desktop site can still be included in the index. But the lack of a mobile-friendly experience could negatively impact that site’s rankings, while a site with a better mobile experience could potentially receive a rankings boost even for searchers on desktop.

You may also want to think of the phrase “mobile-first” as a reference to the fact that the mobile version will be considered the primary version of your website. So if your mobile and desktop versions are equivalent — for instance if you’ve optimized your content for mobile, and/or if you use responsive design — this change should (in theory) not have any significant impact in terms of your site’s performance in search results.

However, it does represent a fundamental reversal in the way Google thinks about your website content and how it prioritizes crawling and indexation. Remember that up until now, the desktop site was considered the primary version (similar to a canonical URL) and the mobile site was treated as an “alternate” version for a particular use case. This is why Google encouraged webmasters with a separate mobile site (m.domain.com) to implement switchboard tags, which indicated the existence of a mobile URL version with a special rel=alternate tag. Google might not even crawl and cache the mobile versions of all of these pages, as it could simply display that mobile URL to mobile searchers.
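To make that concrete, here’s a minimal sketch of the switchboard annotation pair: the desktop URL declares its mobile alternate, and the mobile URL points back to the desktop version with rel=canonical. The example.com URLs are placeholders, and the 640px media query is just the commonly documented example value.

```python
# Sketch: generate the switchboard tag pair for a desktop/mobile URL pair.
# The URLs below are placeholders, not a prescription for your site.

def switchboard_tags(desktop_url: str, mobile_url: str) -> dict:
    """Return the link tag each version of the page should carry in its <head>."""
    return {
        # Goes in the <head> of the desktop page:
        "desktop": (
            '<link rel="alternate" '
            'media="only screen and (max-width: 640px)" '
            f'href="{mobile_url}">'
        ),
        # Goes in the <head> of the mobile page:
        "mobile": f'<link rel="canonical" href="{desktop_url}">',
    }

tags = switchboard_tags("https://www.example.com/page",
                        "https://m.example.com/page")
print(tags["desktop"])
print(tags["mobile"])
```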

This view of the desktop version as the primary one often meant in practice that the desktop site would be prioritized by SEOs and marketing teams and was treated as the most comprehensive version of a website, with full content, structured data markup, hreflang (international tags), the majority of backlinks, etc.; while the mobile version might have lighter content, and/or not include the same level of markup and structure, and almost certainly would not receive the bulk of backlinks and external attention.

What should I do about mobile-first indexing?

The first thing to know is that there’s no need to panic. So far this change is only in the very earliest stages of testing, and is being rolled out very gradually only to websites which Google considers to be “ready” enough for this change to have a minimal impact.

According to Google’s own latest guidance on the topic, if your website is responsive or otherwise identical in its desktop and mobile versions, you may not have to do anything differently (assuming you’re happy with your current rankings!).

That said, even with a totally responsive site, you’ll want to ensure that mobile page speed and load time are prioritized and that images and other (potentially) dynamic elements are optimized correctly for the mobile experience. Note that with mobile-first indexing, content which is collapsed or hidden in tabs, etc. due to space limitations will not be treated differently than visible content (as it may have been previously), since this type of screen real estate management is actually a mobile best practice.

If you have a separate mobile site, you’ll want to check the following (a rough automated spot-check for several of these items is sketched after the list):

  • Content: make sure your mobile version has all the high-quality, valuable content that exists on your desktop site. This could include text, videos and images. Make sure the formats used on the mobile version are crawlable and indexable (including alt-attributes for images).
  • Structured data: you should include the same structured data markup on both the mobile and desktop versions of the site. URLs shown within structured data on mobile pages should be the mobile version of the URL. Avoid adding unnecessary structured data if it isn’t relevant to the specific content of a page.
  • Metadata: ensure that titles and meta descriptions are equivalent on both versions of all pages.
    • Note that the official guidance says “equivalent” rather than “identical” – you may still want to optimize your mobile titles for shorter character counts, but make sure the same information and relevant keywords are included.
  • Hreflang: if you use rel=hreflang for internationalization, your mobile URLs’ hreflang annotations should point to the mobile version of your country or language variants, and desktop URLs should point to the desktop versions.
  • Social metadata: OpenGraph tags, Twitter cards and other social metadata should be included on the mobile version as well as the desktop version.
  • XML and media sitemaps: ensure that any links to sitemaps are accessible from the mobile version of the site. This also applies to robots directives (robots.txt and on-page meta-robots tags) and potentially even trust signals, like links to your privacy policy page.
  • Search Console verification: if you have only verified your desktop site in Google Search Console, make sure you also add and verify the mobile version.
  • App indexation: if you have app indexation set up for your desktop site, you may want to ensure that you have verified the mobile version of the site in relation to app association files, etc.
  • Server capacity: Make sure that your host servers can handle increased crawl rate.
    • (This only applies for sites with their mobile version on a separate host, such as m.domain.com.)
  • Switchboard tags: if you currently have mobile switchboard tags implemented, you do not need to change this implementation. These should remain as they are.
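Most of this checklist is a manual audit, but a few items — titles, meta descriptions, structured data blocks, hreflang annotations, image alt attributes, and rough content volume — lend themselves to an automated spot-check. Here’s a minimal sketch; the example.com and m.example.com URLs are placeholders, and it assumes both versions are server-rendered HTML (it won’t see content injected by JavaScript). Treat it as a starting point for a parity report, not a replacement for the checklist.

```python
# Rough parity spot-check between a desktop page and its mobile counterpart.
# URLs are placeholders; requires the requests and beautifulsoup4 packages.
import requests
from bs4 import BeautifulSoup

MOBILE_UA = ("Mozilla/5.0 (Linux; Android 10) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/100.0.0.0 Mobile Safari/537.36")

def fetch(url, mobile=False):
    headers = {"User-Agent": MOBILE_UA} if mobile else {}
    html = requests.get(url, headers=headers, timeout=30).text
    return BeautifulSoup(html, "html.parser")

def summarize(soup):
    desc = soup.find("meta", attrs={"name": "description"})
    return {
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "meta_description": desc.get("content") if desc else None,
        "structured_data_blocks": len(
            soup.find_all("script", type="application/ld+json")),
        "hreflang": sorted(link["hreflang"]
                           for link in soup.find_all("link", hreflang=True)),
        "images_missing_alt": sum(1 for img in soup.find_all("img")
                                  if not img.get("alt")),
        "word_count": len(soup.get_text(" ", strip=True).split()),
    }

desktop = summarize(fetch("https://www.example.com/page"))
mobile = summarize(fetch("https://m.example.com/page", mobile=True))

# Print a simple side-by-side comparison; "DIFF" rows deserve a closer look.
for key in desktop:
    flag = "OK  " if desktop[key] == mobile[key] else "DIFF"
    print(f"{flag} {key}: desktop={desktop[key]!r} mobile={mobile[key]!r}")
```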

Common questions about mobile-first indexing

Is mobile-first indexing adding mobile pages to a separate mobile index?

With mobile-first indexing, there is only one index (the same one Google uses now). The change to mobile-first indexing does not generate a new “mobile-first” index, nor is it creating a separate “mobile index” with a “desktop index” remaining active. Instead, it simply changes how content is added to the existing index.

Is the mobile-first index live and affecting my site now? If not, when does it go live?

Google has been experimenting with this approach to indexing on a small number of sites, which were selected based on perceived “readiness.” A wider rollout is likely to take a long time; in June 2017, Gary Illyes stated that it will probably take a few years before “we reach an index that is only mobile-first.”

Google has also stated the following on the Webmasters Blog, in a blog post dated Dec 18 2017:

“We will be evaluating sites independently on their readiness for mobile-first indexing based on the above criteria and transitioning them when ready. This process has already started for a handful of sites and is closely being monitored by the search team.

“We continue to be cautious with rolling out mobile-first indexing. We believe taking this slowly will help webmasters get their sites ready for mobile users, and because of that, we currently don’t have a timeline for when it’s going to be completed.”

Will Google only use my mobile site to determine my rankings?

Mobile-first means that the mobile version will be considered the primary version when it comes to how rankings are determined. However, there may be circumstances where the desktop version could be taken into consideration (for instance, if you don’t have a mobile version of a page).

That being said, you will potentially still see varying ranking results between mobile search results and desktop search results, so you’ll still want to track both. (In the same way that, today, Google primarily uses the desktop site to determine rankings, you still want to track mobile rankings, as these vary from desktop rankings based on user behavior and other factors.)

When might Google use the desktop site to determine rankings vs. the mobile site?

The primary use case I’ve seen referred to so far is that they will use the desktop site to determine rankings when there is no mobile version.

It is possible that for websites where the desktop version has additional ranking information (such as backlinks), that information could also be taken into consideration – but there is no guarantee that they will crawl or index the desktop version once they’ve seen the mobile version, and I haven’t seen any official statements that this would be the case.

Therefore, one of the official recommendations is that once the mobile-first indexing rollout happens, it would actually be better to have no mobile site than a broken or incomplete one. If you’re in the process of building your mobile site, or currently have a “placeholder”-type mobile version live, you should wait to launch your mobile site until it is fully ready.

What if I don’t have a mobile version of my site?

If you don’t have a mobile version of your site and your desktop version is not mobile-friendly, your content can still be indexed; however, you may not rank as well as mobile-friendly websites. This may even negatively impact your rankings in desktop search as well as mobile search results, because your site will be perceived as offering a poorer user experience than other sites (since the crawler will be a “mobile” crawler).

What could happen to sites with a large desktop site and a small mobile site? Will content on your desktop site that does not appear on the mobile version be indexed and appear for desktop searches?

The end goal for this rollout is that the index will be based predominantly on crawling mobile content. If you have a heavily indexed desktop version, they’re not going to suddenly purge your desktop content from the existing index and start fresh with just your thin mobile site indexed; but the more you can ensure that your mobile version contains all relevant and valuable content, the more likely it is to continue to rank well, particularly as they cut back on crawling desktop versions of websites.

How does this change ranking factors and strategy going forward?

This may impact a variety of ranking factors and strategy in the future; Cindy Krum at Mobile Moxie has written two excellent articles on what could be coming in the future around this topic.

Cindy talks about the idea that mobile-first indexing may be “an indication that Google is becoming less dependent on traditional links and HTML URLS for ranking.” It seems that Google is moving away from needing to rely so much on a “URL” system of organizing content, in favor of a more API type approach based on “entities” (thanks, structured data!) rather than URL style links. Check out Cindy’s posts for more explanation of how this could impact the future of search and SEO.

Is there a difference between how responsive sites and separate mobile sites will be treated?

Yes and no. The main difference will be in terms of how much work you have to do to get ready for this change.

If you have a fully responsive site, you should already have everything present on your mobile version that is currently part of the desktop version, and your main challenge will simply be to ensure that the mobile experience is well optimized from a user perspective (e.g. page speed, load time, navigation, etc).

With a separate mobile site, you’ll need to make sure that your mobile version contains everything that your desktop site does, which could be a lot of work depending on your mobile strategy so far.

Will this change how I should serve ads/content/etc. on my mobile site?

If your current approach to ads is creating a slow or otherwise poor user experience you will certainly need to address that.

If you currently opt to hide some of your mobile site content in accordions or tabs to save space, this is actually not an issue as this content will be treated in the same way as if it was loaded fully visible (as long as the content is still crawlable/accessible).

Does this change how I use rel=canonical/switchboard tags?

No. For now, Google has stated that if you have already implemented switchboard tags, you should leave them as they are.


Has this overview helped you to feel more prepared for the shift to mobile-first indexing? Are there any questions you still have?

I’d love to hear what you’re thinking about in the comments!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in Latest News | Comments Off

How Links in Headers, Footers, Content, and Navigation Can Impact SEO – Whiteboard Friday

Posted by randfish

Which link is more valuable: the one in your nav, or the one in the content of your page? Now, how about if one of those in-content links is an image, and one is text? Not all links are created equal, and getting familiar with the details will help you build a stronger linking structure.



Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about links in headers and footers, in navigation versus content, and how that can affect both internal and external links and the link equity and link value that they pass to your website or to another website if you’re linking out to them.

So I’m going to use Candy Japan here. They recently crossed $1 million in sales. Very proud of Candy Japan. They sell these nice boxes of random assortments of Japanese candy that come to your house. Their website is actually remarkably simplistic. They have some footer links. They have some links in the content, but not a whole lot else. But I’m going to imagine them with a few more links in here just for our purposes.

It turns out that there are a number of interesting items when it comes to internal linking. So, for example, some on-page links matter more and carry more weight than other kinds. If you are smart and use these across your entire site, you can get some incremental or potentially some significant benefits depending on how you do it.

Do some on-page links matter more than others?

So, first off, good to know that…

I. Content links tend to matter more

…just broadly speaking, than navigation links. That shouldn’t be too surprising, right? If I have a link down here in the content of the page pointing to my Choco Puffs or my Gummies page, that might actually carry more weight in Google’s eyes than if I point to it in my navigation.

Now, this is not universally true, but observably, it seems to be the case. So when something is in the navigation, it’s almost always universally in that navigation. When something is in here, it’s often only specifically in here. So a little tough to tell cause and effect, but we can definitely see this when we get to external links. I’ll talk about that in a sec.

II. Links in footers often get devalued

So if there’s a link that you’ve got in your footer, but you don’t have it in your primary navigation, whether that’s on the side or the top, or in the content of the page, a link down here may not carry as much weight internally. In fact, sometimes it seems to carry almost no weight whatsoever other than just the indexing.

III. More used links may carry more weight

This is a theory for now. But we’ve seen some papers on this, and there has been some hypothesizing in the SEO community that essentially Google is watching as people browse the web, and they can get that data and sort of see that, hey, this is a well-trafficked page. It gets a lot of visits from this other page. This navigation actually seems to get used versus this other navigation, which doesn’t seem to be used.

There are a lot of ways that Google might interpret that data or might collect it. It could be from the size of it or the CSS qualities. It could be from how it appears on the page visually. But regardless, that also seems to be the case.

IV. Most visible links may get more weight

This does seem to be something that’s testable. So if you have very small fonts, very tiny links, they are not nearly as accessible or obvious to visitors. It seems to be the case that they also don’t carry as much weight in Google’s rankings.

V. On pages with multiple links to the same URL

For example, let’s say I’ve got this products link up here at the top, but I also link to my products down here under Other Candies, etc. It turns out that Google will see both links. They both point to the same page in this case, both pointing to the same page over here, but this page will only inherit the value of the anchor text from the first link on the page, not both of them.

So Other Candies, etc., that anchor text will essentially be treated as though it doesn’t exist. Google ignores multiple links to the same URL. This is actually true internal and external. For this reason, if you’re going ahead and trying to stuff in links in your internal content to other pages, thinking that you can get better anchor text value, well look, if they’re already in your navigation, you’re not getting any additional value. Same case if they’re up higher in the content. The second link to them is not carrying the anchor text value.

Can link location/type affect external link impact?

Other items to note on the external side of things and where they’re placed on pages.

I. In-content links are going to be more valuable than footers or nav links

In general, nav links are going to do better than footers. But in content, this primary content area right in here, that is where you’re going to get the most link value if you have the option of where you’re going to get an external link from on a page.

II. What if you have links that open in a new tab or in a new window versus links that open in the same tab, same window?

It doesn’t seem to matter at all. Google does not appear to carry any different weight from the experiments that we’ve seen and the ones we’ve conducted.

III. Text links do seem to perform better, get more weight than image links with alt attributes

They also seem to perform better than JavaScript links and other types of links, but critically important to know this, because many times what you will see is that a website will do something like this. They’ll have an image. This image will be a link that will point off to a page, and then below it they’ll have some sort of caption with keyword-rich anchors down here, and that will also point off. But Google will treat this first link as though it is the one, and it will be the alt attribute of this image that passes the anchor text, unless this is all one href tag, in which case you do get the benefit of the caption as the anchor. So best practice there.

IV. Multiple links from same page — only the first anchor counts

Well, just like with internal links, only the first anchor is going to count. So if I have two links from Candy Japan pointing to me, it’s only the top one that Google sees first in the HTML. So it’s not where it’s organized in the site as it renders visually, but where it comes up in the HTML of the page as Google is rendering that.

V. The same link and anchor on many or most or all pages on a website tends to get you into trouble.

Not always, not universally. Sometimes it can be okay. Is Amazon allowed to link to Whole Foods from their footer? Yes, they are. They’re part of the same company and group and that kind of thing. But if, for example, Amazon were to go crazy spamming and decided to make it “cheap avocados delivered to your home” and put that in the footer of all their pages and point that to the WholeFoods.com/avocadodelivery page, that would probably get penalized, or it may just be devalued. It might not rank at all, or it might not pass any link equity. So notable that in the cases where you have the option of, “Should I get a link on every page of a website? Well, gosh, that sounds like a good deal. I’d pass all this page rank and all this link equity.” No, bad deal.

Instead, far better would be to get a link from a page that’s already linked to by all of these pages, like, hey, if we can get a link from the About page or from the Products page or from the homepage, a link on the homepage, those are all great places to get links. I don’t want a link on every page in the footer or on every page in a sidebar. That tends to get me in trouble, especially if it is anchor text-rich and clearly keyword targeted and trying to manipulate SEO.

All right, everyone. I look forward to your questions. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in Latest News | Comments Off

SEO for Copywriters: Tips on Measuring SEO Impact – Next Level

Posted by BrianChilds

Welcome to the newest installment of our educational Next Level series! In our last episode, Brian Childs shared a few handy shortcuts for targeting multiple keywords with one page. Today, he’s back to share how to use Google Analytics to measure the SEO impact of your content. Read on and level up!

Understanding how to write web content for SEO is important. But equally important is knowing how to measure the SEO impact of your content after it’s published. In this article I’ll describe how to use Google Analytics to create reports that evaluate the performance of articles or the writers creating those articles.

Let’s start with some definitions.

What is SEO content?

Search engine optimized content is website copy that is researched and written strategically, with the goal of maximizing its impact in the SERPs. This requires having a keyword strategy, the ability to conduct competitive analyses, and knowledge of current ranking factors.

If you’re a copywriter, you’ve likely already been asked by your clients to create content “written for SEO.” Translating this into action often means the writer needs to have a greater role in both strategy and research. Words matter in SEO, and spending the time to get them right is a big part of creating content effectively. Adding SEO research and analysis to the process of researching content often fits nicely.

So the question is: How do I measure the effectiveness of my content team?

We go in greater depth on the research and reporting processes during the Moz seminar SEO for Content Writers, but I’ll explain a few useful concepts here.

What should I measure?

Well-defined goals are at the heart of any good digital marketing strategy, whether you’re doing SEO or PPC. Goals will differ by client and I’ve found that part of my role as a digital marketer is to help the client understand how to articulate the business goals into measurable actions taken by visitors on their site.

Ideally, goals have a few essential traits. They should:

  • Have measurable value (revenue, leads generated, event registrations)
  • Be identifiable on the site (PDF downloads, button clicks, confirmation page views)
  • Lead to business growth (part of an online campaign, useful to sales team, etc.)

Broad goals such as “increase organic sessions on site” are rarely specific enough for clients to want to invest in after the first 3–6 months of a relationship.

One tool you can use to measure goals is Google Analytics (GA). The nice part about GA is that almost everyone has an account (even if they don’t know how to use it) and it integrates nicely with almost all major SEO software platforms.

Lay the foundation for your SEO research by taking a free trial of Moz Pro. After you’ve researched your content strategy and competition with Keyword Explorer and Open Site Explorer, you can begin measuring the content you create in Google Analytics.

Let me show you how I set this up.

How to measure SEO content using Google Analytics

Step 1: Review conversion actions on site

As I mentioned before, your SEO goals should tie to a business outcome. We discuss setting up goals, including a worksheet that shows monthly performance, during the Reporting on SEO Bootcamp.

During the launch phase of a new project, locate the on-site actions that contribute to your client’s business and then consider how your content can drive traffic to those pages. Some articles have CTAs pointing to a whitepaper; others may suggest setting up a consultation.

When interviewing your client about these potential conversion locations (contact us page, whitepaper download, etc), ask them about the value of a new customer or lead. For nonprofits, maybe the objective is to increase awareness of events or increase donations. Regardless of the goal, it’s important that you define a value for each conversion before creating goals in Google Analytics.

Step 2: Navigate to the Admin panel in Google Analytics

Once you have goals identified and have settled on an acceptable value for that goal, open up Google Analytics and navigate to the admin panel. At the time of writing this, you can find the Admin panel by clicking on a little gear icon at the bottom-left corner of the screen.

Step 3: Create a goal (including dollar value)

There are three columns in the Admin view: Account, Property, and View. In the “View” column, you will see a section marked “Goals.”

Once you are in Goals, select “+New Goal.”

I usually select “Custom” rather than the pre-filled templates. It’s up to you. I’d give the Custom option a spin just to familiarize yourself with the selectors.

Now fill out the goal based on the analysis conducted in step #1. One goal should be filled out for each conversion action you’ve identified. The most important factor is filling out a value. This is the dollar amount for this goal.

The Google description of how to create goals is located here: Create or Edit Goals

Step 4: Create and apply a “Segment” for Organic Traffic

Once you have your goals set up, you’ll want to set up and automate reporting. Since we’re analyzing traffic from search engines, we want to isolate only traffic coming from the Organic Channel.

Organic traffic = people who arrive on your site after clicking on a link from a search engine results page.

An easy way to isolate traffic of a certain type or from a certain source is to create a segment.

Navigate to any Google Analytics page in the reports section. You will see some boxes near the top of the page, one of them labeled “All Users” (assuming segments haven’t been configured in the past).

Select the box that says “All Users” and it will open up a list with checkboxes.

Scroll down until you find the checkbox that says “Organic Traffic,” then select and apply that.

Now, no matter what reports you look at in Google Analytics, you’ll only be viewing the traffic from search engines.

Step 5: Review the Google Analytics Landing Page Report

Now that we’ve isolated only traffic from search engines using a Google Analytics Segment, we can view our content performance and assess what is delivering the most favorable metrics. There are several reports you can use, but I prefer the “Landing Pages” report. It shows you the page where a visitor begins their session. If I want to measure blog writers, I want to know whose writing is generating the most traffic for me. The Landing Pages report will help do that.

To get to the Landing Pages report in Google Analytics, select this sequence of subheadings on the left sidebar:

Behavior > Site Content > Landing Pages

This report will show you, for any period of time, which pages are delivering the most visits. I suggest going deeper and sorting the content by the columns “Pages per session” and “Session Duration.” Identify the articles that are generating the highest average page depth and longest average session duration. Google will see these behaviors and take them as a signal that you’re delivering value to your visitors. That is good for SEO.
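If you’d rather pull this report programmatically than click through the UI, the Universal Analytics Reporting API v4 exposes the same landing-page data with the organic segment applied. The sketch below is one way to do it — the view ID and service-account key file are placeholders you’d replace with your own, and “gaid::-5” is, to the best of my knowledge, the ID of the built-in “Organic Traffic” segment.

```python
# Sketch: pull an organic-only Landing Pages report via the Google Analytics
# (Universal Analytics) Reporting API v4. VIEW_ID and key.json are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
VIEW_ID = "123456789"  # placeholder: your GA view ID

credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=SCOPES)
analytics = build("analyticsreporting", "v4", credentials=credentials)

report_request = {
    "viewId": VIEW_ID,
    "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
    "metrics": [
        {"expression": "ga:sessions"},
        {"expression": "ga:pageviewsPerSession"},
        {"expression": "ga:avgSessionDuration"},
        {"expression": "ga:goalValueAll"},  # the dollar values from your goals
    ],
    # ga:segment must be listed as a dimension whenever segments are applied.
    "dimensions": [{"name": "ga:landingPagePath"}, {"name": "ga:segment"}],
    "segments": [{"segmentId": "gaid::-5"}],  # built-in "Organic Traffic" segment
    "orderBys": [{"fieldName": "ga:sessions", "sortOrder": "DESCENDING"}],
}

response = analytics.reports().batchGet(
    body={"reportRequests": [report_request]}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    landing_page = row["dimensions"][0]
    sessions, pages_per_session, avg_duration, goal_value = row["metrics"][0]["values"]
    print(f"{landing_page}: {sessions} sessions, {pages_per_session} pages/session, "
          f"{avg_duration}s avg duration, ${goal_value} goal value")
```

The goal-value column in this output is the same number you’ll read off the far-right columns of the Landing Pages report in the next step.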

Step 6: Review the conversion value of your writers

Remember those goals we created? In the far right columns of the Landing Pages report, you will find the value being delivered by each page on your site. This is where you can help answer the question, “Which article topics or writers are consistently delivering the most business value?”

If you want to share this report with your team to help increase transparency, I recommend navigating up to the top of the page and, just beneath the name of the report, you’ll see a link called “Email.”

Automate your reporting by setting up an email that delivers either a .csv file or PDF on a monthly basis. It’s super easy and will save you a ton of time.

Want to learn more SEO content tips?

If you find this kind of step-by-step process helpful, consider joining Moz for our online training course focused on SEO for copywriters. You can find the upcoming class schedule here:

See upcoming schedule

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Related Articles

Posted in Latest News | Comments Off

How do our biases impact PPC performance?

With experience comes wisdom, but columnist Brett Middleton believes that search marketers can sometimes limit themselves by clinging to old habits.

The post How do our biases impact PPC performance? appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in Latest News | Comments Off

Maximizing your mobile impact

Search marketers, are you prepared for a mobile world? Columnist Amy Bishop discusses trends and opportunities to help guide your optimization effort and make the most of your mobile experience.

The post Maximizing your mobile impact appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in Latest News | Comments Off

Does Organic CTR Impact SEO Rankings? [New Data]

Posted by larry.kim

[Estimated read time: 13 minutes]

Does organic click-through rate (CTR) data impact page rankings? This has been a huge topic of debate for years within the search industry.


Some people think the influence of CTR on rankings is nothing more than a persistent myth. Like the one where humans and dinosaurs lived together at the same time — you know, like in that reality series “The Flintstones”?

Some other people are convinced that Google must look at end user data. Because how in the world would Google know which pages to rank without it?

Google (OK, at least one Google engineer who spoke at SMX) seems to indicate the latter is indeed the case.

I also highly encourage you to check out Rand Fishkin’s Whiteboard Friday discussing clicks and click-through rate. In short, the key point is this: If a page is ranking in position 3, but gets a higher than expected CTR, Google may decide to rank that page higher because tons of people are obviously interested in that result.

Seems kind of obvious, right?

And if true, we ought to be able to measure it! In this post, I’m going to try to show that RankBrain may just be the missing link between CTR and rankings.

Untangling meaning from Google RankBrain confusion


Let’s be honest: Suddenly, everyone is claiming to be a RankBrain expert. RankBrain-shaming is quickly becoming an industry epidemic.

Please ask yourself: Do most of these people — especially those who aren’t employed by Google, but even some of the most helpful and well-intentioned spokespeople who actually work for Google — thoroughly know what they’re talking about? I’ve seen a lot of confusing and conflicting statements floating around.

Here’s the wildest one. At SMX West, Google’s Paul Haahr said Google doesn’t really understand what RankBrain is doing.

If this really smart guy who works at Google doesn’t know what RankBrain does, how in the heck does some random self-proclaimed SEO guru definitively know all the secrets of RankBrain? They must be one of those SEOs who “knew” RankBrain was coming, even before Google announced it publicly on October 26, but just didn’t want to spoil the surprise.

Now let’s go to two of the most public Google figures: Gary Illyes and John Mueller.

Illyes seemed to shoot down the idea that RankBrain could become the most important ranking factor (something which I strongly believe is inevitable). Google’s Greg Corrado publicly stated that RankBrain is “the third-most important signal contributing to the result of a search query.”

Illyes also said on Twitter that: “Rankbrain lets us understand queries better. No affect on crawling nor indexing or replace anything in ranking.” But then later clarified: “…it does change ranking.”

I don’t disagree at all. RankBrain hasn’t replaced anything in ranking. (Not yet, anyway.)

Links still matter. Content still matters. Hundreds of other signals still matter.

It’s just that RankBrain had to displace something as a ranking signal. Whatever used to be Google’s third most important signal is no longer the third most important signal. RankBrain couldn’t be the third most important signal before it existed!

Now let’s go to Mueller. He believes machine learning will gain more prominence in search results, noting Bing and Yandex do a lot of this already. He noted that machine learning needs to be tested over time, but there are a lot of interesting cases where Google’s algorithm needs a system to react to searches it hasn’t seen before.

Bottom line: RankBrain, like other new Google changes, is starting out as a relatively small part of the Google equation today. RankBrain won’t replace other signals any time soon (think of it simply like this: Google is adding a new ingredient to your favorite dish to make it even tastier). But if RankBrain delivers great metrics and keeps users happy, then surely it will be given more weight and expanded in the future.


RankBrain headaches

If you want to nerd out on RankBrain, neural networks, semantic theory, word vectors, and patents, then you should read:

To be clear: my goal with this post isn’t to discuss tweets from Googlers, patents, research, or speculative theories.

Rather, I’m just going to ignore EVERYBODY and look at actual click data.

Searching for Rankbrain

Rand conducted one of the most popular tests of the influence of CTR on Google’s search results. He asked people to do a specific search and click on the link to his blog (which was in 7th position). This impacted the rankings for a short period of time, moving the post up to 1st position.

But these are all transient changes. Changes don’t persist.

It’s like how you can’t increase your AdWords Quality Scores simply by clicking on your own ads a few times. This is the oldest trick in the book and it doesn’t work.

The results of another experiment appeared on Search Engine Land last August and concluded that CTR isn’t a ranking factor. But this test had a pretty significant flaw — it relied on bots artificially inflating CTRs and search volume (and this test was only for a single two-word keyword: “negative SEO”). So essentially, this test was the organic search equivalent of click fraud. Google AdWords has been fighting click fraud for 15 years and they can easily apply these learnings to organic search. What did I just say about old tricks?

Before we look at the data, a final “disclaimer.” I don’t know if what this data reveals is definitively RankBrain, or another CTR-based ranking signal that’s part of the core Google algorithm. Regardless, there’s something here — and I can most certainly say with confidence that CTR is impacting rank. For simplicity, I’ll be referring to this as Rankbrain.

A crazy new experiment

Google has said that RankBrain is being tested on long-tail terms, which makes sense. Google wants to start testing its machine-learning system with searches they have little to no data on — and 99.9 percent of pages have zero external links pointing to them.

So how is Google able to tell which pages should rank in these cases? By examining engagement and relevance. CTR is one of the best indicators of both.

Head terms, as far as we know, aren’t being exposed to RankBrain right now. So by observing the differences between the organic search CTRs of long-tail terms versus head terms, we should be able to spot the difference.


We used 1,000 keywords in the same keyword niche (to isolate external factors like Google shopping and other SERP features that can alter CTR characteristics). The keywords are all from my own website: Wordstream.com.

I compared CTR versus rank for 1–2 word search terms, and did the same thing for long-tail keywords (4–10 word search terms).

Notice how the long-tail terms get much higher average CTRs for a given position. For example, in this data set, the head term in position 1 got an average CTR of 17.5 percent, whereas the long-tail term in position 1 had a remarkably high CTR, at an average of 33 percent.

You’re probably thinking: “Well, that makes sense. You’d expect long-tail terms to have stronger query intent, thus higher CTRs.” That’s true, actually.

But why is it that long-tail keyword terms with high CTRs are so much more likely to be in top positions versus bottom-of-page organic positions? That’s a little weird, right?

OK, let’s do an analysis of paid search queries in the same niche. I use organic search to come up with paid search keyword ideas and vice versa, so we’re looking at the same keywords in many cases.


Long-tail terms in this same vertical get higher CTRs than head terms. However, the difference between long-tail and head term CTR is very small in positions 1–2, and becomes huge as you go out to lower positions.

So in summary, something unusual is happening:

  • In paid search, long-tail and head terms get roughly the same CTR in high ad spots (1–2) and see huge differences in CTR for lower spots (3–7).
  • But in organic search, the long-tail and head terms in spots (1–2) have huge differences in CTR and very little difference as you go down the page.

Why are the same keywords behaving so differently in organic versus paid?

The difference (we think) is that RankBrain is boosting the search rankings of pages that have higher organic click-through rates.

Not convinced yet?

Which came first: the CTR or the ranking?

CTR and ranking are codependent variables. There’s obviously a relationship between the two, but which is causing what? In order to get to the bottom of this “chicken versus egg” situation, we’re going to have to do a bit more analysis.

The following graph takes the difference between the observed organic search CTR and the expected CTR, to figure out whether your page is beating — or being beaten by — the expected average CTR for a given organic position.

By looking only at the extent to which a keyword beats or is beaten by the predicted CTR, you essentially control for the natural relationship between CTR and ranking, which gives a better picture of what’s going on.
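Here’s a rough sketch of that calculation. It assumes a CSV export of query data (for example, from Search Console) with query, position, clicks, and impressions columns — those column names are placeholders — and it uses the average CTR at each rounded position within your own data set as the “expected” baseline.

```python
# Sketch: compare each query's observed CTR against the average CTR for its
# ranking position. Assumes a CSV export with "query", "position", "clicks",
# and "impressions" columns (placeholder names). Requires pandas.
import pandas as pd

df = pd.read_csv("search_queries.csv")
df["ctr"] = df["clicks"] / df["impressions"]
df["position_bucket"] = df["position"].round().astype(int)

# Expected CTR baseline: the mean CTR at each (rounded) position in this data set.
expected = df.groupby("position_bucket")["ctr"].mean().rename("expected_ctr")
df = df.join(expected, on="position_bucket")

# Positive = beating the expected CTR for that position; negative = falling short.
df["ctr_vs_expected"] = df["ctr"] - df["expected_ctr"]

underperformers = df.sort_values("ctr_vs_expected").head(20)
print(underperformers[["query", "position_bucket", "ctr",
                       "expected_ctr", "ctr_vs_expected"]])
```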


We found, on average, that if you beat the expected CTR, then you’re far more likely to rank in more prominent positions. Failing to beat the expected CTR makes it more likely you’ll appear in positions 6–10.

So, based on our example of long-tail search terms for this niche, if a page:

  • Beats the expected CTR for a given position by 20 percent, you’re likely to appear in position 1.
  • Beats the expected CTR for a given position by 12 percent, then you’re likely to appear in position 2.
  • Falls below the expected CTR for a given position by 6 percent, then you’re likely to appear in position 10.

And so on.

Here’s a greatly simplified rule of thumb:

The more your pages beat the expected organic CTR for a given position, the more likely you are to appear in prominent organic positions.

If your pages fall below the expected organic search CTR, then you’ll find your pages in lower organic positions on the SERP.

Want to move up by one position in Google’s rankings? Increase your CTR by 3 percent. Want to move up another spot? Increase your CTR by another 3 percent.

If you can’t beat the expected click-through rate for a given position, you’re unlikely to appear in positions 1–5.

Essentially, you can think of all of this as though Google is giving bonus points to pages that have high click-through rates. The fact that it looks punitive is just a natural side effect.

If Google gives “high CTR bonus points” to other websites, then your relative performance will decline. It’s not that you got penalized; it’s just you’re the only one who didn’t get the rewards.

A simple example: The Long-tail Query That Could

Here’s one quick example from our 1000-keyword data set. For the query: “email subjects that get opened,” this page has a ridiculously high organic CTR of 52.17%, which beats the expected CTR for the top spot in this vertical by over 60%. It also generates insanely great engagement rates, including a time on page of over 24 minutes.


We believe that these two strong engagement metrics send a clear signal to Google that the page matches the query’s intent, despite not having an exact keyword match in the content.

What does Google want?

A lot of factors go into ranking. We know links, content, and RankBrain are the top 3 search ranking factors in Google’s algorithm. But there are hundreds of additional signals Google looks at.

So let’s make this simple. Your website is a house.


This is a terrible website. It was built a long time ago and has received no SEO love in a long time (terrible structure, markup, navigation, content, etc). It ranks terribly. Nobody visits it. And those poor souls who do stumble across it wish they never had and quickly leave, wondering why it even exists.


This website is pretty good. It’s designed well. It’s obviously well-maintained. It addresses all the SEO essentials. Everything is optimized. It ranks reasonably well. A good amount of people visit and hang out a while since, hey, it has everything you’d expect in a website nowadays.


Now we get to the ultimate house. It has everything you could want in a website — beautifully designed, great content, and optimized in every way possible. It owns tons of prominent search positions and everyone goes here to visit (the parties are AMAZING) again and again because of the amazing experience — and they’re very likely to tell their friends about it after they leave.

People love this house. Google goes where the people are. So Google rewards it.

This is the website you need to look like to Google.

No fair, right? The big house gets all the advantages!

Wrong!

So now what the heck do I do?

A bunch of articles say that there’s absolutely nothing you can or should do to optimize your site for Rankbrain today, and for any future updates. I couldn’t disagree more.

If you want to rank better, you need to get more people to YOUR party. This is where CTR comes in.

It appears that Google RankBrain has been “inspired by” AdWords and many other technologies that look at user engagement signals to determine page quality and relevance. And RankBrain is learning how to assign ratings to pages that may have insufficient link or historical page data, but are relevant to a searcher’s query.

So how do you raise your CTRs? You should focus your efforts in four key areas:

  1. Optimize pages with low “organic Quality Scores.” Download all of your query data from Google Search Console. Sort your data, figure out which of your pages have below-average CTRs, and prioritize those — it’s far less risky to focus on fixing your losers because they have the most potential upside. None of these pages will get any love from RankBrain! (A rough sketch of pulling and sorting that query data appears after this list.)
  2. Combine your SEO keywords with emotional triggers to create irresistible headlines. Emotions like anger, disgust, affirmation, and fear are proven to increase click-through rates and conversion rates. If everyone who you want to beat already has crafted optimized title tags, then packing an emotional wallop will give you the edge you need and make your listing stand out.
  3. Increase other user engagement rates. Like click-through rate, we believe you need to have higher-than-expected engagement metrics (e.g. time on site, bounce rate — more on this in a future article). This is a critical relevance signal! Google knows the expected conversion and engagement rates based on a variety of factors (e.g. industry, query, location, time of day, device type). So create 10X content!
  4. Use social media ads and remarketing to increase search volume and CTR. Paid social ads and remarketing display ads can generate serious awareness and exposure for a reasonable cost (no more than $50 a day). If people aren’t familiar with your brand, bombard your target audience with Facebook and Twitter ads. People who are familiar with your brand are 2x more likely to click through and to convert.
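For item #1, here’s a rough sketch of pulling query data straight from the Search Console API and flagging the pages that fall below the average CTR for their position. The site URL, date range, and service-account key file are placeholders, and the baseline is simply the average CTR at each rounded position within your own data.

```python
# Sketch: download page-level data from the Search Console API and flag pages
# whose CTR is below the average for their position. Site URL, dates, and
# key.json are placeholders.
from collections import defaultdict
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=SCOPES)
service = build("searchconsole", "v1", credentials=credentials)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2024-01-01",
        "endDate": "2024-01-31",
        "dimensions": ["page"],
        "rowLimit": 5000,
    },
).execute()
rows = response.get("rows", [])

# Average CTR at each rounded position, used as the "expected" baseline.
totals = defaultdict(lambda: [0.0, 0])  # position -> [ctr_sum, count]
for row in rows:
    pos = round(row["position"])
    totals[pos][0] += row["ctr"]
    totals[pos][1] += 1
expected = {pos: ctr_sum / count for pos, (ctr_sum, count) in totals.items()}

# Pages falling below the expected CTR for their position are the ones to fix first.
for row in sorted(rows, key=lambda r: r["ctr"] - expected[round(r["position"])]):
    gap = row["ctr"] - expected[round(row["position"])]
    if gap < 0:
        print(f"{row['keys'][0]}  pos {row['position']:.1f}  "
              f"CTR {row['ctr']:.1%}  expected {expected[round(row['position'])]:.1%}")
```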

Key summary

Whether or not RankBrain becomes the most important ranking signal (and I believe it will be someday), it’s smart to ensure your pages get as many organic search clicks as possible. It means more people are visiting your site and it sends important signals to Google that your page is relevant and awesome.

Our research also shows that achieving above-expected user engagement metrics results in better organic rankings, which results in even more clicks to your site.

Don’t settle for average CTRs. Be a unicorn among a sea of donkeys! Raise your organic CTRs and engagement rates! Get optimizing now!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in Latest News | Comments Off

Beyond App Streaming & AMP: Connection Speed’s Impact on Mobile Search

Posted by Suzzicks

Most people in the digital community have heard of Facebook’s 2G Tuesdays. They were established to remind users that much of the world still accesses the Internet on slow 2G connections, rather than 3G, 4G, LTE or WiFi.

For an online marketer in the developed world, it’s easy to forget about slow connections, but Facebook is particularly sensitive to them. A very high portion of their traffic is mobile, and a large portion of their audience uses their mobile device as their primary access to the Internet, rather than a desktop or laptop.

Facebook and Google agree on this topic. Most digital marketers know that Google cares about latency and page speed, but many don’t realize that Google also cares about connection speed.

Last year they began testing their revived mobile transcoding service, which they call Google Web Light, to make websites faster in countries like India and Indonesia, where connection speed is a significant problem for a large portion of the population. They also recently added Data Saver Mode in Chrome, which has a similar impact on browsing.

AMP pages begin ranking in mobile results this month

This February, Google will begin ranking AMP pages in mobile search results. These will provide mobile users access to news articles that universally render in about one second. If you haven’t seen it yet, use this link on your phone to submit a news search, and see how fast AMP pages really are. The results are quite impressive.

In addition to making web pages faster, Google wants to make search results faster. They strive to provide results that send searchers to sites optimized for the device they’re searching from. They may alter mobile search results based on the connection speed of the searcher’s device.

To help speed up websites and search results at the same time, Google is also striving to make Chrome faster and lighter. They’re even trying to ensure that it doesn’t drain device batteries, which is something that Android users will especially appreciate! Updated versions of Chrome actually have a new compression method called Brotli, which promises to compress website files 26% more than previous versions of Chrome.

We’ll review the impact of Google’s tests on changing search results based on connection speed. We’ll outline how and why results from these tests could become more salient and impact search results at various different speeds. Finally, we’ll explain why Google has a strong incentive to push this type of initiative forward, and how it will impact tracking and attribution for digital marketers now and in the future.

The table below provides a sneak peek of the connection speeds at which various Google products are best accessed, and how these relationships will likely impact cross-device search results in the future.

| Connection Speed | Best for these Google Products | Impact on SERP |
| --- | --- | --- |
| WiFi & Fiber | Fiber, ChromeCast, ChromeCast Music, Google Play, Google Music, Google TV, ChromeBooks, Nest, YouTube, YouTube Red | Streaming Apps, Deep Linked Media Content |
| 3G, 4G, LTE | Android Phones, Android Wear, Android Auto, ChromeBooks, YouTube, YouTube Red | Standard Results, App Packs, Carousels, AMP Pages |
| 2G & Edge | Android Phones, Android Auto | Basic Results, Google Web Light, AMP Pages |

Basic vs. standard mobile search results

The image below shows the same search on the same phone. The phone on the right is set to search on EDGE speeds, and the one on the left is set to 4G/LTE. Google calls the EDGE search results “Basic,” and the 4G/LTE results “Standard.” They even include a note at the bottom of the page explaining “You’re seeing a basic version of this page because your connection is slow” with an option to “switch to standard version.” In some iterations of the message, this sentence was also included: “Google optimized some pages to use 80% less data, and rest are marked slow to load.”

Notice that the EDGE connection has results that are significantly less styled and interactive than the 4G/LTE results.

Serving different results for slower connection speeds is something that Google has tested before, but it’s a concept that seems to have been mostly dormant until the middle of last year, when these Basic results started popping up. Google quietly announced it on Google+, rather than with a blog post. These results are not currently re-creatable (at least for me), but the concept and eventual implementation of this kind of variability could have a significant impact on the SEO world, further eroding our ability to monitor keyword rankings effectively.

The presentation of the mobile search results isn’t all that’s changing. The websites included and the order in which they’re ranked change as well. Google knows that searchers with slow connections will have a bad experience if they try to download apps, so App Packs are not included in any Basic search results. That means a website ranking in position #7 in Standard search results (after the six apps in the App Pack) can switch to ranking number one in a Basic search. That’s great news if you’re the top website being pushed down by the App Pack!

The full list of search results is included below – items that appear in only one of the two result sets are marked with an asterisk (*).

| Standard Search Result “Superman Games” | Basic Search Result “Superman Games” |
| --- | --- |
| App – City Jump * | Web – herogamesworld.com>superman-games |
| App – Man of Steel * | Web – www.heroesarcade.com>play-free>sup… |
| App – Superman Homepage * | Web – LEGO>dccomicssuperheroes>games |
| App – Superbman * | Web – Wikipedia>wiki>List_of_Superman_vi… |
| App – Batman Arkham Origins * | Web – www.kidsgamesheroes.com>tags>supe… * |
| App – Subway Superman Run * | Web – YouTube>watch (Superman vs Hulk – O Combate – YouTube) * |
| Web – Herogamesworld.com>superman-games | Web – www.supermangames235.com * |
| Web – Heroesarcade.com>play-free>sub… | Web – fanfreegames.com > superman-games * |
| Web – Wikipedia>wiki>List_of_Superman_vi… | Web – moviepilot.com>posts > 2015/06/25 * |
| Web – LEGO>dccomicssuperheroes>games | Web – m.batmangamesonly.com > superman-ga… * |

You may have the urge to write this off, thinking all of your potential mobile customers have great phones and fast connections, but you’d be missing the bigger picture here.

First, slow connection speeds can happen to everyone: when they’re in elevators, basements, subways, buildings with thick walls, outside of city centers, or simply in places where the mobile connection is overloaded or bad. Regardless of where they are, users will still try to search, often ignorant of their connection speed.

Second, this testing probably indicates that connection speed is an entirely new variable which could even be described as a ranking factor.

Responsive design does not solve everything

Google’s desire to reach a growing number of devices might sound fantastic if you’re someone who’s recently updated a site to a responsive or adaptive design, but these new development techniques may have been a mixed blessing. Responsive design and adaptive design can be great, but they’re not a panacea, and have actually caused significant problems for Google’s larger goals.

Responsive sites face speed and development challenges.

Responsive design sites are generally slow, which means there is a strong chance that they won’t rank well in Basic search results. Responsive sites can be built to function much more quickly, but it can be an uphill battle for developers. They face an ever-growing set of expectations, frameworks are constantly changing, and they’re already struggling to cram extra functionality and design into clean, light, mobile-first designs.

They can have negative repercussions.

Despite Google’s insistence that responsive design is easier for them to crawl, many webmasters that transitioned saw losses in overall conversions and time-on-site. Their page speed and UX were both negatively impacted by the redesigns. Developers are again having to up their skills and focus on pre-loading, pre-rendering, and pre-fetching content in order to reduce latency — sometimes just to get it back to what it was before their sites went responsive. Others are now forced to create duplicate AMP pages, which only adds to the burden and frustration.

Wearables/interactive media pose new problems.

Beyond the UX and load time concerns, responsive design sites also don’t allow webmasters to effectively target these new growth channels that Google cares about — wearables and interactive media. Unfortunately, responsive design sites are nearly unusable on smartwatches, and probably always will be.

Similarly, Google is getting much more into media, linking search with large-screen TVs, but even when well-built, responsive design sites look wonky on popular wide-screen TVs. It seems that the development of mobile technology may have already out-paced Google’s recommended “ideal” solution.

Regardless, rankings on all of these new devices will likely be strongly influenced by the connection speed of the device.

Is AMP the future of mobile search for slow connections?

The good news is that AMP pages are great candidates for ranking in a Basic search result, because they work well over slow connections. They’ll also be useful on things like smart watches and TVs, as Google will be able to present the content in whichever format it deems appropriate for the device requesting it — thus allowing them to provide a good experience on a growing number of devices.

App streaming & connection speed

A couple months ago, Google announced the small group of apps in a beta test for App Streaming. In this test, apps are hosted and run from a virtual device in Google’s cloud. This allows users to access content in apps without having to download the app itself. Since the app is run in the cloud, over the web, it seems that this technology could eventually remove the OS barrier for apps — an Android app will be able to operate from the cloud on an iOS device, and an iOS app will be able to run on an Android device the same way. Great for both users and developers!

Since Google is quietly working on detecting and perfecting their connection-speed-based changes to the algorithm, it’s easy to see how this new ranking factor will be relied upon even more heavily when App Streaming becomes a reality. App Streaming will only work over WiFi, so Google will be able to leverage what it’s learned from Basic mobile results to provide yet another divergent set of results to devices that are on a WiFi connection.

The potential for App Streaming will make apps much more like websites, and deep links much more like…regular web links. In some ways, it may bring Google back to its “Happy Place,” where everything is device and OS-agnostic.

How do app plugins & deep links fit into the mix?

The App Streaming concept actually has a lot in common with the basic premise of the Chrome OS, which was native on ChromeBooks (but has now been unofficially retired and functionally replaced with the Android OS). The Chrome OS provided a simple software framework that relied heavily on the Chrome browser and cloud-based software and plugins. This allowed the device to leverage the software it already had, without adding significantly more to the local storage.

This echoes the plugin phenomenon that we’re seeing emerge in the mobile app world. Mobile operating systems and apps use deep links to treat other local apps as plugins. Options like emoji keyboards and image aggregators such as GIPHY can be downloaded and automatically pulled into the Facebook Messenger app.

Deep-linked plugins will go a long way toward freeing storage space and improving UX on users’ phones. That’s great, but App Streaming is also resource-intensive. One of the main problems with the Chrome OS was that it relied so heavily on WiFi connectivity — that’s relevant here, too.

What does music & video casting have to do with search?

Most of the apps that people engage with on a regular basis, for hours at a time, are media apps used over WiFi. Google wants to be able to index and rank that content as deep links, so that it can open and run in the appropriate app or plugin.

In fact, the indexing of deep-linked media content has already begun. The ChromeCast app uses new OS crawler capabilities in the Android Marshmallow OS to scan a user’s device for deep-linked media. It then creates a local cache of deep links to watched and un-watched media that a user might want to “cast” to another device, and organizes that cache to make it searchable.

For instance, if you want to watch a documentary on dogs, you could search your Netflix and Hulu apps, then maybe Amazon Instant Video, and maybe even the NBC, TLC, BBC, or PBS apps for a documentary on dogs.

Or, you could just do one search in the ChromeCast app and find all the documentaries on dogs that you can access. Assuming the deep links in those apps are set up correctly, you will be able to compare the selection across all the apps you have, choose one, and cast it. Again, this type of result is less relevant if you are on a 2G or 3G connection and thus not able to cast the media over WiFi.

This is an important move for Google. Recently, they’ve been putting a lot of time and energy into their media offerings. They successfully launched Chromecast 2 and Chromecast Music at about the same time as they dramatically improved their Google Music subscription service (a competitor to Spotify and Pandora) and launched YouTube Red (their rival for Hulu, Netflix, and Amazon Prime Video). They may eventually even begin to include the “cast” logo directly in SERPs, as they have in the default interface of Google+ and YouTube.

Google’s financial interest in adapting results by connectivity

Google’s interest in varying search results by connection speed is critical to their larger goals. A large portion of mobile searches are for entertainment, and the need for entertainment is unending and easy to monetize. Subscription models provide long-term stable revenue with minimal upkeep or effort from Google.

Additionally, the more time searchers spend consuming media, either by surfacing it in Google or the ChromeCast app, or through Now on Tap, the more Google can tailor its marketing messages to them.

Finally, the passive collection and aggregation of people’s consumption data also allows Google to quickly and easily evaluate which media is popular or growing in popularity, so they can tailor Google Play’s licensing strategy to meet users’ demands, improving the long-term value to their subscribers.

As another line of business, Google also offers ChromeCast Music and Google Music, which are subscription services designed to compete with Amazon Music and iTunes. You might think that all this streaming — streaming apps, streaming music, streaming video and casting it from one device to another — would slow down your home or office connection speed as a whole, and you would be right. However, Google has a long-term solution for that too: Google Fiber. The more reliant people become on streaming content from the cloud, the more important it will be for them to get on Google’s super-fast Internet grid. Then you can stream all you want, and Google can collect even more data and monetize as they see fit.


What’s the impact of connection variability in SERPS on SEO strategy & reporting?

So what might this mean for your mobile SEO strategy? Variability by connection speed will make mobile keyword rank reporting and attribution nearly impossible. Currently, most keyword reporting tools either work by aggregating ranking results that are reported from ISPs, or by submitting test queries and aggregating the results.

Unfortunately, while that’s usually sufficient for desktop reporting (though still error-prone and very difficult for highly local searches), it’s nearly impossible for mobile. All of the SEO keyword reporting tools out there are struggling to report on mobile search results, and none take connection speed into account. Most don’t even take OS into account, either, so App Packs and the website rankings around them are not accurately reported.

Similarly, most tools are not able to report on anything about deep links, so it’s hard to know if click-through traffic is even getting to the website, or if it might be getting to a deep screen in an app instead. In short, ranking tools have a long way to go before they will be accurate in mobile, and this additional factor makes the reporting even harder.

In mobile, there are additional factors that can change the mobile rankings and click-through rates dramatically:

  • Localization
  • Featured Rich Snippets (Answer Boxes)
  • Results that are interactive directly in the SERP (playable YouTube videos, news, Twitter and image carousels)
  • AJAX expansion opportunities

All of these things are nightmares for the developers who write ranking software that scrapes search results. Even worse, Google is constantly testing new presentation schemes, so even if the tools could briefly get it right, they risk a constant game of catch-up.

One of the reasons Google is constantly testing new presentation schemes? They’re trying to make their search results work on an ever-growing list of new devices while minimizing the need for additional page loads or clicks.

If you think about a traditional set of search results, they’re an ordered list that goes from top to bottom. Google has gotten so fast that the ten-link restriction actually hurts the user experience when the mobile connection is good.

In response, Google has started to include carousels that scroll left to right. Only one or two search results can show on a smart watch at one time, so this feature allows searchers to delve deeper into a specific type of result without the additional click or page load.

However, carousels don’t appear in Basic search results. Also, a carousel only counts as one result in the vertical list, but can add as many as 5 or 10 results to the page. Again, SEOs and SEO software really haven’t settled on a way to represent this effectively in their tracking, and little has been reported about the impact on CTR for either the items in the carousel or the items below it.

Conclusion

Speed matters.

Not just latency and page speed, but also connection speed. While we can’t directly impact the connection speed of our mobile users, we should at least anticipate that search results might vary based on the use-case of their search and strategize accordingly.

In the meantime, SEOs and digital marketers should be wary of tools that report mobile keyword rankings without specifying things like OS, app pack rankings, location and, eventually, connection speed.



Moz Blog

Posted in Latest NewsComments Off

The Machine Learning Revolution: How it Works and its Impact on SEO

Posted by EricEnge

Machine learning is already a very big deal. It’s here, and it’s in use in far more businesses than you might suspect. A few months back, I decided to take a deep dive into this topic to learn more about it. In today’s post, I’ll dive into a certain amount of technical detail about how it works, but I also plan to discuss its practical impact on SEO and digital marketing.

For reference, check out Rand Fishkin’s presentation about how we’ve entered into a two-algorithm world. Rand addresses the impact of machine learning on search and SEO in detail in that presentation; I’ll talk more about that later.

For fun, I’ll also include a tool that allows you to predict your chances of getting a retweet based on a number of things: your Followerwonk Social Authority, whether you include images, hashtags, and several other similar factors. I call this tool the Twitter Engagement Predictor (TEP). To build the TEP, I created and trained a neural network. The tool will accept input from you, and then use the neural network to predict your chances of getting an RT.

The TEP leverages the data from a study I published in December 2014 on Twitter engagement, where we reviewed information from 1.9M original tweets (as opposed to RTs and favorites) to see what factors most improved the chances of getting a retweet.

My machine learning journey

I got my first meaningful glimpse of machine learning back in 2011 when I interviewed Google’s Peter Norvig, and he told me how Google had used it to teach Google Translate.

Basically, they looked at all the language translations they could find across the web and learned from them. This is a very intense and complicated example of machine learning, and Google had deployed it by 2011. Suffice it to say that all the major market players — such as Google, Apple, Microsoft, and Facebook — already leverage machine learning in many interesting ways.

Back in November, when I decided I wanted to learn more about the topic, I started doing a variety of searches of articles to read online. It wasn’t long before I stumbled upon this great course on machine learning on Coursera. It’s taught by Andrew Ng of Stanford University, and it provides an awesome, in-depth look at the basics of machine learning.

Warning: This course is long (19 total sections with an average of more than one hour of video each). It also requires an understanding of calculus to get through the math. In the course, you’ll be immersed in math from start to finish. But the point is this: If you have the math background, and the determination, you can take a free online course to get started with this stuff.

In addition, Ng walks you through many programming examples using a language called Octave. You can then take what you’ve learned and create your own machine learning programs. This is exactly what I have done in the example program included below.

Basic concepts of machine learning

First of all, let me be clear: this process didn’t make me a leading expert on this topic. However, I’ve learned enough to provide you with a serviceable intro to some key concepts. You can break machine learning into two classes: supervised and unsupervised. First, I’ll take a look at supervised machine learning.

Supervised machine learning

At its most basic level, you can think of supervised machine learning as creating a series of equations to fit a known set of data. Let’s say you want an algorithm to predict housing prices (an example that Ng uses frequently in the Coursera classes). You might get some data that looks like this (note that the data is totally made up):

In this example, we have (fictitious) historical data that indicates the price of a house based on its size. As you can see, the price tends to go up as house size goes up, but the data does not fit into a straight line. However, you can calculate a straight line that fits the data pretty well, and that line might look like this:

This line can then be used to predict the pricing for new houses. We treat the size of the house as the “input” to the algorithm and the predicted price as the “output.” For example, if you have a house that is 2,600 square feet, you can read its approximate predicted price straight off the fitted line.

However, this model turns out to be a bit simplistic. There are other factors that can play into housing prices, such as the total rooms, number of bedrooms, number of bathrooms, and lot size. Based on this, you could build a slightly more complicated model, with a table of data similar to this one:

Already you can see that a simple straight line will not do, as you’ll have to assign weights to each factor to come up with a housing price prediction. Perhaps the biggest factors are house size and lot size, but rooms, bedrooms, and bathrooms all deserve some weight as well (all of these would be considered new “inputs”).

Even now, we’re still being quite simplistic. Another huge factor in housing prices is location. Pricing in Seattle, WA is different than it is in Galveston, TX. Once you attempt to build this algorithm on a national scale, using location as an additional input, you can see that it starts to become a very complex problem.

You can use machine learning techniques to solve any of these three types of problems. In each of these examples, you’d assemble a large data set of examples, which can be called training examples, and run a set of programs to design an algorithm to fit the data. This allows you to submit new inputs and use the algorithm to predict the output (the price, in this case). Using training examples like this is what’s referred to as “supervised machine learning.”
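
The course examples use Octave, but here is a minimal sketch of the same fit-and-predict workflow in Python with scikit-learn. Like the charts above, every number in it is made up; the only point is that known training examples go in and a predicted price comes out.

```python
# A minimal supervised-learning sketch: fit a linear model to made-up housing
# data and predict the price of a new house. All numbers are invented.
from sklearn.linear_model import LinearRegression

# Each training example: [sq ft, total rooms, bedrooms, bathrooms, lot size (sq ft)]
X_train = [
    [1500, 6, 3, 2,  5000],
    [2100, 7, 4, 2,  6500],
    [2600, 8, 4, 3,  8000],
    [3200, 9, 5, 3, 10000],
]
y_train = [220_000, 310_000, 370_000, 450_000]  # known prices (the "outputs")

model = LinearRegression()
model.fit(X_train, y_train)          # "training": fit weights to the known data

new_house = [[2600, 8, 4, 2, 7500]]  # a new "input" we want a prediction for
print(model.predict(new_house))      # predicted price in dollars
```

Add more inputs (location, for instance, encoded numerically) and the workflow stays the same; only the feature vector grows.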

Classification problems

This a special class of problems where the goal is to predict specific outcomes. For example, imagine we want to predict the chances that a newborn baby will grow to be at least 6 feet tall. You could imagine that inputs might be as follows:

The output of this algorithm might be a 0 if the person was going to be shorter than 6 feet tall, or 1 if they were going to be 6 feet or taller. What makes it a classification problem is that you are putting the input items into one specific class or another. For the height prediction problem as I described it, we are not trying to guess the precise height, but to make a simple over/under 6 feet prediction.

Some examples of more complex classifying problems are handwriting recognition (recognizing characters) and identifying spam email.
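
A minimal classification sketch along the same lines might look like the following. Since the input table isn’t reproduced here, the features (parents’ heights, birth weight, birth length) are only plausible guesses, and every value is invented; the output is the 0/1 class described above.

```python
# A minimal classification sketch: predict whether a newborn will reach
# 6 feet (1) or not (0). Features and values are invented:
# [father's height (in), mother's height (in), birth weight (lb), birth length (in)]
from sklearn.linear_model import LogisticRegression

X_train = [
    [70, 63, 7.1, 19],
    [75, 68, 8.6, 21],
    [66, 62, 6.8, 19],
    [73, 67, 8.1, 21],
    [68, 64, 7.4, 20],
    [76, 70, 9.0, 22],
]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = grew to be 6 feet or taller

clf = LogisticRegression()
clf.fit(X_train, y_train)

print(clf.predict([[74, 66, 8.3, 21]]))        # predicted class: 0 or 1
print(clf.predict_proba([[74, 66, 8.3, 21]]))  # probability of each class
```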

Unsupervised machine learning

Unsupervised machine learning is used in situations where you don’t have training examples. Basically, you want to try and determine how to recognize groups of objects with similar properties. For example, you may have data that looks like this:

The algorithm will then attempt to analyze this data and find out how to group them together based on common characteristics. Perhaps in this example, all of the red “x” points in the following chart share similar attributes:

However, the algorithm may have trouble recognizing outlier points, and may group the data more like this:

What the algorithm has done is find natural groupings within the data, but unlike supervised learning, it had to determine the features that define each group. One industry example of unsupervised learning is Google News. For example, look at the following screen shot:

You can see that the main news story is about Iran holding 10 US sailors, but there are also related news stories shown from Reuters and Bloomberg (circled in red). The grouping of these related stories is an unsupervised machine learning problem, where the algorithm learns to group these items together.
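
Here is what that looks like in miniature: a clustering algorithm such as k-means is handed unlabeled points and has to discover the groupings (and decide what to do with outliers) on its own. The points below are invented, and a real system like the Google News grouping is vastly more sophisticated, but the key difference from the earlier examples is that no “correct answers” are supplied.

```python
# A minimal unsupervised-learning sketch: no labels are provided, so KMeans
# has to discover the groupings on its own. The 2-D points are invented.
from sklearn.cluster import KMeans

points = [
    [1.0, 1.2], [0.8, 1.0], [1.1, 0.9],   # one natural cluster
    [5.0, 5.2], [5.3, 4.9], [4.8, 5.1],   # another natural cluster
    [9.0, 0.5],                           # an outlier the algorithm must place somewhere
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(points)
print(labels)                   # which cluster each point was assigned to
print(kmeans.cluster_centers_)  # the centers the algorithm discovered
```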

Other industry examples of applied machine learning

A great example of a machine learning algo is the Author Extraction algorithm that Moz has built into their Moz Content tool. You can read more about that algorithm here. The referenced article outlines in detail the unique challenges that Moz faced in solving that problem, as well as how they went about solving it.

As for Stone Temple Consulting’s Twitter Engagement Predictor, this is built on a neural network. A sample screen for this program can be seen here:

The program makes a binary prediction as to whether you’ll get a retweet or not, and then provides you with a percentage probability for that prediction being true.

For those who are interested in the gory details, the neural network configuration I used was six input units, fifteen hidden units, and two output units. The algorithm used one million training examples and two hundred training iterations. The training process required just under 45 billion calculations.
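
For the curious, here is a rough approximation of that network shape using scikit-learn rather than Octave. This is not the actual TEP code: the exact feature encoding is an assumption (Social Authority plus the tweet attributes discussed in this post, such as images, URLs, @mentions, hashtags, and tweet length), and the random placeholder data below stands in for the one million real tweets.

```python
# A rough approximation of the network shape described above (6 inputs, one
# hidden layer of 15 units, 2 outputs), built with scikit-learn instead of
# Octave. NOT the actual TEP code: placeholder data only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 6))      # 1,000 fake tweets x 6 features (real model: 1M tweets)
y = rng.integers(0, 2, 1000)   # 1 = got a retweet, 0 = did not (fake labels)

net = MLPClassifier(hidden_layer_sizes=(15,),  # fifteen hidden units
                    max_iter=200,              # two hundred training iterations
                    random_state=0)
net.fit(X, y)

new_tweet = rng.random((1, 6))       # one new (fake) tweet feature vector
print(net.predict(new_tweet))        # binary retweet / no-retweet prediction
print(net.predict_proba(new_tweet))  # probability of [no retweet, retweet]
```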

One thing that made this exercise interesting is that there are many conflicting data points in the raw data. Here’s an example of what I mean:

What this shows is the data for people with Followerwonk Social Authority between 0 and 9, and a tweet with no images, no URLs, no @mentions of other users, two hashtags, and between zero and 40 characters. We had 1156 examples of such tweets that did not get a retweet, and 17 that did.

The most desirable outcome for the resulting algorithm is to predict that these tweets will not get a retweet, so that would make it wrong 1.4% of the time (17 times out of 1173). Note that the resulting neural network assesses the probability of getting a retweet at 2.1%.

I did a calculation to tabulate how many of these cases existed. I found that we had 102,045 individual training examples where it was desirable to make the wrong prediction, or just slightly over 10% of all our training data. What this means is that the best the neural network will be able to do is make the right prediction just under 90% of the time.

I also ran two other sets of data (470K and 473K samples in size) through the trained network to see the accuracy level of the TEP. I found that it was 81% accurate in its absolute (yes/no) prediction of the chance of getting a retweet. Bearing in mind that those also had approximately 10% of the samples where making the wrong prediction is the right thing to do, that’s not bad! And, of course, that’s why I show the percentage probability of a retweet, rather than a simple yes/no response.

Try the predictor yourself and let me know what you think! (You can discover your Social Authority by heading to Followerwonk and following these quick steps.) Mind you, this was simply an exercise for me to learn how to build out a neural network, so I recognize the limited utility of what the tool does — no need to give me that feedback ;->.

Examples of algorithms Google might have or create

So now that we know a bit more about what machine learning is about, let’s dive into things that Google may be using machine learning for already:

Penguin

One approach to implementing Penguin would be to identify a set of link characteristics that could potentially be an indicator of a bad link, such as these:

  1. External link sitting in a footer
  2. External link in a right side bar
  3. Proximity to text such as “Sponsored” (and/or related phrases)
  4. Proximity to an image with the word “Sponsored” (and/or related phrases) in it
  5. Grouped with other links with low relevance to each other
  6. Rich anchor text not relevant to page content
  7. External link in navigation
  8. Implemented with no user visible indication that it’s a link (i.e. no line under it)
  9. From a bad class of sites (from an article directory, from a country where you don’t do business, etc.)
  10. …and many other factors

Note that any one of these things isn’t necessarily inherently bad for an individual link, but the algorithm might start to flag sites if a significant portion of all of the links pointing to a given site have some combination of these attributes.

What I outlined above would be a supervised machine learning approach where you train the algorithm with known bad and good links (or sites) that have been identified over the years. Once the algo is trained, you would then run other link examples through it to calculate the probability that each one is a bad link. Based on the percentage of links (and/or total PageRank) coming from bad links, you could then make a decision to lower the site’s rankings, or not.
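
As a purely illustrative sketch of that supervised approach (and in no way Google’s actual implementation), you could encode each known link as a vector of features like the ones listed above and train a classifier to output the probability that a new link is bad:

```python
# Illustrative only -- not Google's implementation. Each known link is encoded
# with a handful of 0/1 features drawn from the list above: [in footer,
# in sidebar, near "Sponsored", rich off-topic anchor text, in navigation,
# from a bad class of site]. Label: 1 = known bad link, 0 = known good link.
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [1, 0, 1, 1, 0, 1],  # footer link near "Sponsored" with rich anchor text
    [0, 1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],  # ordinary editorial, in-content link
    [0, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0, 0],
]
y_train = [1, 1, 1, 0, 0, 0]

link_model = RandomForestClassifier(n_estimators=100, random_state=0)
link_model.fit(X_train, y_train)

new_link = [[1, 0, 1, 0, 0, 1]]                  # features of a link to score
print(link_model.predict_proba(new_link)[0][1])  # estimated probability it's a bad link
```

The site-level decision would then hinge on what share of the site’s links (or of its link-based PageRank) score above some “probably bad” threshold, as described above.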

Another approach to this same problem would be to start with a database of known good links and bad links, and then have the algorithm automatically determine the characteristics (or features) of those links. These features would probably include factors that humans may not have considered on their own.

Panda

Now that you’ve seen the Penguin example, this one should be a bit easier to think about. Here are some things that might be features of sites with poor-quality content:

  1. Small number of words on the page compared to competing pages
  2. Low use of synonyms
  3. Overuse of main keyword of the page (from the title tag)
  4. Large blocks of text isolated at the bottom of the page
  5. Lots of links to unrelated pages
  6. Pages with content scraped from other sites
  7. …and many other factors

Once again, you could start with a known set of good sites and bad sites (from a content perspective) and design an algorithm to determine the common characteristics of those sites.
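
To keep the illustration going (and again, this is not Panda itself), the first step would be turning each page into a vector of content-quality features like the ones above. Something as simple as this could compute a few of them before they are handed to the same kind of classifier used in the Penguin sketch:

```python
# Illustrative only -- computes a few of the content signals listed above for
# a single page: word count, density of title keywords in the body, and a
# count of outbound links. A real system would compute many more features at
# scale and feed them into a trained classifier.
import re

def content_features(title, body_text, outbound_links):
    words = re.findall(r"[a-z0-9']+", body_text.lower())
    title_words = set(re.findall(r"[a-z0-9']+", title.lower()))
    keyword_hits = sum(1 for word in words if word in title_words)
    return {
        "word_count": len(words),
        "title_keyword_density": keyword_hits / max(len(words), 1),
        "outbound_link_count": len(outbound_links),
    }

print(content_features(
    title="Blue Widgets",
    body_text="Buy blue widgets here. Blue widgets are the best blue widgets.",
    outbound_links=["http://example.com/a", "http://example.com/b"],
))
```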

As with the Penguin discussion above, I’m in no way representing that these are all parts of Panda — they’re just meant to illustrate the overall concept of how it might work.

How machine learning impacts SEO

The key to understanding the impact of machine learning on SEO is understanding what Google (and other search engines) want to use it for. A key insight is that there’s a strong correlation between Google providing high-quality search results and the revenue they get from their ads.

Back in 2009, Bing and Google performed some tests that showed how even introducing small delays into their search results significantly impacted user satisfaction. In addition, those results showed that with lower satisfaction came fewer clicks and lower revenues:

The reason behind this is simple. Google has other sources of competition, and this goes well beyond Bing. Texting friends for their input is one form of competition. So are Facebook, Apple/Siri, and Amazon. Alternative sources of information and answers exist for users, and they are working to improve the quality of what they offer every day. So must Google.

I’ve already suggested that machine learning may be a part of Panda and Penguin, and it may well be a part of the “Search Quality” algorithm. And there are likely many more of these types of algorithms to come.

So what does this mean?

Given that higher user satisfaction is of critical importance to Google, it means that content quality and user satisfaction with the content of your pages must now be treated by you as an SEO ranking factor. You’re going to need to measure it, and steadily improve it over time. Some questions to ask yourself include:

  1. Does your page meet the intent of a large percentage of visitors to it? If a user is interested in that product, do they need help in selecting it? Learning how to use it?
  2. What about related intents? If someone comes to your site looking for a specific product, what other related products could they be looking for?
  3. What gaps exist in the content on the page?
  4. Is your page a higher-quality experience than that of your competitors?
  5. What’s your strategy for measuring page performance and improving it over time?

There are many ways that Google can measure how good your page is, and use that to impact rankings. Here are some of them:

  1. When users arrive on your page after clicking on a SERP listing, how long do they stay? How does that compare to competing pages?
  2. What is the relative rate of CTR on your SERP listing vs. competition?
  3. What volume of brand searches does your business get?
  4. If you have a page for a given product, do you offer thinner or richer content than competing pages?
  5. When users click back to the search results after visiting your page, do they behave like their task was fulfilled? Or do they click on other results or enter followup searches?

For more on how content quality and user satisfaction has become a core SEO factor, please check out the following:

  1. Rand’s presentation on a two-algorithm world
  2. My article on Term Frequency Analysis
  3. My article on Inverse Document Frequency
  4. My article on Content Effectiveness Optimization

Summary

Machine learning is becoming highly prevalent. The barrier to learning basic algorithms is largely gone. All the major players in the tech industry are leveraging it in some manner. Here’s a little bit on what Facebook is doing, and machine learning hiring at Apple. Others are offering platforms to make implementing machine learning easier, such as Microsoft and Amazon.

For people involved in SEO and digital marketing, you can expect that these major players are going to get better and better at leveraging these algorithms to help them meet their goals. That’s why it will be of critical importance to tune your strategies to align with the goals of those organizations.

In the case of SEO, machine learning will steadily increase the importance of content quality and user experience over time. For you, that makes it time to get on board and make these factors a key part of your overall SEO strategy.



Moz Blog

Posted in Latest NewsComments Off
