Exploring Google’s New Carousel Featured Snippet

Posted by TheMozTeam

Google let it be known earlier this year that snippets were a-changin’. And true to their word, we’ve seen them make two major updates to the feature — all in an attempt to answer more of your questions.

We first took you on a deep dive of double featured snippets, and now we’re taking you for a ride on the carousel snippet. We’ll explore how it behaves in the wild and which of its snippets you can win.

For your safety, please remain seated and keep your hands, arms, feet, and legs inside the vehicle at all times!

What a carousel snippet is and how it works

This particular snippet holds the answers to many different questions and, as the name suggests, employs carousel-like behaviour in order to surface them all.

When you click one of the “IQ-bubbles” that run along the bottom of the snippet, JavaScript takes over and replaces the initial “parent” snippet with one that answers a brand new query. This query is a combination of your original search term and the text of the IQ-bubble.

So, if you searched [savings account rates] and clicked the “capital one” IQ-bubble, you’d be looking at a snippet for “savings account rates capital one.” That said, 72.06 percent of the time, natural language processing will step in here and produce something more sensible, like “capital one savings account rates.”
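To make the mechanics concrete, here’s a minimal sketch of how a bubble query could be assembled (this is our illustration of the behaviour described above, not Google’s actual code):

```python
def bubble_query(original_query: str, bubble_text: str, reorder: bool = False) -> str:
    """Combine an original search term with an IQ-bubble's text.

    The `reorder` flag stands in for the NLP step that sometimes
    rearranges the terms into a more natural phrase.
    """
    if reorder:
        return f"{bubble_text} {original_query}"
    return f"{original_query} {bubble_text}"

print(bubble_query("savings account rates", "capital one"))
# -> "savings account rates capital one"
print(bubble_query("savings account rates", "capital one", reorder=True))
# -> "capital one savings account rates"
```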

On the new snippet, the IQ-bubbles sit at the top, making room for the “Search for” link at the bottom. The link is the bubble snippet’s query and, when clicked, becomes the search query of a whole new SERP — a bit of fun borrowed from the “People also ask” box.

You can blame the ludicrous “IQ-bubble” name on Google — it’s the class name they gave these elements in the SERP’s HTML. We have heard them referred to as “refinement” bubbles or “related search” bubbles, but we don’t like either, because we’ve seen them both refine and relate. IQ-bubble it is.

There are now 6 times the number of snippets on a SERP

Back in April, we sifted through every SERP in STAT to see just how large the initial carousel rollout was. Turns out, it made a decent-sized first impression.

Carousel snippets appeared only in America: we discovered 40,977 desktop and mobile SERPs carrying them, a hair over 9 percent of the US-en market. When we peeked again at the beginning of August, carousel snippets had grown by half but still had yet to reach non-US markets.

Since one IQ-bubble equals one snippet, we deemed it essential to count every single bubble we saw. All told, there were a dizzying 224,508 IQ-bubbles on our SERPs. This means that 41,000 keywords managed to produce over 220,000 extra featured snippets. We’ll give you a minute to pick your jaw up off the floor.

The lowest and most common number of bubbles we saw on a carousel snippet was three, and the highest was 10. The average number of bubbles per carousel snippet was 5.48 — an IQ of five if you round to the nearest whole bubble (they’re not that smart).
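If you want to sanity-check the averages on your own tracking data, the arithmetic is trivial. A quick sketch using our aggregate figures (the per-SERP counts below are hypothetical):

```python
# Aggregates from our crawl
total_bubbles = 224_508
carousel_serps = 40_977

print(f"{total_bubbles / carousel_serps:.2f} bubbles per carousel")  # 5.48

# With raw per-SERP counts, min/max/average fall out the same way.
# (Hypothetical data; one entry per carousel snippet.)
bubble_counts = [3, 5, 6, 10, 3, 5]
print(min(bubble_counts), max(bubble_counts), sum(bubble_counts) / len(bubble_counts))
```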

Depending on whether you’re a glass-half-full or a glass-half-empty kind of person, this either makes for a lot of opportunity or a lot of competition, right at the top of the SERP.

Most bubble-snippet URLs are nowhere else on the SERP

When we’ve looked at “normal” snippets in the past, we’ve always been able to find the organic results that they’ve been sourced from. This wasn’t the case with carousel snippets — we could only find 10.76 percent of IQ-bubble URLs on the 100-result SERP. This left 89.24 percent unaccounted for, which is a metric heck-tonne of new results to contend with.
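If you track your own SERPs, this overlap check reduces to set membership. A minimal sketch, assuming you’ve collected each SERP’s bubble URLs and its 100 organic URLs:

```python
def bubble_overlap(bubble_urls: set[str], organic_urls: set[str]) -> float:
    """Fraction of bubble-snippet URLs also present in the organic results."""
    if not bubble_urls:
        return 0.0
    return len(bubble_urls & organic_urls) / len(bubble_urls)

# Toy data for illustration
bubbles = {"https://a.com/rates", "https://b.com/guide", "https://c.com/faq"}
organic = {"https://a.com/rates", "https://d.com/", "https://e.com/"}
print(f"{bubble_overlap(bubbles, organic):.2%}")  # 33.33%
```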

Concerned about the potential competitor implications of this, we decided to take a gander at ownership at the domain level.

Turns out things weren’t so bad. 63.05 percent of bubble snippets had come from sites that were already competing on the SERP — Google was just serving more varied content from them. It does mean, though, that there was a brand new competitor jumping onto the SERP 36.95 percent of the time. Which isn’t great.

Just remember: these new pages or competitors aren’t there to answer the original search query. Sometimes you’ll be able to expand your content in order to tackle those new topics and snag a bubble snippet, and sometimes they’ll be beyond your reach.

So, when IQ-bubble snippets do bother to source from the same SERP, what ranks do they prefer? Here we saw another big departure from what we’re used to.

Normally, 97.88 percent of snippets source from the first page, and 29.90 percent pull from rank three alone. With bubble snippets, only 36.58 percent of URLs came from the top 10 ranks. And while rank three was still the single most popular source position, just under five percent of bubble snippets pulled from it.

We could apply the always-helpful “just rank higher” rule here, but there appear to be plenty of exceptions to it. A top 10 spot just isn’t as essential to landing a bubble snippet as it is for a regular snippet.

We think this is due to relevancy: Because bubble snippet queries only relate to the original search term — they’re not attempting to answer it directly — it makes sense that their organic URLs wouldn’t rank particularly high on the SERP.

Multi-answer ownership is possible

Next we asked ourselves: can you own more than one answer on a carousel snippet? The answer was a resounding yes: you most definitely can.

First we discovered that you can own both the parent snippet and a bubble snippet. We saw this occur on 16.71 percent of our carousel snippets.

Then we found that owning multiple bubbles is also a thing that can happen. Just over half (57.37 percent) of our carousel snippets had two or more IQ-bubbles that sourced from the same domain. And as many as 2.62 percent had a domain that owned every bubble present — and most of those were 10-bubble snippets!

Folks, it’s even possible for a single URL to own more than one IQ-bubble snippet, and it’s less rare than we’d have thought — 4.74 percent of bubble snippets in a carousel share a URL with a neighboring bubble.
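Checking your own tracking data for multi-bubble ownership boils down to grouping bubble URLs by domain. A rough sketch (the helper and field names are ours):

```python
from collections import Counter
from urllib.parse import urlparse

def ownership_stats(bubble_urls: list[str]) -> dict:
    """Summarize domain/URL ownership across one carousel's bubbles."""
    domains = Counter(urlparse(u).netloc for u in bubble_urls)
    urls = Counter(bubble_urls)
    return {
        "multi_bubble_domain": any(n >= 2 for n in domains.values()),      # one domain, 2+ bubbles
        "single_domain_sweep": len(domains) == 1 and len(bubble_urls) > 1, # one domain owns them all
        "repeated_url": any(n >= 2 for n in urls.values()),                # one URL, 2+ bubbles
    }

carousel = [
    "https://site-a.com/page-1",
    "https://site-a.com/page-2",
    "https://site-b.com/page",
]
print(ownership_stats(carousel))
# {'multi_bubble_domain': True, 'single_domain_sweep': False, 'repeated_url': False}
```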

This raises the same obvious question that finding two snippets on the SERP did: Is your content ready to pull multi-snippet duty?

“Search for” links don’t tend to surface the same snippet on the new SERP

Since bubble snippets are technically providing answers to questions different from the original search term, we looked into what shows up when the bubble query is the keyword being searched.

Specifically, we wanted to see if, when we click the “Search for” link in a bubble snippet, the subsequent SERP 1) had a featured snippet and 2) had a featured snippet that matched the bubble snippet from whence it came.

To do this, we re-tracked our 40,977 SERPs and then tracked their 224,508 bubble “Search for” terms to ensure everything was happening at the same time.

The answers to our two pressing questions were thus:

  1. Strange but true: even though the bubble query was snippet-worthy on the first, related SERP, it wasn’t always snippet-worthy on its own SERP. 18.72 percent of “Search for” links didn’t produce a featured snippet on the new SERP.
  2. Stranger still, 78.11 percent of the time, the bubble snippet and its snippet on the subsequent SERP weren’t a match — Google surfaced two different answers for the same question. In fact, the bubble URL only showed up in the top 20 results on the new SERP 31.68 percent of the time.

If we’re being honest, we’re not exactly sure what to make of all this. If you own the bubble snippet but not the snippet on the subsequent SERP, you’re clearly on Google’s radar for that keyword — but does that mean you’re next in line for full snippet status?

And if the roles are reversed, you own the snippet for the keyword outright but not when it’s in a bubble, is your snippet in jeopardy? Let us know what you think!

Paragraph and list formatting reign supreme (still!)

Last, and somewhat least, we took a look at the shape all these snippets were turning up in.

When it comes to the parent snippet, Heavens to Betsy if we weren’t surprised. For the first time ever, we saw an almost even split between paragraph and list formatting. Bubble snippets, on the other hand, went on to match the trend we’re used to seeing in regular ol’ snippets:

We also discovered that bubble snippets aren’t beholden to one type of formatting even in their carousel. 32.21 percent of our carousel snippets did return bubbles with one format, but 59.71 percent had two and 8.09 percent had all three. This tells us that it’s best to pick the most natural format for your content.

Get cracking with carousel snippet tracking

If you can’t wait to get your mittens on carousel snippets, we track them in STAT, so you’ll know every keyword they appear for and have every URL housed within.

If you’d like to learn more about SERP feature tracking and strategizing, say hello and request a demo!


This article was originally published on the STAT blog on September 13, 2018.


Google’s August 1st Core Update: Week 1

Posted by Dr-Pete

On August 1, Google (via Danny Sullivan’s @searchliaison account) announced that they released a “broad core algorithm update.” Algorithm trackers and webmaster chatter confirmed multiple days of heavy ranking flux, including our own MozCast system:

Temperatures peaked on August 1-2 (both around 114°F), with a 4-day period of sustained rankings flux (purple bars are all over 100°F). While this has settled somewhat, yesterday’s data suggests that we may not be done.

August 2nd set a 2018 record for MozCast at 114.4°F. Keep in mind that, while MozCast was originally tuned to an average temperature of 70°F, 2017-2018 average temperatures have been much higher (closer to 90° in 2018).

Temperatures by Vertical

There’s been speculation that this algo update targeted so-called YMYL queries (Your Money or Your Life) and disproportionately impacted health and wellness sites. MozCast is broken up into 20 keyword categories (roughly corresponding to Google Ads categories). Here are the August 2nd temperatures by category:

At first glance, the “Health” category does appear to be the most impacted. Keywords in that category had a daily average temperature of 124°F. Note, though, that all categories showed temperatures over 100°F on August 2nd – this isn’t a situation where one category was blasted and the rest were left untouched. It’s also important to note that this pattern shifted during the other three days of heavy flux, with other categories showing higher average temperatures. The multi-day update impacted a wide range of verticals.

Top 30 winners

So, who were the big winners (so far) of this update? I always hesitate to do a winners/losers analysis – while useful, especially for spotting patterns, there are plenty of pitfalls. First and foremost, a site can gain or lose SERP share for many reasons that have nothing to do with algorithm updates. Second, any winners/losers analysis is only a snapshot in time (and often just one day).

Since we know that this update spanned multiple days, I’ve decided to look at the percentage increase (or decrease) in SERP share between July 31st and August 7th. In this analysis, “Share” is a raw percentage of page-1 rankings in the MozCast 10K data set. I’ve limited this analysis to only sites that had at least 25 rankings across our data set on July 31 (below that the data gets very noisy). Here are the top 30…

The first column is the percentage increase across the 7 days. The final column is the overall share – this is very low for all but mega-sites (Wikipedia hovers in the colossal 5% range).

Before you over-analyze, note the second column – this is the percent change from the highest July SERP share for that site. What the 7-day share doesn’t tell us is whether the site is naturally volatile. Look at Time.com (#27) for a stark example. Time Magazine saw a +19.5% lift over the 7 days, which sounds great, except that they landed on a final share that was down 54.4% from their highest point in July. As a news site, Time’s rankings are naturally volatile, and it’s unclear whether this has much to do with the algorithm update.

Similarly, LinkedIn, AMC Theaters, OpenTable, World Market, MapQuest, and RE/MAX all show highs in July that were near or above their August 7th peaks. Take their gains with a grain of salt.
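If you’d like to run this winners/losers math on your own rank-tracking data, the core calculation is a pair of percentage changes. A sketch under assumed field names (this is not MozCast’s actual schema):

```python
def share_changes(site: dict) -> dict:
    """7-day SERP-share change, plus change vs. the site's July high."""
    if site["rankings"] < 25:  # below this, the data gets too noisy
        return {}
    return {
        "7_day_pct": (site["aug_7"] - site["july_31"]) / site["july_31"] * 100,
        "vs_july_high_pct": (site["aug_7"] - site["july_high"]) / site["july_high"] * 100,
    }

# A Time.com-style case: a healthy 7-day lift that still sits far below the July high
print(share_changes({"july_31": 0.042, "aug_7": 0.050, "july_high": 0.110, "rankings": 31}))
# {'7_day_pct': 19.04..., 'vs_july_high_pct': -54.54...}
```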

Top 30 losers

We can run the same analysis for the sites that lost the most ground. In this case, the “Max %” is calculated against the July low. Again, we want to be mindful of any site where the 7-day drop looks a lot different than the drop from that site’s July low-point…

Comparing the first two columns, Verywell Health immediately stands out. While the site ended the 7-day period down 52.3%, it was up just over 200% from July lows. It turns out that this site was sitting very low during the first week of July and then saw a jump in SERP share. Interestingly, Verywell Family and Verywell Fit also appear on our top 30 losers list, suggesting that there’s a deeper story here.

Anecdotally, it’s easy to spot a pattern of health and wellness sites in this list, including big players like Prevention and LIVESTRONG. Whether this list represents the entire world of sites hit by the algorithm update is impossible to say, but our data certainly seems to echo what others are seeing.

Are you what you E-A-T?

There’s been some speculation that this update is connected to Google’s recent changes to their Quality Rater Guidelines. While it’s very unlikely that manual ratings based on the new guidelines would drive major ranking shifts (especially so quickly), it’s entirely plausible that the guideline updates and this algorithm update share a common philosophical view of quality and Google’s latest thinking on the subject.

Marie Haynes’ post theorizing the YMYL connection also raises the idea that Google may be looking more closely at E-A-T signals (Expertise, Authoritativeness and Trust). While certainly an interesting theory, I can’t adequately address that question with this data set. Declines in sites like Fortune, IGN and Android Central pose some interesting questions about authoritativeness and trust outside of the health and wellness vertical, but I hesitate to speculate based only on a handful of outliers.

If your site has been impacted in a material way (including significant traffic gains or drops), I’d love to hear more details in the comments section. If you’ve taken losses, try to isolate whether those losses are tied to specific keywords, keyword groups, or pages/content. For now, I’d advise that this update could still be rolling out or being tweaked, and we all need to keep our eyes open.


What Google’s GDPR Compliance Efforts Mean for Your Data: Two Urgent Actions

Posted by willcritchlow

It should be quite obvious for anyone that knows me that I’m not a lawyer, and therefore that what follows is not legal advice. For anyone who doesn’t know me: I’m not a lawyer, I’m certainly not your lawyer, and what follows is definitely not legal advice.

With that out of the way, I wanted to give you some bits of information that might feed into your GDPR planning, as they come up more from the marketing side than the pure legal interpretation of your obligations and responsibilities under this new legislation. While most legal departments will be considering the direct impacts of the GDPR on their own operations, many might miss the impacts that other companies’ (namely, in this case, Google’s) compliance actions have on your data.

But I might be getting a bit ahead of myself: it’s quite possible that not all of you know what the GDPR is, and why or whether you should care. If you do know what it is, and you just want to get to my opinions, go ahead and skip down the page.

What is the GDPR?

The tweet-length version is that the GDPR (General Data Protection Regulation) is new EU legislation covering data protection and privacy for EU citizens, and it applies to all companies offering goods or services to people in the EU.

Even if you aren’t based in the EU, it applies to your company if you have customers who are, and it has teeth (fines of up to the greater of 4% of global revenue or EUR20m). It comes into force on May 25. You have probably heard about it through the myriad organizations who put you on their email list without asking and are now emailing you to “opt back in.”

In most companies, it will not fall to the marketing team to research everything that has to change and achieve compliance, though it is worth getting up to speed with at least the high-level outline and, in particular, the GDPR’s requirements around informed consent, which the regulation defines as:

“…any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

As always, when laws are made about new technology, there are many questions to be resolved and, indeed, jokes to be made.

But my post today isn’t about what you should do to get compliant — that’s specific to your circumstances — and a ton has been written about this already.

My intention is not to write a general guide, but rather to warn you about two specific things you should be doing with analytics (Google Analytics in particular) as a result of changes Google is making because of GDPR.

Unexpected consequences of GDPR

When you deal directly with a person in the EU, and they give you personally identifiable information (PII) about themselves, you are typically in what is called the “data controller” role. The GDPR also identifies another role, which it calls “data processor,” which is any other company your company uses as a supplier and which handles that PII. When you use a product like Google Analytics on your website, Google is taking the role of data processor. While most of the restrictions of the GDPR apply to you as the controller, the processor must also comply, and it’s here that we see some potentially unintended (but possibly predictable) consequences of the legislation.

Google is unsurprisingly seeking to minimize their risk (I say it’s unsurprising because those GDPR fines could be as large as $4.4 billion based on last year’s revenue if they get it wrong). They are doing this firstly by pushing as much of the obligation as possible onto you, the data controller, and secondly by going further than the GDPR requires by default, shutting down accounts that infringe their terms more aggressively than the regulation demands (regardless of whether the infringement also violates the GDPR).

This is entirely rational — with GA being in most cases a product offered for free, and the value coming to Google entirely in the aggregate, it makes perfect sense to limit their risks in ways that don’t degrade their value, and to just kick risky setups off the platform rather than taking on extreme financial risk for individual free accounts.

It’s not only Google, by the way. There are other suppliers doing similar things which will no doubt require similar actions, but I am focusing on Google here simply because GA is pervasive throughout the web marketing world. Some companies are even going as far as shutting down entirely for EU citizens (like unroll.me). See this Twitter thread of others.

Consequence 1: Default data retention settings for GA will delete your data

Starting on May 25, Google will be changing the default for data retention, meaning that if you don’t take action, certain data older than the cutoff will be automatically deleted.

You can read more about the details of the change on Krista Seiden’s personal blog (Krista works at Google, but this post is written in her personal capacity).

The reason I say that this isn’t strictly a GDPR thing is that it relates to changes Google is making on their end to ensure they comply with their obligations as a data processor. It gives you tools you might need, but it isn’t strictly related to your own GDPR compliance. There is no single “right” answer to how long you need to (or should, or are allowed to) keep this data stored in GA under the GDPR, but by my reading, given that it shouldn’t be PII anyway (see below), it isn’t really a GDPR question for most organizations. In particular, there is no reason to think that Google’s default is the correct, mandated, or only setting you can choose under the GDPR.

Action: Review the promises being made by your legal team and your new privacy policy to understand the correct timeline setting for your org. In the absence of explicit promises to your users, my understanding is that you can retain any of this data you were allowed to capture in the first place unless you receive a deletion request against it. So while most orgs will have at least some changes to make to privacy policies at a minimum, most GA users can change back to retain this data indefinitely.

Consequence 2: Google is deleting GA accounts for capturing PII

It has long been against the Terms of Service to store any personally identifiable information (PII) in Google Analytics. Recently, though, it appears that Google has become far more diligent in checking for the presence of PII and robust in their handling of accounts found to contain any. Put more simply, Google will delete your account if they find PII.

It’s impossible to know for sure that this is GDPR-related, but being able to demonstrate to regulators, if necessary, that they take strict action against anyone violating their PII-related terms is an obvious move for Google to reduce the risk they face as a data processor. It makes particular sense in an area where the vast majority of accounts are free accounts. Much like the previous point, the reason I say this is related to Google’s response to the GDPR coming into force is that it would be perfectly possible to get your users’ permission to record their data in third-party services like GA and fully comply with the regulations. Regardless of the permissions your users give you, though, Google’s GDPR-related crackdown (heavier enforcement of terms that have been in place for some time) means this is a new and greater risk than it was before.

Action: Audit your GA profile and implementation for PII risks:

  • There are various ways you can search within GA itself to find data that could be personally identifying in places like page titles, URLs, custom data, etc. (see these two excellent guides, plus the rough sketch after this list).
  • You can also audit your implementation by reviewing rules in tag manager and/or reviewing the code present on key pages. The most likely suspects are the places where people log in, take key actions on your site, give you additional personal information, or check out.
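As a starting point for that audit, here’s a rough sketch that scans exported page paths for the most common leak, email addresses in URLs and query strings. It’s illustrative only and won’t catch every kind of PII:

```python
import re
from urllib.parse import unquote

# Email addresses are the PII most commonly leaked into GA page paths.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_pii_paths(page_paths: list[str]) -> list[str]:
    """Return the page paths that appear to contain an email address."""
    # Decode first: '@' often arrives URL-encoded as '%40'.
    return [p for p in page_paths if EMAIL_RE.search(unquote(p))]

paths = [
    "/checkout/thanks?order=1234",
    "/account?email=jane.doe%40example.com",
    "/login?user=jane@example.com",
]
print(find_pii_paths(paths))
# ['/account?email=jane.doe%40example.com', '/login?user=jane@example.com']
```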

Don’t take your EU law advice from big US tech companies

The internal effort and coordination required at Google to do their bit to comply, even “just” as data processor, is significant. Unfortunately, there are strong arguments that this kind of ostensibly user-friendly regulation, which imposes outsize compliance burdens on smaller companies, will cement the duopoly and dominance of Google and Facebook and enable them to pass the costs and burdens of compliance onto sectors that are already struggling.

Regardless of the intended or unintended consequences of the regulation, it seems clear to me that we shouldn’t be basing our own businesses’ (and our clients’) compliance on self-interested advice and actions from the tech giants. No matter how impressive their own compliance, I’ve been hugely underwhelmed by guidance content they’ve put out. See, for example, Google’s GDPR “checklist” — not exactly what I’d hope for:

Client Checklist: As a marketer we know you need to select products that are compliant and use personal data in ways that are compliant. We are committed to complying with the GDPR and would encourage you to check in on compliance plans within your own organisation. Key areas to think about:
  • How does your organisation ensure user transparency and control around data use?
  • Do you explain to your users the types of data you collect and for what purposes?
  • Are you sure that your organisation has the right consents in place where these are needed under the GDPR?
  • Do you have all of the relevant consents across your ad supply chain?
  • Does your organisation have the right systems to record user preferences and consents?
  • How will you show to regulators and partners that you meet the principles of the GDPR and are an accountable organisation?

So, while I’m not a lawyer, definitely not your lawyer, and this is not legal advice, if you haven’t already received any advice, I can say that you probably can’t just follow Google’s checklist to get compliant. But you should, as outlined above, take the specific actions you need to take to protect yourself and your business from their compliance activities.


Google’s Walled Garden: Are We Being Pushed Out of Our Own Digital Backyards?

Posted by Dr-Pete

Early search engines were built on an unspoken transaction — a pact between search engines and website owners — you give us your data, and we’ll send you traffic. While Google changed the game of how search engines rank content, they honored the same pact in the beginning. Publishers, who owned their own content and traditionally were fueled by subscription revenue, operated differently. Over time, they built walls around their gardens to keep visitors in and, hopefully, keep them paying.

Over the past six years, Google has crossed this divide, building walls around their content and no longer linking out to the sources that content was originally built on. Is this the inevitable evolution of search, or has Google forgotten their pact with the people whose backyards their garden was built on?

I don’t think there’s an easy answer to this question, but the evolution itself is undeniable. I’m going to take you through an exhaustive (yes, you may need a sandwich) journey of the ways that Google is building in-search experiences, from answer boxes to custom portals, and rerouting paths back to their own garden.


I. The Knowledge Graph

In May of 2012, Google launched the Knowledge Graph. This was Google’s first large-scale attempt at providing direct answers in search results, using structured data from trusted sources. One incarnation of the Knowledge Graph is Knowledge Panels, which return rich information about known entities. Here’s part of one for actor Chiwetel Ejiofor (note: this image is truncated)…

The Knowledge Graph marked two very important shifts. First, Google created deep in-search experiences. As Knowledge Panels have evolved, searchers have access to rich information and answers without ever going to an external site. Second, Google started to aggressively link back to their own resources. It’s easy to overlook those faded blue links, but here’s the full Knowledge Panel with every link back to a Google property marked…

Including links to Google Images, that’s 33 different links back to Google. These two changes — self-contained in-search experiences and aggressive internal linking — represent a radical shift in the nature of search engines, and that shift has continued and expanded over the past six years.
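If you’d like to repeat this count on a SERP you’ve saved locally, the tally is just link-host bookkeeping. A rough sketch using BeautifulSoup (our counting heuristic, nothing official; relative links are treated as staying on Google):

```python
from urllib.parse import urlparse

from bs4 import BeautifulSoup  # pip install beautifulsoup4

GOOGLE_HOSTS = ("google.", "youtube.com")

def count_google_links(html: str) -> tuple[int, int]:
    """Count links back to Google properties vs. external links."""
    soup = BeautifulSoup(html, "html.parser")
    internal = external = 0
    for a in soup.find_all("a", href=True):
        host = urlparse(a["href"]).netloc
        # Relative hrefs (no host) stay on the SERP's own domain.
        if host == "" or any(g in host for g in GOOGLE_HOSTS):
            internal += 1
        else:
            external += 1
    return internal, external

with open("serp.html", encoding="utf-8") as f:
    internal, external = count_google_links(f.read())
print(f"{internal} links back to Google, {external} external")
```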

More recently, Google added a sharing icon (on the right, directly below the top images). This provides a custom link that allows people to directly share rich Google search results as content on Facebook, Twitter, Google+, and by email. Google no longer views these pages as a path to a destination. Search results are the destination.

The Knowledge Graph also spawned Knowledge Cards, more broadly known as “answer boxes.” Take any fact in the panel above and pose it as a question, and you’re likely to get a Knowledge Card. For example, “How old is Chiwetel Ejiofor?” returns the following…

For many searchers, this will be the end of their journey. Google has answered their question and created a self-contained experience. Note that this example also contains links to additional Google searches.

In 2015, Google launched Medical Knowledge Panels. These gradually evolved into fully customized content experiences created with partners in the medical field. Here’s one for “cardiac arrest” (truncated)…

Note the fully customized design (these images were created specifically for these panels), as well as the multi-tabbed experience. It is now possible to have a complete, customized content experience without ever leaving Google.


II. Live Results

In some specialized cases, Google uses private data partnerships to create customized answer boxes. Google calls these “Live Results.” You’ve probably seen them many times now on weather, sports and stock market searches. Here’s one for “Seattle weather”…

For the casual information seeker, these are self-contained information experiences with most or all of what we care about. Live Results are somewhat unique in that, unlike the general knowledge in the Knowledge Graph, each partnership represents a disruption to an industry.

These partnerships have branched out over time into even more specialized results. Consider, for example, “Snoqualmie ski conditions”…

Sports results are incredibly disruptive, and Google has expanded and enriched these results quite a bit over the past couple of years. Here’s one for “Super Bowl 2018”…

Note that clicking any portion of this Live Result leads to a customized portal on Google that can no longer be called a “search result” in any traditional sense (more on portals later). Special sporting events, such as the 2018 Winter Olympics, have even more rich features. Here are some custom carousels for “Olympic snowboarding results”…

Note that these are multi-column carousels that ultimately lead to dozens of smaller cards. All of these cards click to more Google search results. This design choice may look strange on desktop and marks another trend — Google’s shift to mobile-first design. Here’s the same set of results on a Google Pixel phone…

Here, the horizontal scrolling feels more intuitive, and the carousel is the full-width of the screen, instead of feeling like a free-floating design element. These features are not only rich experiences on mobile screens, but dominate mobile results much more than they do two-column desktop results.


III. Carousels

Speaking of carousels, Google has been experimenting with a variety of horizontal result formats, and many of them are built around driving traffic back to Google searches and properties. One of the older styles of carousels is the list format, which runs across the top of desktop searches (above other results). Here’s one for “Seattle Sounders roster”…

Each player links to a new search result with that player in a Knowledge Panel. This carousel expands to the width of the screen (which is unusual, since Google’s core desktop design is fixed-width). On my 1920×1080 screen, you can see 14 players, each linking to a new Google search, and the option to scroll for more…

This type of list carousel covers a wide range of topics, from “cat breeds” to “types of cheese.” Here’s an interesting one for “best movies of 1984.” The image is truncated, but the full result includes drop-downs to select movie genres and other years…

Once again, each result links to a new search with a Knowledge Panel dedicated to that movie. Another style of carousel is the multi-row horizontal scroller, like this one for “songs by Nirvana”…

In this case, not only does each entry click to a new search result, but many of them have prominent featured videos at the top of the left column (more on that later). My screen shows at least partial information for 24 songs, all representing in-Google links above the traditional search results…

A search for “laptops” (a very competitive, commercial term, unlike the informational searches above) has a number of interesting features. At the bottom of the search is this “Refine by brand” carousel…

Clicking on one of these results leads to a new search with the brand name prepended (e.g. “Apple laptops”). The same search shows this “Best of” carousel…

The smaller “Mentioned in:” links go to articles from the listed publishers. The main product links go to a Google search result with a product panel. Here’s what I see when I click on “Dell XPS 13 9350” (image is truncated)…

This entity lives in the right-hand column and looks like a Knowledge Panel, but it is commercial in nature (notice the “Sponsored” label in the upper right). Here, Google is driving searchers directly into a paid/advertising channel.


IV. Answers & Questions

As Google realized that the Knowledge Graph would never scale at the pace of the wider web, they started to extract answers directly from their index (i.e. all of the content in the world, or at least most of it). This led to what they call “Featured Snippets”, a special kind of answer box. Here’s one for “Can hamsters eat cheese?” (yes, I have a lot of cheese-related questions)…

Featured Snippets are an interesting hybrid. On the one hand, they’re an in-search experience (in this case, my basic question has been answered before I’ve even left Google). On the other hand, they do link out to the source site and are a form of organic search result.

Featured Snippets also power answers on Google Assistant and Google Home. If I ask Google Home the same question about hamsters, I hear the following:

On the website TheHamsterHouse.com, they say “Yes, hamsters can eat cheese! Cheese should not be a significant part of your hamster’s diet and you should not feed cheese to your hamster too often. However, feeding cheese to your hamster as a treat, perhaps once per week in small quantities, should be fine.”

You’ll see the answer is identical to the Featured Snippet shown above. Note the attribution (which I’ve bolded) — a voice search can’t link back to the source, posing unique challenges. Google does attempt to provide attribution on Google Home, but as they use answers extracted from the web more broadly, we may see the way original sources are credited change depending on the use case and device.

This broader answer engine powers another type of result, called “Related Questions” or the “People Also Ask” box. Here’s one on that same search…

These questions are at least partially machine-generated, which is why the grammar can read a little oddly — that’s a fascinating topic for another time. If you click on “What can hamsters eat list?” you get what looks a lot like a Featured Snippet (and links to an outside source)…

Notice two other things that are going on here. First, Google has included a link to search results for the question you clicked on (see the purple arrow). Second, the list has expanded. The two questions at the end are new. Let’s click “What do hamsters like to do for fun?” (because how can I resist?)…

This opens up a second answer, a second link to a new Google search, and two more answers. You can continue this to your heart’s content. What’s especially interesting is that this isn’t just some static list that expands as you click on it. The new questions are generated based on your interactions, as Google tries to understand your intent and shape your journey around it.

My colleague, Britney Muller, has done some excellent research on the subject and has taken to calling these infinite PAAs. They’re probably not quite infinite — eventually, the sun will explode and consume the Earth. Until then, they do represent a massively recursive in-Google experience.


V. Videos & Movies

One particularly interesting type of Featured Snippet is the Featured Video result. Search for “umbrella” and you should see a panel like this in the top-left column (truncated):

This is a unique hybrid — it has Knowledge Panel features (that link back to Google results), but it also has an organic-style link and large video thumbnail. While it appears organic, all of the Featured Videos we’ve seen in the wild have come from YouTube (Vevo is a YouTube partner), which essentially means this is an in-Google experience. These Featured Videos consume a lot of screen real estate and appear even on commercial terms, like Rihanna’s “umbrella” (shown here) or Kendrick Lamar’s “swimming pools”.

Movie searches yield a rich array of features, from Live Results for local showtimes to rich Knowledge Panels. Last year, Google completely redesigned their mobile experience for movie results, creating a deep in-search experience. Here’s a mobile panel for “Black Panther”…

Notice the tabs below the title. You can navigate within this panel to a wealth of information, including cast members and photos. Clicking on any cast member goes to a new search about that actor/actress.

Although the search results eventually continue below this panel, the experience is rich, self-contained, and incredibly disruptive to high-ranking powerhouses in this space, including IMDB. You can even view trailers from the panel…

On my phone, Google displayed 10 videos (at roughly two per screen), and nine of those were links to YouTube. Given YouTube’s dominance, it’s difficult to say if Google is purposely favoring their own properties, but the end result is the same — even seemingly “external” clicks are often still Google-owned clicks.


VI. Local Results

A similar evolution has been happening in local results. Take the local 3-pack — here’s one on a search for “Seattle movie theaters”…

Originally, the individual business links went directly to each business’s website. As of the past year or two, they instead go to local panels on Google Maps, like this one…

On mobile, these local panels stand out even more, with prominent photos, tabbed navigation and easy access to click-to-call and directions.

In certain industries, local packs have additional options to run a search within a search. Here’s a pack for Chicago taco restaurants, where you can filter results (from the broader set of Google Maps results) by rating, price, or hours…

Once again, we have a fully embedded search experience. I don’t usually vouch for any of the businesses in my screenshots, but I just had the pork belly al pastor at Broken English Taco Pub and it was amazing (this is my personal opinion and in no way reflects the taco preferences of Moz, its employees, or its lawyers).

The hospitality industry has been similarly affected. Search for an individual hotel, like “Kimpton Alexis Seattle” (one of my usual haunts when visiting the home office), and you’ll get a local panel like the one below. Pardon the long image, but I wanted you to have the full effect…

This is an incredible blend of local business result, informational panel, and commercial result, allowing you direct access to booking information. It’s not just organic local results that have changed, though. Recently, Google started offering ads in local packs, primarily on mobile results. Here’s one for “tax attorneys”…

Unlike traditional AdWords ads, these results don’t go directly to the advertiser’s website. Instead, like standard pack results, they go to a Google local panel. Here’s what the mobile version looks like…

In addition, Google has launched specialized ads for local service providers, such as plumbers and electricians. These appear carousel-style on desktop, such as this one for “plumbers in Seattle”…

Unlike AdWords advertisers, local service providers buy into a specialized program and these local service ads click to a fully customized Google sub-site, which brings us to the next topic — portals.


VII. Custom Portals

Some Google experiences have become so customized that they operate as stand-alone portals. If you click on a local service ad, you get a Google-owned portal that allows you to view the provider, check to see if they can handle your particular problem in your zip code, and (if not) view other, relevant providers…

You’ve completely left the search result at this point, and can continue your experience fully within this Google property. These local service ads have now expanded to more than 30 US cities.

In 2016, Google launched their own travel guides. Run a search like “things to do in Seattle” and you’ll see a carousel-style result like this one…

Click on “Seattle travel guide” and you’ll be taken to a customized travel portal for the city of Seattle. The screen below is a desktop result — note the increasing similarity to rich mobile experiences.

Once again, you’ve been taken to a complete Google experience outside of search results.

Last year, Google jumped into the job-hunting game, launching a 3-pack of job listings covering all major players in this space, like this one for “marketing jobs in Seattle”…

Click on any job listing, and you’ll be taken to a separate Google jobs portal. Let’s try Facebook…

From here, you can view other listings, refine your search, and even save jobs and set up alerts. Once again, you’ve jumped from a specialized Google result to a completely Google-controlled experience.

Like hotels, Google has dabbled in flight data and search for years. If I search for “flights to Seattle,” Google will automatically note my current location and offer me a search interface and a few choices…

Click on one of these choices and you’re taken to a completely redesigned Google Flights portal…

Once again, you can continue your journey completely within this Google-owned portal, never returning back to your original search. This is a trend we can expect to continue for the foreseeable future.


VIII. Hard Questions

If I’ve bludgeoned you with examples, then I apologize, but I want to make it perfectly clear that this is not a case of one or two isolated incidents. Google is systematically driving more clicks from search to new searches, in-search experiences, and other Google-owned properties. This leads to a few hard questions…

Why is Google doing this?

Right about now, you’re rushing to the comments section to type “For the money!” along with a bunch of other words that may include variations of my name, “sheeple,” and “dumb-ass.” Yes, Google is a for-profit company that is motivated in part by making money. Moz is a for-profit company that is motivated in part by making money. Stating the obvious isn’t insight.

In some cases, the revenue motivation is clear. Suggesting the best laptops to searchers and linking those to shopping opportunities drives direct dollars. In traditional walled gardens, publishers are trying to produce more page-views, driving more ad impressions. Is Google driving us to more searches, in-search experiences, and portals to drive more ad clicks?

The answer isn’t entirely clear. Knowledge Graph links, for example, usually go to informational searches with few or no ads. Rich experiences like Medical Knowledge Panels and movie results on mobile have no ads at all. Some portals have direct revenues (local service providers have to pay for inclusion), but others, like travel guides, have no apparent revenue model (at least for now).

Google is competing directly with Facebook for hours in our day — while Google has massive traffic and ad revenue, people on average spend much more time on Facebook. Could Google be trying to drive up their time-on-site metrics? Possibly, but it’s unclear what this accomplishes beyond being a vanity metric to make investors feel good.

Looking to the long game, keeping us on Google and within Google properties does open up the opportunity for additional advertising and new revenue streams. Maybe Google simply realizes that letting us go so easily off to other destinations is leaving future money on the table.

Is this good for users?

I think the most objective answer I can give is — it depends. As a daily search user, I’ve found many of these developments useful, especially on mobile. If I can get an answer at a glance or in an in-search entity, such as a Live Result for weather or sports, or the phone number and address of a local restaurant, it saves me time and the trouble of being familiar with the user interface of thousands of different websites. On the other hand, if I feel that I’m being run in circles through search after search or am being given fewer and fewer choices, that can feel manipulative and frustrating.

Is this fair to marketers?

Let’s be brutally honest — it doesn’t matter. Google has no obligation to us as marketers. Sites don’t deserve to rank and get traffic simply because we’ve spent time and effort or think we know all the tricks. I believe our relationship with Google can be symbiotic, but that’s a delicate balance and always in flux.

In some cases, I do think we have to take a deep breath and think about what’s good for our customers. As a marketer, local packs linking directly to in-Google properties is alarming — we measure our success based on traffic. However, these local panels are well-designed, consistent, and have easy access to vital information like business addresses, phone numbers, and hours. If these properties drive phone calls and foot traffic, should we discount their value simply because it’s harder to measure?

Is this fair to businesses?

This is a more interesting question. I believe that, like other search engines before it, Google made an unwritten pact with website owners — in exchange for our information and the privilege to monetize that information, Google would send us traffic. This is not altruism on Google’s part. The vast majority of Google’s $95B in 2017 advertising revenue came from search advertising, and that advertising would have no audience without organic search results. Those results come from the collective content of the web.

As Google replaces that content and sends more clicks back to themselves, I do believe that the fundamental pact that Google’s success was built on is gradually being broken. Google’s garden was built on our collective property, and it does feel like we’re slowly being herded out of our own backyards.

We also have to consider the deeper question of content ownership. If Google chooses to pursue private data partnerships — such as with Live Results or the original Knowledge Graph — then they own that data, or at least are leasing it fairly. It may seem unfair that they’re displacing us, but they have the right to do so.

Much of the Knowledge Graph is built on human-curated sources such as Wikidata (i.e. Wikipedia). While Google undoubtedly has an ironclad agreement with Wikipedia, what about the people who originally contributed and edited that content? Would they have done so knowing their content could ultimately displace other content creators (including possibly their own websites) in Google results? Are those contributors willing participants in this experiment? The question of ownership isn’t as easy as it seems.

If Google extracts the data we provide as part of the pact, such as with Featured Snippets and People Also Ask results, and begins to wall off those portions of the garden, then we have every right to protest. Even the concept of a partnership isn’t always black-and-white. Some job listing providers I’ve spoken with privately felt pressured to enter Google’s new jobs portal (out of fear of cutting off the paths to their own gardens), but they weren’t happy to see the new walls built.

Google is also trying to survive. Search has to evolve, and it has to answer questions and fit a rapidly changing world of device formats, from desktop to mobile to voice. I think the time has come, though, for Google to stop and think about the pact that built their nearly hundred-billion-dollar ad empire.


How Is Google’s New "Questions and Answers" Feature Being Used? [Case Study]

Posted by MiriamEllis

Ever since Google rolled out Questions and Answers in mid-2017, I’ve been trying to get a sense of its reception by consumers and brands. Initially restricted to Android Google Maps, this fascinating feature which enables local business owners and the public to answer consumer questions made it to desktop displays this past December, adding yet another data layer to knowledge panels and local finders.

As someone who has worked in Q&A forums for the majority of my digital marketing life, I took an immediate shine to the idea of Google Questions and Answers. Here’s a chance, I thought, for consumers and brands to take meaningful communication to a whole new level, exchanging requests, advice, and help so effortlessly. Here’s an opportunity for businesses to place answers to FAQs right upfront in the SERPs, while also capturing new data about consumer needs and desires. So cool!

But, so far, we seem to be getting off to a slow start. According to a recent, wide-scale GetFiveStars study, 25% of businesses now have questions waiting for them. I decided to home in on San Francisco and look at 20 busy industries in that city to find out not just how many questions were being asked, but also how many answers were being given, and who was doing the answering. I broke down responders into three groups: Local Guides (LGs), random users (RUs), and owners (Os). I looked at the top 10 businesses ranking in the local finder for each industry:

Industry | Number of Questions | Number of Answers | LGs | RUs | Os
Dentists | 1 | 0 | 0 | 0 | 0
Plumbers | 2 | 0 | - | - | -
Chiropractors | 0 | - | - | - | -
Mexican Restaurants | 10 | 23 | 22 | 1 | -
Italian Restaurants | 15 | 20 | 19 | 1 | -
Chinese Restaurants | 16 | 53 | 49 | 4 | -
Car Dealers | 4 | 5 | 3 | 2 | -
Supermarkets | 7 | 27 | 24 | 3 | -
Clothing Stores | 4 | 1 | 1 | - | -
Florists | 1 | 0 | - | - | -
Hotels | 44 | 142 | 114 | 28 | -
Real Estate Agencies | 0 | - | - | - | -
General Contractors | 1 | 0 | - | - | -
Cell Phone Stores | 14 | 3 | 3 | - | -
Yoga Studios | 1 | 0 | - | - | -
Banks | 1 | 0 | - | - | -
Carpet Cleaning | 0 | - | - | - | -
Hair Salons | 1 | 0 | - | - | -
Locksmiths | 1 | 0 | - | - | -
Jewelry Stores | 0 | - | - | - | -


Takeaways from the case study

Here are some patterns and oddities I noticed from looking at 123 questions and 274 answers:

  1. There are more than twice as many answers as questions. While many questions received no answers, others received five, ten, or more.
  2. The Owners column is completely blank. The local businesses I looked at in San Francisco are investing zero effort in answering Google Questions and Answers.
  3. Local Guides are doing the majority of the answering. Of the 274 answers provided, 232 came from users who have been qualified as Local Guides by Google. Why so lopsided? I suspect the answer lies in the fact that Google sends alerts to this group of users when questions get asked, and that they can earn 3 points per answer they give. Acquiring enough points gets you perks like 3 free months of Google Play Music and a 75% discount off Google Play Movies.

    Unfortunately, what I’m seeing in Google Questions and Answers is that incentivizing replies is leading to a knowledge base of questionable quality. How helpful is it when a consumer asks a hotel if they have in-room hair dryers and 10 local guides jump on the bandwagon with “yep”? Worse yet, I saw quite a few local guides replying “I don’t know,” “maybe,” and even “you should call the business and ask.” Here and there, I saw genuinely helpful answers from the Local Guides, but my overall impression didn’t leave me feeling like I’d stumbled upon a new Google resource of matchless expertise.

  4. Some members of the public seem to be confused about the use of this feature. I noticed people using the answer portion to thank people who replied to their query, rather than simply using the thumbs up widget.

    Additionally, I saw people leaving reviews/statements instead of questions, others writing with a touch of exasperated irony, and some simply using the feature to rant.

  5. Some industries are clearly generating far more questions than others. Given how people love to talk about hotels and restaurants, I wasn’t surprised to see them topping the charts in sheer volume of questions and answers. What did surprise me was not seeing more questions being asked of businesses like yoga studios, florists, and hair salons; before I actually did the searches, I might have guessed that pleasant, “chatty” places like these would be receiving lots of queries.

Big brands everywhere are leaving Google Questions and Answers unanswered

I chose San Francisco for my case study because of its general reputation for being hip to new tech, but just in case my limited focus was presenting a false picture of how local businesses are managing this feature, I did some random searches for big brands around the state and around the country.

I found questions lacking owner answers for Whole Foods, Sephora, Taco Bell, Macy’s, Denny’s, Cracker Barrel, Target, and T-Mobile. As I looked around the nation, I noted that Walmart has cumulatively garnered thousands of questions with no brand responses.

But the hands-down winner for a single location lacking official answers is Google in Mountain View. 103 questions as of my lookup and nary an owner answer in sight. Alphabet might want to consider setting a more inspiring example with their own product… unless I’m misunderstanding their vision of how Google Questions and Answers is destined to be used.


Just what is the vision for Google Questions and Answers, I wonder?

As I said at the beginning of this post, it’s early days yet to predict ultimate outcomes. Yet, the current lay of the land for this feature has left me with more questions than answers:

  • Does Google actually intend questions to be answered by brands, or by the public? From what I’ve seen, owners are largely unaware of or choosing to ignore this feature many months post-launch. As of writing this, businesses are only alerted about incoming questions if they open the Google Maps app on an Android phone or tablet. There is no desktop GMB dashboard section for the feature. It’s not a recipe for wide adoption. Google has always been a fan of a crowdsourcing approach to their data, so they may not be concerned, but that doesn’t mean your business shouldn’t be.
  • What are the real-time expectations for this feature? I see many users asking questions that needed fast answers, like “are you open now?” while others might support lengthier response times, as in, “I’m planning a trip and want to know what I can walk to from your hotel.” For time-sensitive queries, how does Questions and Answers fit in with Google’s actual chat feature, Google Messaging, also rolled out last summer? Does Google envision different use cases for both features? I wonder if one of the two products will win out over time, while the other gets sunsetted.
  • What are the real, current risks to brands of non-management? I applauded Mike Blumenthal’s smart suggestion of companies proactively populating the feature with known FAQs and providing expert answers, and I can also see the obvious potential for reputation damage if rants or spam are ignored. That being said, my limited exploration of San Francisco has left me wondering just how many people (companies or consumers) are actually paying attention in most industries. Google Knowledge Panels and the Local Finder pop-ups are nearing an information bloat point. Do you want to book something, look at reviews, live chat, see menus, find deals, get driving directions, make a call? Websites are built with multiple pages to cover all of these possible actions. Sticking them all in a 1” box may not equal the best UX I’ve ever seen, if discovery of features is our goal.
  • What is the motivation for consumers to use the product? Personally, I’d be more inclined to just pick up the phone to ask any question to which I need a fast answer. I don’t have the confidence that if I queried Whole Foods in the AM as to whether they’ve gotten in organic avocados from California, there’d be a knowledge panel answer in time for my lunch. Further, some of the questions I’ve asked have received useless answers from the public, which seems like a waste of time for all parties. Maybe if the feature picks up momentum, this will change.
  • Will increasing rates of questions = increasing rates of business responses? According to the GetFiveStars study linked to above, the total number of questions for the 1,700 locations they investigated nearly doubled between November and December of 2017. From my microscopic view of San Francisco, it doesn’t appear to me that the doubling effect also happened for owner answers. Time will tell, but for now, what I’m looking for is question volume reaching such a boiling point that owners feel obligated to jump into management, as they have with reviews. We’re not there yet, but if this feature is a Google keeper, we could get there.

So what should you be doing about Google Questions and Answers?

I’m a fan of early adoption where it makes sense. Speculatively, having an active Questions and Answers presence could end up as a ranking signal. We’ve already seen it theorized that use of another Google asset, Google Posts, may impact local pack rankings. Unquestionably, leaving it up to the public to answer questions about your business with varying degrees of accuracy carries the risk of losing leads and muddying your online presence to the detriment of reputation. If a customer asks if your location has wheelchair access and an unmotivated third party says “I don’t know,” when, in fact, your business is fully ADA-compliant, your lack of an answer becomes negative customer service. Because of this, ignoring the feature isn’t really an option. And, while I wouldn’t prioritize management of Questions and Answers over traditional Google-based reviews at this point, I would suggest:

  1. Do a branded search today and look at your knowledge panel to see if you’ve received any questions. If so, answer them in your best style, as helpfully as possible.
  2. Spend half an hour this week translating your company’s 5 most common FAQs into Google Questions and Answers queries and then answering them. Be sure you’re logged into your company’s Google account when you reply, so that your message will be officially stamped with the word “owner.” Whether you also post the questions themselves while logged into your business’s account is up to you. I think it’s more transparent to do so.
  3. If you’re finding this part of your Knowledge Panel isn’t getting any questions, checking it once a week is likely going to be enough for the present.
  4. If you happen to be marketing a business that is seeing some good Questions and Answers activity, and you have the bandwidth, I’d add checking this to the daily social media rounds you make for the purpose of reputation management. I would predict that if Google determines this feature is a keeper, they’ll eventually start sending email alerts when new queries come in, as they’re now doing with reviews, which should make things easier and minimize the risk of losing a customer with an immediate need. Need to go pro on management right now due to question volume? GetFiveStars just launched an incredibly useful Google Q&A monitoring feature, included in some of their ORM software packages. Looks like a winner!
  5. Do be on the lookout for spam inquiries and responses, and report them if they arise.

If you’re totally new to Google Questions and Answers, this simple infographic will get you going in a flash:

For further tips on using Google Questions and Answers like a pro, I recommend following GetFiveStars’ 3-part series on this topic.


My questions, your answers

My case study is small. Can you help expand our industry’s knowledge base by answering a few questions in the comments to add to the picture of the current rate of adoption/usefulness of Google’s Questions and Answers? Please, let me know:

  1. Have you asked a question using this feature?
  2. Did you receive an answer and was it helpful?
  3. Who answered? The business, a random user, a Local Guide?
  4. Have you come across any examples of business owners doing a good job answering questions?
  5. What are your thoughts on Google Questions and Answers? Is it a winner? Worth your time? Any tips?


What Do Google’s New, Longer Snippets Mean for SEO? – Whiteboard Friday

Posted by randfish

Snippets and meta descriptions have brand-new character limits, and it’s a big change for Google and SEOs alike. Learn about what’s new, when it changed, and what it all means for SEO in this edition of Whiteboard Friday.

What do Google's new, longer snippets mean for SEO?

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about Google’s big change to the snippet length.

This is the display length of the snippet for any given result in the search results that Google provides, on both mobile and desktop. It also affects the meta description, since many snippets are taken from the meta description tag of the web page. Google essentially said just last week, “Hey, we have officially increased the length, the recommended length, and the display length of what we will show in the text snippet of standard organic results.”

So I’m illustrating that for you here. I did a search for “net neutrality bill,” something that’s on the minds of a lot of Americans right now. You can see here that this article from The Hill, which is a recent article — it was two days ago — has a much longer text snippet than what we would normally expect to find. In fact, I went ahead and counted this one and then showed it here.

So basically, at the old 165-character limit, which is what you would have seen on nearly every search result prior to the middle of November, Google would occasionally show a longer snippet for very specific kinds of search results. But according to data from SISTRIX, which put out a great report that I’ll link to here, more than 90% of search snippets were 165 characters or less prior to the middle of November. Then Google added basically a few more lines.

So now, on mobile and desktop, instead of an average of two or three lines, we’re talking three, four, five, sometimes even six lines of text. So this snippet here is 266 characters that Google is displaying. The next result, from Save the Internet, is 273 characters. Again, this might be because Google sort of realized, “Hey, we almost got all of this in here. Let’s just carry it through to the end rather than showing the ellipsis.” But you can see that 165 characters would cut off right here. This one actually does a good job of displaying things.

So imagine a searcher is querying for something in your field and they’re just looking for a basic understanding of what it is. So they’ve never heard of net neutrality. They’re not sure what it is. So they can read here, “Net neutrality is the basic principle that prohibits internet service providers like AT&T, Comcast, and Verizon from speeding up, slowing down, or blocking any . . .” And that’s where it would cut off. Or that’s where it would have cut off in November.

Now, if I got a snippet like that, I’d need to visit the site. I’ve got to click through in order to learn more, because the snippet alone doesn’t tell me enough. Now, Google has tackled this before with things like featured snippets, which sit at the top of the search results and give a more expansive short answer. But in this case, I can get the rest of it because now, as of mid-November, Google has lengthened this. So now I can get, “Any content, applications, or websites you want to use. Net neutrality is the way that the Internet has always worked.”

Now, you might quibble and say this is not a full, thorough understanding of what net neutrality is, and I agree. But for a lot of searchers, this is good enough. They don’t need to click any more. This extension from 165 to 275 or 273, in this case, has really done the trick.

What changed?

So this brings a bunch of changes for SEO too. The change that happened here is that Google updated basically two things. One, they updated the snippet length, and two, they updated their guidelines around it.

So Google’s had historic guidelines that said, well, you want to keep your meta description tag between about 160 and 180 characters. I think that was the number. They’ve updated that to where they say there’s no official meta description recommended length. But on Twitter, Danny Sullivan said that he would probably not make that greater than 320 characters. In fact, we and other data providers that collect a lot of search results didn’t find many that extended beyond 300. So I think that’s a reasonable thing.

When?

When did this happen? It started at about mid-November. November 22nd is when SISTRIX’s dataset starts to show the increase, and it has since passed 50%. As of December 2nd, about 51% of search results have at least 1 of these longer snippets in the top 10.

Here’s the amazing thing, though — 51% of search results have at least one. Many of those, because they’re still pulling old meta descriptions or meta descriptions that SEOs optimized for the 165-character limit, are still very short. So if you’re the one who goes and updates your important pages right now (especially since it’s holiday time, with lots of ecommerce action), you might be able to get more real estate in the search results than any of your competitors, because they’re not updating theirs.

How will this affect SEO?

So how is this going to really change SEO? Well, three things:

A. It changes how marketers should write and optimize the meta description.

We’re going to be writing a little bit differently because we have more space. We’re going to be trying to entice people to click, but we’re going to be very conscientious that we want to try and answer a lot of this in the search result itself, because if we can, there’s a good chance that Google will rank us higher, even if we’re actually sort of sacrificing clicks by helping the searcher get the answer they need in the search result.

B. It may impact click-through rate.

We’ll be looking at Jumpshot data over the next few months and year ahead. We think there are two likely ways it could go. Probably negatively, meaning fewer clicks on less complex queries, where the longer snippet answers the searcher’s question outright. But conversely, it could earn more clicks on some more complex queries, because people are more enticed by the longer description. Fingers crossed, that’s kind of what you want to do as a marketer.

C. It may lead to lower click-through rate further down in the search results.

If you think about the fact that two results now take up the real estate that three results occupied as of a month ago, well, maybe people won’t scroll as far down. Maybe the ones that are higher up will in fact draw more of the clicks, and thus being further down on page one will have less value than it used to.

What should SEOs do?

What are things that you should do right now? Number one, make a priority list — you should probably already have this — of your most important landing pages by search traffic, the ones that receive the most organic search traffic on your website. Then I would go and reoptimize those meta descriptions for the longer limits.

Now, you can judge as you will. My advice would be go to the SERPs that are sending you the most traffic, that you’re ranking for the most. Go check out the limits. They’re probably between about 250 and 300, and you can optimize somewhere in there.

The second thing I would do is if you have internal processes or your CMS has rules around how long you can make a meta description tag, you’re going to have to update those probably from the old limit of somewhere in the 160 to 180 range to the new 230 to 320 range. It doesn’t look like many are smaller than 230 now, at least limit-wise, and it doesn’t look like anything is particularly longer than 320. So somewhere in there is where you’re going to want to stay.
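If you have a long list of pages to check, a small script can flag which meta descriptions still sit at the old limit. Here’s a minimal sketch in Python (standard library only); the URL list is a placeholder, and the 230-to-320 window simply mirrors the range discussed above:

import urllib.request
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    # Captures the content of <meta name="description"> as the page is parsed.
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "description":
            self.description = attrs.get("content") or ""

def audit(urls, low=230, high=320):
    for url in urls:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        parser = MetaDescriptionParser()
        parser.feed(html)
        length = len(parser.description or "")
        status = "OK" if low <= length <= high else "REWRITE"
        print(status, length, "chars", url)

# Placeholder URLs -- swap in your own priority landing pages.
audit(["https://example.com/", "https://example.com/pricing"])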

Good luck with your new meta descriptions and with your new snippet optimization. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Does Googlebot Support HTTP/2? Challenging Google’s Indexing Claims – An Experiment

Posted by goralewicz

I was recently challenged with a question from a client, Robert, who runs a small PR firm and needed to optimize a client’s website. His question inspired me to run a small experiment in HTTP protocols. So what was Robert’s question? He asked…

Can Googlebot crawl using HTTP/2 protocols?

You may be asking yourself, why should I care about Robert and his HTTP protocols?

As a refresher, HTTP protocols are the basic set of standards allowing the World Wide Web to exchange information. They are the reason a web browser can display data stored on another server. The first version was initiated back in 1989, which means that, just like everything else, HTTP protocols are getting outdated. HTTP/2 is the latest version of the HTTP protocol, created to replace these aging predecessors.

So, back to our question: why do you, as an SEO, care to know more about HTTP protocols? The short answer is that none of your SEO efforts matter or can even be done without a basic understanding of HTTP protocol. Robert knew that if his client’s site wasn’t indexing correctly, the client would miss out on valuable web traffic from searches.

The hype around HTTP/2

HTTP/1.1 is a 17-year-old protocol (HTTP 1.0 is 21 years old). Both HTTP 1.0 and 1.1 have limitations, mostly related to performance. When HTTP/1.1 was getting too slow and out of date, Google introduced SPDY in 2009, which was the basis for HTTP/2. Side note: Starting from Chrome 53, Google decided to stop supporting SPDY in favor of HTTP/2.

HTTP/2 was a long-awaited protocol. Its main goal is to improve a website’s performance. It’s currently used by 17% of websites (as of September 2017). Adoption rate is growing rapidly, as only 10% of websites were using HTTP/2 in January 2017. You can see the adoption rate charts here. HTTP/2 is getting more and more popular, and is widely supported by modern browsers (like Chrome or Firefox) and web servers (including Apache, Nginx, and IIS).

Its key advantages are:

  • Multiplexing: The ability to send multiple requests through a single TCP connection.
  • Server push: When a client requires some resource (let’s say, an HTML document), a server can push CSS and JS files to a client cache. It reduces network latency and round-trips.
  • One connection per origin: With HTTP/2, only one connection is needed to load the website.
  • Stream prioritization: Requests (streams) are assigned a priority from 1 to 256 to deliver higher-priority resources faster.
  • Binary framing layer: HTTP/2 is easier to parse (for both the server and user).
  • Header compression: This feature reduces overhead from plain text in HTTP/1.1 and improves performance.

For more information, I highly recommend reading “Introduction to HTTP/2” by Surma and Ilya Grigorik.
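If you’d like to see the single-connection, multiplexed behavior from a client’s point of view, here’s a small sketch using the third-party Python httpx library (installed with pip install httpx[http2]). The hostname is a placeholder, and the server must actually support HTTP/2 for the version to read “HTTP/2”:

import httpx

# http2=True makes the client offer "h2" during ALPN negotiation; requests
# to the same origin are then multiplexed over a single connection.
with httpx.Client(http2=True) as client:
    for path in ["/", "/style.css", "/app.js"]:
        response = client.get(f"https://example.com{path}")
        print(response.http_version, response.status_code, path)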

All these benefits suggest pushing for HTTP/2 support as soon as possible. However, my experience with technical SEO has taught me to double-check and experiment with solutions that might affect our SEO efforts.

So the question is: Does Googlebot support HTTP/2?

Google’s promises

HTTP/2 represents a promised land, the technical SEO oasis everyone was searching for. By now, many websites have already added HTTP/2 support, and developers don’t want to optimize for HTTP/1.1 anymore. Before I could answer Robert’s question, I needed to know whether or not Googlebot supported HTTP/2-only crawling.

I was not alone in my query. This is a topic which comes up often on Twitter, Google Hangouts, and other such forums. And like Robert, I had clients pressing me for answers. The experiment needed to happen. Below I’ll lay out exactly how we arrived at our answer, but here’s the spoiler: it doesn’t. Google doesn’t crawl using the HTTP/2 protocol. If your website uses HTTP/2, you need to make sure you continue to optimize the HTTP/1.1 version for crawling purposes.

The question

It all started with a Google Hangouts in November 2015.

When asked about HTTP/2 support, John Mueller mentioned that HTTP/2-only crawling should be ready by early 2016, and he also mentioned that HTTP/2 would make it easier for Googlebot to crawl pages by bundling requests (images, JS, and CSS could be downloaded with a single bundled request).

“At the moment, Google doesn’t support HTTP/2-only crawling (…) We are working on that, I suspect it will be ready by the end of this year (2015) or early next year (2016) (…) One of the big advantages of HTTP/2 is that you can bundle requests, so if you are looking at a page and it has a bunch of embedded images, CSS, JavaScript files, theoretically you can make one request for all of those files and get everything together. So that would make it a little bit easier to crawl pages while we are rendering them for example.”

Soon after, Twitter user Kai Spriestersbach also asked about HTTP/2 support:

His clients had started dropping HTTP/1.1 connection optimization, just like most developers deploying HTTP/2, which was at the time supported by all major browsers.

After a few quiet months, Google Webmasters reignited the conversation, tweeting that Google won’t hold you back if you’re setting up for HTTP/2. At this time, however, we still had no definitive word on HTTP/2-only crawling. Just because it won’t hold you back doesn’t mean it can handle it — which is why I decided to test the hypothesis.

The experiment

For months as I was following this online debate, I still received questions from our clients who no longer wanted to spend money on HTTP/1.1 optimization. Thus, I decided to create a very simple (and bold) experiment.

I decided to disable HTTP/1.1 on my own website (https://goralewicz.com) and make it HTTP/2 only. I disabled HTTP/1.1 from March 7th until March 13th.

If you’re going to get bad news, at the very least it should come quickly. I didn’t have to wait long to see if my experiment “took.” Very shortly after disabling HTTP/1.1, I couldn’t fetch and render my website in Google Search Console; I was getting an error every time.

My website is fairly small, but I could clearly see that the crawling stats decreased after disabling HTTP/1.1. Google was no longer visiting my site.

While I could have kept going, I stopped the experiment after my website was partially de-indexed due to “Access Denied” errors.

The results

I didn’t need any more information; the proof was right there. Googlebot wasn’t supporting HTTP/2-only crawling. Should you choose to duplicate this at home with your own site, you’ll be happy to know that my site recovered very quickly.

I finally had Robert’s answer, but felt others may benefit from it as well. A few weeks after finishing my experiment, I decided to ask John about HTTP/2 crawling on Twitter and see what he had to say.

(I love that he responds.)

Knowing the results of my experiment, I have to agree with John: disabling HTTP/1 was a bad idea. However, I was seeing other developers discontinuing optimization for HTTP/1, which is why I wanted to test HTTP/2 on its own.

For those looking to run their own experiment, there are two ways of negotiating an HTTP/2 connection:

1. Over HTTP (insecure) – Make an HTTP/1.1 request that includes an Upgrade header. This seems to be the method to which John Mueller was referring. However, it doesn’t apply to my website (because it’s served via HTTPS). What’s more, this is an old-fashioned way of negotiating, not supported by modern browsers. Below is a screenshot from Caniuse.com:

2. Over HTTPS (secure) – Connection is negotiated via the ALPN protocol (HTTP/1.1 is not involved in this process). This method is preferred and widely supported by modern browsers and servers.
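You can observe this second method with nothing but Python’s standard library: open a TLS connection, offer both protocols via ALPN, and print whichever one the server selects. A minimal sketch (the hostname is a placeholder):

import socket
import ssl

def negotiated_protocol(host, port=443):
    # Returns "h2" if the server selects HTTP/2 via ALPN, else "http/1.1" (or None).
    context = ssl.create_default_context()
    context.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

print(negotiated_protocol("example.com"))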

A recent announcement: The saga continues

Googlebot doesn’t make HTTP/2 requests

Fortunately, Ilya Grigorik, a web performance engineer at Google, let everyone peek behind the curtains at how Googlebot is crawling websites and the technology behind it:

If that wasn’t enough, Googlebot doesn’t support the WebSocket protocol. That means your server can’t send resources to Googlebot before they are requested. Supporting it wouldn’t reduce network latency and round-trips; it would simply slow everything down. Modern browsers offer many ways of loading content, including WebRTC, WebSockets, loading local content from drive, etc. However, Googlebot supports only HTTP/FTP, with or without Transport Layer Security (TLS).

Googlebot supports SPDY

During my research and after John Mueller’s feedback, I decided to consult an HTTP/2 expert. I contacted Peter Nikolow of Mobilio, and asked him to see if there was anything we could do to find the final answer regarding Googlebot’s HTTP/2 support. Not only did he provide us with help, Peter even created an experiment for us to use. Its results are pretty straightforward: Googlebot does support the SPDY protocol and Next Protocol Negotiation (NPN). And thus, it can’t support HTTP/2.

Below is Peter’s response:


I performed an experiment that shows Googlebot uses SPDY protocol. Because it supports SPDY + NPN, it cannot support HTTP/2. There are many cons to continued support of SPDY:

    1. This protocol is vulnerable
    2. Google Chrome no longer supports SPDY in favor of HTTP/2
    3. Servers have been neglecting to support SPDY. Let’s examine the NGINX example: as of version 1.9.5, they no longer support SPDY.
    4. Apache doesn’t support SPDY out of the box. You need to install mod_spdy, which is provided by Google.

To examine Googlebot and the protocols it uses, I took advantage of s_server, a tool that can debug TLS connections. I used Google Search Console Fetch and Render to send Googlebot to my website.

Here’s a screenshot from this tool showing that Googlebot is using Next Protocol Negotiation (and therefore SPDY):

I’ll briefly explain how you can perform your own test. The first thing you should know is that you can’t use scripting languages (like PHP or Python) for debugging TLS handshakes. The reason for that is simple: these languages see HTTP-level data only. Instead, you should use special tools for debugging TLS handshakes, such as s_server.

Type in the console:

# -WWW (uppercase) serves files from the current directory, one per requested path:
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -WWW -tlsextdebug -state -msg
# -www (lowercase) returns an HTML status page with connection details for any request:
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -www -tlsextdebug -state -msg

Please note the slight (but significant) difference between the “-WWW” and “-www” options in these commands. You can find more about their purpose in the s_server documentation.

Next, invite Googlebot to visit your site by entering the URL in Google Search Console Fetch and Render or in the Google mobile tester.

As I wrote above, there is no logical reason why Googlebot still supports SPDY. This protocol is vulnerable; no modern browser supports it. Additionally, servers (including NGINX) are neglecting to support it. It’s just a matter of time until Googlebot is able to crawl using HTTP/2. Just implement HTTP/1.1 + HTTP/2 support on your own server (your users will notice due to faster loading) and wait until Google is able to send requests using HTTP/2.


Summary

In November 2015, John Mueller said he expected Googlebot to crawl websites by sending HTTP/2 requests starting in early 2016. We don’t know why, as of October 2017, that hasn’t happened yet.

What we do know is that Googlebot doesn’t support HTTP/2. It still crawls by sending HTTP/1.1 requests. Both this experiment and the “Rendering on Google Search” page confirm it. (If you’d like to know more about the technology behind Googlebot, then you should check out what they recently shared.)

For now, it seems we have to accept the status quo. We recommended that Robert (and you readers as well) enable HTTP/2 on your websites for better performance, but continue optimizing for HTTP/1.1. Your visitors will notice and thank you.


How to Determine if a Page is "Low Quality" in Google’s Eyes – Whiteboard Friday

Posted by randfish

What are the factors Google considers when weighing whether a page is high or low quality, and how can you identify those pages yourself? There’s a laundry list of things to examine to determine which pages make the grade and which don’t, from searcher behavior to page load times to spelling mistakes. Rand covers it all in this episode of Whiteboard Friday.

How to identify low quality pages

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about how to figure out if Google thinks a page on a website is potentially low quality and if that could lead us to some optimization options.

So as we’ve talked about previously here on Whiteboard Friday, and I’m sure many of you have been following along with experiments that Britney Muller from Moz has been conducting about removing low-quality pages, you saw Roy Hinkis from SimilarWeb talk about how they had removed low-quality pages from their site and seen an increase in rankings on a bunch of stuff. So many people have been trying this tactic. The challenge is figuring out which pages are actually low quality. What does that constitute?

What constitutes “quality” for Google?

So Google has some ideas about what’s high quality versus low quality, and a few of those are pretty obvious and we’re familiar with, and some of them may be more intriguing. So…

  • Google wants unique content.
  • They want to make sure that the value to searchers from that content is actually unique, not that it’s just different words and phrases on the page, but the value provided is actually different. You can check out the Whiteboard Friday on unique value if you have more questions on that.
  • They like to see lots of external sources linking editorially to a page. That tells them that the page is probably high quality because it’s reference-worthy.
  • They also like to see high-quality pages linking to this one: not just high-quality sources and domains, but high-quality individual pages. Those can be internal or external links. So if the high-quality pages on your website link to another page on your site, Google often interprets that page as higher quality too.
  • The page successfully answers the searcher’s query.

This is an intriguing one. So if someone performs a search, let’s say here I type in a search on Google for “pressure washing.” I’ll just write “pressure wash.” This page comes up. Someone clicks on that page, and they stay here and maybe they do go back to Google, but then they perform a completely different search, or they go to a different task, they visit a different website, they go back to their email, whatever it is. That tells Google, great, this page solved the query.

If instead someone searches for this and they go, they perform the search, they click on a link, and they get a low-quality mumbo-jumbo page and they click back and they choose a different result instead, that tells Google that page did not successfully answer that searcher’s query. If this happens a lot, Google calls this activity pogo-sticking, where you visit this one, it didn’t answer your query, so you go visit another one that does. It’s very likely that this result will be moved down and be perceived as low quality in Google.

  • The page has got to load fast on any connection.
  • They want to see high-quality accessibility with intuitive user experience and design on any device, so mobile, desktop, tablet, laptop.
  • They want to see actually grammatically correct and well-spelled content. I know this may come as a surprise, but we’ve actually done some tests and seen that by having poor spelling or bad grammar, we can get featured snippets removed from Google. So you can have a featured snippet, it’s doing great in the SERPs, you change something in there, you mess it up, and Google says, “Wait, no, that no longer qualifies. You are no longer a high-quality answer.” So that tells us that they are analyzing pages for that type of information.
  • Non-text content needs to have text alternatives. This is why Google encourages use of the alt attribute. This is why on videos they like transcripts. Here on Whiteboard Friday, as I’m speaking, there’s a transcript down below this video that you can read to get all the content without having to listen to me, whether you don’t want to or don’t have the ability to for whatever technical or accessibility reasons.
  • They also like to see content that is well-organized and easy to consume and understand. They interpret that through a bunch of different things, but some of their machine learning systems can certainly pick that up.
  • Then they like to see content that points to additional sources for more information or for follow-up on tasks or to cite sources. So links externally from a page will do that.

This is not an exhaustive list. But these are some of the things that can tell Google high quality versus low quality and start to get them filtering things.

How can SEOs & marketers filter pages on sites to ID high vs. low quality?

As a marketer, as an SEO, there’s a process that we can use. We don’t have access to every single one of these components that Google can measure, but we can look at some things that will help us determine this is high quality, this is low quality, maybe I should try deleting or removing this from my site or recreating it if it is low quality.

In general, I’m going to urge you NOT to use things like:

A. Time on site, raw time on site

B. Raw bounce rate

C. Organic visits

D. Assisted conversions

Why not? Because by themselves, all of these can be misleading signals.

So a long time on your website could be because someone’s very engaged with your content. It could also be because someone is immensely frustrated and they cannot find what they need. So they’re going to return to the search result and click something else that quickly answers their query in an accessible fashion. Maybe you have lots of pop-ups and they have to click close on them and it’s hard to find the x-button and they have to scroll down far in your content. So they’re very unhappy with your result.

Bounce rate works similarly. A high bounce rate could be a fine thing if you’re answering a very simple query or if the next step is to go somewhere else or if there is no next step. If I’m just trying to get, “Hey, I need some pressure washing tips for this kind of treated wood, and I need to know whether I’ll remove the treatment if I pressure wash the wood at this level of pressure,” and it turns out no, I’m good. Great. Thank you. I’m all done. I don’t need to visit your website anymore. My bounce rate was very, very high. Maybe you have a bounce rate in the 80s or 90s percent, but you’ve answered the searcher’s query. You’ve done what Google wants. So bounce rate by itself, bad metric.

Same with organic visits. You could have a page that is relatively low quality that receives a good amount of organic traffic for one reason or another, and that could be because it’s still ranking for something or because it ranks for a bunch of long tail stuff, but it is disappointing searchers. This one is a little bit better in the longer term. If you look at this over the course of weeks or months as opposed to just days, you can generally get a better sense, but still, by itself, I don’t love it.

Assisted conversions is a great example. This page might not convert anyone. It may be an opportunity to drop cookies. It might be an opportunity to remarket or retarget to someone or get them to sign up for an email list, but it may not convert directly into whatever goal conversions you’ve got. That doesn’t mean it’s low-quality content.

THESE can be a good start:

So what I’m going to urge you to do is think of these as a combination of metrics. Any time you’re analyzing for low versus high quality, have a combination of metrics approach that you’re applying.

1. That could be a combination of engagement metrics. I’m going to look at…

  • Total visits
  • External and internal visits
  • I’m going to look at the pages per visit after landing. So if someone gets to the page and then they browse through other pages on the site, that is a good sign. If they browse through very few, not as good a sign, but not to be taken by itself. It needs to be combined with things like time on site and bounce rate and total visits and external visits.

2. You can combine some offsite metrics. So things like…

  • External links
  • Number of linking root domains
  • Page Authority (PA) and your social shares (Facebook, Twitter, and LinkedIn share counts) can also be applicable here. If you see something that’s getting social shares, well, maybe it doesn’t match up with searchers’ needs, but it could still be high-quality content.

3. Search engine metrics. You can look at…

  • Indexation by typing a URL directly into the search bar or the browser bar and seeing whether the page is indexed.
  • You can also look at whether pages rank for their own title.
  • You can look in Google Search Console and see click-through rates.
  • You can look at unique versus duplicate content. So if I type in a URL here and I see multiple pages come back from my site, or if I type in the title of a page that I’ve created and I see multiple URLs come back from my own website, I know that there’s some uniqueness problems there.

4. You are almost definitely going to want to do an actual hand review of a handful of pages.

  • Pages from subsections or subfolders or subdomains, if you have them, and say, “Oh, hang on. Does this actually help searchers? Is this content current and up to date? Is it meeting our organization’s standards?”

Make 3 buckets:

Using these combinations of metrics, you can build some buckets. You can do this in a pretty easy way by exporting all your URLs. You could use something like Screaming Frog or Moz’s crawler or DeepCrawl, and you can export all your pages into a spreadsheet with metrics like these, and then you can start to sort and filter. You can create some sort of algorithm, some combination of the metrics that you determine is pretty good at ID’ing things, and you double-check that with your hand review. I’m going to urge you to put them into three kinds of buckets.

I. High importance. So high importance, high-quality content, you’re going to keep that stuff.

II. Needs work. The second bucket is stuff that actually needs work but is still good enough to stay in the search engines. It’s not awful. It’s not harming your brand, and it’s certainly not what search engines would call low quality and be penalizing you for. It’s just not living up to your expectations or your hopes. That means you can republish it or work on it and improve it.

III. Low quality. It really doesn’t meet the standards that you’ve got here, but don’t just delete them outright. Do some testing. Take a sample set of the worst junk that you put in the low bucket, remove it from your site, make sure you keep a copy, and see if by removing a few hundred or a few thousand of those pages, you see an increase in crawl budget and indexation and rankings and search traffic. If so, you can start to be less judicious and more liberal with what you’re cutting out of that low-quality bucket, and a lot of times you’ll see some great results from Google.
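To make that bucketing step concrete, here’s a rough sketch in Python. Every metric name, weight, and threshold below is hypothetical (you’d tune them against your own hand review), but the shape of the approach, scoring each URL on a blend of metrics and then sorting into the three buckets, is the point:

def bucket(page):
    # "page" is one row from your crawl/analytics export (hypothetical fields).
    score = (
        0.3 * min(page["organic_visits"] / 100, 1.0)          # engagement
        + 0.3 * min(page["linking_root_domains"] / 10, 1.0)   # offsite
        + 0.2 * min(page["pages_per_visit"] / 3, 1.0)         # engagement
        + 0.2 * (1.0 if page["indexed"] else 0.0)             # search engine
    )
    if score >= 0.6:
        return "I. High importance"
    if score >= 0.3:
        return "II. Needs work"
    return "III. Low quality"

# Hypothetical rows exported from Screaming Frog plus your analytics tools:
pages = [
    {"url": "/guide", "organic_visits": 420, "linking_root_domains": 18,
     "pages_per_visit": 2.4, "indexed": True},
    {"url": "/old-tag-page", "organic_visits": 3, "linking_root_domains": 0,
     "pages_per_visit": 1.0, "indexed": False},
]
for page in pages:
    print(bucket(page), page["url"])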

All right, everyone. Hope you’ve enjoyed this edition of Whiteboard Friday, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com


How to Optimize for Google’s Featured Snippets to Build More Traffic

Posted by AnnSmarty

Have you noticed it’s getting harder and harder to build referral traffic from Google?

And it’s not just that the competition has got tougher (which it certainly has!).

It’s also that Google has moved past its ten blue links, and its organic search results are no longer generating as much traffic as they used to.

How do you adapt? This article teaches you to optimize your content for one of Google’s more recent changes: featured snippets.

What are featured snippets?

Featured snippets are selected search results that are featured in a box at the top of Google’s organic results, below the ads.

Featured snippets aim at answering the user’s question right away (hence their other well-known name, “answer boxes”). Being featured means getting additional brand exposure in search results.

Here are two studies confirming the claim:

  • Ben Goodsell reports that the click-through rate (CTR) on a featured page increased from two percent to eight percent once it was placed in an answer box, with revenue from organic traffic increasing by 677%.
  • Eric Enge highlights a 20–30% increase in traffic for ConfluentForms.com while they held the featured snippet for the query.

Types of featured snippets

There are three major types of featured snippets:

  • Paragraph (an answer is given in text). It can be a box with text inside or a box with both text and an image inside.
  • List (an answer is given in a form of a list)
  • Table (an answer is given in a table)

Here’s an example of a paragraph snippet with an image:

paragraph snippet image

According to Getstat, the most popular featured snippet is “paragraph” type:

Getstat

Featured snippets or answer boxes?

Since we’re dealing with a pretty new phenomenon, the terminology is pretty loose. Many people (including myself) are inclined to refer to featured snippets as “answer boxes,” obviously because there’s an answer presented in a box.

While there’s nothing wrong with this terminology, it creates a certain confusion because Google often gives a “quick answer” (a definition, an estimate, etc.) on top without linking to the source:

Answer box

To avoid confusion, let’s stick to the “featured snippet” term whenever there’s a URL featured in the box, because these present an extra exposure to the linked site (hence they’re important for content publishers):

Featured snippet

Do I have a chance to get featured?

According to research by Ahrefs, 99.58% of featured pages already rank in the top 10 of Google. So if you are already ranking high for related search queries, you have a very good chance of getting featured.

On the other hand, Getstat claims that 70% of snippets come from sites outside of the first organic position. So the page needs to rank in the top 10, but it doesn’t need to be #1 to be featured.

Unsurprisingly, the most featured site is Wikipedia.org. If there’s Wikipedia featured for your search query, it may be extremely hard to beat that — but it doesn’t mean you shouldn’t try.

Finally, according to the analysis performed in a study, the following types of search queries get featured results most often:

  • DIY processes
  • Health
  • Financial
  • Mathematical
  • Requirements
  • Status
  • Transitional

Ahrefs’ study expands the list of popular topics with the words that most frequently appear in featured snippets:

words trigger featured snippets

The following types of search queries usually don’t have answer boxes:

  • Images and videos
  • Local
  • Shopping

To sum up the above studies:

  • You have a good chance of getting featured for the terms your pages already rank in the top 10 for. Thus, a big part of being featured is improving your overall rankings (especially for long-tail informational queries, which are your lower-hanging fruit)
  • If your niche is DIY, health or finance, you have the highest probability of getting featured

Identify all kinds of opportunities to be featured

Start with good old keyword research

Multiple studies confirm that the majority of featured snippets are triggered by long-tail keywords. In fact, the more words that are typed into a search box, the higher the probability there will be a featured snippet.

It’s always a good idea to start with researching your keywords. This case study gives a good step-by-step keyword research strategy for a blogger, and this one lists major keyword research tools as suggested by experts.

When performing keyword research with featured snippets in mind, note that:

  • Start with question-type search queries (those containing question words, like “what,” “why,” “how,” etc.) because these are the easiest to identify, but don’t stop there…
  • Target informational intent, not just questions. While featured snippets aim at answering the user’s question immediately, question-type queries are not the only types that trigger those featured results. According to the aforementioned Ahrefs study, the vast majority of keywords that trigger featured snippets were long-tail queries with no question words in them.

It helps if you use a keyword research tool that shows immediately whether a query triggers featured results. I use Serpstat for my keyword research because it combines keyword research with featured snippet research and lets me see which of my keywords trigger answer boxes:

Serpstat featured snippet

You can run your competitor in Serpstat and then filter their best-performing queries by the presence of answer boxes:

Serpstat competitor research

This is a great overview of your future competition, enabling you to see your competitors’ strengths and weaknesses.

Browse Google for more questions

To further explore the topic, be sure to browse Google’s own “People also ask” sections whenever you see one in the search results. It provides a huge insight into which questions Google deems related to each topic.

People also ask section

Once you start expanding the questions to see the answers, more and more questions will be added to the bottom of the box:

More questions

Identify search queries where you already rank high

Your lowest-hanging fruit is to identify which phrases you already rank highly for. These will be the easiest to get featured for after you optimize for answer boxes (more on this below).

Google Search Console shows which search queries send you clicks. To find that report, click “Search Traffic” and then “Search Analytics.”

Check the box to show the position your pages hold for each one and you’ll have the ability to see which queries are your top-performing ones:

Google Search Console

You can then use the filters to find some question-type queries among those:

Search console filter

Go beyond traditional keyword research tools: Ask people

All the above methods (albeit great) tackle already discovered opportunities: those for which you or your competitors are already ranking high. But how about venturing beyond that? Ask your readers, customers, and followers how they search and which questions they ask.

MyBlogU: Ask people outside your immediate reach

Move away from your target audience and ask random people what questions they have on a specific topic and what their concerns would be. Looking outside the box can always give a fresh perspective.

MyBlogU (disclaimer: I am the founder) is a great way to do that. Just post a new project in the “Brainstorm” section and ask members to contribute their thoughts.

MyBlogU concept

Seed Keywords: Ask your friends and followers

Seed Keywords is a simple tool that allows you to discover related keywords with help from your friends and followers. Simply create a search scenario, share it on social media, and ask your followers to type in the keywords they would use to solve it.

Try not to be too leading with your search scenario. Avoid guiding people to the search phrase you think they should be using.

Here’s an example of a scenario:

Example

And here are the suggestions from real people:

Seed Keywords

Obviously, you can create similar surveys with SurveyMonkey or Google Forms, too.

Monitor questions people ask on Twitter

Another way to discover untapped opportunities is to monitor questions on Twitter. Its search supports the ? search operator that will filter results to those containing a question. Just make sure to put a space between your search term and ?.

Twitter questions

I use Cyfe to monitor and archive Twitter results because it provides a minimal dashboard which I can use to monitor an unlimited number of Twitter searches.

Cyfe questions

Whenever you run out of article ideas, simply log in to Cyfe to view the archive, and then proceed to the above keyword research tools to expand on any idea.

I use spreadsheets to organize questions and keyword phrases I discover (see more on this below). Some of these questions may become a whole piece of content, while others will be subsections of broader articles:

  • I don’t try to analyze search volume to decide whether any of those questions deserve to be covered in a separate article or a subsection. (Based on the Ahrefs research and my own observations, there is no direct correlation between the popularity of the term and whether it will trigger a featured snippet).
  • Instead, I use my best judgement (based on my niche knowledge and research) as to how much I will be able to say to answer each particular question. If it’s a lot, I’ll probably turn it into a separate article and use keyword research to identify subsections of the future piece.

Optimizing for featured snippets

Start with on-page SEO

There is no magic button or special markup which will make sure your site gets featured. Of course, it’s a good idea to start with non-specific SEO best practices, simply because being featured is only possible when you rank high for the query.

Randy Milanovic did a good overview of tactics of making your content findable. Eric Brantner over at Coschedule has put together a very useful SEO checklist, and of course never forget to go through Moz’s SEO guide.

How about structured markup?

Many people would suggest using Schema.org (simply because it’s been a “thing” to recommend adding schema for anything and everything) but the aforementioned Ahrefs study shows that there’s no correlation between featured results and structured markup.

That being said, the best way to get featured is to provide a better answer. Here are a few actionable tips:

1. Aim at answering each question concisely

My own observation of answer boxes has led me to think that Google prefers to feature an answer which was given within one paragraph.

The study by AJ Ghergich cites that the average length of a paragraph snippet is 45 words (the maximum is 97 words), so let it be your guideline as to how long each answer should be in order to get featured:

Optimal featured snippet lengths

This doesn’t mean your articles need to be one paragraph long. On the contrary, these days Google seems to give preference to long-form content (also known as “cornerstone content,” which is obviously a better way to describe it because it’s not just about length) that’s broken into logical subsections and features attention-grabbing images. Even if you don’t believe that cornerstone content receives any special treatment in SERPs, focusing on long articles will help you to cover more related questions within one piece (more on that below).

All you need to do is to adjust your blogging style just a bit:

  • Ask the question in your article (that may be a subheading)
  • Immediately follow the question with a one-paragraph answer
  • Elaborate further in the article
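Since the study above puts the average featured paragraph at 45 words with a 97-word maximum, a trivial check can flag answer paragraphs that run long. A sketch in Python, with those two numbers as the only inputs:

def answer_length_check(paragraph, average=45, maximum=97):
    words = len(paragraph.split())
    if words > maximum:
        return f"{words} words: too long to be featured in full; trim it"
    if words > average:
        return f"{words} words: usable, but tighter is safer"
    return f"{words} words: within the typical snippet length"

print(answer_length_check("Net neutrality is the basic principle that prohibits ..."))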

This tactic may also result in higher user retention because it makes any article better structured and thus a much easier read. To quote AJ Ghergich,

When you use data to fuel topic ideation, content creation becomes more about resources and less about brainstorming.

2. Be factual and organize well

Google loves numbers, steps and lists. We’ve seen this again and again: More often than not, answer boxes will list the actual ingredients, number of steps, time to cook, year and city of birth, etc.

In your paragraph introducing the answer to the question, make sure to list useful numbers and names. Get very factual.

In fact, the aforementioned study by AJ Ghergich concluded that comparison charts and lists are an easier way to get featured, because Google loves structured content. Even for branded queries (where a user is obviously researching a particular brand), Google will pick up a table from another site (not the answer from the brand itself) if that other site has a table:

Be factual

This only shows how much Google loves well-structured, factual, and number-driven content.

There’s no specific markup to structure your content. Google seems to pick up <table>, <ol>, and <ul> well and doesn’t need any other pointers.
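One quick sanity check is to count those three tags on a page you’re hoping to get featured. A minimal standard-library sketch in Python (the URL is a placeholder):

import urllib.request
from html.parser import HTMLParser

class StructureCounter(HTMLParser):
    # Tallies the snippet-friendly tags Google is known to pick up.
    def __init__(self):
        super().__init__()
        self.counts = {"table": 0, "ol": 0, "ul": 0}

    def handle_starttag(self, tag, attrs):
        if tag in self.counts:
            self.counts[tag] += 1

html = urllib.request.urlopen("https://example.com/article", timeout=10).read().decode("utf-8", "replace")
counter = StructureCounter()
counter.feed(html)
print(counter.counts)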

3. Make sure one article answers many similar questions

In their research of featured snippets, Ahrefs found that once a page gets featured, it’s likely to get featured in lots of similar queries. This means it should be structured and worded in a way that addresses a lot of related questions.

Google is very good at determining synonymic and closely related questions, and so should you be. There’s no point in creating a separate page answering each specific question.

Related question

Creating one solid article addressing many related questions is a much smarter strategy if you aim at getting featured in answer boxes. This leads us to the next tactic:

4. Organize your questions properly

To combine many closely related questions in one article, you need to organize your queries properly. This will also help you structure your content well.

I have a multi-level keyword organization strategy that can be applied here as well:

  • A generic keyword makes a section or a category of the blog
  • A more specific search query becomes the title of the article
  • Even more specific queries determine the subheadings of the article and thus define its structure
    • There will be multiple queries that are so closely related that they will all go under a single subheading

For example:

Spreadsheet
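Since the example above is an image, here’s the same multi-level idea expressed as a small Python structure. The topic and queries are made up; what carries over is the nesting: category, then article title, then subheadings, with closely related queries grouped under each subheading.

keyword_plan = {
    "pressure washing": {                        # generic keyword -> blog category
        "How to Pressure Wash a Deck": {         # more specific query -> article title
            "What PSI should I use on wood?": [  # even more specific -> subheading
                "pressure washer psi for deck",
                "safe psi for treated wood",
            ],
            "Do I need detergent?": [
                "deck cleaning solution pressure washer",
                "pressure wash deck with soap",
            ],
        },
    },
}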

Serpstat helps me a lot when it comes to both discovering an article idea and then breaking it into subtopics. Check out its “Questions” section. It will provide hundreds of questions containing your core term and then generate a tag cloud of other popular terms that come up in those questions:

Questions tag cloud

Clicking any word in the tag cloud will filter results down to those questions that only have that word in them. These are subsections for your article:

Serpstat subheadings

Here’s a good example of how related questions can help you structure the article:

Structure

5. Make sure to use eye-grabbing images

Paragraph featured snippets with images are ridiculously eye-catching, even more so than regular featured snippets. Honestly, I wasn’t able to identify how to add an image so that it’s featured. I tried naming it differently, and I tried marking it as “featured” in the WordPress editor. Google seems to pick up a random image from the page without me being able to point it to a better version.

That being said, the only way to influence that is to make sure ALL your in-article images are eye-catching, branded, and annotated well, so that no matter which one Google ends up featuring, it will look nice. Here’s a great selection of WordPress plugins that will allow you to easily visualize your content (put together graphs, tables, charts, etc.) while working on a piece.

You can use Bannersnack to create eye-catching branded images; I love their image editing functionality. You can quickly create graphics there, then resize them to reuse as banners and social media images and organize all your creatives in folders:

banner maker bannersnack

6. Update and re-upload the images (WordPress)

WordPress adds dates to image URLs, so even if you update an article with newer information the images can be considered kind of old. I managed to snatch a couple of paragraph featured snippets with images once I started updating my images, too:

Images

7. Monitor how you are doing

Ahrefs lets you monitor which queries your domain is featured for, so keep an eye on these as they grow and new ones appear:

Monitor where you are being featured

Conclusion

It takes a lot of research and planning, and you cannot be sure when you’ll see the results (especially if you don’t have too many top 10 rankings just yet), but think about it this way: being featured in Google search results is your incentive to work harder on your content. You’ll achieve other important goals on your way there:

  • You’ll discover hundreds of new content ideas (and thus will rank for a wider variety of various long-tail keywords)
  • You’ll learn to research each topic more thoroughly (and thus will build more incoming links, because people tend to link to in-depth articles)
  • You’ll learn to structure your articles better (and thus achieve a lower bounce rate because it will be easier to read your articles)

Have you been featured in Google search results yet? Please share your tips and tricks in the comments below!


What Links Can You Get That Comply with Google’s Guidelines? – Whiteboard Friday

Posted by MarieHaynes

If you’ve ever been the victim of a Google penalty, you know how painful it can be to identify the problem and recover from the hit. Even if you’ve been penalty-free thus far, the threat of getting penalized is a source of worry. But how can you avoid it, when it seems like unnatural links lurk around every corner?

In today’s Whiteboard Friday, we’re overjoyed to have Google penalty and unnatural link expert Marie Haynes share how to earn links that do comply with Google’s guidelines, that will keep your site out of trouble, and that can make a real impact.

Links that comply with Google

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hey everybody. My name’s Marie Haynes, and today we’re going to talk all about links. If you know anything about me, you know that I’ve done a lot of work with unnatural links. I’ve done a lot of work helping people with Penguin problems and unnatural link penalties. But today we’re going to talk about natural links. I’m going to give you some tips about the types of links that you can get that comply with Google’s guidelines. These links are sometimes much harder to get than unnatural links, but they’re the type of link that Google expects to see and they’re the type of link that can really help improve your rankings.

I. Ask

Number one is to ask people. Now some people might say, “Wait, that’s not a natural link because I actually had to ask somebody to get it.” But if somebody is willing to vouch for your website, to link to your website, and you’re not giving them anything as an incentive in return, then that actually is a good link. So you can ask family members and friends and even better is employees. You can say, “Hey, if you have a blog, could you mention that you work for us and link to us?” Now, if they have to hide the link somewhere to make it actually happen, then that may not be the best link. But if they legitimately are happy to mention you and link to your company, then that’s a good natural link that Google will appreciate.

II. Directories

People are probably freaking out saying, “Directories are not natural links. They’re self-made links.” I’m not talking about freelinkdirectory.com and other types of spammy directories where anybody in the world could create a link. I’m talking about directories that have a barrier to entry, a directory that you would expect that your business would be listed there, and a directory perhaps that people are actually using. A good place to get listed in these directories where you expect to see businesses is Moz Local. Moz Local can really help with the types of directories that you would expect to see your site listed in.

Sometimes there are also niche directories that you have to do a little searching for. For example, let’s say you’re a wedding photographer. You might want to be listed in a local city directory that tells people where to find musicians, venues, and photographers for their wedding. That can be a really good link, and it’s the type of link that brings you traffic as well, which is another indicator of a good link. A good way to find these opportunities is to search for a competitor’s phone number while excluding their own site; that should give you a list of the directories where they are listed, which are good candidates for a link to your own site. You can approach those directories and see if you can get listed too.
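To make that concrete (the phone number and domain here are invented for illustration), the search would look something like this:

  "555-867-5309" -site:competitorphotography.com

The quotation marks force an exact match on the number, and the -site: operator removes the competitor’s own pages, so what remains is mostly the directories and listing pages that cite them.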

III. Industry connections

Most businesses have connections with suppliers, vendors, clients, and partners. These are places where you would expect to see your business listed. If you can get onto these types of lists, that’s a good thing. A good way to find them is to see which lists your competitors are on: take a look at their link profiles and see if there’s anywhere you should be listed as well.

IV. Unclaimed brand/name mentions

This is a place where somebody has mentioned your business, your website, or perhaps your name, but hasn’t linked to you. It’s perfectly okay to reach out to those people and say, “Hey, thank you for mentioning us. Could you possibly link to us as well?” A lot of the time that results in a link. You can find these opportunities by using Moz Fresh Web Explorer. Also, I think every business should set up Google Alerts so that you’re notified whenever somebody mentions your business.

However, even with these set up, some mentions get missed, so I recommend that once a month you search for your brand name and exclude your own website. You might also want to exclude sites like YouTube or Facebook if you have a lot of listings there. Then restrict the results to the past month and see what new mentions have appeared. You may be able to reach out to some of those businesses to get links.
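As a hypothetical example for a business called Acme Widgets, that monthly search might look like this:

  "Acme Widgets" -site:acmewidgets.com -site:youtube.com -site:facebook.com

Then open Tools on the results page and switch “Any time” to “Past month,” so you’re only reviewing mentions from the last month.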

V. Reclaim broken links

A way to find broken links to your website is to go to Google Search Console and look at the crawl errors. What I’m talking about here is a case where somebody has linked to your website but misspelled the URL. There are two ways you can reclaim these links. One is to reach out to the site and say, “Hey, thanks for linking to us. Could you maybe fix the typo?” The other is to create a redirect from the misspelled URL to the properly spelled URL. When you do this, you lose a tiny bit of link equity through the redirect, but it’s still much better than a link pointing at a broken page, because a link that goes to a 404 page doesn’t count for PageRank purposes.
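As a minimal sketch of the second option, assuming your site runs on Apache with mod_alias enabled, and using a hypothetical typo where people have linked to /servcies instead of /services, a single line in your .htaccess file handles it:

  # 301 (permanent) redirect from the misspelled path to the real page
  Redirect 301 /servcies /services

The 301 status tells search engines the move is permanent, which is what lets most of the link equity flow through to the correct URL. (On nginx, the equivalent would be a return 301 rule in the server configuration.)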

VI. Be awesome

Journalists are always looking for stuff to write about. If you can do something with your business that is newsworthy, then that’s fantastic. Something you can do is create an event or perhaps do something for charity, and journalists love to write about that kind of thing.

A good way to find opportunities like this is to do a Google search for “local” plus your profession. Let’s say you run a hair salon. You could search for local hair salon and then click on News. You’ll see all sorts of stories that journalists have written; perhaps a local hair salon has offered free haircuts for veterans. That gives you an idea of something you could do as well, and it also gives you a list of the journalists writing these types of stories. You can reach out to those journalists and say, “Hey, our business is doing this awesome thing. Would you consider writing a story about us?” Generally, that story would include a link back to your website.

VII. Get press? Get more!

If you’re getting press, do things to get more of that press. I have a story about a client whose product went viral. What he ended up doing was contacting all of the people who had linked to him and offering himself as a source for an interview. We also contacted people who mentioned the product but didn’t link to it and said, “Hey, could you possibly link to us? We’d be happy to do an interview. We’d be happy to provide a new angle to the story.” So if you’re doing something that’s going viral and getting a lot of press, that often means people are very interested in that aspect of your business, and with a little bit of work you can usually get more links out of the process.

VIII. HARO

…Which stands for Help A Reporter Out. HARO is an email list that connects journalists with businesses and professionals. These journalists are looking for a source. For example, if you’re a dentist, there might be a journalist doing a story about teeth whitening who would want to use you as a source and then link to you. One tip I can offer, if you’re using Gmail, is to set up filters so that you only see the HARO requests that contain your keywords or your business. Otherwise, you can get up to three of these emails a day, and it can be a little overwhelming and fill up your inbox.
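For example, assuming you’re that dentist, and assuming HARO requests arrive from haro@helpareporter.com (check the sender address on your own HARO emails, as it can change), a single Gmail filter can archive everything that doesn’t mention your topics:

  From: haro@helpareporter.com
  Doesn’t have: "teeth whitening" OR dentist
  Action: Skip the Inbox (Archive it)

With that in place, only the requests relevant to your expertise actually land in your inbox.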

IX. What content is already getting links?

A good way to do this is to go to Google Search Console, Links to your site, Most linked content, and click on More. This will give you a list of the URLs on your site and the number of domains linking to each one. If you download the list, you’ll also be able to see the exact URLs the links are coming from. If you have content on your site that is already attracting links, that’s the content you want to promote to other people to earn more links. You can also contact the people who did link to you and say, “Thank you so much for linking to me. Is there something else we could produce that would be useful for your readers?” Often that gives you good ideas for new content, and if those people are handing you ideas to write about, the links are likely to follow.

X. 10X Content

This is creating content that’s 10 times better than anything that’s out there on the web. This doesn’t have to be expensive. It can just be a matter of answering the questions that people have about your product or your business. One thing that I like to do is go to Yahoo Answers and search for my product, for my profession, and see what kind of questions people are asking about this profession or product, because if people are asking the question on Yahoo Answers, it often means that the answer is not easily available on a Google search. You can create content that’s the best of its kind, that answers any questions that people might have, and you can reach out and ask for links. If this is really, truly 10X content, it is the type of content that should attract links naturally as well.

So these are 10 ways that you can get links that will comply with Google’s guidelines and really should make a difference in your rankings. These are going to be harder than just going to a free link directory or using some spammy techniques to make links, but if you can do this type of thing, it’s the type of thing that really moves the needle. You don’t need to be worried about the Web Spam Team. You can be proud of the types of links that you’re getting.

Thanks for watching. I’d be interested in seeing what types of links you have gotten by creating great things, by doing things that Google would expect businesses to do. Leave a comment below, and I’m sure we’ll have a great discussion about how to get links that comply with Google’s guidelines.


For more educational content and Google news from Marie, be sure to sign up for her newsletter or one of her new course offerings on SEO.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


